Data Push Destination Example - Developing Automated Workflows Powered by UXI Data Using AWS Lambda
Written by Josh Peters

Once your UXI sensor test result or issue data is in S3, you can use AWS Lambda to process the data (extract, transform, load) and power automated workflows. In this example, you will create a Lambda function that adds newly confirmed issues to an AWS DynamoDB table and removes them when they are resolved. The end result is a DynamoDB table with a record of ongoing issues that you can use to see which ones have been open the longest.

WARNING: When using AWS Lambda with event triggers, make sure you do not write any output to the same bucket you are reading from, and avoid any other form of recursion.

Note: In this example, all services (S3, Lambda, DynamoDB) are in the same region.

Send Issue Data to S3

Create a Table in DynamoDB

Next, navigate to the AWS DynamoDB console and create a new table.

Give the table a name and specify the partition key as ‘uid’. (The uid from the issue schema is unique per issue, so it makes a good primary key.)

You may choose to modify other settings; in this example, all other defaults are kept. When finished, select Create Table. After a few minutes the table status will become Active.
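If you prefer to create the table programmatically, the following is a minimal sketch using boto3. It assumes the table name uxi_issues (the same name used in the Lambda code later in this article) and on-demand billing; adjust both to match your setup.

import boto3

dynamodb = boto3.client('dynamodb')

# Create a table keyed on the issue uid (table name 'uxi_issues' is an assumption).
dynamodb.create_table(
    TableName='uxi_issues',
    AttributeDefinitions=[{'AttributeName': 'uid', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'uid', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',  # on-demand capacity; change if you use provisioned capacity
)

# Wait until the table status becomes Active before using it.
dynamodb.get_waiter('table_exists').wait(TableName='uxi_issues')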

Create a Lambda Function

Navigate to the AWS Lambda console and create a new function.

Select Author from Scratch. Give the function a name and choose a runtime for the language you will use to write the function. In this example, select Python 3.9.

You may choose to modify other settings; in this example, all other defaults are kept. When finished, select Create Function.

Once the function is created, open it, navigate to the Code tab, and paste in the following. Make sure to adjust the DynamoDB table name for your setup.

import os
import json
import boto3
from urllib.parse import unquote

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):
    print('## ENVIRONMENT VARIABLES')
    print(os.environ)
    print('## EVENT')
    print(event)

    table = dynamodb.Table('uxi_issues')

    for record in event['Records']:
        bucket_name = record['s3']['bucket']['name']
        key_name = record['s3']['object']['key']

        # S3 URL-encodes object keys in event notifications, so decode before fetching
        decoded_key_name = unquote(key_name)

        s3_object = s3.get_object(Bucket=bucket_name, Key=decoded_key_name)

        # convert the byte string to newline-delimited JSON lines
        data = s3_object['Body'].read().decode("utf-8").splitlines()

        # process each issue: add confirmed issues, remove resolved ones
        print('## DATA')
        for line in data:
            issue = json.loads(line)
            if issue["status"] == "CONFIRMED":
                print('## Issue Confirmed')
                print(issue)
                table.put_item(Item=issue)
            else:
                print('## Issue Resolved')
                print(issue)
                table.delete_item(Key={'uid': issue["uid"]})

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

When finished, select Deploy.

Edit Lambda Permissions

Under Configuration -> Permissions, assign or edit the execution role so the function has read access to the S3 bucket and write access to the DynamoDB table. It is recommended to scope these permissions as narrowly as possible.
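As one possible approach, the sketch below attaches a narrowly scoped inline policy to the function's execution role using boto3. The role name, bucket name, account ID, region, and table name are placeholders; replace them with your own values.

import json
import boto3

iam = boto3.client('iam')

# Placeholder names and ARNs: replace with your function's execution role,
# your UXI data bucket, and your DynamoDB table.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::your-uxi-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:DeleteItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/uxi_issues"
        }
    ]
}

iam.put_role_policy(
    RoleName='your-lambda-execution-role',
    PolicyName='uxi-issues-access',
    PolicyDocument=json.dumps(policy)
)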

Add Event Trigger

At the top of the page, under Function Overview, select Add Trigger.

Configure the event trigger based on your S3 bucket. Choose the event type S3 Object Created. Make sure the prefix matches the objects in your bucket. For example Prefix: Aruba-UXI/issues.s3.<customer_uid>/

Review your code and make sure you are not reading from and writing to the same bucket, and that you avoid any other type of recursion. Then acknowledge the risk and select Add.
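Once the trigger is in place, S3 invokes the function with an event shaped like the abbreviated sketch below, which is what the handler's record['s3']['bucket']['name'] and record['s3']['object']['key'] lookups expect. The bucket and key names here are placeholders and real events carry more metadata; a minimal event like this can also be used as a test event in the Lambda console.

# Abbreviated S3 "Object Created" event; only the fields the handler reads are shown.
sample_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "your-uxi-bucket"},
                "object": {"key": "Aruba-UXI/issues.s3.your_customer_uid/example-object"}
            }
        }
    ]
}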

Summary

Now when your sensors detect issues, the data is put in S3. This triggers the Lambda function to process the S3 object and update the DynamoDB table accordingly. Your DynamoDB table then holds a record of ongoing issues, and you can sort them to see which ones have been open the longest.
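As a final illustration, the sketch below scans the table and sorts the ongoing issues oldest first. It assumes each issue record carries a timestamp attribute; the field name 'timestamp' is a placeholder, so adjust it to whatever your issue schema provides.

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('uxi_issues')

# Scan the whole table, following pagination (fine for a modest number of ongoing issues).
items = []
response = table.scan()
items.extend(response['Items'])
while 'LastEvaluatedKey' in response:
    response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])
    items.extend(response['Items'])

# Sort oldest first; 'timestamp' is a placeholder field name from your issue schema.
oldest_first = sorted(items, key=lambda issue: issue.get('timestamp', ''))
for issue in oldest_first[:10]:
    print(issue.get('timestamp'), issue.get('uid'))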
