Create a custom Alexa Skill with AWS Lambda - Part 3 (Lambda)

This is part 3 of a series on using AWS Lambda to build an Alexa Skill. Refer also to the overview of the server-less app (part 1), and configuring a custom Alexa Skill (part 2).

I chose to use a Lambda function for my skill's "backend", rather than using a traditional server. Lambda is marketed as server-less compute, meaning code is only spun up and executed upon a request, instead of a server always being "on" and listening for requests.

Why Lambda?

There are 3 main advantages to this approach:

  1. I didn't have to fiddle around with server administration and setting up firewalls, libraries, etc. Lambda allows you to just start coding in your chosen language environment (Python in this case).

  2. This is a simple app that doesn't require much compute power on an ongoing basis; Lambda was perfect for the short, infrequent tasks I wanted to perform.

  3. Deploying code changes was faster - I could just edit code in the online editor (or upload a new zip file with my code) and it would be available upon saving. There was no need to set up Ansible or another code deployment pipeline, or to SSH into the server and git pull, etc.

However, there were also a couple of drawbacks, namely:

  1. Slower initialization times - every time a new "session" is started on Lambda, there is a slight lag before the request is served. I found it to be around 500ms, so not too terrible but definitely noticeable. Lambda functions appear to "hang around" for a few seconds at a time to process other related requests from the same user, so there was less of a lag for subsequent requests.

  2. Logging was more complicated - since Lambda functions do not have a persistent filesystem to write logs to, all log messages are sent to CloudWatch. This required some configuration to get started, and there is a delay of a few seconds after each request before logs appear, but once set up it went smoothly. Check that your Lambda function has an IAM role that allows writing to CloudWatch; without it, your logs will not appear. A minimal logging setup is sketched below.
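
For reference, here is a minimal sketch of logging from a Python Lambda function, assuming the IAM role above is in place. Anything printed or sent through the standard logging module ends up in the function's CloudWatch log group:


import logging

# The Lambda Python runtime pre-configures the root logger, so setting a
# level and logging is enough for messages to reach CloudWatch
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("Received event %s", event)
    return {'ok': True}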

Code for your Lambda function

Your Lambda function needs an entry point, or "handler", which is called whenever it receives a request. By default, this is set to lambda_function.lambda_handler, although it can be changed in the configuration section of the Lambda web console.

What this means is that your Lambda function expects a file called lambda_function.py containing a function called lambda_handler. This is the place to handle routing of the intents we discussed when configuring your Alexa skill (part 2). Note that the handler always receives an event and a context object when it is invoked.

For example:


# lambda_function.py

def lambda_handler(event, context):
    print("Received event %s" % event)

    # Check your lambda function is only being called by your Alexa skill
    if event['session']['application']['applicationId'] != "my-alexa-skill-id":
        raise ValueError("Invalid Application ID")

    if event['request']['type'] == "IntentRequest":
        intent = event['request']['intent']
        intent_name = intent['name']

        # Dispatch to your skill's intent handlers
        if intent_name == "IntentA":
            pass  # ... do something

        elif intent_name == "IntentB":
            pass  # ... do something else

    # Your lambda function can also serve HTML if accessed via an API Gateway
    return {
        'statusCode': 200,
        'body':       "<html><h1>Welcome</h1></html>",
        'headers':    {'Content-Type': 'text/html'}
    }

Notice the code blocks above relating to intents. These are specific to Alexa skills, and route any voice commands from Alexa into the rest of your app.

This lambda_function.py file can either be:

  • A single file - which you can edit online within the web console.

  • One file alongside other files and folders. In this case you can zip up the folder and upload it in the web console. This is your only option if you use Python packages that are not part of the standard library.

Creating a deployment package

Assuming you are using other Python packages and need to upload a zip file rather than a single file, you will need to create a deployment package for your Lambda function.

To do this, you must:

  1. On your local machine, install any 3rd-party Python packages into your base folder, e.g. pip install jinja2 -t /path/to/my/base/folder.

  2. The base folder is where you should keep your lambda_function.py file. All other code can be in the same folder or in subfolders, and called using standard imports like from mypackage import myfunction.

  3. Once your code is ready, just zip up this entire base folder (and any subfolders) into a single zip file and upload it via the web console. A short sketch of the resulting imports is shown below.
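
As a quick illustration, assuming jinja2 was installed into the base folder as in step 1 and your own code lives in a mypackage subfolder as in step 2, lambda_function.py can then use standard imports:


# lambda_function.py, at the top level of the base folder

from jinja2 import Template        # 3rd-party package installed with pip -t
from mypackage import myfunction   # your own code in a subfolder

def lambda_handler(event, context):
    # Render a simple template using the bundled 3rd-party package
    greeting = Template("Hello {{ name }}").render(name="Alexa")
    return {'statusCode': 200, 'body': greeting}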

Note that Boto3 is already included by default in all Lambda functions, so just import it - there is no need to include it in your zip file. Boto3 is the Python package for interacting with AWS services like DynamoDB.
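
For example, here is a minimal, hypothetical sketch of using Boto3 from your function to write to a DynamoDB table (the table name my-skill-table is made up for illustration):


import boto3

# Boto3 ships with the Lambda runtime, so nothing extra needs bundling
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('my-skill-table')   # hypothetical table name

def save_response(user_id, speech_text):
    # Store something about the request, keyed by user id
    table.put_item(Item={'user_id': user_id, 'speech': speech_text})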

Calling your Lambda function

Once your code has been uploaded into your Lambda function, you can configure other services to make requests to it. These are known as "triggers" for your Lambda function, and are set in the web console.

Triggers could be:

  • An Alexa Skill - this is the trigger for the first Lambda function in my server-less app, so that Alexa voice commands invoke it.

  • An API Gateway - this is used for the second Lambda function (the one handling authentication), so that the function is accessible via a standard URL.

Once your triggers are set up, your Lambda function is invoked with a number of environment variables set automatically. You can include something like this in your code to see the full list:


import os

print(os.environ)

The most useful ones, though, are AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION - these are picked up by Boto3 if you use other AWS services like DynamoDB, S3, etc. in your Lambda function.
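
In practice you rarely need to read these yourself, since Boto3 picks them up automatically, but as a small sketch you could pass the region explicitly when creating a client:


import os
import boto3

# Boto3 reads the credentials and region from these environment variables
# automatically, so passing region_name here is optional
s3 = boto3.client('s3', region_name=os.environ.get('AWS_DEFAULT_REGION'))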

Another issue to be aware of is that Lambda does not currently handle x-www-form-urlencoded POST params (this appears to be a known issue at the time of writing). These POST params are passed in event['body'] as a URL-encoded string, e.g. "email=myemail%40gmail.com&otherparam=23".

To parse these, I usually include a small helper function which parses the string into a standard Python dictionary.


from urllib.parse import unquote_plus

def parse_post(event):
    post = {}

    body = event.get('body', None)
    if not body:
        return post

    # Split into key=value pairs first, then decode each part, so that
    # encoded '&' or '=' characters inside values are handled correctly
    for key_pair in body.split("&"):
        pairs = key_pair.split("=", 1)
        if len(pairs) == 2:
            post[unquote_plus(pairs[0])] = unquote_plus(pairs[1])

    return post
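
For example, inside the handler for the API Gateway-triggered function, it can be used like this (the field names follow the example body above):


def lambda_handler(event, context):
    post = parse_post(event)
    email = post.get('email')        # "myemail@gmail.com" from the example above
    other = post.get('otherparam')   # "23"
    # ... continue with your authentication logic
    return {'statusCode': 200, 'body': 'ok'}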

Your Lambda function can send requests to other services, store data in a database, and do anything else you like (just like a normal server). Now, with your Lambda function receiving requests from your Alexa skill, you have everything you need to use voice commands in your app. If you have been following part 1 and part 2 of this series, just follow the submission guidelines to tidy up your Alexa skill, and it will be available to all Alexa users.

Good luck!
