Chatbots have become so popular, and the advantages they offer are so broad, that many companies have started using them in the last few years, reducing the need for a full-service call center (in some cases replacing the call center entirely). Their capabilities are endless, but to name some popular uses, you can get support, order a pizza, book a flight, query your schedule, and much more…

Among the advantages chatbots offer, here are the main ones:

  • Automated responses: They can be programmed to give an automated response to the user, based on the user's input (voice or text) and the data stored in the system.
  • Reduction of call center costs: Some questions and their responses can be fully automated, so there is no need for a person to answer them on the other side.
  • Queries and transactions: Chatbots offer not only the possibility of asking questions, but also of making transactions (e.g. placing an order, booking a flight, setting up a calendar event, etc.).
  • Direct interaction with APIs: The bot's logic can be programmed to interact directly with our backend API or a third-party API or service. This gives bots a lot of power to do great things.
  • Text and voice: Interaction with a chatbot is not limited to text. Bots can also be developed to accept voice input, which allows us to create full virtual assistants.

One of the platforms that allows us to develop chatbots is Amazon Lex. Amazon Lex uses the same engine as Alexa, and it provides a very straightforward way to develop a chatbot in a few easy steps, leaving us only the logic that interacts with our backend or third-party APIs to develop. This means that you won't have to deal with text or speech recognition yourself; Amazon Lex does it for you, which greatly reduces development time and costs. Amazon Lex also offers an SDK that allows us to deploy the bot in a mobile app (iOS or Android).

The costs associated with this service vary depending on the number of requests made to the bot. There is a free tier that allows a maximum of 10,000 text requests and 5,000 voice requests during the first year.

Chatbot structure in AWS Lex

Chatbots in Lex follow a structure that allows us to create them in a few easy steps. The structure is as follows:

Amazon Lex bot structure

Here’s a brief explanation of a chatbot’s components:

Intents: These are the main components of a bot. They manage the different tasks that can be done with it (e.g. booking a flight) and can be configured separately. Each intent is independent from the others.

Utterances: Here you set the text expressions that will trigger the intents. Since Amazon Lex uses a deep learning engine to recognize these expressions, the utterances don't have to match exactly the text you expect the user to type or say. An intent can have multiple utterances, and setting several similar expressions increases the chances of the intent being triggered. Utterances can also contain slots, which are variable values given by the user when asking the chatbot something. For example, if you expect the user to type questions similar to "when is Mike's birthday?", "Mike" is a variable value, so you can configure it as a slot in the utterance by indicating it with curly braces: "when is {name} birthday?". We'll talk more about this further in this article.

Confirmation prompt: (optional) A question asked by the bot to confirm the intention of the user. If the user confirms the intention, the intent is fulfilled; otherwise the bot shows the user a custom message.

Fulfillment: Here’s where the magic happens. After Amazon Lex processes the input of the user, the data gathered will be passed to your AWS Lambda function, where you can develop the business logic to fulfill the user’s request.

Response: After the request is fulfilled, the user can receive a response about the status of their request (either fulfilled or failed). This response can be configured in the Amazon Lex console in case you want a fixed response depending on the status, or you can develop a more customized real-time response in the AWS Lambda function that handles the fulfillment.

Session attributes: These are attributes set by the application that hosts the chatbot, in order to establish some context for it. For example, if the user of the chatbot logs into the system, we may want to tell the chatbot the ID of the current user. Session attributes last until the session expires or until the attribute is erased.

Request attributes: Unlike session attributes, request attributes last only until the current request completes.
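To make the distinction concrete, here is a hand-made sketch of the relevant parts of the event a Lex bot passes to its Lambda function. The field names follow the Lex V1 event format, but the values (user ID, channel) are invented purely for illustration:

```javascript
//Hypothetical event fragment: sessionAttributes persist across requests in the
//same session, while requestAttributes apply to this single request only.
var sampleEvent = {
  sessionAttributes: { userId: "42" },    //e.g. set once when the user logs in
  requestAttributes: { channel: "web" },  //discarded after this request
  currentIntent: { name: "Birthday", slots: { name: "Mike" } }
};

//The hosting application reads and writes these; the Lambda function just receives them.
console.log(sampleEvent.sessionAttributes.userId + " via " + sampleEvent.requestAttributes.channel); //prints "42 via web"
```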

Developing the Lambda function

If we need to interact with our backend or a third-party service in order to handle the chatbot's requests, we have to develop an AWS Lambda function that processes the bot's input and makes the requests to the API or service.

For those who haven't worked with AWS Lambda yet: it is an AWS service that runs small pieces of code on a serverless infrastructure, which allows us to write our code and deploy it very fast, since we don't have to worry about setting up servers and all that hassle. This is ideal if you intend to have an architecture based on microservices, for instance. The languages supported by AWS Lambda are Node.js, Java, Go, C# and Python, and the code can be deployed accompanied by any third-party or in-house libraries you may need.

The code to be included in the Lambda function can be a single file (index) or multiple files compressed into a .zip file. Libraries must be included in the .zip file for them to be available to the AWS Lambda function.

The entry point to the Lambda function has to have the following format:

exports.handler = (event, context, callback) => {
    //Do stuff here (call your API, connect with external services, etc.)
    var response = ...; //build the response object for the bot here
    callback(null, response);
};

As you can see, it is very simple. Let’s see some of the parameters the handler function receives:

  • event: this object contains all the info related to the event, such as session attributes, request attributes, the name of the intent being fulfilled, the utterance that triggered the intent, and the values of the slots in the utterance, among other details.
  • context: this parameter contains some info about the Lambda function itself; it is not very useful for our purpose.
  • callback: a function that will receive our response. The response is what the bot will receive, and it may also contain a message for the user.
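The parameters above can be exercised without any AWS infrastructure by calling the handler directly. The sketch below uses a hand-made stand-in for a real Lex event (the field names follow the Lex V1 format, the values are made up), and the callback's second argument is deliberately simplified; in a real bot it must be a full Lex response object like the one described next:

```javascript
//A self-contained sketch of the handler contract: the callback's second
//argument becomes the response Amazon Lex receives.
var handler = (event, context, callback) => {
  //event carries the intent name, the slot values and the session attributes
  var who = event.currentIntent.slots.name;
  callback(null, { echoedSlot: who }); //simplified; a real bot returns a Lex response object
};

var lastResult = null;
handler(
  { currentIntent: { name: "Birthday", slots: { name: "Mike" } }, sessionAttributes: {} },
  {}, //context: unused in this sketch
  function (error, result) { lastResult = result; }
);
console.log(lastResult.echoedSlot); //prints "Mike"
```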


Responses are used to tell the user the result of the intent, whether it succeeded (fulfilled) or failed. All responses must be a JSON object with the following structure:

{
    "sessionAttributes": { },
    "dialogAction": {
        "type": "Close",
        "fulfillmentState": "Fulfilled",
        "message": {
            "contentType": "PlainText",
            "content": "Your response message here"
        }
    }
}
Depending on the type of the bot you are developing, you may want to adjust the elements of the response according to your needs. As this topic is very wide, we will explain only the most common types of responses and their components.

The most common response is the one that won't expect any other interaction with the user, so the intent's task finishes after the response is sent. These kinds of responses have the dialogAction.type attribute set to "Close" (see below).

sessionAttributes object

In case you need to set new values for session attributes, you have to send them back within the response. That is done through the sessionAttributes element, which contains a set of key-value pairs.
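As a sketch of this round trip, the function below builds a "Close" response that carries an updated session attribute back to the bot. The attribute name lastPersonAsked is made up purely for illustration:

```javascript
//Hedged sketch: returning an updated session attribute inside a "Close" response.
//'lastPersonAsked' is a hypothetical attribute name, not part of the Lex API.
function closeKeepingContext(sessionAttributes, message) {
  sessionAttributes = sessionAttributes || {};
  sessionAttributes.lastPersonAsked = "Mike"; //new value, kept for the rest of the session
  return {
    sessionAttributes: sessionAttributes,
    dialogAction: {
      type: "Close",
      fulfillmentState: "Fulfilled",
      message: { contentType: "PlainText", content: message }
    }
  };
}

var response = closeKeepingContext({ userId: "42" }, "Done!");
console.log(response.sessionAttributes.lastPersonAsked); //prints "Mike"
```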

dialogAction object

This object contains details about the response and its type, and indicates to the bot the next steps to perform (if any). For our purpose of a single-call, single-response intent, it can contain the following attributes:

  • type: Tells the bot the type of response and the actions to be performed by the bot (if any). The possible values are “Close”, “ConfirmIntent”, “Delegate”, “ElicitIntent”, “ElicitSlot”. We will cover only the “Close” type of response in this post, leaving the others for a future one.
  • fulfillmentState: Tells the bot whether the request has been fulfilled (“Fulfilled”) or that the bot has failed to fulfill the request (“Failed”).
  • message: Contains the info of the message to be shown or played to the user. In order to give the user a response, the following attributes must be set:
    • contentType: The type of the message to be sent to the user. Valid values are "PlainText", "SSML" (Speech Synthesis Markup Language, for responses to be converted to speech) and "CustomPayload" (a custom format)
    • content: The text to be shown to the user, converted to speech, or processed according to the custom format defined

Creating a chatbot with Amazon Lex

Now that we know the basics of Amazon Lex and how to create an AWS Lambda function that handles the chatbot requests, let’s create a sample chatbot:

We will create a chatbot that we can ask for the birth date of anyone at the office. For example, if I ask "When is Mike's birthday?", the bot will answer "January 3rd". We will name our bot "OfficeBot". Let's see how to do this:

1) Create and set up a bot

Open the AWS main console and find the Amazon Lex console

Click on "Create" and select the "Custom bot" option. In the "name" field, type "OfficeBot" and complete all the other fields as you wish. Here's an example of the values you can enter:

Bot creation screen

Now we have to create an intent to handle the user's requests. We will name our intent "Birthday". To do this, click on the "Create Intent" button.

"Create an intent" button

In order for our intent to be triggered, we need to set up some utterances. Our utterances will need a slot, so that the name of the person whose birth date we want to ask about is variable. Here are some examples of utterances we can use to achieve this:

"When is {name} birthday?"

"Can you tell me when {name} birthday is?"

"Would you be so kind as to tell me {name} birthday?"

With this in mind, we proceed to create the utterances. We also have to define a name slot, so Amazon Lex can recognize it as such. The slot will be named "name", for its type we will choose AMAZON.Person, and we will mark the slot as "required". Since the slot is required, the prompt field won't be of much use in this case, but as Amazon Lex requires us to fill in something, let's type "whose birthday?".

2) Create the AWS Lambda function

Now we have to create a Lambda function that handles the requests from the bot. We will develop it in Node.js.

In order to simplify the work, we will use some sample data to simulate access to a database, backend or external service. The sample data will be a set of key-value pairs, in which the keys are office member names and the values are their birth dates. One approach to our solution could be the following:

First we develop our Lambda function entry point:

'use strict'

//Handler function. This is the entry point to our Lambda function
exports.handler = (event, context, callback) => {
    //We obtain the sessionAttributes and the intent name from the event object, received as a parameter.
    var sessionAttributes = event.sessionAttributes;
    var intentName = event.currentIntent.name;

    //In order to use the same Lambda function for several intents, we check against the intent name, which is unique.
    switch (intentName) {
      case "Birthday": //In case we triggered the Birthday intent, we'll execute the following code:
        //We obtain the 'name' slot
        var name = event.currentIntent.slots.name;
        //In case the slot value contains an 's at the end, we take it off.
        name = name.replace("'s", "");
        //Now we get the birth date of the person
        getBirthDate(name, function(error, birthDate) {
          var response = null;
          if (!error) {
            //By default we create a message that states that we didn't find the birthday for the given name.
            var message = "I'm sorry, I couldn't find " + name + "'s birthday.";
            if (birthDate !== null) { //In case we found a birthday, we generate a message with the birthday info
              message = name + "'s birthday is on " + birthDate.toLocaleDateString("en-US", {
                month: "long",
                day: "numeric"
              });
            }
            //We generate a response that has a 'Fulfilled' value for the attribute 'dialogAction.fulfillmentState' and we pass our message string
            response = createFulfilledResponse(sessionAttributes, message);
          } else {
            //In case an error occurred, we pass an error message in a response that has the 'dialogAction.fulfillmentState' attribute set to 'Failed'
            var message = "An error has occurred.";
            response = createFailedResponse(sessionAttributes, message);
          }
          //Finally, we trigger the callback to notify the bot
          callback(null, response);
        });
        break;
    }
};

Now that we have the name of the person whose birthday we want to know, we can create a function that gets their birthday from a data source.

//Function used to get someone's birth date by providing their name.
//The content of this function can be replaced in order for the data to be obtained from an API, database or other service.
function getBirthDate(name, callback) {
  //We will use sample data instead of accessing an API or service, for practical purposes. This code can be reprogrammed in order to change the behavior.
  var memberBirthDates = {
    "juan": new Date(1985, 2, 12),
    "mike": new Date(1992, 4, 23),
    "lisa": new Date(1994, 11, 1),
    "maria": new Date(1986, 10, 24)
  };
  var birthDate = null;
  name = name.toLowerCase(); //As the keys in the set are in lower case, we convert the 'name' parameter to lower case
  if (name in memberBirthDates) {
    //If the name is in the set, we return the corresponding birth date.
    birthDate = memberBirthDates[name];
  }
  callback(0, birthDate); //We return the value in the callback with error code 0.
}

Finally, we create the responses following the format that Amazon Lex expects. Remember we are using the "Close" response type, and depending on the fulfillment we will return a Fulfilled response or a Failed one.

//Function used to generate a response object, with its attribute dialogAction.fulfillmentState set to 'Fulfilled'. It also receives the message string to be shown to the user.
function createFulfilledResponse(sessionAttributes, message) {
  let response = {
    sessionAttributes: sessionAttributes,
    dialogAction: {
      type: "Close",
      fulfillmentState: "Fulfilled",
      message: {
        contentType: "PlainText",
        content: message
      }
    }
  };
  return response;
}

//Function used to generate a response object, with its attribute dialogAction.fulfillmentState set to 'Failed'. It also receives the message string to be shown to the user.
function createFailedResponse(sessionAttributes, message) {
  let response = {
    sessionAttributes: sessionAttributes,
    dialogAction: {
      type: "Close",
      fulfillmentState: "Failed",
      message: {
        contentType: "PlainText",
        content: message
      }
    }
  };
  return response;
}

Once we have completed our Lambda function, we have to save it into Lambda. There are three ways to do this:

  • Copy/paste the code into the Lambda editor. This can be done only if you have just one file.
  • Compress the project folder contents (not the project folder) into a .zip file and upload it to Lambda.
  • Compress the project folder contents into a .zip file, upload it to AWS S3, and then provide the S3 link to the file in the Lambda console.

As our Lambda function is very simple and we can develop everything in just one file, we will just copy and paste it into the editor. However, in practice we would highly recommend that you split the code into a file structure, in order to make it more legible and maintainable. In addition, since you will probably need to include libraries as your Lambda function grows in complexity, copying and pasting code won't be enough, and you will need to compress the files into a .zip file.

3) Configure the bot fulfillment method

We now want our bot to use the Lambda function we just developed to fulfill the user's requests for the Birthday intent, so all we need to do now is tell our intent to fulfill its requests by calling our Lambda function. This can be done in the intent's console:

If we did everything OK, our console should look like this:

4) Build and test the bot

We are almost done! Now we have to build our bot so that it is available to receive queries. To do that, press the "Build" button. If there is an error building the bot, fix it and build again.

Now it's time to test our bot. That can be done using the test console, located at the very right of the bot's screen (you should see "Test Chatbot" at the top right corner). Keeping in mind the utterances you created for the intent, you can try them with different values for the name slot and see what happens. Here are some examples:

How about that? Nice, huh? Try typing your own text and see what happens.

Tips when creating a bot

  1. Use only one Lambda function. If you modularize the code wisely and keep it maintainable, you won't need more than one Lambda function to handle your bot's requests. This has two major advantages: you have only one project folder for the entire bot, with libraries and shared code in one place; and, since Lambda requests can become costly after a certain number, avoiding calls from one Lambda function to another keeps your costs down.
  2. Create an init/test intent to test more complex requests. You may have already noticed that, for example, session attributes cannot be set from the bot console (at least not yet), so a nice and easy way to set these attributes when testing your bot is to have an extra intent that, when called, sets all the session attributes. This is done in the Lambda function, of course. Remember to disable this intent when moving the bot to production.
  3. Test from the AWS Lambda console first. You can also simulate bot requests from the Lambda console. The main advantage is that it allows you to test the Lambda function from the console, where you can visualize errors, logs and any other info you may want to log.
  4. Create a script to compress and upload your project. As you may upload your project many times, it is a good idea to create a script that does all the work for you; having one that compresses your project folder contents and uploads the .zip file to AWS Lambda will save you a lot of time.
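Tip 2 can be sketched as an extra case in the handler's switch over intent names. The intent name InitSession and the seeded attribute values below are made up for illustration; a real init intent would set whatever context your bot needs:

```javascript
//Sketch of tip 2: a hypothetical 'InitSession' intent whose only job is to
//seed session attributes for testing, since the bot console cannot set them.
function buildTestResponse(intentName, sessionAttributes) {
  switch (intentName) {
    case "InitSession": //made-up intent name; disable it in production
      sessionAttributes.userId = "test-user-1"; //fake context for console testing
      return {
        sessionAttributes: sessionAttributes,
        dialogAction: {
          type: "Close",
          fulfillmentState: "Fulfilled",
          message: { contentType: "PlainText", content: "Test session initialized." }
        }
      };
    default:
      return null; //other intents are handled elsewhere
  }
}

var initResponse = buildTestResponse("InitSession", {});
console.log(initResponse.sessionAttributes.userId); //prints "test-user-1"
```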

Wrapping up

Creating chatbots has become extremely easy thanks to technologies that provide machine learning-based engines to process text or voice, leaving us only the task of connecting the bot to our API, database or other external service. Amazon Lex is a good option for this, and developing a chatbot with it is very straightforward.