The Ultimate AWS Lambda Tutorial for Node.js

April 2020

What is AWS Lambda and why should you use it?

AWS Lambda is a technology for running code in the cloud without managing servers yourself, which is why it's called a "serverless" technology. Instead of provisioning a server, you just hand AWS Lambda your code and it runs it whenever necessary. It's auto-scaling, meaning it should cost you exactly as much as needed: if your code is triggered 0 times, it should cost you $0. The code you're giving to AWS Lambda consists of functions. Each function can take some inputs, for example about how it was triggered, and provide an output to the caller. It's called "Lambda" because of the concept of lambda functions (aka anonymous functions) in programming.

So why and when do we need AWS Lambda? There are actually a couple of good reasons to choose AWS Lambda:

In this tutorial we'll get some orientation on where Lambda sits in the AWS jungle, learn how to build our first function, and learn about the next steps, like installing node modules, developing locally, and testing.

An orientation in the AWS jungle

First, it helps to see where AWS Lambda sits in the larger AWS ecosystem. These are the major categories of services that AWS offers:

So... pretty much everything that you could potentially do in a cloud. It's obvious that in this jungle of services it's not always easy to find what you need. In order to use any of these services, you'll first need an AWS account, so go ahead and create one if you haven't done so yet.

As you probably would have guessed, AWS Lambda is part of the compute family. The compute family isn't actually too extensive: EC2, Lightsail, Lambda, Batch, Elastic Beanstalk, Serverless Application Repository, AWS Outposts, EC2 Image Builder.

In case you're building a function that can be triggered by calling a URL, like we'll do in this tutorial, you'll also need another service from the "Networking & Content Delivery" family. And that service would be "AWS API Gateway". We'll learn about API Gateway in the process of getting to know AWS Lambda.

Also take a look at the icon for AWS Lambda, which is just the Greek letter lambda: λ. It might help you recognize the service faster. So now that we have some sort of compass for the overall AWS ecosystem, let's dive a little deeper into AWS Lambda.

Getting started - Create your first function

The first step is to go to the AWS Lambda page, either by the "Find Service" Search on the AWS dashboard, or by scrolling down a bit and selecting it from the full list of services under compute.

Then simply hit "Create function".

The defaults of the function creation interface are mostly fine; you just need to select a name for your function. Let's call it "tutorial-function".

After clicking "Create function", your function is already being created! You can check whether your function works by creating a test. Click the "Select a test event" dropdown menu and click "Configure test events". Give your test a name, for example test1, and then hit "Create". Now "test1" will be selected in the dropdown menu and you can hit "Test" to run the test. The log output will be printed on the screen:

Cool, everything is working already! But we're missing one part of a real-life scenario, and that is: how are we going to call (also known as trigger) our function? To call/trigger the function we have several options. We could create an HTTP endpoint, which could then be called by a client or server. This is certainly one of the most common use cases. You can also use other triggers, such as when something happens to an S3 bucket, but we'll focus on the HTTP endpoint scenario here.

To call your function through an HTTP request, you'll need to leverage AWS API Gateway. But you don't need to leave the AWS Lambda interface to do so. Simply click on "Add trigger" and you can create an endpoint with API Gateway from there. Select an HTTP API instead of a REST API, since HTTP APIs are simpler and cheaper.

Note: This whole process used to be way more complicated, but AWS has simplified it greatly. What used to take tons of settings and steps to create this API can now be done with two clicks. So be careful when you see an old tutorial on AWS Lambda with API Gateway; it's much simpler now.

Now you can verify that everything worked by just clicking the "API Endpoint" at the bottom of the page.

This should open a new tab and the text "Hello from Lambda!" should appear. Keep that tab open for later. Let's head back to the AWS Lambda function tab and click on "tutorial-function" in the designer. The bottom of the page changes and shows you the function code that is executed when you call the function. Now you know where the text "Hello from Lambda!" is coming from. You can change the body of the function to anything you like, for example change the text to "Hello from Tutorial!" and then hit "Save". If you now hit "Test" or refresh the endpoint, you should see the new result.

Switching to a local development environment

For any changes that go beyond this simple "Hello World" example, let's switch to a better development environment. What I mean by this is to create a folder locally on your machine and start working with your favorite IDE. So let's create the directory first with mkdir aws-lambda-tutorial and switch to it with cd aws-lambda-tutorial. Next, you'll want to pull the function that we've created in the cloud to your local machine. You'll need to create a new user in IAM, if you don't have one already. Then create a new policy that looks like this and attach it to the user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode",
                "lambda:GetFunction"
            ],
            "Resource": "arn:aws:lambda:eu-central-1:*:function:tutorial-function"
        }
    ]
}

AWS will need a minute until the changes are live. You could also give the user full admin access to AWS Lambda (which is a predefined policy), but I usually try to go by the principle of least privilege. Now you can register this user locally with aws configure --profile aws-lambda-tutorial-user and then set your current shell to use that user with export AWS_PROFILE=aws-lambda-tutorial-user. Now you can download your function like this:

aws lambda get-function --function-name tutorial-function --query 'Code.Location' | xargs wget -O tutorial-function.zip

You can extract your function from the zip with unzip tutorial-function.zip -d src. Let's make another small modification to the function and upload it again. For example, nano src/index.js and change the response string to "Hello from Localhost!". Then push the changed function with the following commands:

rm tutorial-function.zip
cd src
zip -r ../tutorial-function.zip .
cd ..
aws lambda update-function-code --function-name tutorial-function --zip-file fileb://tutorial-function.zip

You can check that everything worked by refreshing your browser tab with the endpoint. Now it's time to do something a bit more complicated with your function!

Installing npm packages for your AWS Lambda function

You can install npm packages as you normally would. Let's install the package immutable to illustrate this: npm init, hit enter a few times, then npm install immutable. Now let's change the code of the index.js file to do something with the immutable package. Let's just take the code from their readme:

const {List} = require('immutable');

exports.handler = async (event) => {
  const list1 = List([ 1, 2 ]);
  const list2 = list1.push(3, 4, 5);
  const response = {
    statusCode: 200,
    body: JSON.stringify(list2),
  };
  return response;
};

It's important to notice that the require is outside of the handler. This is a best practice, because this way AWS can reuse the execution context, which means that for subsequent invocations the same resources can be reused. This becomes more relevant for larger static objects. You can think of it as caching to improve performance and reduce costs.

Now you can repeat the steps from before to upload your function. At this point it becomes clear that it's best to have a small bash script for this, since it won't be the last time you do it. For small projects I usually put it in a deploy.sh file and chmod +x deploy.sh it so I can run it with ./deploy.sh. For larger projects you might want to integrate it with your CI/CD. We need some adjustments to the previous script to make it work.

rm -rf ./dist && mkdir dist
cp -r ./src/* ./dist
cp package.json package-lock.json ./dist
cd dist
npm install --only=prod
zip -r ./tutorial-function.zip .
aws lambda update-function-code --function-name tutorial-function --zip-file fileb://tutorial-function.zip
cd ..

Again, you can check your endpoint to see if everything is working. You should now see [1,2,3,4,5] when you go to your endpoint.

So that was all pretty easy so far and we've come quite a long way. We've managed to set up the function and the endpoint, develop and manage the code locally and install node modules. The next thing we're going to do is to write a unit test for our AWS Lambda function.

Unit Testing

Let's use Jest for testing, though basically any framework could be used. Start by installing Jest with npm install --save-dev jest. It's important that you install Jest as a dev dependency, so we don't ship it to production and blow up the size of our Lambda function. Let's create a first test, nano src/index.test.js:

const index = require('./index');

test('has size five', async () => {
  const resp = await index.handler();
  expect(JSON.parse(resp.body).length).toBe(5);
});

What's important to notice is that the test is asynchronous! We need to await the response of the handler. This is due to the async function in index.js. This was just a small example of a test, but I think you get the general idea. Theoretically you should also remove the test from your production distribution, but I don't think it weighs much here.

To be able to run the test, you must modify the package.json to include

"scripts": {
  "test": "jest"
}

You can now run the tests with npm test. We saw that testing our code was a bit more complex than it needed to be, because we had to think about the asynchronicity of the handler. To avoid this, it's a best practice to separate your handler logic from your core logic. In our case, we could rewrite the function like so:

const {List} = require('immutable');

function getList() {
  const list1 = List([ 1, 2 ]);
  const list2 = list1.push(3, 4, 5);
  return list2;
}

exports.handler = async (event) => {

  const myList = getList();

  const response = {
    statusCode: 200,
    body: JSON.stringify(myList),
  };
  return response;
};

exports.getList = getList;
and the test:
const index = require('./index');

test('has size 5', () => {
  const resp = index.getList();
  expect(resp.size).toBe(5); // it's size instead of length, because we're using the immutable library.
});

test('handler responds correctly', async () => {
  const resp = await index.handler();
  expect(JSON.parse(resp.body).length).toBe(5);
});

It might be a bit of eager optimization, but the test sources don't need to be shipped to production, so you can change your deployment script like so:

...
cp -r ./src/* ./dist
cp package.json package-lock.json ./dist
rm ./dist/**/*.test.js
...

This is just following the best practice of not shipping anything unnecessary to production. This is a different mindset from the backend development you might be used to when deploying to a server. There, we don't care how large our libraries are or how big our application gets; it just needs to be loaded once during deployment and then the server keeps on running. This is different for Lambda functions: they have a cold start problem. The larger the Lambda is, the longer it takes to start up. And this is critical, because as opposed to manually provisioned servers that are always up and idling around, the infrastructure for a Lambda function is only provisioned automatically by AWS as soon as someone/something triggers your function. So you need to adopt the mindset of only shipping and including what's absolutely necessary to run your code.

Logging

In order to log, you can simply use console.log. Those logs will then be visible in AWS CloudWatch. CloudWatch is the third service of the AWS jungle we're getting to know today. The name is pretty self-explanatory: its purpose is to watch your cloud. This also includes viewing the logs that your Lambda function is producing. To try this, add a console.log statement to your function code; for example, you could log the incoming event:

exports.handler = async (event) => {
  
  console.log('Event:', JSON.stringify(event));

  ...

Then call your function through your API endpoint and afterwards head over to CloudWatch. Click on the latest log stream. You should then see something like this:

Using environment variables

Using environment variables is dead simple. You simply add them in the AWS GUI under the section "Environment variables". This is useful when you have something that you don't want to commit to your source code, like a key, or something that changes depending on the environment. In Node.js you can then access the environment variables through process.env, for example process.env.mykey.

Managing your Lambda functions

Over time you'll develop more and more Lambda functions. How can you organize them so they won't become a total mess? There's no such thing as folders for AWS Lambda functions, but what you can do is prefix your function names. For example, you could have a tutorial-function-hello and a tutorial-function-goodbye. This way, functions that belong together will also be listed together in the AWS console. In terms of your code, I suggest you keep your functions in a git repository. This helps with organizing your code the way you want. It also helps with sharing common scripts, such as those for deploying functions. Last but not least, this way you can also run your unit tests on your CI/CD pipelines.

Accessing the Input Data: Payload, Path Parameters and Query Parameters

So far our function didn't receive any input data, so it always returned the same meaningless output. Most real-world cases would take some kind of input data. You can read the input data off of the event object.

The full list of accessible parameters can be found in the AWS docs; they are:

{
    "resource": "Resource path",
    "path": "Path parameter",
    "httpMethod": "Incoming request's method name",
    "headers": {String containing incoming request headers},
    "multiValueHeaders": {List of strings containing incoming request headers},
    "queryStringParameters": {query string parameters},
    "multiValueQueryStringParameters": {List of query string parameters},
    "pathParameters": {path parameters},
    "stageVariables": {Applicable stage variables},
    "requestContext": {Request context, including authorizer-returned key-value pairs},
    "body": "A JSON string of the request payload.",
    "isBase64Encoded": "A boolean flag to indicate if the applicable request payload is Base64-encoded"
}

As an example, let's say the request goes to the AWS API Gateway at https://...amazonaws.com/tutorial-function?listid=1234 and the request payload is {"hello": "world"}. Then you can get "1234" through event.queryStringParameters.listid and the payload through event.body. Note that event.body is a JSON string, so you'll need JSON.parse(event.body) to get the actual object.

Summary

Creating, managing and deploying AWS Lambda functions has gotten a lot easier over the years. There really shouldn't be much holding you back from trying it out in one of your projects! We've covered how to create functions, create an HTTP trigger, edit code on your local machine, install node modules, test your code, and deploy your functions. Of course there's more to AWS Lambda functions, like getting to know layers or how to run binaries that aren't part of npm. But we'll save that for another day and a tutorial part II. Until then: have fun with your Lambdas!
