Since late 2020, AWS Lambda has supported container images as a packaging format (see the December 2020 AWS announcement: "AWS Lambda now supports container images as a packaging format"). This blog post walks through deploying a containerized application to AWS Lambda using AWS SAM.
According to Docker, a container image is an executable software package that "includes everything needed to run an application: code, runtime, system tools, system libraries and settings." That's it.
The main benefit to using containers is therefore that you don't need to worry about missing or conflicting dependencies when you deploy your app.
Pros:
- No missing or conflicting dependencies at deploy time; what you test locally is what runs.
- A much larger size limit than zip archives (up to 10 GB per image, versus 250 MB unzipped for a zip package).
- You can reuse your existing Docker tooling and base images.

Cons:
- Cold starts can be somewhat slower than for zip-based functions.
- You have to create and manage an ECR repository for your images.
- The build-and-push cycle is slower than zipping a directory.
Note: I assume you have your AWS credentials stored in your local environment, and that you're on a Unix-like system (Linux or macOS).
You can create a new repository in Amazon ECR using either the AWS console or the AWS CLI. I personally prefer the CLI, since I can record the commands for future reference, so that's what I use here.
Don't worry, it's pretty simple.
For the following, suppose your AWS account ID is 1234567890, your AWS CLI profile is cloud, and your region is us-west-2.

First, you need to log in to ECR:
aws ecr get-login-password --region us-west-2 --profile cloud | docker login --username AWS --password-stdin 1234567890.dkr.ecr.us-west-2.amazonaws.com
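The registry host you log in to follows a fixed pattern built from your account ID and region. A quick sketch, using the placeholder values above:

```shell
# Placeholder values from this post -- substitute your own.
ACCOUNT_ID=1234567890
REGION=us-west-2

# ECR registry hosts always follow the <account>.dkr.ecr.<region>.amazonaws.com pattern.
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
echo "$REGISTRY"
```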
Then, to create a new repository named example-repo, simply type into your terminal
aws ecr create-repository --repository-name example-repo --profile cloud --region us-west-2
This will create a new repository and (important!) return the URI of the newly created repository, among other pieces of information. You'll need that URI later.
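For reference, the response looks something like this (abridged; the field values here just follow the placeholder account and region used in this post):

```json
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-west-2:1234567890:repository/example-repo",
        "registryId": "1234567890",
        "repositoryName": "example-repo",
        "repositoryUri": "1234567890.dkr.ecr.us-west-2.amazonaws.com/example-repo"
    }
}
```

The repositoryUri field is the one you'll need later.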
[TODO: Basic instructions on building Docker containers]
Suppose our file structure looks like this:
|- src/
| |- handler.py
| |- my_module/
| | |- __init__.py
| | |- module_file_a.py
| |- requirements.txt
|- Dockerfile
|- template.yml
Assuming you know how to build a Docker container, then the only question left is how to structure your Dockerfile. For reference, here is a Dockerfile template for python:
FROM public.ecr.aws/lambda/python:3.8
COPY src/ ./
RUN python3.8 -m pip install -r requirements.txt -t .
CMD ["handler.lambda_handler"]
To briefly explain each line: In line 1, I use AWS's Python 3.8 image as my base image, but you don't have to; you're free to use other images as your Dockerfile's base image.
Line 2 simply copies the source code in my src/ directory over to the Docker container. Note: I've assumed your Python code lives inside a directory named src. If not, you'll need to substitute src/ with whatever directory houses your code.
Line 3 tells Docker to install the Python modules listed in your requirements.txt file into the current directory in your container. All it really means is that all your Python modules end up at the same directory level as your source code, so your source code can import them freely. If a Python module is actually a wrapper for some binary, then you'll also need to either copy those binaries into their expected location or, if the base image allows, install them.
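For instance, if any of your requirements compile C extensions during pip install, you could install build tools first. A hypothetical addition (the AWS Python base image is Amazon Linux-based, so yum is available):

```dockerfile
# Hypothetical addition: install build tools in case any of your
# requirements compile C extensions during pip install.
RUN yum install -y gcc gcc-c++ make
RUN python3.8 -m pip install -r requirements.txt -t .
```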
Line 4 is truly important: it states the default argument to your Docker image. In our case, it is "handler.lambda_handler", which, for AWS Lambda, means to use the function lambda_handler in the Python file handler.py.
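For concreteness, a minimal handler.py might look like this (a sketch; the echo-the-event behavior is just for illustration):

```python
import json

def lambda_handler(event, context):
    # Lambda passes the trigger payload as `event`;
    # `context` carries runtime metadata and is unused here.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```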
Note that there is no ENTRYPOINT statement in our Dockerfile: the AWS base image already defines one (the Lambda runtime interface client), and the CMD line above merely supplies its argument.
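Before deploying, you can smoke-test the image locally: AWS's Lambda base images bundle the Runtime Interface Emulator, so the container can be invoked over HTTP. Roughly like this (the image tag example-lambda is arbitrary):

```shell
# Build the image from the project root.
docker build -t example-lambda .

# Run it; the Runtime Interface Emulator listens on port 8080 inside the container.
docker run -p 9000:8080 example-lambda

# In another terminal, send a test event to the emulator's invoke endpoint.
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```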
To tell your SAM template to deploy a container image instead of a zip archive, you'll need to set the function's PackageType to Image and provide some metadata.
Here's an example snippet assuming the file structure provided above:
MyLambda:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Image
  Metadata:
    Dockerfile: Dockerfile
    DockerContext: ./
    DockerTag: python3.8
Deploying our SAM template takes two steps: building, then deploying.
To build, go to your root directory and type
sam build
This will create an .aws-sam directory that'll house the code and the resulting CloudFormation template to be deployed.
Next, to deploy, type
sam deploy --profile cloud --guided
and provide the ECR repo's URI from step 1
when requested.
SAM will upload your image to the indicated ECR repo and will tell the lambda in your SAM template to use that image.
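A tip: the --guided run saves your answers (stack name, region, profile, image repository, and so on) to a samconfig.toml at the project root, so subsequent deploys are just sam deploy. The file looks roughly like this (values illustrative, matching the placeholders in this post):

```toml
version = 0.1
[default.deploy.parameters]
stack_name = "my-stack"
region = "us-west-2"
profile = "cloud"
image_repository = "1234567890.dkr.ecr.us-west-2.amazonaws.com/example-repo"
capabilities = "CAPABILITY_IAM"
```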
Hopefully, you now have enough information to deploy your own container to AWS Lambda using SAM. The deployment is not hard, but it is cumbersome and time-consuming.
Often, companies use CI pipelining tools such as CodePipeline, or those of GitLab and Bitbucket, to automate this process. Part of the reason is simply that this is standard DevOps practice: unit tests and acceptance tests can be automated, which prevents developers from modifying test cases to fit their code. And once the code has been determined to be acceptable, you might as well deploy the new, updated code.
As a bonus, by having GitLab or Bitbucket do the upload for you, it won't take up your internet bandwidth, although it may incur CI costs with GitLab or Bitbucket.