Building Lambda Functions with Terraform


Many of us use Terraform to manage our infrastructure as code.

For AWS users like us, Lambda functions tend to be an important part of our infrastructure and its automation. Deploying - and especially building - Lambda functions with Terraform unfortunately isn’t as straightforward as I’d like. (To be fair: it’s very much debatable whether you should use Terraform for this purpose at all, but I’d like to - and if I didn’t, you wouldn’t get to read this article, so let’s continue.)

I’m going to divide the steps we need to take to deploy Lambda functions into three parts: Build, Compress and Use. Furthermore, I’m going to call this our build pipeline, even though Terraform isn’t really a build tool - you can’t stop me.

Afterwards I’m going to show you two examples of build pipelines:

  1. A simplified version of the pipeline that only consists of the Compress and Use steps
  2. … and a more complex version that includes the actual Build step.


Build

This step can involve a variety of different things, depending on your use case and runtime environment of choice:

  • Installation of dependencies (e.g. Python packages)
  • Running tests
  • Compilation of code (e.g. Go binaries)
  • Setting up configuration files
  • …and more

For our purposes we’re going to assume that it means running a script and continuing only if that script’s return code is 0.
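To preview the mechanism we’ll use later: Terraform’s local-exec provisioner fails the run when its command exits with a non-zero code, so wrapping the script in a null_resource gives us exactly this behavior. A minimal sketch (build.sh stands in for whatever your build script is called):

resource "null_resource" "build" {
  # Fails the terraform run if build.sh returns a non-zero exit code
  provisioner "local-exec" {
    command = "./build.sh"
  }
}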


Compress

AWS Lambda only accepts zip archives that contain the code alongside the Lambda handler for deployments - that’s why we need to produce a zip archive in our build pipeline.
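For the Python runtime this means the handler file has to sit at the root of the archive. The zip for the first example below would look like this:

my_lambda_function.zip
└── handler.py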


Use

Now that we’ve got our deployment package/artifact, we’re going to use it to create our Lambda function.

Example 1 - Simplified Build Pipeline

In our simplified build pipeline we’re going to skip the Build step, which isn’t necessary if you stick to the libraries available in the standard runtime environment.

We’re going to start out with a directory structure that looks like this:

├── code
│   └── my_lambda_function
│       └── handler.py

The following resource represents our Compress step - we’re using the archive_file data source of the archive provider (if you’re using this provider for the first time in your project, you need to run terraform init afterwards to initialize it).

Where you store the compressed zip file (output_path) doesn’t really matter - in any case I’d recommend adding the zip to your .gitignore file, since there is no need to check in both the code and the build artifact.
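For the layout used in this article, a .gitignore entry like this would do:

# Ignore the build artifacts, keep the code
code/*.zip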

data "archive_file" "my_lambda_function" {
  source_dir  = "${path.module}/code/my_lambda_function/"
  output_path = "${path.module}/code/"
  type        = "zip"

Now we can move on to the Use step. In our function definition we’re referencing a role that’s not shown here - replace it with your own (there’s a minimal sketch of such a role at the end of this example). The filename argument points to the output of the data source mentioned above - our compressed build artifact. The source_code_hash parameter references the SHA256 hash of the zip archive and makes sure that the code of the Lambda function only gets updated when the code in the repository (more specifically: the build output) changes.

resource "aws_lambda_function" "my_lambda_function" {
  function_name    = "my_lambda_function"
  handler          = "handler.lambda_handler"
  role             = "${aws_iam_role.my_lambda_function_role.arn}"
  runtime          = "python3.7"
  timeout          = 60
  filename         = "${data.archive_file.my_lambda_function.output_path}"
  source_code_hash = "${data.archive_file.my_lambda_function.output_base64sha256}"

That’s it - you should be able to modify the code and see the changes reflected in AWS after running terraform apply (sometimes it takes a few seconds until you can see the new code in the console).
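Since the role isn’t part of the snippets above, here’s a minimal sketch of what it could look like - the resource names are placeholders, and the managed policy AWSLambdaBasicExecutionRole only grants the function permission to write its logs to CloudWatch:

resource "aws_iam_role" "my_lambda_function_role" {
  name = "my_lambda_function_role"

  # Allow the Lambda service to assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "my_lambda_function_logs" {
  role       = "${aws_iam_role.my_lambda_function_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}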

Example 2 - Complete Version of the Build Pipeline

Our directory structure looks like this - you might be able to see the similarities:

├── code
│   └── my_lambda_function_with_dependencies
│       ├── build.sh
│       ├── handler.py
│       ├── package
│       └── requirements.txt

The build.sh is very simple yet effective: it navigates to the script’s directory and installs all requirements into the package directory.

#!/usr/bin/env bash

# Abort with a non-zero exit code if any command fails
set -e

# Change to the script's directory
cd "$(dirname "$0")"
pip install -r requirements.txt -t package/

The handler.py looks like this - it’s a script that uses the requests library to find out the public IP of the Lambda function (or some proxy in front of it):

# Tell Python to include the package directory
import sys
sys.path.insert(0, 'package/')

import requests

def lambda_handler(event, context):

    # The URL was lost in the original - ipify is a public service
    # that returns its caller's IP as {"ip": "..."}
    my_ip = requests.get("https://api.ipify.org?format=json").json()

    return {"Public Ip": my_ip["ip"]}

Let’s move to our build pipeline:

Our Build step is a null_resource that executes the build.sh whenever one of the following files changes (see the triggers section):

  • handler.py
  • requirements.txt
  • build.sh

resource "null_resource" "my_lambda_buildstep" {
  triggers {
    handler      = "${base64sha256(file("code/my_lambda_function_with_dependencies/"))}"
    requirements = "${base64sha256(file("code/my_lambda_function_with_dependencies/requirements.txt"))}"
    build        = "${base64sha256(file("code/my_lambda_function_with_dependencies/"))}"

  provisioner "local-exec" {
    command = "${path.module}/code/my_lambda_function_with_dependencies/"

The Compress step is basically the same as above, with the exception of the depends_on clause: here we wait for the build step to finish before we compress the results.

data "archive_file" "my_lambda_function_with_dependencies" {
  source_dir  = "${path.module}/code/my_lambda_function_with_dependencies/"
  output_path = "${path.module}/code/"
  type        = "zip"

  depends_on = ["null_resource.my_lambda_buildstep"]

Last but not least, we use the resulting build artifact to deploy the Lambda function as described above.

resource "aws_lambda_function" "my_lambda_function_with_dependencies" {
  function_name    = "my_lambda_function_with_dependencies"
  handler          = "handler.lambda_handler"
  role             = "${aws_iam_role.my_lambda_function_role.arn}"
  runtime          = "python3.7"
  timeout          = 60
  filename         = "${data.archive_file.my_lambda_function_with_dependencies.output_path}"
  source_code_hash = "${data.archive_file.my_lambda_function_with_dependencies.output_base64sha256}"

That’s it - you should be able to run terraform apply and see the result in the console after a few seconds.
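If you’d like to verify the deployment from the command line as well, an optional output along these lines (just a convenience, not required) prints the function name after terraform apply:

output "function_name" {
  value = "${aws_lambda_function.my_lambda_function_with_dependencies.function_name}"
}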


This solution is inspired by a discussion on GitHub - thanks to @dkniffin and @pecigonzalo.
