Let’s compile your AWS Lambda in Java

Frank Afriat
8 min read · Dec 28, 2020

via GraalVM, Docker and Maven

Introduction

If you use Java to write your Lambda code, you have probably noticed that response time can be badly impacted by a cold start of the container behind your Lambda, which includes the startup time of the JVM.

Compiling your Lambda code to native code is a radical way to solve this problem while continuing to benefit from the key AWS serverless component, AWS Lambda. Additionally, compiling to native brings other benefits, like reduced memory use and cost…

Unfortunately, it is not an easy task, and I struggled for several days and nights before succeeding. So I wrote this guide to help you avoid the same pitfalls…

Your current Lambda

Lambda Code

Let’s take a simple code example, using Maven for the build.

  • src/main/java/example/App.java
  • pom.xml
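A minimal App.java could look like the following sketch (the exact body is an assumption; the class and method names match the handler example.App::sayHello configured below):

```java
package example;

// Minimal Lambda handler: the method referenced as example.App::sayHello.
// No AWS SDK types are required for a simple String-returning handler.
public class App {
    public String sayHello() {
        return "Hello World!";
    }
}
```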

This pom.xml is very simple.
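A sketch of what such a pom.xml might contain (the plugin version and compiler settings are assumptions; shadedArtifactAttached makes the build keep the -shaded classifier used below):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>example</groupId>
  <artifactId>hello-lambda-java11</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <build>
    <plugins>
      <!-- Build the uber jar (deployment package) with all .class files -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <configuration>
          <shadedArtifactAttached>true</shadedArtifactAttached>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals><goal>shade</goal></goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```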

The only point is to generate one file (.zip or .jar), called the deployment package, including all dependencies (here there are no dependencies…). One way of doing that is to generate an uber jar (also called a fat jar) including all .class files. This is the purpose of the maven-shade-plugin above.

Note: This is the AWS-recommended way to create a deployment package with Maven, but I have a better way that I will present in another post…

Now, in a terminal, in the same folder as the pom.xml, the command

mvn package

will generate the Lambda deployment package: target/hello-lambda-java11-1.0-SNAPSHOT-shaded.jar

AWS configuration

  • Give a name to your Lambda, for example Hello-Java
  • Choose Java 11 (Corretto) as the Runtime
  • Upload your deployment package (hello-lambda-java11-1.0-SNAPSHOT-shaded.jar) from the console or via S3
  • Define example.App::sayHello as the Handler (where example.App is the fully qualified class name and sayHello is the method name).
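The same configuration can be scripted with the AWS CLI (a sketch; the role ARN is a placeholder you must replace with an existing execution role):

```shell
aws lambda create-function \
  --function-name Hello-Java \
  --runtime java11 \
  --handler example.App::sayHello \
  --memory-size 512 \
  --zip-file fileb://target/hello-lambda-java11-1.0-SNAPSHOT-shaded.jar \
  --role arn:aws:iam::123456789012:role/lambda-basic-execution
```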

Current Performance

Cold Start

Let’s get an overview of a cold start from the console (I waited 10 minutes before calling it):

The log output gives some information about the execution in the REPORT line:

Duration: 20.13 ms — Billed Duration: 21 ms — Memory Size: 512 MB — Max Memory Used: 90 MB — Init Duration: 360.97 ms

In this very simple example, the Init Duration is 361 ms and the execution Duration is 21 ms, so for this call the user will see 382 ms. (Note: the Init Duration is not included in the Billed Duration!)

Hot Start

Let’s try again, to get an overview of a hot start:

Duration: 0.84 ms — Billed Duration: 1 ms — Memory Size: 512 MB — Max Memory Used: 90 MB

  • There is no Init Duration, indicating that this is effectively a hot start
  • In that case, the user will see a total time of only 1 ms! (Note: we should also include network time + de-serialization time...)

Before seeing the solution, I would like to introduce 4 elements:

  • the Lambda execution environment
  • the custom Lambda runtime and the Java RIC
  • GraalVM native-image
  • Docker

that will help you understand better the full picture.

Understanding the Lambda execution environment

Architecture diagram of the execution environment

As you can see in the diagram, there are three distinct parts in the Lambda execution environment:

  • The Lambda Function, executing the business code (your code) and containing the Handler
  • The Lambda Runtime, responsible for providing the Lambda Function with the event to use and for controlling its execution. It consists of an execution loop waiting for a new event and retrieving it from the Runtime API, then posting the return value from the Lambda Function to the Runtime API (see more here).
  • The Runtime API, providing a REST API, mainly GET /runtime/invocation/next and POST /runtime/invocation/<AwsRequestId>/response, respectively for retrieving the AwsRequest and sending the response (see more here).
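The execution loop described above can be sketched in plain Java (an illustration only; the real Java RIC implementation differs, and the handler invocation is stubbed out here):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a custom-runtime event loop talking to the Runtime API.
class RuntimeLoop {
    // Endpoint for blocking until the next event is available.
    static String nextInvocationUrl(String apiHost) {
        return "http://" + apiHost + "/2018-06-01/runtime/invocation/next";
    }

    // Endpoint for posting the handler's return value for a given request.
    static String responseUrl(String apiHost, String requestId) {
        return "http://" + apiHost + "/2018-06-01/runtime/invocation/" + requestId + "/response";
    }

    public static void main(String[] args) throws Exception {
        // The Lambda environment provides host:port of the Runtime API.
        String apiHost = System.getenv("AWS_LAMBDA_RUNTIME_API");
        HttpClient client = HttpClient.newHttpClient();
        while (true) {
            // 1. Retrieve the next event (the call blocks until one arrives).
            HttpRequest next = HttpRequest.newBuilder(URI.create(nextInvocationUrl(apiHost)))
                    .GET().build();
            HttpResponse<String> event = client.send(next, HttpResponse.BodyHandlers.ofString());
            String requestId = event.headers()
                    .firstValue("Lambda-Runtime-Aws-Request-Id").orElseThrow();
            // 2. Invoke the handler (stubbed: a fixed JSON string).
            String result = "\"Hello World!\"";
            // 3. Post the return value back to the Runtime API.
            HttpRequest post = HttpRequest.newBuilder(URI.create(responseUrl(apiHost, requestId)))
                    .POST(HttpRequest.BodyPublishers.ofString(result)).build();
            client.send(post, HttpResponse.BodyHandlers.discarding());
        }
    }
}
```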

For testing purposes, AWS provides the AWS Lambda Runtime Interface Emulator (or Lambda RIE), enabling you to run your Lambda function locally (see more here).

The custom Lambda runtime and the Java RIC

In order to execute native code, we need to create a deployment package respecting the custom Lambda runtime specifications:

  • Bundle the native code in a zip containing a file named bootstrap, which will be the entry point.
  • But also bundle our custom Lambda runtime inside the native code!
  • So the native code must be the compilation of our Lambda function + the custom Lambda runtime (for linux x86-64).

In order to ease the implementation of a custom runtime, AWS provides reference implementations in many languages, Java among them:

The Java Runtime Interface Client (RIC), or Java RIC (released under the Apache 2 License, GitHub repository here), may be included in our code by adding its dependency:
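The Maven dependency looks like this (the version shown is an assumption; check the repository for the current one):

```xml
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-lambda-java-runtime-interface-client</artifactId>
  <version>1.0.0</version>
</dependency>
```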

The bootstrap file, which may be an executable shell script, is responsible for handing control to the Java RIC by calling its entry point, the main() method of the class:

com.amazonaws.services.lambda.runtime.api.client.AWSLambda

Note: The Lambda Handler configured in the console is available to the Java RIC via the environment variable _HANDLER.

This will be the content of our function.zip file:

bootstrap : our bootstrap script
func : our native executable

Ideally, the bootstrap script also includes support for running our Lambda locally inside the Lambda RIE.

GraalVM Native-Image

The GraalVM project provides two compatible JDKs: version 8 and version 11.

After installation, it is also possible to install the native-image command line tool, which you can use like the java command to generate executable code.

For example:

native-image -jar hello-lambda-custom-1.0-SNAPSHOT.jar

native-image performs Ahead-Of-Time (AOT) compilation of the Java classes, and compiles only the needed classes, following dependencies starting from the main class.

In order not to skip important classes loaded dynamically at runtime, it is possible to provide configuration files listing the needed classes. If your code includes native code loaded via JNI (Java Native Interface), you will also need to configure another file (more info here).

Note: There is also a very clever way to generate these configuration files during a normal Java execution, by using the provided Java agent.
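For example, the agent can be attached to a regular run of the jar (a sketch; the output directory and jar path are assumptions):

```shell
# Run the application once under the GraalVM JDK with the tracing agent;
# it writes reflect-config.json, jni-config.json, resource-config.json, etc.
# into the given directory, from where they end up inside the jar.
java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
     -jar target/hello-lambda-custom-1.0-SNAPSHOT.jar
```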

In order to keep the native-image command line simple, it is also possible to provide these configuration files for each jar independently, inside the jar in its /META-INF/native-image/ folder (files jni-config.json, reflect-config.json, proxy-config.json and resource-config.json).

This is the approach I took here. As explained before, we need to compile not only the Lambda Function code but also the Java RIC, and compiling this jar is not straightforward because of its use of reflection and JNI…

But I did the hard work and generated these files for you :-) We will include them in the /META-INF/native-image/ folder of our own jar.

Note: I created a pull request on the Java RIC project to include these files directly in its jar (see here). If my pull request is merged, you will be able to remove these files.

Docker

One of the requirements of the custom runtime is to provide the native code for linux x86-64, but the developer machine may be running macOS or Windows… One way to deal with this problem is to use Docker container technology to run linux x86-64 and call the native-image command line inside it.

If you are new to docker, what you need to know is that from a Dockerfile you can build an image, and you can run this image as a container.

With the Dockerfile placed at the root of the project, the command

docker build -t hello-lambda:latest .

This command will build an image tagged with the name hello-lambda:latest.

docker create hello-lambda:latest

This command creates a container (without running it) and returns the created <container_id>, then

docker cp <container_id>:/function/function.zip .

this command copies the file function.zip (our deployment package) from the container to the current directory, then

docker container rm <container_id>

can be used to remove this container.

The Solution with the custom runtime

The new pom.xml (compared to the first one, it mainly adds the Java RIC dependency and bundles the /META-INF/native-image/ configuration files under src/main/resources):

The Dockerfile:
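A Dockerfile along these lines (a sketch; the GraalVM image tag, the yum packages and the RIE download are assumptions and may need adjusting):

```dockerfile
# Build stage: a linux/x86-64 GraalVM image with native-image installed
FROM oracle/graalvm-ce:20.3.0-java11
RUN gu install native-image && yum install -y zip
WORKDIR /function

# Compile the uber jar (function + Java RIC) to a native executable "func"
COPY target/hello-lambda-custom-1.0-SNAPSHOT.jar .
RUN native-image --no-fallback -jar hello-lambda-custom-1.0-SNAPSHOT.jar func

# Package the custom-runtime deployment package /function/function.zip
COPY bootstrap .
RUN chmod +x bootstrap func && zip function.zip bootstrap func

# Include the Runtime Interface Emulator so the image can also run locally
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/local/bin/aws-lambda-rie
RUN chmod +x /usr/local/bin/aws-lambda-rie
ENTRYPOINT ["/function/bootstrap"]
```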

My bootstrap script:
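A bootstrap script in this spirit (a sketch; the aws-lambda-rie path is an assumption):

```shell
#!/bin/sh
set -e
# Directory of this script: /var/task when deployed as a zip,
# /function inside the Docker image.
DIR="$(cd "$(dirname "$0")" && pwd)"
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
  # No Runtime API address: we are running locally, wrap the function
  # with the Runtime Interface Emulator.
  exec /usr/local/bin/aws-lambda-rie "$DIR/func"
else
  # Inside the real Lambda environment: hand control to the native binary.
  exec "$DIR/func"
fi
```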

Just (install Docker and) run the docker commands as explained in the Docker section to build and extract the function.zip file, then upload it:

  • choose the custom runtime on Amazon Linux 2 (AL2)
  • set the Handler to: example.App::sayHello

Let’s test the performance

Cold Start

Duration: 38.56 ms — Billed Duration: 205 ms — Memory Size: 128 MB — Max Memory Used: 54 MB — Init Duration: 166.30 ms

In this very simple example, the Init Duration is 166 ms and the execution Duration is 39 ms, so for this call the user will see 205 ms vs 382 ms without compilation: almost half the time! (Note: this time the Init Duration is included in the Billed Duration, which makes it more expensive in our example.)

Hot Start

Duration: 0.72 ms — Billed Duration: 1 ms — Memory Size: 128 MB — Max Memory Used: 54 MB

Bonus: Run the Lambda inside a container

AWS recently introduced the possibility to run a Lambda from a Docker container image (see more here).

The Dockerfile provided previously generates an image hello-lambda:latest that can be used directly to run the Lambda inside a container.

You may use AWS Elastic Container Registry (ECR) as your Docker image repository and push your image hello-lambda:latest there, then you can configure your Lambda to access it.

Cold Start: Duration: 1.86 ms — Billed Duration: 1336 ms — Memory Size: 128 MB — Max Memory Used: 46 MB — Init Duration: 1333.19 ms (Note: Init Duration included in Billed Duration)

Hot Start: Duration: 0.82 ms — Billed Duration: 1 ms — Memory Size: 128 MB — Max Memory Used: 46 MB

As we also included the RIE inside the image (available only on linux/x86-64), you will be able to test it locally using the command:

docker run -p 9000:8080 hello-lambda:latest

And from another terminal, you can try this command:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

to test your function! (See more here)

All the code presented here is available on GitHub here.


Frank Afriat

Founder and CTO of Solusoft.Tech, Java Expert & Architect, AWS Serverless happy user and Flutter early adopter.