May 01, 2023

Best Practices To Speed Up Your Serverless Applications

Fast performance is crucial for delivering a great user experience! In this article, we'll look at the pitfalls and best practices for optimizing cold starts and handler performance in serverless applications.

Introduction

The serverless deployment paradigm via Functions-as-a-Service (FaaS) allows developers to easily deploy their applications in a scalable and cost-effective way. This convenience and flexibility, however, comes with a set of complexities to be aware of.

In earlier deployment models that used long-running servers, your execution environment was always available so long as your server was up and running. This allowed your applications to immediately respond to incoming requests.

The new serverless paradigm requires us as developers to find ways to ensure our functions become available and respond to requests as quickly as possible.

Performance pitfalls in serverless functions

In a serverless environment, your functions can scale down to zero. This keeps operational costs to a minimum, but it does come with a technical cost. When no instance of your function is available to respond to a request, a new one must be instantiated. This is referred to as a cold start.

Note: For a detailed explanation of what cold starts are and how we have worked on keeping them as short as possible when using Prisma ORM, read our recent article: How We Sped Up Serverless Cold Starts with Prisma by 9x.

Slow cold starts lead to sluggish responses and ultimately degrade your users' experience with your product. This is problem #1.

Along with the cold start problem, the performance of your actual handler function is also extremely important. Serverless applications are typically composed of many small, isolated functions that interact with each other via protocols such as HTTP, event buses, and queues.

This inter-communication between individual functions creates a chain of dependencies on each request. If one of these functions is slow, it slows down the rest of the chain. Because of this, handler performance is problem #2.

Best practices for optimizing performance in FaaS

At Prisma, we have spent the last few months diving into serverless environments and optimizing the way Prisma behaves in them. Along the way we found many best practices that you can employ in your own applications to keep performance as high as possible.

For the rest of this article, we'll take a look at some of the best practices we found.

Host your function in the same region as your database

Any time you host an application or function that needs access to a traditional relational database, you will need to initiate a connection to that database. Establishing that connection takes time and adds latency, and the same is true for every query you execute.

Your goal is to keep that time and latency to an absolute minimum. The best way to do this at the moment is to ensure your application or function is deployed in the same geographical region as your database server.

The shorter the distance your request has to travel to reach the database server, the faster the connection will be established. This is very important to keep in mind when deploying serverless applications, because deploying far from your database increases the time it takes to:

  • Complete a TLS handshake
  • Secure a connection with the database
  • Execute your queries

All of these steps happen during a cold start, so they contribute directly to the impact that using a database with Prisma ORM can have on your application's cold start.

When researching the impact this has on a cold start we, embarrassingly, noticed that we had run the first few tests with a serverless function on AWS Lambda in eu-central-1 and an RDS PostgreSQL instance hosted in us-east-1. We quickly fixed that, and the "after" measurement clearly shows the tremendous impact this can have on your database latency, both for establishing the connection and for every query that is executed:

(Charts: connection and query latency before and after co-locating the function and database in the same region)

Using a database that is not as close as possible to your function will directly increase the duration of your cold start, and will also add the same latency to every query executed later while handling warm requests.
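As a sanity check, you can time these operations yourself. The sketch below is a minimal, hypothetical helper (the `withTiming` name is our own) that wraps any async operation and logs its duration; running it from your deployed function against `$connect` and a trivial query makes a cross-region deployment easy to spot.

```javascript
// Hypothetical helper: wraps any async operation and logs its duration.
// Useful for spotting cross-region latency on connects and queries.
async function withTiming(label, fn) {
  const start = process.hrtime.bigint()
  const result = await fn()
  const ms = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`${label} took ${ms.toFixed(1)} ms`)
  return result
}

// Usage from inside your function (assuming a Prisma client named `prisma`):
// await withTiming('connect', () => prisma.$connect())
// await withTiming('query', () => prisma.$queryRaw`SELECT 1`)
```

Connect times in the single-digit milliseconds usually indicate the function and database share a region; tens or hundreds of milliseconds suggest they do not.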

Run as much code as possible outside the handler

Consider the following serverless function:

// Outside
console.log("Executed when the application starts up!")

export const handler = async (event) => {
  // Inside
  console.log("Not executed when the application starts up.")

  return {
    statusCode: 200,
    body: JSON.stringify({ hello: "world" }),
  }
}

AWS Lambda, in certain situations, allocates much more memory and CPU to the virtual environment during the initial startup of the function's execution environment. Afterwards, during your warmed function's invocations, the memory and CPU available to your function are guaranteed to match the values from your function configuration, which can be less than what was available during initialization.

This knowledge can be used to improve the performance of your function by moving code outside the scope of the handler. This ensures that code outside the handler is executed while the environment has more resources available.

For example, you may be doing something like this in your serverless function:

function fibonacci(n) {
  return n < 1 ? 0 : n <= 2 ? 1 : fibonacci(n - 1) + fibonacci(n - 2)
}

export const handler = async (event) => {
  const fib40 = fibonacci(40)
  return { statusCode: 200, body: String(fib40) }
}

The handler function above calculates the 40th number in the Fibonacci sequence on every invocation. Only once that calculation is complete can your function continue processing the request and return a response.

Moving it to the outside of the handler allows that calculation to be made while the environment has much more resources available, and causes it to only run once rather than on every invocation.

The updated code would look like this:

function fibonacci(n) {
  return n < 1 ? 0 : n <= 2 ? 1 : fibonacci(n - 1) + fibonacci(n - 2)
}

const fib40 = fibonacci(40)

export const handler = async (event) => {
  return { statusCode: 200, body: String(fib40) }
}

Another thing to keep in mind is that AWS Lambda supports top-level await, which allows you to run asynchronous code outside of the handler.

We found that explicitly running Prisma Client's $connect function outside of the handler has a positive impact on your function's performance:

import { PrismaClient } from '@prisma/client'

// Create the database connection outside the handler
const prisma = new PrismaClient()
await prisma.$connect()

export const handler = async (event) => {
  // ...
}

Keep your functions as simple as possible

Serverless functions are meant to be very small, isolated pieces of code. If your function's JavaScript and dependency tree are large, complex, or spread across many files, the runtime will take longer to read and interpret your code.

The following are some things you can do to improve startup performance:

  • Only include the code your function actually needs to do its job
  • Don't use libraries and frameworks that pull in large amounts of code you don't need

The general sentiment here is: the less code there is to interpret and the simpler the dependency tree, the quicker the request will be processed.
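One practical way to apply both points is to bundle and tree-shake your function before deploying, so only the code you actually use ships. Below is a sketch using esbuild; the file paths and Node version are placeholders for your own setup.

```shell
# Bundle the handler into a single minified file for Node.js.
# Tree-shaking drops unused exports from your dependencies.
npx esbuild src/handler.js \
  --bundle \
  --minify \
  --platform=node \
  --target=node18 \
  --outfile=dist/handler.js
```

The single output file also avoids the filesystem reads the runtime would otherwise perform for each module in your dependency tree.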

Don't do more work than is needed

Any calculations of values or costly operations that may be reused on each invocation of the function should be cached as variables outside the scope of the handler. Doing so will allow you to avoid performing those costly operations every time the function is invoked.

Consider a situation where a value stored in your database is fetched that doesn't often change, such as a configurable redirect:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

export const handler = async (event) => {
  const redirect = await prisma.redirect.findUnique({
    select: {
      url: true,
    },
    where: { /* filter */ },
  })

  return {
    statusCode: 301,
    headers: { Location: redirect?.url || "" },
  }
}

While this code will work, the query to find the redirect will run every time the function is invoked. This is not ideal, as it requires a trip to the database to find a value you already fetched during a previous invocation.

A better way to write this is to first check for a cached value outside of the handler. If it is not found, run the query and store the results for next time:

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Create the variable outside the handler so it
// "survives" across function invocations
let redirect;

export const handler = async (event) => {
  if (!redirect) {
    redirect = await prisma.redirect.findUnique({
      select: {
        url: true,
      },
      where: { /* filter */ },
    });
  }

  if (!redirect) {
    return {
      statusCode: 500,
      body: "Redirect not found",
    };
  }

  return {
    statusCode: 301,
    headers: { Location: redirect.url || "" },
  };
};

Now the query will only run the first time your function is invoked. Subsequent invocations in the same execution environment will use the cached value.
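One caveat: the cached value lives as long as the execution environment, so a changed redirect would never be picked up by a warm function. A small time-based expiry addresses that. The sketch below is our own variation on the pattern; `getCached` and `CACHE_TTL_MS` are hypothetical names, not part of any library.

```javascript
// Re-fetch the cached value once it is older than CACHE_TTL_MS.
const CACHE_TTL_MS = 60_000 // one minute

let cached
let cachedAt = 0

async function getCached(fetcher) {
  if (cached === undefined || Date.now() - cachedAt > CACHE_TTL_MS) {
    cached = await fetcher()
    cachedAt = Date.now()
  }
  return cached
}

// Usage inside the handler:
// const redirect = await getCached(() =>
//   prisma.redirect.findUnique({ where: { /* filter */ } }))
```

Pick a TTL that matches how stale your data is allowed to be; for a rarely-changing redirect, even several minutes is usually acceptable.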

Provisioned concurrency

One last thing to consider, if you are using AWS Lambda, is using provisioned concurrency to keep your functions warm.

According to the AWS documentation:

Note: Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond immediately to your function's invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.

This allows you to maintain a specified number of available execution environments that can respond to requests without a cold start.

While this sounds great, there are a few important things to keep in mind:

  • Using provisioned concurrency costs extra money
  • Your application will never scale down to 0

These are important considerations because the added costs may not be worth it for your particular scenario. Before employing this measure, we recommend you take a look at the value it brings to your application and consider whether or not the added costs make sense.
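If you decide the tradeoff is worth it, provisioned concurrency is configured per function version or alias. A sketch using the AWS CLI; the function name and alias here are placeholders for your own:

```shell
# Keep 5 execution environments warm for the "live" alias
# of a function called "my-function".
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier live \
  --provisioned-concurrent-executions 5
```

Requests beyond the provisioned count still work; they simply fall back to on-demand instances, which may cold start.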

Conclusion

In this article we took a look at some of the best practices we suggest for developers building and deploying serverless functions with Prisma ORM. The enhancements and best practices mentioned in this article are not an exhaustive list.

To quickly recap, we suggest you:

  • Host your database as close as possible to your deployed function
  • Run as much code as possible outside of your handler
  • Cache re-usable values and calculation results where possible
  • Keep your function as simple as you can
  • Consider using provisioned concurrency if you are willing to deal with the financial tradeoffs

Thanks for following along, and we hope this information helps!
