1. Adjust memory settings
Lambda cost depends on memory allocation and execution time (and the number of requests). If you need to reduce execution time, you can try increasing memory (and, by extension, CPU) to process work faster. However, increasing a function's memory past a certain point won't improve execution time, as AWS currently offers a maximum of 6 vCPU cores.
If your application is CPU-intensive, increasing the memory makes sense, as it will drastically reduce execution time and save on cost per execution. You can check how much memory your function uses in the AWS Console.
For the best cost or performance optimization, use AWS Lambda Power Tuning or AWS Compute Optimizer.
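To see why more memory does not automatically mean a bigger bill, here is a minimal sketch of the cost arithmetic, assuming the common x86 price of $0.0000166667 per GB-second (check current pricing for your region; the per-request fee is left out):

```python
# Assumed price per GB-second for x86 Lambda in most regions; verify for yours.
GB_SECOND_PRICE = 0.0000166667

def invocation_cost(memory_mb: float, duration_ms: float) -> float:
    """Cost of a single invocation, excluding the per-request fee."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * GB_SECOND_PRICE

# A CPU-bound task that halves its duration when memory (and CPU) doubles
# costs the same per invocation -- while finishing twice as fast:
slow = invocation_cost(memory_mb=1024, duration_ms=1000)
fast = invocation_cost(memory_mb=2048, duration_ms=500)
print(slow, fast)
```

Both invocations consume the same number of GB-seconds, which is exactly the trade-off AWS Lambda Power Tuning explores for you automatically.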
2. Use Savings Plans
Savings Plans allow you to commit to a specific amount of usage for a one- or three-year term in exchange for lower prices – a 15% discount for a 1-year term and a 17% discount for a 3-year term. There is no discount on requests. If you use two or more compute services, the plans are applied in order from highest to lowest discount percentage – so first to EC2 and Fargate.
If your Lambda function usage is constant, you should consider making a commitment with Savings Plans.
AWS Cost Explorer provides recommendations for purchasing a Savings Plan to cover Lambda usage.
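As a quick sketch of the discount figures above (15% for a 1-year term, 17% for a 3-year term; requests are not discounted):

```python
# Discount rates quoted in the text for Lambda Compute Savings Plans.
ONE_YEAR_DISCOUNT = 0.15
THREE_YEAR_DISCOUNT = 0.17

def discounted_compute_cost(monthly_compute_cost: float, discount: float) -> float:
    """Apply a Savings Plan discount to the compute portion of the bill."""
    return monthly_compute_cost * (1 - discount)

# For $100/month of steady Lambda compute usage:
print(round(discounted_compute_cost(100.0, ONE_YEAR_DISCOUNT), 2))
print(round(discounted_compute_cost(100.0, THREE_YEAR_DISCOUNT), 2))
```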
3. Use Step Functions for processing delays
With AWS Step Functions, you can poll for the status of tasks more efficiently. Long polling or waiting increases the cost of Lambda functions, because they sit idle while they wait. Step Functions are state machines with a visual workflow that let you coordinate various activities and tasks, such as calling different Lambda functions. Importantly, you pay for the number of state transitions required to execute your application, not for the execution time of the workflow.
Check the example implementation of a timer task.
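A timer task like this can be sketched in the Amazon States Language: a Wait state delays between polls, so no Lambda function sits idle burning duration. This is a minimal illustrative definition; the function ARN is a placeholder and `$.status` assumes the status-check function returns a `status` field:

```python
import json

# Minimal ASL sketch: Wait -> check status via Lambda -> done or loop back.
timer_state_machine = {
    "StartAt": "Wait30Seconds",
    "States": {
        "Wait30Seconds": {
            "Type": "Wait",
            "Seconds": 30,  # the wait is free of Lambda duration charges
            "Next": "CheckJobStatus",
        },
        "CheckJobStatus": {
            "Type": "Task",
            # Placeholder ARN -- substitute your status-check function.
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:check-status",
            "Next": "IsJobDone",
        },
        "IsJobDone": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "DONE", "Next": "Done"}
            ],
            "Default": "Wait30Seconds",  # not done yet: wait and poll again
        },
        "Done": {"Type": "Succeed"},
    },
}

print(json.dumps(timer_state_machine, indent=2))
```

Each loop iteration costs a few state transitions plus one short Lambda invoke, instead of a Lambda function sleeping for 30 seconds.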
4. Reduce cold starts
Lambda automatically scales the number of workers running your code based on incoming requests. A “cold start” is the first request that a new Lambda worker handles. This request takes longer to process because the Lambda service needs to initialise the worker and the function's modules.
To reduce cold starts, you can:
- Select a compiled language, when possible. The language and frameworks you use play a large role in determining how fast your instances start up. In general, compiled languages start up faster than interpreted ones. For example, Go and Python both initialize faster than C# or Java.
- Reduce dependencies. When a function is initialized, all the dependencies it imports are loaded. Each library you include adds more time.
- Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves cost by reducing function run time.
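The last point can be sketched as follows. The expensive setup runs once per worker during the cold start, and warm invocations reuse it; `create_client` is a stand-in for something like creating an SDK client or a database connection:

```python
# Counter to demonstrate that module-level setup runs only once per worker.
init_count = 0

def create_client():
    """Stand-in for expensive setup (e.g. an SDK client or DB connection)."""
    global init_count
    init_count += 1
    return {"ready": True}

client = create_client()  # runs once, at module import time (cold start)

def handler(event, context):
    # Reuses the module-level client instead of recreating it per invoke.
    return {"client_ready": client["ready"]}

handler({}, None)
handler({}, None)
print(init_count)  # setup ran only once despite two invocations
```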
5. Write efficient code
It may seem obvious: since execution duration is directly proportional to how much you'll pay, it's important to optimize your code's execution time. What may be less obvious is how to do it. You can use Amazon CodeGuru Profiler, which helps improve application performance and reduce costs by pinpointing an application's most expensive lines of code and providing recommendations on how to improve the code to save money.
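CodeGuru Profiler does this against live traffic; for a quick local look at where a handler spends its time, Python's built-in cProfile gives a similar (if cruder) picture. A minimal sketch:

```python
import cProfile
import io
import pstats

def expensive():
    # Deliberately wasteful: rebuilds the same list on every call.
    return sum([i * i for i in range(10_000)])

def handler_body():
    # Stand-in for the work a handler would do per invocation.
    return [expensive() for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
handler_body()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # the most time-consuming calls appear at the top
```

Once the hot spot is identified (here, `expensive`), you know exactly which line to optimize to shave billed milliseconds.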
6. Reduce CloudWatch Logs costs
By default, AWS Lambda stores logs in the CloudWatch service. Depending on the amount of data, this can get costly. For example, in the Ireland region, data ingestion costs $0.57 per GB and storage $0.03 per GB. Always log only the necessary information and set a CloudWatch retention policy for your Lambda log groups.
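A back-of-the-envelope calculation with the eu-west-1 prices quoted above shows why ingestion, not storage, usually dominates, and why trimming log volume matters more than trimming retention:

```python
# Prices from the text for the Ireland region (verify current pricing).
INGEST_PER_GB = 0.57
STORAGE_PER_GB = 0.03

def monthly_log_cost(ingested_gb: float, stored_gb: float) -> float:
    """Rough monthly CloudWatch Logs cost for a set of log groups."""
    return ingested_gb * INGEST_PER_GB + stored_gb * STORAGE_PER_GB

# 50 GB of logs ingested and retained within a month:
print(round(monthly_log_cost(50, 50), 2))
```

Of the total, $28.50 is ingestion and only $1.50 is storage, so reducing what you log (e.g. dropping debug-level output in production) saves far more than a shorter retention policy alone.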
7. Make sure your functions are executed at the right frequency
Several factors can affect how frequently your Lambda function is triggered. For example, if you're using Kinesis as a Lambda function trigger, you can adjust the batch size, which controls the maximum number of records that can be sent to your function with each invoke. A larger batch size often amortizes the invoke overhead across a larger set of records, increasing your throughput.
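The effect on invocation count is simple arithmetic. This sketch assumes a steady stream of records (real Kinesis batching also depends on arrival rate and the batching window):

```python
import math

def invocations_needed(records: int, batch_size: int) -> int:
    """Invocations required to process a given number of records."""
    return math.ceil(records / batch_size)

# Processing 10,000 records:
print(invocations_needed(10_000, 10))   # many small invokes
print(invocations_needed(10_000, 500))  # far fewer, larger invokes
```

Going from a batch size of 10 to 500 cuts 1,000 invocations down to 20, and each invoke's fixed overhead is paid 50 times less often.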
8. Avoid using recursive code
Recursive code is code wherein the function invokes itself until some arbitrary criterion is met. This can lead to an unintended volume of function invocations and escalated costs. If you do deploy such code accidentally, immediately set the function's reserved concurrency to 0 to throttle all invocations to the function while you update the code.
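If a function genuinely must re-invoke itself, a defensive pattern (not an official AWS mechanism, just a sketch) is to carry a depth counter in the event payload so the chain stops after a bound. The direct call below stands in for a hypothetical `lambda.invoke` of the same function:

```python
MAX_DEPTH = 5  # hard bound on how many times the function may chain itself

def handler(event, context):
    depth = event.get("depth", 0)
    if depth >= MAX_DEPTH:
        # Refuse to re-invoke; a real function would also log/alarm here.
        return {"stopped_at": depth}
    # ... do a unit of work, then re-invoke with an incremented counter ...
    return handler({"depth": depth + 1}, context)  # stand-in for lambda.invoke

print(handler({}, None))  # {'stopped_at': 5}
```

Even if the stopping criterion has a bug, the chain can never exceed `MAX_DEPTH` invocations.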
9. Limit data transfer
Data transfer is charged at the standard EC2 rates, so consider what data goes in and out of your Lambda function.
Data transferred between Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon SES, Amazon SQS, Amazon Kinesis, Amazon ECR, Amazon SNS, Amazon EFS, or Amazon SimpleDB and AWS Lambda functions in the same AWS Region is free.
10. Set Lambda timeout
The Lambda timeout is the amount of time that Lambda allows a function to run before stopping it. The maximum allowed value is 15 minutes. You are billed only for the time your function actually runs, with 1 ms billing granularity. So why set anything less than 15 minutes? Imagine a situation where your function calls external services and they happen to be unavailable. If you don't handle timeouts in your application logic, your function will run for the full 15 minutes and you will be billed for it.
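One way to handle this in application logic is to derive the downstream call's timeout from the invocation's remaining time, so a hung dependency fails fast instead of running until the Lambda timeout. `context.get_remaining_time_in_millis()` is part of the real Lambda context object; `FakeContext` below is only for running the sketch locally:

```python
def downstream_timeout_seconds(context, safety_margin_ms=1000):
    """Timeout to pass to a downstream call, leaving a margin so the
    function can still return a clean error before Lambda kills it."""
    remaining_ms = context.get_remaining_time_in_millis()
    return max(0, (remaining_ms - safety_margin_ms) / 1000)

class FakeContext:
    """Local stand-in for the Lambda context object."""
    def get_remaining_time_in_millis(self):
        return 10_000  # pretend 10 seconds remain in this invocation

timeout = downstream_timeout_seconds(FakeContext())
print(timeout)  # 9.0 -- pass this to e.g. requests.get(url, timeout=timeout)
```

With this pattern, an unavailable external service costs you at most the remaining budget of the current invocation, never the full 15 minutes.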