Druva performs more than 5 million backups every day. Handling billions of backup events every year, across a variety of workloads, requires meticulous resource allocation. To sustain this continuous growth, we asked ourselves, “How can we utilize resources in a more cost-efficient manner?”
At Druva Labs, we have been exploring new territory in web application design, including serverless architecture, to optimize the resources we use as part of the Druva Cloud Platform. We’ve developed a powerful, truly serverless search engine built from serverless components that aligns with AWS serverless best practices.
AWS Lambda, a serverless compute service, is one of the core technologies we deploy at Druva as part of our own well-architected framework. Let’s explore this technology and how it benefits our developers here at Druva.
Serverless application development has evolved around stateless, scale-out/scale-in services that can spawn more machines quickly during peak loads. One of the key benefits of serverless application development is cost: metered billing means you pay only for the compute time your application actually uses. This gives a cost advantage over reserved or spot compute instances, where you can end up paying even when your machines are not in use.
Most public cloud providers offer a container service that allows more granular compute capacity with faster boot times. Serverless applications go further: you run a single function and pay only for the compute capacity and time it consumes. Serverless also extends beyond compute to serverless storage, databases, email services, API gateways, messaging, ML services, big data ETL, big data query processing, and the list continues to grow. You can practically develop a web application without running any servers.
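The “run a single function” model can be made concrete with a minimal sketch of a Lambda-style handler. The event field names here are illustrative, not part of any real API contract:

```python
import json

def handler(event, context):
    """A minimal AWS Lambda-style handler: one function, no server process.

    `event` carries the request payload; `context` (unused here) carries
    runtime metadata supplied by the platform.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The platform invokes this function on demand and bills only for the time it runs; there is no process to keep alive between requests.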
AWS Firecracker — the technology behind AWS Lambda
AWS Lambda is powered by Firecracker, an open source virtualization technology built on the Kernel-based Virtual Machine (KVM). KVM allows the Linux kernel to act as a hypervisor that can run virtual machines on host machines. Firecracker launches lightweight micro-virtual machines (microVMs), which combine the security and isolation properties of hardware virtualization with the speed and flexibility of containers. It provides the security and workload isolation of traditional VMs along with the resource efficiency of containers. A few points that make it a very powerful tool:
- High performance: a microVM can be launched in as little as ~125 ms, making it well suited to short-lived application workloads that require near-zero response latency.
- Low overhead: each microVM consumes as little as 5 MB of memory, enabling a single instance to run thousands of secure microVMs.
- Simple guest model: Firecracker guests are presented with a very simple virtualized device model in order to minimize the attack surface: a network device, a block I/O device, a programmable interval timer, the KVM clock, a serial console, and a partial keyboard (just enough to allow the VM to be reset).
Together, these properties improve security, decrease startup time, and increase hardware utilization by running many low-overhead microVMs (each with a single-core CPU and 128 MiB of RAM), supporting a steady mutation rate of 5 microVMs per host core per second.
AWS released the AWS Well-Architected Framework to compare applications against AWS architectural best practices, with advice on how to improve the general aspects of an application. A lens adds questions for a specific technology area, focusing on what differs from the generic advice; the Serverless Application Lens covers serverless applications. It focuses on seven areas when building a serverless application:
- Compute layer
- Data layer
- Messaging and streaming layer
- User management and identity layer
- Edge layer
- Systems monitoring and deployment
- Deployment approaches
Each layer is supported by an AWS serverless component. Below is the list of serverless components our serverless application uses at each layer:
- Compute layer — Lambda, API Gateway
- Data layer — DynamoDB, RDS Aurora Serverless
- Messaging and streaming layer — SNS (Simple Notification Service)
- User management and identity layer — OAuth 2.0 Server on Lambda
- Edge layer — CloudFront
- Systems monitoring and deployment — CloudWatch
- Deployment approaches — CloudFormation
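To make the deployment layer concrete, here is a minimal sketch of a CloudFormation template declaring a Lambda function and an API Gateway REST API, expressed as a Python dict. The resource names, bucket, key, and role ARN are placeholders, not values from our actual stack:

```python
import json

# Minimal CloudFormation template sketch: one Lambda function (compute
# layer) and one API Gateway REST API (its front door). All names and
# ARNs below are illustrative placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SearchFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Handler": "app.handler",
                "Runtime": "python3.9",
                "MemorySize": 256,
                "Timeout": 30,
                "Role": "arn:aws:iam::123456789012:role/lambda-exec",
                "Code": {"S3Bucket": "my-artifacts", "S3Key": "search.zip"},
            },
        },
        "SearchApi": {
            "Type": "AWS::ApiGateway::RestApi",
            "Properties": {"Name": "search-api"},
        },
    },
}

print(json.dumps(template, indent=2))
```

Keeping the whole stack in a template like this is what makes the deployment approach repeatable: the same definition produces the same environment every time.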
Dealing with serverless limitations
A serverless web application architecture has limitations like any other. We found that the compute layer requires the most attention when working with serverless components, since it holds critical business logic and drives total response time, boot time, cost, monitoring parameters, and more. Using Lambda at the compute layer exposed several hard limits: deployment package size, memory size, code size, payload size (request and response), request timeout, and concurrent executions. Keep these in mind while working on a serverless architecture, especially while designing the compute layer.
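A simple pre-flight check can catch some of these limits before an invocation fails. The numeric values below reflect published AWS quotas at the time of writing (6 MB synchronous payload, 900 s maximum timeout); treat them as assumptions and confirm against current AWS documentation:

```python
# Hedged sketch of a pre-flight check against two hard Lambda limits.
# Quota values are assumptions based on published AWS limits.
MAX_SYNC_PAYLOAD_BYTES = 6 * 1024 * 1024  # request/response limit for sync invokes
MAX_TIMEOUT_SECONDS = 900                 # 15-minute execution cap

def validate_invocation(payload: bytes, timeout_s: int) -> list:
    """Return a list of limit violations; an empty list means safe to invoke."""
    problems = []
    if len(payload) > MAX_SYNC_PAYLOAD_BYTES:
        problems.append("payload exceeds 6 MB synchronous invoke limit")
    if timeout_s > MAX_TIMEOUT_SECONDS:
        problems.append("timeout exceeds the 900 s maximum")
    return problems
```

Failing fast on the caller’s side is cheaper than discovering a limit through a rejected invocation in production.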
When and how to use AWS Lambda
AWS Lambda is best suited for applications with short processing times, moderate memory requirements (Lambda has a 3 GB limit), and a code base that is not too large (it works with less than 250 MB). The AWS Lambda free usage tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month, which suits small Lambda functions with short processing times. Below are a few points to take into account to get the maximum benefit while working within these limitations.
Code size: Keep it moderate, because a large code base adds to the boot time of Lambda containers, known as the cold start. The larger the code size, the longer the cold start, which adds seconds to your billing cycle every time a container is started.
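One related pattern worth knowing: work done at module import time is paid once per cold start, then reused while the container stays warm. This sketch uses a counter and a hypothetical `load_model()` initializer to stand in for loading SDK clients or configuration:

```python
# Warm-start pattern sketch: expensive setup runs once at module import
# (the cold start) and is reused across invocations of the same container.
INIT_COUNT = 0

def load_model():
    """Hypothetical expensive initializer (SDK clients, configs, models)."""
    global INIT_COUNT
    INIT_COUNT += 1
    return {"ready": True}

MODEL = load_model()  # runs once per container, not once per request

def handler(event, context):
    # Per-invocation work reuses MODEL instead of rebuilding it.
    return {"ready": MODEL["ready"], "inits": INIT_COUNT}
```

Invoking the handler repeatedly in the same container leaves the init count at one, which is exactly the cost profile you want for heavyweight setup.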
AWS Lambda runtime: Choose your runtime wisely. Lambda supports a range of runtimes, including Node.js, Python, Ruby, Go, Java, and .NET. We have observed that Go and Python are the best among these options because they have shorter cold start times.
Memory size: The memory size allocated to a Lambda function plays a major role in keeping your costs down. Overestimating the function’s memory size can inflate your bill.
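The effect of memory size on cost is easy to estimate with back-of-the-envelope arithmetic. The unit prices below are the published US rates at the time of writing; treat them as assumptions and check the current AWS pricing page:

```python
# Rough Lambda cost model: compute is billed in GB-seconds plus a small
# per-request fee. Prices below are assumptions based on published rates.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.20 / 1_000_000

def monthly_cost(memory_mb: int, avg_duration_s: float, invocations: int) -> float:
    """Estimate the monthly bill for one function, ignoring the free tier."""
    gb_seconds = (memory_mb / 1024) * avg_duration_s * invocations
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST
```

Comparing `monthly_cost(1024, 0.5, 1_000_000)` with `monthly_cost(2048, 0.5, 1_000_000)` shows why over-allocating memory roughly doubles the compute portion of the bill.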
If you need to run multiple AWS Lambda functions with the output of one step acting as the input to the next, AWS Step Functions is the way to go.
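Such a chain is declared in Amazon States Language. Here is a minimal sketch, expressed as a Python dict, chaining two Lambda tasks so that the output of the first becomes the input of the second; the function ARNs are placeholders:

```python
import json

# Minimal Amazon States Language sketch: two chained Lambda Task states.
# The output of PrepareData is passed as the input of RunSearch.
state_machine = {
    "Comment": "Chain two Lambda functions",
    "StartAt": "PrepareData",
    "States": {
        "PrepareData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:PrepareData",
            "Next": "RunSearch",
        },
        "RunSearch": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:RunSearch",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```

Step Functions also handles retries and error branching between steps, which otherwise ends up as fragile glue code inside the Lambda functions themselves.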
Welcome to the world of serverless.
Learn more about the benefits of a serverless approach and why lift-and-shift cloud backup does not deliver the full benefits of the cloud.