Serverless computing scales from zero to practically unlimited capacity. Within AWS, Lambda is the key service for running functions.
But while the code is written in a familiar language, the runtime environment, the connected services, and the best practices are all different from those in server-based architectures.
A Lambda function is just one piece of the serverless puzzle, as it needs other services to be useful. It can't even store its own logs; it sends them to a separate AWS service.
It gets its permissions via an IAM Role, sends its metrics and logs to CloudWatch and X-Ray, and integrates with API Gateway to provide an HTTP endpoint. S3 is the service of choice for storing files, and for secret configuration values you can use SSM Parameter Store.
A Lambda function integrates with these services in different ways, and each of them brings its own complications. But a serverless success story depends on using the full ecosystem.
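As a sketch of how these pieces meet in code, here is a minimal handler shaped for API Gateway's Lambda proxy integration. The "name" query parameter and the greeting are illustrative assumptions, not taken from the book:

```javascript
// Minimal Lambda handler for API Gateway's Lambda proxy integration.
// The "name" query parameter and greeting are made-up examples.
const handler = async (event) => {
  // API Gateway puts query parameters on event.queryStringParameters
  const name = (event.queryStringParameters || {}).name || "world";
  // The proxy integration expects exactly this response shape
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};

exports.handler = handler;
```

API Gateway maps this return value to the HTTP response, so the function itself never touches sockets or web servers.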
The serverless runtime environment is radically different from traditional server-based architectures. It brings the promise of near-infinite scalability, but also new problems to solve.
Cold starts, the retry model, ephemeral local storage, and other properties of the Lambda execution model are all things you need to be aware of to develop and deploy functions successfully.
The Lambda execution model
The cost model
Caching in /tmp
The Lambda permission model
How to define code
Adding packages with npm
Async programming patterns
Input and output
The event object
The context object
The AWS SDK
Cause of timeouts
AWS SDK timeouts
Promise-based timeout handling
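To give a taste of the last topic, promise-based timeout handling usually means racing the real work against a timer so a slow call fails fast instead of consuming the whole Lambda timeout. A minimal sketch (the "timed out" error message is an arbitrary choice):

```javascript
// Race a promise against a timer: whichever settles first wins.
const withTimeout = (promise, ms) => {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timed out")), ms);
  });
  // Clear the timer either way so it doesn't keep the runtime alive
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
};

module.exports = { withTimeout };
```

Failing fast this way leaves the function time to return a meaningful error instead of being killed mid-flight by the Lambda timeout.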
I'm a software developer focusing mostly on cloud computing and web technologies. I'm especially interested in handling edge cases to end up with dependable software.
One of my main focuses is security and how each part affects the whole system. I'm an AWS-certified security specialist.
The book is available from these stores: