Go Kit (3): Go Kit for AWS Lambda





Announcing Go Support for AWS Lambda

This post courtesy of Paul Maddox, Specialist Solutions Architect (Developer Technologies).

Today, we’re excited to announce Go as a supported language for AWS Lambda.

As someone who’s done their fair share of Go development (recent projects include AWS SAM Local and GoFormation), this is a release I’ve been looking forward to for a while. I’m going to take this opportunity to walk you through how it works by creating a Go serverless application, and deploying it to Lambda.


This post assumes that you already have Go installed and configured on your development machine, as well as a basic understanding of Go development concepts. For more details, see https://golang.org/doc/install.

Creating an example Serverless application with Go

Lambda functions can be triggered by a variety of event sources:

  • Asynchronous events (such as an object being put in an Amazon S3 bucket)
  • Streaming events (for example, new data records on an Amazon Kinesis stream)
  • Synchronous events (manual invocation, or HTTPS request via Amazon API Gateway)

As an example, you’re going to create an application that uses an API Gateway event source to create a simple Hello World RESTful API. The full source code for this example application can be found on GitHub at: https://github.com/aws-samples/lambda-go-samples.

After the application is published, it receives a name via the HTTPS request body, and responds with “Hello <name>.” For example:

$ curl -XPOST -d "Paul" "https://my-awesome-api.example.com/"
Hello Paul

To implement this, create a Lambda handler function in Go.

Import the github.com/aws/aws-lambda-go package, which includes helpful Go definitions for Lambda event sources, as well as the lambda.Start() method used to register your handler function.

Start by creating a new project directory in your $GOPATH, and then creating a main.go file that contains your Lambda handler function:

package main

import (
	"errors"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

var (
	// ErrNameNotProvided is thrown when a name is not provided
	ErrNameNotProvided = errors.New("no name was provided in the HTTP body")
)

// Handler is your Lambda function handler.
// It uses Amazon API Gateway request/responses provided by the aws-lambda-go/events package.
// However, you could use other event sources (S3, Kinesis etc), or JSON-decoded primitive types such as 'string'.
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {

	// stdout and stderr are sent to AWS CloudWatch Logs
	log.Printf("Processing Lambda request %s\n", request.RequestContext.RequestID)

	// If no name is provided in the HTTP request body, throw an error
	if len(request.Body) < 1 {
		return events.APIGatewayProxyResponse{}, ErrNameNotProvided
	}

	return events.APIGatewayProxyResponse{
		Body:       "Hello " + request.Body,
		StatusCode: 200,
	}, nil
}

func main() {
	lambda.Start(Handler)
}
The lambda.Start() method takes a handler, and talks to an internal Lambda endpoint to pass Invoke requests to that handler. If the handler's signature does not match one of the supported types, the Lambda package responds to invocations with an error message such as:

json: cannot unmarshal object into Go value of type int32: UnmarshalTypeError

The lambda.Start() method blocks, and does not return after being called, meaning that it’s suitable to run in your Go application’s main entry point.

More detail on AWS Lambda function handlers with Go

A handler function passed to lambda.Start() must follow these rules:

  • It must be a function.
  • The function may take between 0 and 2 arguments.
    • If there are two arguments, the first argument must implement context.Context.
  • The function may return between 0 and 2 values.
    • If there is one return value, it must implement error.
    • If there are two return values, the second value must implement error.
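To make the rules concrete, here are a few signatures that satisfy them. This is a sketch: the function names are invented for illustration, and lambda.Start() itself is omitted so the snippet stays dependency-free.

```go
package main

import (
	"context"
	"fmt"
)

// Each of these shapes would be accepted by lambda.Start():
func NoArgs()                               {}                              // 0 in, 0 out
func CtxOnly(ctx context.Context)           {}                              // context only
func EventOnly(name string) (string, error) { return "Hello " + name, nil } // event in, value + error out
func CtxAndEvent(ctx context.Context, name string) (string, error) {        // 2 in: context must come first
	return "Hello " + name, nil
}

func main() {
	greeting, err := EventOnly("Paul")
	if err != nil {
		panic(err)
	}
	fmt.Println(greeting)
}
```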

The github.com/aws/aws-lambda-go library automatically unmarshals the Lambda event JSON to the argument type used by your handler function. To do this, it uses Go’s standard encoding/json package, so your handler function can use any of the standard types supported for unmarshalling (or custom types containing those):

  • bool, for JSON booleans
  • float64, for JSON numbers
  • string, for JSON strings
  • []interface{}, for JSON arrays
  • map[string]interface{}, for JSON objects
  • nil, for JSON null

For example, suppose your Lambda function receives a JSON event payload like the following:

{
  "id": 12345,
  "value": "some-value"
}
It should respond with a JSON response that looks like the following:

{
  "message": "processed request ID 12345",
  "ok": true
}
You could use a Lambda handler function that looks like the following:

package main

import (
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

type Request struct {
	ID    float64 `json:"id"`
	Value string  `json:"value"`
}

type Response struct {
	Message string `json:"message"`
	Ok      bool   `json:"ok"`
}

func Handler(request Request) (Response, error) {
	return Response{
		Message: fmt.Sprintf("processed request ID %.0f", request.ID),
		Ok:      true,
	}, nil
}

func main() {
	lambda.Start(Handler)
}
For convenience, the github.com/aws/aws-lambda-go package provides event definitions that you can use as handler function arguments and return values, covering common sources such as S3, Kinesis, and Cognito, as well as the API Gateway request and response objects that you're using in the example application.

Adding unit tests

To test that the Lambda handler works as expected, create a main_test.go file containing some basic unit tests.

package main_test

import (
	"testing"

	"github.com/aws/aws-lambda-go/events"
	main "github.com/aws-samples/lambda-go-samples"
	"github.com/stretchr/testify/assert"
)

func TestHandler(t *testing.T) {
	tests := []struct {
		request events.APIGatewayProxyRequest
		expect  string
		err     error
	}{
		{
			// Test that the handler responds with the correct response
			// when a valid name is provided in the HTTP body
			request: events.APIGatewayProxyRequest{Body: "Paul"},
			expect:  "Hello Paul",
			err:     nil,
		},
		{
			// Test that the handler responds with ErrNameNotProvided
			// when no name is provided in the HTTP body
			request: events.APIGatewayProxyRequest{Body: ""},
			expect:  "",
			err:     main.ErrNameNotProvided,
		},
	}

	for _, test := range tests {
		response, err := main.Handler(test.request)
		assert.IsType(t, test.err, err)
		assert.Equal(t, test.expect, response.Body)
	}
}
Run your tests:

$ go test
ok      github.com/aws-samples/lambda-go-samples    0.041s

Note: To make the unit tests more readable, this example uses a third-party library (https://github.com/stretchr/testify). This allows you to describe the test cases in a more natural format, making them more maintainable for other people who may be working in the code base.

Build and deploy

As Go is a compiled language, you need to build the application and create a Lambda deployment package before deploying. To do this, build a binary that runs on Linux, and zip it up into a deployment package.

$ GOOS=linux go build -o main
$ zip deployment.zip main

The binary doesn’t need to be called main, but the name must match the Handler configuration property of the deployed Lambda function.

The deployment package is now ready to be deployed to Lambda. One deployment method is to use the AWS CLI. Provide a valid Lambda execution role for --role.

$ aws lambda create-function \
--region us-west-1 \
--function-name HelloFunction \
--zip-file fileb://./deployment.zip \
--runtime go1.x \
--tracing-config Mode=Active \
--role arn:aws:iam::<account-id>:role/<role> \
--handler main

From here, configure the invoking service (in this example, API Gateway) to call this function and provide the HTTPS frontend for your API. For more information about how to do this in the API Gateway console, see Create an API with Lambda Proxy Integration. You could also do this in the Lambda console by assigning an API Gateway trigger.

Lambda Console Designer Trigger selection

Then, configure the trigger:

  • API name: lambda-go
  • Deployment stage: prod
  • Security: open

This results in an API Gateway endpoint that you can test.

Lambda Console API Gateway configuration

Now, you can use cURL to test your API:

$ curl -XPOST -d "Paul" https://u7fe6p3v64.execute-api.us-east-1.amazonaws.com/prod/main
Hello Paul

Doing this manually is fine and works for testing and exploration. If you were doing this for real, you’d want to automate this process further. The next section shows how to add a CI/CD pipeline to this process to build, test, and deploy your serverless application as you change your code.

Automating tests and deployments

Next, configure AWS CodePipeline and AWS CodeBuild to build your application automatically and run all of the tests. If it passes, deploy your application to Lambda.

The first thing you need to do is create an AWS Serverless Application Model (AWS SAM) template in your source repository. SAM provides an easy way to deploy Serverless resources, such as Lambda functions, APIs, and other event sources, as well as all of the necessary IAM permissions, etc. You can also include any valid AWS CloudFormation resources within your SAM template, such as a Kinesis stream, or an Amazon DynamoDB table. They are deployed alongside your Serverless application.

Create a file called template.yml in your application repository with the following contents:

AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: main
      Runtime: go1.x
      Tracing: Active
      Events:
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post

The above template instructs SAM to deploy a Lambda function (called HelloFunction in this case), with the Go runtime (go1.x), and also an API configured to pass HTTP POST requests to your Lambda function. The Handler property defines which binary in the deployment package needs to be executed (main in this case).

You’re going to use CodeBuild to run your tests, build your Go application, and package it. You can tell CodeBuild how to do all of this by creating a buildspec.yml file in your repository containing the following:

version: 0.2

env:
  variables:
    # This S3 bucket is used to store the packaged Lambda deployment bundle.
    # Make sure to provide a valid S3 bucket name (it must exist already).
    # The CodeBuild IAM role must allow write access to it.
    S3_BUCKET: "your-s3-bucket"
    PACKAGE: "github.com/aws-samples/lambda-go-samples"

phases:
  install:
    commands:
      # AWS CodeBuild Go images use /go for the $GOPATH so copy the
      # application source code into that directory structure.
      - mkdir -p "/go/src/$(dirname ${PACKAGE})"
      - ln -s "${CODEBUILD_SRC_DIR}" "/go/src/${PACKAGE}"
      # Print all environment variables (handy for AWS CodeBuild logs)
      - env
      # Install golint
      - go get -u github.com/golang/lint/golint

  pre_build:
    commands:
      # Make sure we're in the project directory within our GOPATH
      - cd "/go/src/${PACKAGE}"
      # Fetch all dependencies
      - go get -t ./...
      # Ensure that the code passes all lint tests
      - golint -set_exit_status
      # Check for common Go problems with 'go vet'
      - go vet .
      # Run all tests included with the application
      - go test .

  build:
    commands:
      # Build the go application
      - go build -o main
      # Package the application with AWS SAM
      - aws cloudformation package --template-file template.yml --s3-bucket ${S3_BUCKET} --output-template-file packaged.yml

artifacts:
  files:
    - packaged.yml

This buildspec file does the following:

  • Sets up your GOPATH, ready for building
  • Runs golint to make sure that any committed code matches the Go style and formatting specification
  • Runs any unit tests present (via go test)
  • Builds your application binary
  • Packages the binary into a Lambda deployment package and uploads it to S3

For more details about buildspec files, see the Build Specification Reference for AWS CodeBuild.

Your project directory should now contain the following files:

$ tree
├── buildspec.yml    (AWS CodeBuild configuration file)
├── main.go          (Our application)
├── main_test.go     (Unit tests)
└── template.yml     (AWS SAM template)
0 directories, 4 files

You’re now ready to set up your automated pipeline with CodePipeline.

Create a new pipeline

Get started by navigating to the CodePipeline console. You need to give your new pipeline a name, such as HelloService.

Next, select the source repository in which your application code is located. CodePipeline supports AWS CodeCommit, GitHub.com, or Amazon S3 as source providers. To use the example GitHub.com repository mentioned earlier in this post, fork it into your own GitHub.com account, or create a new CodeCommit repository and clone the example into it. Do this before selecting a source location.

CodePipeline Source location configuration

Tell CodePipeline to use CodeBuild to test, build, and package your application using the buildspec.yml file created earlier:

CodePipeline Console Build Configuration

Important: CodeBuild needs read/write access to the S3 bucket referenced in the buildspec.yml file that you wrote. It places the packaged Lambda deployment package into S3 after the tests and build are completed. Make sure that the CodeBuild service role created or provided has the correct IAM permissions. For more information, see Writing IAM Policies: How to grant access to an Amazon S3 bucket. If you don’t do this, CodeBuild fails.

Finally, set up the deployment stage of your pipeline. Select AWS CloudFormation as the deployment method, and the Create or replace a change set mode (as required by SAM). To deploy multiple environments (for example, staging, production), add additional deployment stages to your pipeline after it has been created.

CodePipeline Console Deploy configuration

After being created, your pipeline takes a few minutes to initialize, and then automatically triggers. You can see the latest commit in your version control system make progress through the build and deploy stages of your pipeline.

You do not need to configure anything further to automatically run your pipeline on new version control commits. It already automatically triggers, builds, and deploys each time.

CodePipeline Console Created Pipeline

Make one final change to the pipeline, to configure the deployment stage to execute the CloudFormation changeset that it creates. To make this change, choose the Edit button on your pipeline, choose the pencil icon on the staging deployment stage, and add a new action:

CodePipeline Console Add Action

After the action is added, save your pipeline. You can test it by making a small change to your Lambda function, and then committing it back to version control. You can see your pipeline trigger, and the changes get deployed to your staging environment.

See it in Action

After a successful run of the pipeline has completed, you can navigate to the CloudFormation console to see the deployment details.

In your case, you have a CloudFormation stack deployed. If you look at the Resources tab, you see a table of the AWS resources that have been deployed.

CloudFormation Resources tab

Choose the ServerlessRestApi item link to navigate to the API Gateway console and view the details of your deployed API, including the URL.

API Gateway Stage Editor

You can use cURL to test that your Serverless application is functioning as expected:

$ curl -XPOST -d "Paul" https://y5fjgtq6dj.execute-api.us-west-1.amazonaws.com/Stage
Hello Paul

One more thing!

We are also excited to announce that AWS X-Ray can be enabled in your Lambda runtime to analyze and debug your Go functions written for Lambda. The X-Ray SDK for Go works with the Go context of your Lambda function, providing features such as AWS SDK retry visibility and one-line error capture.

X-Ray console waterfall diagram

You can use annotations and metadata to capture additional information in X-Ray about your function invocations. Moreover, the SDK supports the net/http client package, enabling you to trace requests made to endpoints even if they are not X-Ray enabled.

Wrapping it up!

Support for Go has been a much-requested feature in Lambda and we are excited to be able to bring it to you. In this post, you created a basic Go-based API and then went on to create a full continuous integration and delivery pipeline that tests, builds, and deploys your application each time you make a change.

You can also get started with AWS Lambda Go support through AWS CodeStar. AWS CodeStar lets you quickly launch development projects that include a sample application, source control and release automation. With this announcement, AWS CodeStar introduced new project templates for Go running on AWS Lambda. Select one of the CodeStar Go project templates to get started. CodeStar makes it easy to begin editing your Go project code in AWS Cloud9, an online IDE, with just a few clicks.

CodeStar Go application

Excited about Go in Lambda or have questions? Let us know in the comments here, in the AWS Forums for Lambda, or find us on Twitter at @awscloud.


Alexa Skills with Go

The introduction of Go for AWS Lambda provides significant advantages for writing Lambdas. In particular, Go for AWS Lambda has strong cold-start and runtime performance.

Because Alexa skills are invoked unpredictably, the cold-start benefits make Go an attractive language for writing Alexa skills. As I've been playing around with both Go and Alexa, I wanted to write an end-to-end implementation of a reasonably sophisticated Alexa skill in Go with automated deployment. Unfortunately, due to the lack of tutorials, I had to figure out much of the mechanics myself. This guide documents what I discovered.

The goal here is to build an Alexa skill that says "Hello, world" in multiple languages, and also responds to the Alexa help intent. This requires both understanding the Alexa request and making the appropriate Alexa responses. Because this focuses on the AWS Lambda Go handler side of the Alexa skill, this tutorial does not document the Alexa Skills Kit Developer Console experience, for which there are plenty of tutorials.

To wire up automated deployment, I first wrote the simplest Go-based Lambda, and used the CloudFormation Serverless Application Model to automate deployment. There were a few wrinkles I discovered along the way. The Go code for the AWS Lambda is:

AWS Lambda with Go automatically marshals response structs via encoding/json which makes Go-based Lambdas quite clean.

Deploying this manually is pretty simple — you follow the AWS instructions for compiling an AWS Lambda-compliant Go executable, zip the executable, and upload to the Lambda in the console. However, from experience, doing this repeatedly is a chore and automating this is helpful.

I created the following CloudFormation Serverless Application Model template:
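Based on the description below (a Handler naming a Go executable called hello, and a CodeUri pointing at an on-disk zip), the template presumably resembled this sketch; the resource name and file paths are assumptions:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: hello        # the name of the Go executable inside the zip
      Runtime: go1.x
      CodeUri: ./hello.zip  # the on-disk zip containing only the executable
```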

Most of this is Go independent. However, the Handler and CodeUri properties require explanation. The Handler is the name of the Go executable (unlike in other AWS Lambda language implementations, where this might be a method), and the CodeUri is the on-disk zip file that contains the Go executable. In the package CloudFormation step, the zip file is uploaded to the S3 bucket hello_lambda (which must first be created), and the deploy template is mapped to the zip file in S3. When deployed, the created AWS Lambda function uses the zipped executable deployed to S3.

The zip file must contain only the hello executable file, with no directory paths. If the zip file is created incorrectly (for example, if you accidentally zip the executable with a directory path), you will not receive a friendly error, either on the package or deploy step, and if you test the resulting AWS Lambda you will get an extremely cryptic error message:

  "errorMessage": "fork/exec /var/task/hello: no such file or directory",
  "errorType": "PathError"

Note that the default package step, without an explicit path, zips a directory. While that might be fine for other languages, for Go it will not work, even if that directory contains your executable.

Here’s the script I created to automate compiling, zipping, packaging, and deploying the AWS Lambda:
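A sketch of such a script, assuming the hello_lambda bucket from the template discussion and a stack name of hello-lambda (the stack name is an assumption):

```shell
#!/bin/bash
set -euo pipefail

# Compile a Linux binary named "hello" -- the Handler in the SAM template.
GOOS=linux go build -o hello

# Zip only the executable itself, with no directory paths (see the note above).
zip -j hello.zip hello

# Upload the zip to S3 and rewrite the template to reference it.
aws cloudformation package \
  --template-file template.yml \
  --s3-bucket hello_lambda \
  --output-template-file packaged.yml

# Create or update the stack.
aws cloudformation deploy \
  --template-file packaged.yml \
  --stack-name hello-lambda \
  --capabilities CAPABILITY_IAM
```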

To make hello an Alexa skill, we need to add two things:

  1. Return the appropriate Alexa skill response
  2. Add the Alexa skill event to the CloudFormation template

The response must conform to the Alexa response JSON specification. The simplest possible response returns just outputSpeech and ends the session. As an interim hack, I've modified the Response struct to conform to the JSON Alexa response model for this simplest case. This is obviously inflexible, but works for now. Adding the Alexa skill event to CloudFormation is much simpler.

The resulting code and CloudFormation template are now:

If we create a simple Alexa skill and wire the skill to the resulting Lambda, it all works.

To handle intents and locales, we need to add the request object. This gets pretty verbose to handle inline in the Lambda, so I created an alexa package to hold both the request and response, and a few helper functions. The code I created (modified from an earlier package written before AWS supported Go on Lambda) is hosted on GitHub.

In this version, the handler takes an alexa.Request and returns an alexa.Response:
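A sketch of that handler's logic, with inline stand-ins for the alexa package's types (all names and the specific locales below are assumptions; the real types live in the GitHub package mentioned above):

```go
package main

import "fmt"

// Stand-ins for the alexa package's Request/Response types.
type Request struct {
	Body struct {
		Intent struct {
			Name string
		}
		Locale string
	}
}

type Response struct {
	Speech string
}

// Handler answers the help intent, then greets in the caller's locale.
func Handler(request Request) (Response, error) {
	if request.Body.Intent.Name == "AMAZON.HelpIntent" {
		return Response{Speech: "Say hello to hear a greeting."}, nil
	}
	switch request.Body.Locale {
	case "es-ES":
		return Response{Speech: "Hola, mundo"}, nil
	case "de-DE":
		return Response{Speech: "Hallo, Welt"}, nil
	default:
		return Response{Speech: "Hello, world"}, nil
	}
}

func main() {
	var req Request
	req.Body.Locale = "es-ES"
	resp, _ := Handler(req)
	fmt.Println(resp.Speech) // Hola, mundo
}
```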

At this point, we’ve done all we set out to do.

  • Automate deployment of Go-based AWS Lambdas
  • Create and return Alexa Skill Responses
  • Handle a variety of Alexa Skill Request attributes (Locale and Attributes)

Hope this helps other folks come up the learning curve faster!

After the tour-de-force of Serverlessconf in October, I decided my entire company would be going serverless. I spent the first couple of months beating my head against the wall trying to migrate a Python Flask app to Lambda — these efforts helped me find a better way.

Six months later, we are now deploying our fourth major project serverlessly. This is how we did it — including the lessons learned (and strong opinions) formed along the way.

Lesson #1 — Ditch Python

Flask is a nice little framework for the old-time request-response style of a website with a session managed by the server. That’s quaint — but in the new world of the interactive web, it’s like trying to build a house with a rubber band and a squeegee.

The old way: Python Flask app runs on Elastic Beanstalk, storing data in RDS

As you start to move more work to the client side to support interactions, you'll have no choice but to use JavaScript. This usually leads to inlining JavaScript into your Python templates while the Demons of Technical Debt open another line of credit for you.

Increasingly, Flask solutions become a kluge of different languages. I concluded fairly quickly that this approach was a horrific mess — and I wasn’t sure why I was using Python anymore.

After switching over to Node, everything was much more maintainable and logical, and there was no need to use more than one language. With a simple Node/Express configuration on Webpack, you can also use ES6 to eliminate the terrible JavaScript constructs poked fun at by Python developers.

Trying the same thing in Zappa/Flask was worse than doing my taxes. But in about 5 minutes, you can build a fully-fledged Node/Express app that works on Lambda like it’s the 1040EZ — it’s no big deal. So we ditched Python and joined the cool kids in the JavaScript camp.

Lambda Function As Monolith

What did we give up? Pythonistas will wax lyrical about all the cool language features, but these are mere toys compared with the practical async charms of JavaScript. And now we no longer worry about Python version 2 or 3 (is anyone ever upgrading?). At least for our projects, it was a very easy switch.

Of course, Ben Kehoe offers a compelling alternative perspective, with insights about using Python versus Node for serverless!

Lesson #2 — Burn the middle layer to the ground

It took us a surprising amount of time to realize an obvious benefit of serverless — possibly because we’ve been building web apps forever, or maybe it’s just that I’m getting old.

Some of our first web apps still had a Node Express layer that remembered session state either (1) by accident, hoping the user hit the same Lambda container over and over, or (2) by tragedy of design, where we abused DynamoDB to make it remember session IDs. What the hell were we doing?

In phase one of “The Transition”, our middle layer acted like a web server on Lambda, which is both wrong and terrible. Then we ended up with html pages filled with JavaScript calling REST APIs. This approach was thrillingly raw, desperately unmaintainable, and quickly became brittle — but we’d killed the middle layer. In serverless, the middle layer has to go.

State moves to the client, logic moves to Lambda

Lesson #3 — Enjoy the Vue

It's great being able to jam everything into the front-end, but it quickly becomes an appalling mess. You eventually stop checking in code because you're too embarrassed to share the Rube Goldberg machine you've been building. And 'not checking in code' is not a good job objective for developers.

Entering the world of Single Page Applications (SPAs) exposed me to React — the most popular approach to building user interfaces. React is great but comes with a steep learning curve, lots of Webpack/Babel setup, and the introduction of JSX. While it might be something we eventually use, it was too heavyweight for our immediate needs, so we explored the alternatives.

Fortunately, I soon discovered Vue.js and my serverless life turned to absolute bliss. Here's the thing: You can learn Vue in a day!

Vue's approach to design fits nicely with our design model — everything is a component that manages its own content, design, and code. This makes it very easy to manage our multiple client projects and dispersed teams, and also works very well for a serverless mindset.

The open-source JavaScript framework gives you powerful debugging tools, great organization, and a Webpack build out of the box that will save hours. Slap on the router and store management plugins — and you can churn out realtime sexy apps like you’re a Facebook engineer. Who knew Single Page Apps could be so easy?

From a serverless perspective, Vue compiles all your goodness into index.html and bundle.js files, primed for uploading to S3. Typing npm run build is the new compile command.

Take a moment to consider this — in the old world, we would be deploying apps via Elastic Beanstalk and monitoring for utilization, autoscaling when needed, and managing a reasonable chunk of infrastructure.

The true magic of SPAs is when you “deploy” an application, you’re simply copying index.html, bundle.js, and a handful of file dependencies to an S3 bucket front-ended by a CloudFront distribution. This gives you rock-steady distribution and loading behavior, and also enables multi-version management and any deployment methodology you prefer — just by managing text files.

We have effectively unlimited scale and only pay for what we use — there is zero app infrastructure management.

Vue essentially allows you to build a desktop application within the browser — which means you can dramatically improve the user experience. All the state can be managed here without endless request/response, you can hide latency with standard UI tricks like transition effects, and the whole application now behaves properly.

Lesson #4 — Learn to love DynamoDB

In many respects, the hardest part of getting to serverless has been truly coming to grips with DynamoDB. You definitely make a few mistakes in the first few iterations, and it's tempting to ditch the whole thing and go back to RDS, where everything is known and comfortable.

SQL has been my crutch for a long time, and I'll confess to putting way too much business logic into databases. But RDBMS systems are just another monolith — they fail to scale well, and they don't support the idea of organically evolving agile systems.

DynamoDB is a completely different animal. When you get it right, the NoSQL database provides blistering performance, massive scale, and practically no administrative overhead. But you really have to invest the time in exploring how it works — and the initial phases are full of gotchas aplenty.

DynamoDB table fields can't contain empty strings. Point-in-time backup isn't automatic. If you get the partition and sort keys wrong, you have to start from scratch with your tables. You can go from having too few tables to way too many if you try to emulate SQL queries too closely. And all the while, it just feels very alien coming from RDS.

After many tutorials, trying, failing and eventually succeeding with DynamoDB, I learned …

  • You need to understand the way DynamoDB works: spend some time understanding indexing strategies and how you intend to query the data. It's very easy to jump in without knowing all you need to know, so many people get burned and then move back to RDBMS at exactly the wrong moment. Make mistakes and push through them.
  • One of the least-discussed joys of DynamoDB is the way you can attach code to table events using streams — like an SQL trigger that can do anything. These are extremely powerful. A very simple pattern we use is to always push table updates to an SNS topic where the changes can be ingested by other serverless code you might not have written yet.
  • Don't forget that DynamoDB can feed other storage systems (RDBMS, Redshift, or just flat text files) and can be used to smooth out traffic spikes or protect another database from huge volumes of data. DynamoDB has a TTL feature that allows you to expire rows — which is great for staging data you want to push somewhere else.

Lesson #5 — Serverless Framework FTW

My early experimentation with Lambda was a clunky affair of coding directly into the AWS console and getting frustrated that it took a lot of work and error messages to do some trivial things. The bridge that connects your IDE to a production environment is missing.

Well, it's missing until you discover the Serverless Framework, which is honestly the most exciting thing I've found in ages.

A simple sls deploy wields enormous power, bundling up your precious code and shipping it directly to Amazon's brain in the sky. And if you need to check logs because your code is misbehaving, just type sls logs -f functionname -t and you can tail your CloudWatch logs like a pro without ever opening a browser.

This. Changes. Everything. The serverless people should be showered with accolades for doing something that every cloud provider should have offered from day one. Simply brilliant. And so much win.

Lesson #6 — Authorization is the new sheriff in town

In traditional apps, you authenticate a user once and then track that person by following a session ID around. We like it because you only need to do the hard work once, and then the ID lets you cheat for the lifetime of the user's login, however long you want that to be.

In the old world, the session ID controls access

But this approach has problems. It only works if you have that server in the middle — and we just burned that server to the ground. It also potentially exposes you to some nasty attacks — like Cross-Site Request Forgery (CSRF) — and doesn't let you pass identity around to other services very easily. So this approach basically Supports The Monolith (boooo!).

We hate the Monolith and CSRF attacks — but we do like our new friend, the JWT token. I had a moment of zen-like euphoria when I learned how this works but I need a diagram to do it justice.

Step 1, get a JWT, step 2, use it to communicate with any service you write:

The first step looks familiar: the authorization process gets a JWT token
The second step is magic: any lambda function can accept and validate the token

The basic nut is that every single request is authenticated, and the client can even talk to multiple serverless services. It's wickedly secure, it's anti-monolith, and CSRF doesn't even exist in JWT-land. All that's required from your serverless code is to use a Custom Authorizer to check that the JWT in the header is valid (using boilerplate code), and we're done.

JWT makes all other types of auth look overcomplicated. We switched everything to Auth0 (and Cognito in some cases) and never looked back. Serverless auth is both beguilingly simple and insanely effective, so yeah, go team.

It’s a brave new world

While I’ve worked with AWS for a long time, I’ve never been this close to the ground floor. Even in EC2 land, there was plenty of help because I was comparatively late to the party. After leaving A Cloud Guru’s serverless conference, this felt like genuinely unexplored territory and there was significantly more discovery in the dark.

In our first few experiments, we had some misfires trying to use existing tools and techniques — and the results weren’t great. After a few months getting the right stack in place, we have officially started delivering projects in a 100% serverless way. I’m confident that our migration hiccups and early exploring were well worth the journey.

We are building slick, real-time SPA apps that use exclusively serverless infrastructure, scale effortlessly and cost 70–90% less. I’m both delighted and shocked by the payoff. I’ve never been more convinced that serverless tech is going to revolutionize application delivery in the cloud.

The results are transformational.


Conversation between Wyatt Anderson and James Beswick.

If you haven’t already, check out create-react-app; it handles all the Webpack and Babel set up for you and gives you an amazing out-of-the-box experience for React with practically zero setup.

Hi Wyatt — thanks for the link! I’ll definitely check this out. We would like to learn more about React so this is a big help.

Good read.

Had a dilemma about a year ago between React and Vue. Went with React, and I haven’t regretted it yet, but I keep hearing whispers about Vue in the wind… if it can truly be learned in a day, I’ll give it a look.

