All Articles

Serverless 101

By focusing only on the running code and managing the hosting platform separately, the serverless pattern offers a number of advantages. First, developers can spend less time coding against the network. This reduces the time it takes to complete a project and can therefore reduce costs, too. Another advantage of separating the platform from the code is that you can more easily deal with problems such as scaling (adding machines) or reliability (removing/restarting faulty servers). A side benefit of automating reliability is that hosting costs can be adjusted at a more fine-grained level. Companies don’t need to have lots of idle server capacity anymore—they can just spin up (and down) servers to match demand.

But what does the serverless platform look like and how is it different from VMs and containers?

In the last few years, there has been an increase in adopting event-driven architectures and streaming data storage. These two trends appear in major serverless options, too. Asynchronous coding patterns and streaming data storage are not only common when using serverless hosting platforms; they work well together in general. As components get smaller, the messages do, too. And there are more of them. The very nature of serverless platforms makes them ideal for dealing with small messages from lots of different locations.

Here are a few characteristics of serverless platforms.

Event-Driven

Event-driven software architecture for machine interfaces can take several forms. Event notification (or EN) is the work of sending notification messages to one or more components (“UserX completed a transaction”). These notifications are usually offered as subscriptions (e.g., you can subscribe to login notifications), and they are nothing more than that—notices that something happened.
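The EN pattern can be sketched with a minimal in-memory publish/subscribe broker. The class and topic names here are illustrative assumptions, not any specific platform's API:

```python
# Minimal in-memory sketch of event notification (EN).
# Broker and topic names are illustrative, not a real platform API.
from collections import defaultdict


class NotificationBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback for a topic (e.g., login notifications)."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Send a bare notification -- just notice that something happened."""
        for callback in self._subscribers[topic]:
            callback(message)


broker = NotificationBroker()
received = []
broker.subscribe("transactions", received.append)
broker.publish("transactions", "UserX completed a transaction")
```

Note that the message carries no state beyond the announcement itself; subscribers that need details must fetch them separately.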

Another pattern is event-carried state (or ECS). In this case, the message contains more than an announcement. It also carries pertinent data and metadata about the event. Using the login example, an ECS message might look like this:

{
  "eventName" : "userLogin",
  "userLogin" : {
      "username" : "UserX",
      "datetime" : "2021-01-06T19:40:10",
      "location" : "Austin, TX"
  }
}

ECS messages contain specific information about the event—often enough data for recipients to act upon or manipulate the state of the application itself.
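A quick sketch of a consumer acting on an ECS-style login message makes the point: everything the handler needs travels inside the message. The handler name is hypothetical:

```python
import json

# An event-carried state (ECS) message: the payload carries enough data
# for the recipient to act without a follow-up lookup.
message = json.loads("""
{
  "eventName" : "userLogin",
  "userLogin" : {
      "username" : "UserX",
      "datetime" : "2021-01-06T19:40:10",
      "location" : "Austin, TX"
  }
}
""")


def handle_login(event):
    # All the state we need is inside the message itself.
    data = event["userLogin"]
    return f'{data["username"]} logged in from {data["location"]}'


summary = handle_login(message)
```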

Finally, there is event-sourcing (or ES). These messages are much more generic than the ECS-style messages and are often meant to be sent directly to streaming storage systems for later processing and access. Sticking with the example we’ve discussed so far, an event-sourcing version of our login message might look like this:

{
  "topic" : "userLogin",
  "properties" : [
    {"name" : "messagetype", "value" : "userLogin"},
    {"name" : "username", "value" : "UserX"},
    {"name" : "datetime", "value" : "2021-01-06T19:40:10"},
    {"name" : "location", "value" : "Austin, TX"}
  ]
}
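The event-sourcing idea is that these generic property-bag messages are appended to a stream and processed later. A sketch of that flow, where the in-memory class stands in for a real streaming storage system:

```python
# Event-sourcing (ES) sketch: generic messages are appended to a stream
# and replayed later. This in-memory log stands in for a real streaming
# storage system such as a log-based broker.
class EventStream:
    def __init__(self):
        self._log = []

    def append(self, event):
        self._log.append(event)

    def replay(self, topic):
        """Re-read every stored event for a topic, in arrival order."""
        return [e for e in self._log if e["topic"] == topic]


stream = EventStream()
stream.append({
    "topic": "userLogin",
    "properties": [
        {"name": "username", "value": "UserX"},
        {"name": "location", "value": "Austin, TX"},
    ],
})
logins = stream.replay("userLogin")
```

Because the stored messages are generic, any number of later consumers can derive their own views from the same stream.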

If your architecture relies on these kinds of event-driven messages (EN, ECS, and ES), a serverless solution may be a good choice for you.

Auto-Scaling

Another important element of serverless platforms is the ability to scale the size and capacity of your solution automatically. That means as request load increases, the platform will automatically add more machines to help process the load, and as traffic reduces, unused computing power can be taken offline automatically.

Auto-scaling exists to provide a high degree of availability for your solution. Essentially, your implementation grows (or shrinks) to meet the needs of your request traffic. Serverless platforms typically provide this high availability automatically—it is built into the way serverless instances are defined and created at runtime.

This process of auto-scaling means architects and programmers can supply metadata on processing capacity, such as a minimum load (the smallest number of machines/processes that should always be running), a maximum capacity (the highest number of machines/processes allowed to run at one time), the metric to use for making auto-scaling decisions, the rate at which to monitor the health of the system, and so forth.
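That metadata might be expressed roughly like this. The field names and the toy sizing rule below are assumptions for illustration; every platform defines its own settings schema:

```python
# Hypothetical auto-scaling settings of the kind described above.
# Field names are illustrative; real platforms define their own schema.
scaling_policy = {
    "min_instances": 2,        # smallest number of machines always running
    "max_instances": 20,       # most machines allowed at one time
    "scaling_metric": "requests_per_second",  # what drives the decision
    "target_value": 500,       # requests each instance should handle
    "health_check_interval_seconds": 30,      # how often to monitor health
}


def desired_instances(policy, current_metric):
    """Toy decision: size the fleet to the load, clamped to policy bounds."""
    needed = -(-current_metric // policy["target_value"])  # ceiling division
    return max(policy["min_instances"], min(policy["max_instances"], needed))
```

For example, a light load stays at the minimum, a heavy load is capped at the maximum, and everything in between scales with the chosen metric.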

There is a hidden challenge to this kind of feature. The components need to be designed and built in a way that allows the number of components running in production to be increased or reduced without adversely affecting data integrity or computational reliability. A serverless approach can make this easier as long as individual components are kept small and their run life (the time it takes for a component to accomplish a task) is kept very short.

Systems where individual components have a short run life and can be implemented to work in a stateless way are good candidates for a serverless solution.
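For example, a component that reads everything it needs from the incoming event, does one small task, and returns is trivially safe to scale, because any number of copies can run in parallel without coordinating. A sketch (the event shape is hypothetical):

```python
# A stateless, short-run-life handler: each run reads all its inputs from
# the event and returns its result, keeping no state between calls.
def handle_checkout(event):
    subtotal = sum(item["price"] * item["qty"] for item in event["items"])
    tax = round(subtotal * event["tax_rate"], 2)
    return {"order_id": event["order_id"], "total": round(subtotal + tax, 2)}


result = handle_checkout({
    "order_id": "A-100",
    "tax_rate": 0.1,
    "items": [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}],
})
```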

Fault Tolerance

The role of fault-tolerant systems is to continue to successfully operate even in the face of failures.

Similar to the challenge of designing to support auto-scaling, architects and developers need to do some additional work to get the benefits of fault tolerance in a serverless platform. The good news is that any component that supports auto-scaling is quite likely able to support fault tolerance too. And, as with auto-scaling features, you can use platform tooling to arrange fault-tolerant features like circuit breakers, time-outs, and other patterns without the need to change your component code.
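Platforms usually wire these patterns in through configuration, but the circuit-breaker idea itself is simple. A minimal sketch, not any specific platform's implementation: after enough consecutive failures, the breaker "opens" and fails fast instead of calling the faulty dependency.

```python
# Minimal circuit-breaker sketch. Real platforms typically provide this
# as configuration, without requiring changes to component code.
class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.failure_threshold

    def call(self, func, *args):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
            self.failures = 0  # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            raise


breaker = CircuitBreaker(failure_threshold=2)


def flaky():
    raise ConnectionError("downstream unavailable")


for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
# The breaker is now open; further calls fail fast instead of waiting.
```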

If your solution needs a “zero downtime” approach, a serverless platform can be a good option for you to consider.

Other Characteristics

With serverless platforms, developers have a great deal of control over the design and definition of the solution but very little control over the system once it is deployed to the platform.

You cannot, for example, do much to control where your server instances are located beyond a general region (Western US, Central Europe, etc.). You can define data storage models, but you can’t be sure where these instances will be located or how often they will be moved during the lifetime of your application. For the most part, this is good news. Developers usually do not want to be responsible for creating new server instances, mounting them in production, deploying components to them, and scaling them up and down over time.

If your current solution depends upon your ability to exert fine-grained control over the location and identity of servers on your platform, serverless systems are not going to be a good fit. To state it another way, as you move to serverless solutions, you need to focus on the design and define phases of system development and learn to rely upon platform providers for making runtime decisions.