It’s so ubiquitous these days that it’s easy to take for granted, but Cloud-based infrastructure has come a long way in the last few years.

Even just a few years ago it was hard to imagine highly regulated, Enterprise-level organizations like Northwestern Mutual (NM) feeling comfortable adopting a Cloud-first architecture. Yet here we are in 2018: the Cloud ecosystem has matured to meet the standards of many Enterprise-level organizations, and NM is indeed in the process of moving to a Cloud-based operating model. Waning are the days of on-prem server rooms handling most IT and computing tasks; we’re outsourcing that computing load to the Cloud!

It’s a very in-demand business too—Cloud providers like AWS, Microsoft Azure, Google Cloud, and IBM Cloud have been practically tripping over themselves in the pursuit of providing the best experience for engineers around new capabilities like Big Data/Advanced Analytics, Machine Learning/AI, and the Internet of Things (IoT) with the end goal of getting developers and engineers back to building the products they were trying to build in the first place. This paradigm evolved from IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and containerization into a complete software architecture pattern people these days just call Serverless or Serverless Architecture.

As you might guess, Serverless architecture is a very broad topic…so broad, in fact, that we felt it might be a little too much to fit into one article. In this first installment of our Serverless Series, we’ll take a deeper dive into what Serverless really means, how we got here, where Serverless Functions fit in, and the things (good and bad) you should take into consideration when deciding whether a Serverless architecture is right for your next application.

So what is Serverless?

Serverless is a Cloud-based software architecture pattern, that’s it.

Well, a little more detail is probably helpful…Serverless generally refers to a collection of services where you don’t have to manage the actual servers running your applications. You focus on building your apps and services without having to worry about provisioning servers, orchestrating containers, scaling, or availability. And because the various pieces that make up a traditional back-end stack are de-coupled and abstracted into their own components, you can include them in your architecture as needed and generally pay for only what you use.

If you’ve ever spent a day or two sorting out the headache of provisioning and scaling your own servers, you might see the appeal (there are downsides, so keep reading).

Before we dive in: you’ll notice a lot of proper nouns in this article like Cloud and Functions. This is mostly done to differentiate the concepts we’ll be exploring from common nouns with the same names. Cloud refers to the ecosystem of decentralized server infrastructure, and Functions refers to the callback registered to an endpoint in a Serverless Functions service.

Ok…so Serverless Functions?

Because of the quick rise of Serverless Functions (or FaaS) over the last couple of years, it’s easy to think of Serverless as just another term for FaaS, but that’s not quite the case; it’s the opposite, really.

Today, Cloud providers are offering database services, messaging, event streaming, microservices, websites, web apps, and many other capabilities that can generally be called Serverless. FaaS is actually just one piece under the Serverless Architecture umbrella.

That said, it is useful to think of Serverless Functions as the entrypoint for your Serverless Architecture: you register a URI endpoint with an accompanying function (similar to how most routing works), you hit that endpoint to trigger the function, maybe pass some data, and that function in turn makes requests to your database and makes calls to other microservices. It’s also important to keep in mind that just like a classical server’s router, there will likely be quite a few Serverless Functions that make up your product.
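To make the entrypoint idea concrete, here’s a minimal sketch of what one such Function might look like, written in the style of an AWS Lambda handler behind an HTTP endpoint. The event shape, the `queryStringParameters` field, and the greeting logic are assumptions for illustration, not a definitive API:

```python
import json

def handler(event, context=None):
    """A minimal Function: invoked when its registered endpoint is hit.

    `event` carries the request data the platform passes in; `context`
    carries runtime metadata (unused here). In a real app, this is where
    you'd call your database or other microservices.
    """
    # Pull an optional query parameter out of the (assumed) event shape.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Return an HTTP-style response the platform can hand back to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

A request to the registered URI (say, `GET /greet?name=NM`) would trigger this handler once per invocation; a real product would register many such small handlers, much like the routes on a classical server.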

Serverless Functions offerings from the major public Cloud providers include AWS Lambda, Azure Functions, Google Cloud Functions, and IBM Cloud Functions, which is based on Apache OpenWhisk.

How did we get here?

After gradually de-coupling pieces of the back-end ecosystem and offering them as separate products like Simple Storage Service (S3) (2006), Elastic Compute Cloud (EC2) (2006), and Relational Database Service (RDS) (2009), AWS launched Lambda as a product at its 2014 re:Invent conference.

At the outset, AWS Lambda let you execute code without provisioning or managing servers (same as it does today). Devs were required to upload a zip file that included some configuration files alongside their function, but in return they paid only for the compute time consumed while the function ran. No charge when the function was inert. (The process is much more automated now, but we’ll get into that when we go over how to build a Serverless app.)

For somebody looking to quickly spin up functionality without having to worry about the classic infrastructure issues of a normal server, it was a compelling idea…and still is!

As AWS Lambda gained more traction, the other major Cloud providers joined the fray to offer Serverless Functions on their platforms. A year later, Google, Microsoft, and IBM had all released beta versions of Serverless Functions on their Cloud platforms.

And with such a dynamic industry growing so quickly, it became evident there needed to be some sort of governance, or at least some sort of stewardship, to ensure the Serverless Architecture community grew in a way that was good for everyone. So in 2015 (no coincidence, I’m sure), the Cloud Native Computing Foundation (CNCF) was founded as an open source software foundation dedicated to making cloud-native computing universal and sustainable.

Serverless Timeline

Seems obvious to say, but 2014 isn’t that long ago…and yet just 4 short years after AWS Lambda’s launch, at the time of this writing (August 2018), Serverless Functions as a Service is supported by every single major public Cloud provider (probably close to all Cloud providers generally), and some frameworks even allow you to build and deploy Serverless applications on-premises. And with a strong product, a strong after-market follows: third-party vendors have also joined the Serverless Functions revolution, offering monitoring, debugging, and security services, to name a few.

It’s pretty safe to say we’ve at least reached mid-morning in the Dawn of Serverless Architecture.

So…where are we now? The CNCF recently published an infographic of the Serverless landscape that captures the platforms, frameworks, and vendors in the Serverless ecosystem. The latest landscape and more information is available at http://s.cncf.io.

Cloud-Native Landscape

Benefits & Challenges of Serverless Applications

Now, with all of the hype around Serverless Architecture and all of the cool things you can do under the paradigm, it’s still not right for every use case—when deciding if Serverless is right for your next project (or even your current one), there are some important pros and cons that need factoring into your decision:

To summarize a lot of what I’ve gone over above: with Serverless, there is no need to provision or manage servers or engineer scaling and high availability. You get to focus on the business logic of your application which leads to increased velocity and shortens the time taken to execute on an idea or a business priority.

But while Serverless is the hot buzzword for providers right now, it has yet to reach feature parity with a classical Cloud or Bare Metal model. Porting to Serverless from one of those models would require more than a fair amount of additional work and headcount. If you’re sitting on a very large, complex legacy app, it may not be worth the investment to convert…or at least be prepared for the time and process to do so (I’ve got an article coming up specifically for you, stay tuned).

On top of that, monitoring, local testing, secret management, and debugging are relatively immature depending on the platform you are using, and so you may need to rely on third-party solutions to fill those gaps. Serverless also comes with new best-practices and gotchas that engineers need to keep in mind—an example that bites a lot of newbies is that depending on usage, Serverless apps may incur a start-up penalty if they’ve been idle for a while.

TL;DR

Pros
  • Zero Administration
  • Auto Scaling
  • High Availability
  • Pay as you use
  • Increased velocity

Cons
  • Immature monitoring
  • Potential startup penalty
  • Inconsistent local testability
  • New best practices to learn
  • Can’t fully replace the classical Cloud feature set
  • Complex debugging

Conclusion

Not so bad, right? I hope this has piqued your interest in the patterns that make up Serverless Architecture, because there’s so much more to tell you! This is a series on Serverless; in upcoming entries, we’ll build a Serverless application, explore the emerging marketplace for Serverless apps, and learn more about Cloud portability.

I’m also presenting on this topic at the upcoming That Conference, so if you’re attending, stop by for my session! I promise I won’t get too mad if you heckle me 🙂

If you’d like to read more introductory material about Serverless: