Tuesday, January 2, 2018

What & Why of "Serverless Computing" in Plain English

Guinea Pigs are NOT pigs & they are definitely NOT from the country named Guinea (they are rodents from South America!). Fireflies are NOT flies (they are beetles!). Arabic Numerals are NOT from the Arabian Peninsula (they originated in India!). And French Fries are probably NOT from France! And for centuries, "Indians" weren't actually people from India - the term originally referred to indigenous Americans (a historical blunder).

Similarly, Serverless doesn't mean there are no servers. So, is it simply one of the most popular tech misnomers?

The term Serverless Computing was buzzing around throughout 2017. For the uninitiated amongst you, Serverless Computing is a gigantic shift in the way we approach servers in Cloud Computing. As of now it's hot, trending & everyone in tech circles loves talking about it. However, whether it will make it into mainstream usage in 2018 or slide into oblivion, only time will tell.

To understand Serverless, we need a bit of history on the evolution & usage of servers. Here we go.

👈Flashback

Hard Times - Until the late 1990s

I've heard that procuring a server in those days was a real pain in the neck. The server hardware, OS, networking, storage, provisioning etc. had lead times of up to years! There were only dedicated servers, and if you wanted to scale, the same long cycle had to be repeated. No wonder it could take years of budgeting & planning to set up servers. It was a luxury - sort of. ("To import a computer, it took me three years and I went about 50 times to Delhi.")

Virtual Days - 2003 

Virtualization was a natural evolution from dedicated servers. One physical machine could now support multiple operating systems - you could split a single physical server into multiple virtual machines. The hardware was abstracted away. On-premise virtualization led to simplified infrastructure management, and provisioning time & cost were reduced.

Cloud Magic - 2007 - Beginning of Cloud IaaS

IaaS was a new paradigm & it democratized server usage. Now you could programmatically provision servers, storage & networking within minutes via API calls - quickly spin up a server & terminate it when you no longer need it. It empowered even small & mid-sized companies and independent developers to build & deploy their applications on state-of-the-art infrastructure without much planning &, most importantly, within a small budget.
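To make "provision via API calls" concrete, here is a minimal sketch using boto3, the AWS SDK for Python. The AMI ID and region below are placeholders for illustration - substitute your own values.

```python
# A minimal sketch of programmatic provisioning with boto3 (AWS SDK for Python).
# The AMI ID and region below are placeholders, not real values.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Provision a single small server with one API call...
instance = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)[0]
instance.wait_until_running()
print("Server is up:", instance.id)

# ...and terminate it the moment you no longer need it (stop paying for it).
instance.terminate()
```

A couple of API calls instead of years of procurement - that's the IaaS shift in a nutshell.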

This led to cost reduction, agility, reliability, scalability & elasticity (i.e. the ability to grow or shrink on the fly). Unlike in the past, when most back-of-the-napkin prototypes were forgotten due to the upfront server cost (CapEx), now many of them were actually built & shipped to the market. Boom ... the beginning of the Startup era.

PaaS - 2008

PaaS offerings like Google App Engine, MS Azure & AWS Elastic Beanstalk were designed to support the complete web application lifecycle viz. building, testing, deploying, managing & updating. So, PaaS was something which 'almost' abstracted away the IaaS & additionally provided the OS, development tools, BI tools etc. Actually, this had the potential to change the way we usually consumed infrastructure.

But unfortunately it didn't click with end users, and PaaS is almost dead now. Vendor lock-in (no portability), choice of technology limited to what the vendor provided... more than enough reasons for the failure?

The pioneers of PaaS like Google & Microsoft no longer emphasize their respective flagship PaaS services & have subsequently evolved more into IaaS providers.

Containers - Write Once, Run Anywhere - 2013

When you ship software with Docker (or any container) you package everything that is needed to run it - code, runtime, libraries, tools & settings. In a nutshell - containerized software will always run the same, regardless of the environment.
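As a small illustration, here is a sketch using the Docker SDK for Python (the 'docker' package). It assumes Docker is installed and the daemon is running; the image and command are arbitrary examples.

```python
# A minimal sketch using the Docker SDK for Python ('docker' package).
# Assumes Docker is installed & the daemon is running; the image & command are
# arbitrary examples. The same image runs identically on a laptop, an
# on-prem VM or a cloud host - everything it needs is baked in.
import docker

client = docker.from_env()
output = client.containers.run(
    "python:3.6-slim",                                    # example image
    ["python", "-c", "print('hello from a container')"],  # example command
    remove=True,                                          # clean up afterwards
)
print(output.decode().strip())
```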

To a great extent, containers (read Docker) removed the major issue of PaaS, i.e. the lack of portability. Containers abstracted away the IaaS layers of the various cloud providers. Write once, run anywhere... move them around from cloud to cloud. In a way this led to true independence between applications & infrastructure.

Scaling is provided by orchestration tools such as Kubernetes, Docker Swarm & Mesos, as in the sketch below.
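For instance, scaling out a service with Kubernetes can be a single API call. Here is a minimal sketch using the official Kubernetes Python client - the Deployment name 'web', the 'default' namespace and a reachable cluster via your local kubeconfig are all assumptions for illustration.

```python
# A minimal scaling sketch with the official Kubernetes Python client
# ('kubernetes' package). Assumes a reachable cluster via your local kubeconfig
# and an existing Deployment named 'web' in the 'default' namespace (both are
# illustrative assumptions).
from kubernetes import client, config

config.load_kube_config()        # use local kubeconfig credentials
apps = client.AppsV1Api()

# Ask the orchestrator for 5 replicas; it schedules the extra containers
# across whatever nodes are available.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```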

Note: container workloads orchestrated with AWS ECS are vendor locked-in, i.e. the ECS-specific task definitions & tooling can't simply be moved to other clouds (the container images themselves stay portable). Amazon EKS (Amazon Elastic Container Service for Kubernetes) is the recommended alternative - it runs standard Kubernetes, so no vendor lock-in. Also watch out for AWS Fargate in 2018 - support for Amazon EKS will be available soon.

Serverless / FaaS - 2014


Serverless completely abstracts away the underlying details of the cloud infrastructure. The servers are still there, however you no longer need to manage them. FaaS (Function as a Service) is the way to achieve Serverless.

Serverless / FaaS gained popularity after the launch of AWS Lambda in 2014. Other popular FaaS offerings are Azure Functions & Google Cloud Functions. The key difference between PaaS & FaaS is the way scaling is handled: in PaaS you have to plan & define an auto-scaling strategy, whereas in FaaS you just don't care about how it happens behind the scenes - i.e. complete abstraction! More on this in an excellent article on Martin Fowler's site.

Basically, a function (in FaaS) is spun up by an event/trigger, and the resources are cleaned up once the request/task is completed.
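To give a feel for how little code a function needs, here is a minimal AWS Lambda handler sketch in Python - the 'name' field in the event is a hypothetical example, not part of any particular trigger's payload.

```python
# A minimal AWS Lambda handler sketch (Python runtime). 'event' carries the
# trigger payload (e.g. an API Gateway request or S3 notification), 'context'
# carries runtime metadata. You write only the function - the platform
# provisions, scales & tears down the compute behind it.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")   # hypothetical field, for illustration
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + name + "!"}),
    }
```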

Summarizing the above Milestones 



All of the above approaches are still going strong & in mainstream usage. However, PaaS now seems to have lost a bit of its popularity.

Why Serverless?
  1. No servers to manage - i.e. Zero-Administration.
  2. It's real pay-as-you-go - no need to pay for idle time.
  3. Cost effective - sub-second billing (see the rough cost sketch below).
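
A rough back-of-the-envelope sketch of that "pay as you go" claim, assuming illustrative rates roughly in line with AWS Lambda's published pricing at the time ($0.20 per million requests & about $0.00001667 per GB-second) - check current pricing before relying on these numbers:

```python
# Back-of-the-envelope FaaS cost sketch with illustrative 2018-era rates
# (assumptions, not a quote): $0.20 per million requests, $0.00001667 per GB-second.
invocations = 1_000_000          # per month
memory_gb = 128 / 1024           # 128 MB function
duration_s = 0.2                 # 200 ms per invocation

request_cost = (invocations / 1_000_000) * 0.20
compute_cost = invocations * memory_gb * duration_s * 0.00001667

print("Requests: $%.2f, Compute: $%.2f, Total: ~$%.2f per month"
      % (request_cost, compute_cost, request_cost + compute_cost))
# -> roughly $0.62 per month, and $0 when nothing is invoked (no idle cost).
```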
Conclusion

Greenfield projects (i.e. fresh projects not constrained by prior work), microservices/nanoservices architectures and a few tasks from brownfield projects can be tried with this new approach to consuming compute resources. As of now, there are a few best-fit use-case patterns to begin with for Serverless/FaaS. My favorite is the use case where you require instant scaling for a brief amount of time - traditional auto-scaling can't serve this requirement, because by the time it responds & spins up a new server, the brief spike is already over!

Despite all the attraction of Serverless right now, there are hiccups too, like the maturity of the tools & ecosystem around it. Also, writing stateless code can be a challenge. Go ahead & start experimenting with it - 2018 might see Serverless going mainstream.