Wednesday, February 28, 2018

How to Save up to 70% on Your AWS Bill with Instance Scheduler

AWS recently (Feb. 7) shared a quick and simple solution to automatically STOP and START EC2 and RDS instances. Until now, to achieve this you had to use a custom or a 3rd-party solution, which was neither simple nor cost-effective. This solution is simple, low cost & can easily be deployed in 5 minutes!

P.S. - As per AWS's latest guidelines, if you're using the old EC2 Scheduler then you must migrate to the new AWS Instance Scheduler.

Use Cases

You want your Dev / Test server instance to run only during business hours on weekdays. On a per-instance basis, this can save up to 70% of the cost of those instances which are tagged for this solution.


[Image Credit: AWS website]
Architecture of the Solution

This solution is deployed using a CloudFormation template shared by AWS. The template deploys a stack with the following 3 components:
  1. AWS Lambda, 
  2. CloudWatch  
  3. DynamoDB



AWS Instance Scheduler [Image Credit - Amazon AWS website]
Prerequisites
  1. An active AWS account.
  2. An EC2 (or RDS) instance. We'll use an EC2 instance for this demo.
  3. Beginner-level knowledge of CloudFormation & DynamoDB is helpful but not required, as the implementation is mostly abstracted. Only a few clicks & a few configuration changes are needed based on your requirement.
Steps to Deploy
  1. Sign in to your AWS account.
  2. Click the Launch Solution button on the solution's web page (see Step 1 of the shared link).
  3. This opens the CloudFormation Select Template page.
  4. On this page, select the Region of your choice from the drop-down list at the top right. By default it's US East (N. Virginia).
  5. Leave the default options as-is & click Next.
  6. On the Specify Details page, enter a name in the Stack Name text box. On the same page, under the Parameters section, enter your Default Time Zone. (You can change this later in the DynamoDB table as well.)
  7. Click Next.
  8. You're now on the Options page. Click Next.
  9. On the Review page, scroll down & tick the checkbox at the bottom. [The checkbox reads: I acknowledge that AWS CloudFormation might create IAM resources.] Click Create & wait a couple of minutes for the stack to be created.
If you've followed the above instructions carefully, the CloudFormation template will launch 2 DynamoDB tables, 1 Lambda function & a few CloudWatch alarms.
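The same deployment can also be scripted. Below is a minimal boto3 sketch of the console steps above; note the template URL is a placeholder & the parameter key DefaultTimezone is my assumption, so verify both against the current AWS solution page before using it:

```python
# Sketch of launching the Instance Scheduler stack via boto3.
# The template URL is a placeholder and the "DefaultTimezone" parameter
# key is an assumption; check them against the actual solution page.

def stack_parameters(timezone="UTC"):
    """Build the CloudFormation Parameters list for the scheduler stack."""
    return [{"ParameterKey": "DefaultTimezone", "ParameterValue": timezone}]

def launch_scheduler_stack():
    """Create the stack. Requires configured AWS credentials; not run here."""
    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.create_stack(
        StackName="instance-scheduler-demo",  # your stack name
        TemplateURL="https://example.com/instance-scheduler.template",  # placeholder
        Parameters=stack_parameters("UTC"),
        # Equivalent of ticking the "I acknowledge ... IAM resources" checkbox:
        Capabilities=["CAPABILITY_IAM"],
    )
```

The CAPABILITY_IAM flag plays the same role as the acknowledgement checkbox in Step 9.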

Linking Your EC2 with the Instance Scheduler 

This is a simple step but can prove a bit tricky if you're going by the official documentation, which doesn't explain it in detail with screenshots; the clarity is missing. So here we go.
  1. Go to Services & select EC2. Select the instance you want to test for automatic START / STOP. Create a new instance if you don't already have one.
  2. With the instance selected, go to Tags & add a tag with Key Schedule and Value uk-office-hours.
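If you prefer the SDK over the console, the same tag can be applied with boto3 (the instance ID below is a placeholder):

```python
def schedule_tag(period_name="uk-office-hours"):
    """Build the tag that links an instance to a scheduler period."""
    return [{"Key": "Schedule", "Value": period_name}]

def tag_instance(instance_id):
    """Apply the Schedule tag. Requires AWS credentials; not run here."""
    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_tags(
        Resources=[instance_id],  # e.g. "i-0123456789abcdef0" (placeholder)
        Tags=schedule_tag("uk-office-hours"),
    )
```

The scheduler's Lambda function looks for exactly this Schedule tag key when deciding which instances to START / STOP.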

Now observe your EC2 instance based on the time settings & the Time Zone that you've selected. By default the Time Zone is UTC. If you've not made any changes in the DynamoDB tables, the tagged EC2 instance should be in the running state only between 9 AM & 5 PM, Monday to Friday. That's just 40 of the 168 hours in a week: your instance runs less than a quarter of the time!
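A quick sanity check on the savings claim, using the default office-hours window:

```python
# With the default period (9 AM - 5 PM, Mon-Fri) the instance runs
# 8 hours a day, 5 days a week, out of 168 hours in a week.
hours_per_week = 24 * 7   # 168
running_hours = 8 * 5     # 40

uptime_fraction = running_hours / hours_per_week
savings = 1 - uptime_fraction

print(f"uptime: {uptime_fraction:.1%}, savings: {savings:.1%}")
# uptime is roughly 23.8%, so the compute savings is roughly 76%
```

That's where the "up to 70%" figure comes from; real savings depend on your schedule & on the fact that storage (EBS) still bills while the instance is stopped.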

To Change the Schedule / Time / Weekdays
  1. Go to the DynamoDB service.
  2. Select the (radio button for the) table named *-ConfigTable-*.
  3. Select Items (right pane).
  4. Select the period item named office-hours.
  5. Change begintime & endtime as desired, then click Save.
You can play around by changing the various parameters in the ConfigTable. Feel free to share your observations in the comments.
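When editing begintime & endtime it's easy to save an invalid window. Here's a tiny sketch of the kind of check worth doing; the field names mirror the ConfigTable's period items, but the helper itself is mine, not part of the solution:

```python
from datetime import datetime

def valid_period(period):
    """Check that a period's begintime precedes its endtime (HH:MM strings)."""
    begin = datetime.strptime(period["begintime"], "%H:%M")
    end = datetime.strptime(period["endtime"], "%H:%M")
    return begin < end

# Shape of the default office-hours period item in the ConfigTable:
office_hours = {
    "name": "office-hours",
    "begintime": "09:00",
    "endtime": "17:00",
    "weekdays": "mon-fri",
}
print(valid_period(office_hours))  # True
```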

Happy Cloud Computing!

  

Tuesday, January 2, 2018

What & Why of "Serverless Computing" in Plain English

Guinea pigs are NOT pigs & they are definitely NOT from the country named Guinea (they are rodents from South America!). Fireflies are NOT flies (they are beetles!). Arabic numerals are NOT from the Arabian Peninsula (they originated in India!). And French fries are probably NOT from France! Even "Indians" originally did not refer to people from India: the term was first mistakenly applied to indigenous Americans (a historical blunder).

Similarly, Serverless doesn't mean absence of servers or no servers. So, is it simply one of the most popular tech-misnomers?

The term Serverless Computing was buzzing around throughout the year 2017. For the uninitiated amongst you, Serverless Computing is a gigantic shift in the way we approach Cloud Computing servers. As of now it's hot, trending & everyone in tech circles loves talking about it. However, whether it will make it into mainstream usage in 2018 or slide into oblivion, only time will tell.

To understand Serverless, we need a bit of the history of how servers have evolved & been used. Here we go.

👈Flashback

Hard Times - Until late 1990s 

I've heard that procuring a server in those days was a real pain in the neck. The server hardware, OS, networking, storage & provisioning had a lead time of up to years! There were only dedicated servers, & if you wanted to scale, the same long cycle had to be repeated. No wonder it could take years of budgeting & planning to set up servers. It was a luxury of sorts. As one Indian entrepreneur recalled about that era: "To import a computer, it took me three years and I went about 50 times to Delhi."

Virtual Days - 2003 

Virtualization was a natural evolution from dedicated servers. One physical machine could now support multiple operating systems: you could split a single physical server into multiple virtual machines, abstracting the hardware. On-premise virtualization led to simplified infrastructure management & reduced provisioning time & cost.

Cloud Magic - 2007 - Beginning of Cloud IaaS

IaaS was a new paradigm & it democratized server usage. Now you could programmatically access & provision servers, storage & networking within minutes via API calls: quickly provision your server & terminate it when you no longer need it. It empowered even small & mid-sized companies & independent developers to build & deploy their applications on state-of-the-art infrastructure without much planning &, most important, within a small budget.

This led to cost reduction, agility, reliability, scalability & elasticity (i.e. the ability to grow or shrink on the fly). Unlike the past, when most tissue-paper prototypes were forgotten due to the initial server cost (CapEx), many of them were now actually built & shipped to the market. Boom ... the beginning of the Startup era.

PaaS - 2008

PaaS offerings like Google App Engine, MS Azure & AWS Elastic Beanstalk are designed to support the complete web application lifecycle viz. building, testing, deploying, managing & updating. So PaaS 'almost' abstracted the IaaS & additionally provided the OS, development tools, BI tools etc. This had the potential to change the way we usually consumed infrastructure.

But unfortunately it didn't click with end users. PaaS is almost dead. Vendor lock-in (no portability) & choice of technology limited to what the vendor provided... more than enough reasons for its failure?

The pioneers of PaaS like Google & Microsoft no longer emphasize their respective flagship PaaS services & have subsequently evolved more into IaaS providers.

Containers - Write Once, Run Anywhere - 2013

When you ship software with Docker (or any container) you package everything that is needed to run it: code, runtime, libraries, tools & settings. In a nutshell, containerized software will always run the same, regardless of the environment.

To a great extent, containers (read: Docker) removed the major issue of PaaS, i.e. portability. Containers abstracted the IaaS layers of the various cloud providers. Write once, run anywhere... move them around from cloud to cloud. In a way this led to true independence between applications & infrastructure.

Scaling is provided by orchestration tools - Kubernetes, Docker Swarm, Mesos.

Note: Containers created on AWS ECS are vendor locked-in, i.e. they can't be run on other clouds as-is. It is recommended to use Amazon EKS (Amazon Elastic Container Service for Kubernetes) instead: no vendor lock-in. Also watch out for AWS Fargate in 2018; support for Amazon EKS will be available soon.

Serverless / FaaS - 2014


Serverless completely abstracts the underlying details of cloud infrastructure. The servers are still there; however, you no longer need to manage them. FaaS is the way to achieve Serverless.

Serverless / FaaS gained popularity after the AWS Lambda launch in 2014. Other popular FaaS offerings are Azure Functions & Google Cloud Functions. The key difference between PaaS & FaaS is the way scaling is handled. In PaaS you have to plan & define an auto-scaling strategy, whereas in FaaS you just don't care about how & what's happening behind the scenes, i.e. complete abstraction! More on this in an excellent article by Martin Fowler.

Basically, a Function (in FaaS) spins up compute in response to an event/trigger & cleans up the resources once the request/task is completed.
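The programming model is literally just a function that receives an event. Here's a minimal AWS Lambda-style handler in Python; it runs fine locally, & the event shape below is purely illustrative:

```python
def handler(event, context=None):
    """Minimal FaaS-style function: runs once per event, keeps no state
    between invocations (the platform may spin up or tear down the
    underlying compute between calls)."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Invoking it locally the way the platform would invoke it per event:
print(handler({"name": "serverless"}))
```

You upload this function, wire it to a trigger (an HTTP request, an S3 upload, a queue message) & the platform handles everything else: provisioning, scaling & teardown.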

Summarizing the above Milestones 



All of the above are still going strong & in mainstream usage. However, it now seems PaaS has lost a bit of its popularity.

Why Serverless?
  1. No servers to manage, i.e. zero administration.
  2. It's real pay-as-you-go. No need to pay for idle time.
  3. Cost-effective: sub-second billing.
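The pay-as-you-go point is easy to see with back-of-the-envelope numbers. The prices below are illustrative of the per-request plus per-GB-second model, not current list prices:

```python
# Illustrative FaaS cost model: pay per request plus per GB-second of
# compute actually used. Both prices below are assumptions for the sketch.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667  # USD, assumed

def monthly_cost(requests, avg_duration_s, memory_gb):
    """Approximate monthly bill: request charge + compute (GB-seconds) charge."""
    compute_gb_s = requests * avg_duration_s * memory_gb
    request_charge = (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return request_charge + compute_gb_s * PRICE_PER_GB_SECOND

# 1M requests/month, 200 ms each, 128 MB of memory:
print(f"${monthly_cost(1_000_000, 0.2, 0.125):.2f}")  # under a dollar
```

Compare that with paying for even one small server running 24x7: for a spiky, low-volume workload the difference is dramatic, because idle time costs nothing.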
Conclusion

Greenfield projects (i.e. fresh projects not constrained by prior work), Microservices/Nanoservices architectures & a few tasks from brownfield projects can be tried with this new approach to consuming compute resources. As of now, there are a few best-fit use-case patterns to begin with for Serverless/FaaS. My favorite is the use case where you require instant scaling for a brief amount of time. Auto-scaling can't serve this requirement: by the time it responds & spins up a new server, the brief spike phase is already over!

Despite all the attractions of Serverless right now, there are hiccups too, like the maturity of the tools & ecosystem around it. Writing stateless code can also be a challenge. Still, go ahead & start experimenting with it, as 2018 might see serverless going mainstream.