SERVERLESS COMPUTING
Serverless computing, also known as the
Function-as-a-Service (FaaS) model, has dramatically
risen in popularity since the introduction of the Lambda platform by AWS in 2014. Although not the first cloud
services provider to offer such a platform, the scale and
scope of the AWS offering meant that, for many, this was
when the serverless paradigm hit the big-time.
The “serverless” name does not, of course, mean that no compute resources underpin the services, but rather that no virtual machines are exposed to the consumer of the service. This means no provisioning of infrastructure and no ongoing management of compute resources. The execution of users’ functions is abstracted away from the machines on which it takes place, and as such serverless can be seen to offer a “pure” execution platform.
Serverless computing: An evolution of cloud computing
Serverless computing is an evolution of cloud computing service models, from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS) to Function-as-a-Service (FaaS). While IaaS abstracts the underlying infrastructure to provide virtual machines for ready consumption, and PaaS abstracts the entire operating system and middleware layer to provide an application development platform, FaaS goes one step further by abstracting the entire programming runtime, so that a piece of code can be readily deployed and executed without worrying about its deployment.
Leading cloud providers like Amazon, Microsoft, Google and IBM have launched serverless services in the last few years. Amazon’s service is called AWS Lambda (launched in 2014), while the respective services of Microsoft and Google are called Azure Functions (launched in 2015) and Google Cloud Functions (launched in 2016).
Evolution:
The path of digital transformation from monolithic to microservices to serverless (FaaS) architectures is driven by the need for greater agility and scalability. To keep up with the competition, organizations need to update their technology stacks quickly, eventually making software a differentiating factor.
Microservices architecture thus emerged as a key method of providing development teams with flexibility and other benefits, such as the ability to deliver applications rapidly using Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) environments.
The idea was to break monolithic applications into smaller services, each with its own business logic. With a monolithic architecture, a single faulty service can bring down the entire application server and all the services running on it.
In a microservices architecture, by contrast, each service runs in its own container, so application architects can develop, manage, and scale these services independently.
Microservices can be scaled and deployed separately and written in different programming languages. A key decision many organizations face when deploying a microservices architecture, however, is choosing between IaaS and PaaS environments.
A microservices setup involves source code management, a build server, a code repository, an image repository, a cluster manager, a container scheduler, dynamic service discovery, a software load balancer, and a cloud load balancer. It also needs a mature agile and DevOps team to support continuous delivery.
Advantages of Serverless Model:
This section discusses the advantages of serverless architecture compared to its architectural predecessors. Some of these advantages overlap with those of microservices architecture (MSA), as the two are close relatives.
- Choice of technology — You can pick and choose different languages for different functions as it suits.
- Choice of architecture — You might have decided to use serverless, but it doesn’t have to be serverless only. You may implement one set of services as serverless functions, another set as on-premises microservices, or keep using your legacy system.
- Faster turnaround and time to market — Compare changing an existing service to add functionality with adding the functionality as a completely new, independent function. The latter saves a lot of time: you avoid worrying about the whole service and get the function ready faster. You can ship a new feature within days, if not hours.
- Never pay for idle — You pay nothing when there is no traffic, and no cold servers are maintained. Instances can be spawned within milliseconds when traffic starts.
- Faster and low-cost scaling — Simple, small functions and an MSA-like architecture help scale services horizontally and quickly. Low boot times make capacity available within milliseconds and improve the platform’s availability numbers, so steep traffic spikes can be handled smoothly.
- Simplified team responsibilities — Different teams working on different parts of a complex application can mean complicated, time-consuming discussions and interactions. With clear borders defined by APIs, teams can breathe a little easier.
- Offloading infrastructure worry — Before serverless, engineers had to oversee every aspect of running a service online: sizing node resources according to capacity planning, setting up analytics, handling scaling, and more. Serverless providers usually bundle all of these parts of the ecosystem together.
- Saving the world — To keep a platform available with as many nines as possible, you may have to run servers in every region of the world and keep a resource margin to cover sudden traffic spikes. The world pays the price in wasted energy and resources that sit idle much of the time. Serverless utilizes resources efficiently and saves the energy otherwise wasted on idle servers.
Disadvantages of Serverless Model:
This architecture has many more advantages than disadvantages. However, several drawbacks can be identified when it is compared to its architectural ancestors.
- Less control — As the services are managed in a cloud belonging to a third-party provider, you depend on that provider. Moving to a new platform can be costly, and you might have to stick with the vendor despite higher prices or API changes. Workarounds exist, such as packaging your code in a container before deploying it to AWS Lambda.
- Architectural complexities — People might mix and match architectures, but they will get their share of complexity in that package. Handling different architectures and the different parties involved will not be a smooth ride.
- Not for long-running applications — Because functions are configured as short-lived, dynamic units, serverless may not suit your application if long-running operations are required.
- Privacy and multi-tenancy — Functions for several different customers may run on the same application server, and attackers could exploit any security holes in the system.
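Because platforms cap how long a single invocation may run, long-running jobs are usually decomposed into short, resumable steps. The sketch below illustrates that pattern; the function, batch size, and driver loop are all illustrative, and in a real deployment the driver would be replaced by the platform re-triggering the function (for example via a queue):

```python
# Sketch: decompose a long-running job into short, resumable invocations.
# Each "invocation" processes one batch and returns a cursor telling the
# next invocation where to resume.

BATCH_SIZE = 100  # illustrative; sized so one batch fits the platform's timeout

def handler(event):
    """Process one batch starting at event['cursor']; dict in, dict out."""
    items = event["items"]
    cursor = event.get("cursor", 0)
    batch = items[cursor:cursor + BATCH_SIZE]
    results = [x * 2 for x in batch]          # stand-in for real work
    next_cursor = cursor + len(batch)
    return {
        "processed": results,
        "cursor": next_cursor,
        "done": next_cursor >= len(items),
    }

def run_to_completion(items):
    """Driver standing in for the platform's re-invocation mechanism."""
    out, event = [], {"items": items, "cursor": 0}
    while True:
        resp = handler(event)
        out.extend(resp["processed"])
        if resp["done"]:
            return out
        event["cursor"] = resp["cursor"]
```

Each individual invocation stays short-lived, while the overall job can run as long as needed.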
What is new about Serverless?
Let us consider an example where a FCC website collapsed when
it was unable to handle comments about net neutrality. That is good example where
serverless could be making an immediate real difference - if the FCC used a serverless
platform that would have a better chance to handle the scale of traffic generated. Trying to
decide how many servers to deploy and then maintain their scaling is hard job and unless
substantial expertise is available in-house it is easy to make mistakes. This example brings
up the support of elasticity and cloud-bursting to reach larger capacity sites; scheduling
technology needs to be improved to support this.
What also makes serverless attractive is the cloud providers’ ecosystems of supporting middleware and artificial-intelligence services that integrate seamlessly with the serverless platform to enable natural language processing and image recognition, manage state, record and monitor logs, send alerts, trigger events, or perform authentication and authorization. The use of such services not only presents another revenue stream for the cloud provider, but also deepens an application’s dependency on the provider’s ecosystem, and hence vendor lock-in.
Characteristics:
There are a number of characteristics that help distinguish the various serverless platforms. Developers should be aware of these properties when choosing a platform.
• Cost: Typically the usage is metered and users pay only for the time and resources used when serverless functions are running. This ability to scale to zero instances is one of the key differentiators of a serverless platform. The resources that are metered, such as memory or CPU, and the pricing model, such as off-peak discounts, vary among providers.
• Performance and limits: There are a variety of limits set on the runtime resource requirements of serverless code, including the number of concurrent requests and the maximum memory and CPU resources available to a function invocation. Some limits may be increased when users’ needs grow, such as the concurrent request threshold, while others are inherent to the platforms, such as the maximum memory size.
• Programming languages: Serverless services support a wide variety of programming languages, including JavaScript, Java, Python, Go, C#, and Swift. Most platforms support more than one programming language. Some platforms also support extensibility mechanisms for code written in any language, as long as it is packaged in a Docker image that supports a well-defined API.
• Programming model: Currently, serverless platforms typically execute a single main function that takes a dictionary (such as a JSON object) as input and produces a dictionary as output.
• Composability: The platforms generally offer some way to invoke one serverless function from another, but some platforms provide higher level mechanisms for composing these functions and may make it easier to construct more complex serverless apps.
• Deployment: Platforms strive to make deployment as simple as possible. Typically, developers just need to provide a file with the function source code. Beyond that, many platforms allow code to be packaged as an archive with multiple files or as a Docker image containing binary code. Facilities to version or group functions are useful but rare.
• Security and accounting: Serverless platforms are multi-tenant and must isolate the execution of functions between users and provide detailed accounting so users understand how much they need to pay.
• Monitoring and debugging: Every platform supports basic debugging by using print statements that are recorded in the execution logs. Additional capabilities may be provided to help developers find bottlenecks, trace errors, and better understand the circumstances of function execution.
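The dictionary-in, dictionary-out programming model described above can be sketched as a minimal handler. This follows the general shape of AWS Lambda’s Python handler signature; the field names in the event are purely illustrative:

```python
import json

def handler(event, context=None):
    # Dict in, dict out: the platform deserializes the incoming JSON request
    # into `event` and serializes the returned dict as the response.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```

The platform invokes this function per request; the developer never sees the server that runs it.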
Security Aspects:
Using serverless computing, the developer is not responsible for ensuring the security of servers or VMs, and does not need to maintain security patches, since the provider does this. This, of course, requires the user to trust the provider to maintain and secure its infrastructure and keep it up to date.
Most providers enable restricting access to functions through permissions such as users, roles, and policies. Each of these can be assigned a key or other security credentials so that they can be properly distinguished. This kind of access management allows for a fine-grained access policy, where each function can be assigned its own security entity: one function might be configured to accept 10,000 requests per second, for example, while another responds to only 5,000. Furthermore, such access management is usually not limited to function invocation; it is also possible to assign specific rights to certain users, such as editing function code or accessing other services of that provider. A gateway can be allowed to invoke functions, although it is more secure to explicitly limit this access to specific functions rather than all of them. It is much easier for the developer to establish security permissions at the gateway level, however, even though setting access restrictions at the function level would be more secure.
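The per-function access policies described above can be modeled as a simple lookup: each function carries its own set of allowed roles rather than one blanket rule at the gateway. This is a toy sketch, not any provider’s IAM API, and the function and role names are invented for illustration:

```python
# Sketch of fine-grained, per-function access control. Each function has its
# own policy entry; a role may invoke a function only if it appears there.
POLICIES = {
    "create_order": {"roles": {"api-gateway", "admin"}},
    "delete_order": {"roles": {"admin"}},  # tighter scope than the gateway has
}

def can_invoke(role, function_name):
    """Return True iff `role` is allowed to invoke `function_name`."""
    policy = POLICIES.get(function_name)
    return policy is not None and role in policy["roles"]
```

Denying by default when a function has no policy entry mirrors the "explicitly limit access" advice above.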
[Figure: AWS serverless web app security]
All the providers introduced below apply the pay-per-use model for serverless functions, which avoids paying for idle resources.
1. AWS Lambda:
Within the product range of AWS, Amazon offers a serverless compute service called AWS Lambda. Billed duration is calculated from the time code execution begins until it returns or otherwise terminates, rounded up to the nearest 100 ms. As with EC2, the user is charged for data transfer related to the execution of the function, at the same rate as EC2 data transfer. The costs for transferring data are also shown in table 3.2, where inbound transfer depends on the availability zones and AWS region, and outgoing data transfer is priced in tiers that depend on the total number of bytes transferred out of AWS. Amazon offers a free tier which provides 1,000,000 invocations and 400,000 GB-s of execution time per month free of charge. Note that this does not cover data transfer from or to other AWS services.
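The pricing model described above (100 ms rounding, GB-second metering, free tier) can be sketched as a small cost calculator. The free-tier numbers come from the text; the per-unit prices are assumed placeholders, so check the current AWS price list before relying on them:

```python
import math

# Assumed, illustrative per-unit prices (not quoted from the text):
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
PRICE_PER_GB_S    = 0.0000166667       # USD per GB-second
# Free tier per the text:
FREE_REQUESTS = 1_000_000
FREE_GB_S     = 400_000

def monthly_cost(invocations, memory_mb, avg_duration_ms):
    """Estimate monthly Lambda cost under the model described in the text."""
    # Duration is billed rounded up to the nearest 100 ms.
    billed_ms = math.ceil(avg_duration_ms / 100) * 100
    gb_seconds = invocations * (memory_mb / 1024) * (billed_ms / 1000)
    req_cost = max(invocations - FREE_REQUESTS, 0) * PRICE_PER_REQUEST
    compute_cost = max(gb_seconds - FREE_GB_S, 0) * PRICE_PER_GB_S
    return round(req_cost + compute_cost, 2)
```

For example, a 512 MB function averaging 120 ms is billed as 200 ms per invocation, and a modest workload can stay entirely inside the free tier.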
2. Microsoft Azure Functions:
Following the trend of serverless computing, Microsoft introduced its serverless offering, Azure Functions, which is also realized as an event-driven FaaS approach. Naturally, it enables the user to trigger functions in response to events within the Microsoft Azure environment. Furthermore, functions can be triggered on a specific schedule based on timers, or upon events from a webhook. The Functions runtime is open source and made available on GitHub.
Similar to AWS Lambda, Microsoft Azure Functions provides 1 million requests and 400,000 gigabyte-seconds of compute time free each month. Beyond that, customers are charged for what they actually use, following the pay-per-use model.
3. Google Cloud Functions:
As part of its Cloud Platform, Google offers Cloud Functions as an event-driven serverless compute service, although this offering is still in beta [Goo18]. Unlike Microsoft or Amazon, Google Cloud Functions supports GitHub or Bitbucket as a source repository for deploying functions.
Unlike the previous providers, Google Cloud Functions is not billed only for allocated memory and the number of invocations: the customer is additionally charged for CPU usage in GHz-seconds. In contrast to the allocated memory, CPU usage is not fixed for a specific function; the actual allocation of CPU clock cycles may vary across invocations.
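The two-dimensional compute billing described above (memory in GB-seconds plus CPU in GHz-seconds) can be sketched as follows. Both per-unit rates are assumed placeholders for illustration, not quoted Google prices:

```python
# Assumed, illustrative rates (check Google's price list for real values):
PRICE_PER_GB_S  = 0.0000025   # USD per GB-second of allocated memory
PRICE_PER_GHZ_S = 0.0000100   # USD per GHz-second of CPU

def compute_cost(invocations, memory_gb, cpu_ghz, duration_s):
    """Estimate compute cost billed in both GB-seconds and GHz-seconds."""
    gb_seconds  = invocations * memory_gb * duration_s
    ghz_seconds = invocations * cpu_ghz * duration_s
    return gb_seconds * PRICE_PER_GB_S + ghz_seconds * PRICE_PER_GHZ_S
```

Because the CPU allocation can vary per invocation, the `cpu_ghz` term would in practice be an average rather than a fixed figure.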
Conclusion:
Serverless platforms today are useful for important (but not five-nines mission-critical) tasks where high throughput is key rather than very low latency, and where individual requests can be completed in a relatively short time window. The economics of hosting such tasks in a serverless environment make it a compelling way to reduce hosting costs significantly and to speed up time to market for delivery of new features.