Journey to a serverless architecture

    What is serverless architecture?

    AWS defines serverless architecture as “building and running applications without thinking about servers”. Developers can build and deploy their applications without worrying about the backend infrastructure. They can concentrate on the functionality they want to achieve through their code, and the rest is taken care of by the service provider.

    Here is an example to understand serverless architecture better:

    In a village, you own a mill. You choose your mill machine, reserve a space for it, pay the electricity bill, pay the operator who runs it, and pay technicians whenever the machine breaks down. Service stops whenever there is a power cut. The mill is probably not running at night, yet you have reserved space for the whole setup.

    What if, instead, you take your grains to someone who owns a mill whenever there is a harvest, and he grinds them for you? You pay only a service charge for the grinding. If there is a power cut or a machine breakdown, it is not your headache; you can simply go to another miller.

    Think of the second case as the serverless stack. The mills here are the servers: they are still involved, but you do not manage them.

    Things are pretty similar for computers, though not exactly, as these machines are much more complex than a mill. To understand this better, we have to look at how traditional infrastructure was set up.

    Backend systems before serverless computing took over:

    Initially, organisations had to buy physical computers and move them to their respective data centres, where IT administrators installed the software before the machines could be used as servers. This physical hardware carried ongoing maintenance costs. With more and more people using the internet, negligible downtime became essential, and systems had to be upgraded every few years as the old machines were no longer efficient enough. The advent of cloud providers came as a great relief: most organisations were freed from the baggage of infrastructure management and started migrating their infra to the cloud. Instead of setting up servers on-premise, dedicated servers could now be purchased with dedicated memory, CPU and storage. With this, the investment organisations had to make in data centres and physical servers was drastically reduced. Furthermore, servers could now be purchased and turned on or off when needed with a single click.

    Drawbacks of cloud-based server-centric approach:

    Though dedicated servers are still predominant in the industry, some drawbacks are associated with this approach.

    Setting up and managing the environment:
    Setting up infrastructure on dedicated servers involves logging in via SSH and manually installing and configuring all the required software, a process that is hard to make automated and reproducible.

    Security:
    With dedicated servers, the security layer has to be handled at the instance level. This layer is essential, as it decides and controls the traffic allowed to communicate with each instance.
    Creating and managing policies with the correct permissions is quite tiresome and often works on a trial-and-error basis. This becomes even more difficult with a growing team. In addition, handling the licenses for every specific business need means changing policies often, and with every configuration change, things have to be adequately tested before being replicated onto production environments.

    Scalability:
    People used to working on dedicated servers will admit that they constantly have to monitor performance and tweak their servers to keep them running well.
    For example, an application hosted on a server designed to cater to 200 users at any given time won’t run smoothly if the user base suddenly jumps to 2,000. Some cloud providers have developed solutions like auto-scaling groups, where IT engineers have to choose minimum, desired and maximum capacity; selecting the correct numbers can be challenging if user loads are not predictable. Also, since there is freedom to host any service, scaling services horizontally can be difficult if they are not properly architected, sometimes requiring a complete rewrite of the services.
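
    To make the capacity-planning problem concrete, here is a minimal sketch, assuming an AWS EC2 Auto Scaling group managed with boto3; the group name "web-app-asg" and the chosen numbers are purely hypothetical. Picking these values correctly is exactly the guesswork described above.

    ```python
    # Hedged sketch: tuning an auto-scaling group's capacity with boto3.
    # The group name and the numbers are hypothetical placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-app-asg",  # hypothetical group name
        MinSize=2,          # at least 2 instances always running (and billed)
        DesiredCapacity=4,  # the "normal" load we expect
        MaxSize=10,         # hard ceiling, even if traffic keeps growing
    )
    ```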

    Inefficient usage:
    Even if we use auto-scaling groups, a minimum of one server instance has to be active at all times. This means that even if there is not a single request throughout the day, you are still billed for that idle resource. Providers also set default limits on the maximum number of instances that can be added to a group. And for running even a small application, an entire OS has to be loaded and a disk mounted.

    Before we jump into how serverless architecture addresses these problems, we have to understand some of the core components of the stack.

    The serverless stack can be broadly divided into two components: FaaS and BaaS.

    FaaS: Function as a Service (FaaS) is a cloud computing service category that allows customers to develop, run and manage their code without thinking about the backend infrastructure. FaaS is event-driven; by events, we mean API calls, the addition or deletion of files in a BLOB store, and so on. Service providers give a user interface and SDKs to define which function is executed on which event.
    For example, let's assume an e-commerce aggregator whose website is visited by numerous users looking for products to buy. To present them with the best deals, the aggregator queries thousands of e-commerce websites on the fly and returns the best offers available. Here, it is just searching, sorting and returning results, and the results are not needed again for another user. This is a good fit for FaaS.
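
    A minimal sketch of what such a function could look like, written in the AWS Lambda handler style: fetch_prices() is a hypothetical helper standing in for the calls to partner sites, and the event is assumed to carry a "query" field.

    ```python
    # Hedged FaaS sketch: compute per-request results, persist nothing.
    import json

    def fetch_prices(query):
        # Placeholder: a real function would query partner APIs concurrently.
        return [
            {"site": "shop-a.example", "price": 499},
            {"site": "shop-b.example", "price": 459},
        ]

    def handler(event, context):
        query = event.get("query", "")
        deals = sorted(fetch_prices(query), key=lambda d: d["price"])
        # Results are returned to this user only; nothing is stored.
        return {"statusCode": 200, "body": json.dumps(deals)}
    ```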

    BaaS: Backend as a Service (BaaS), on the other hand, is a cloud service model in which developers are provided with ready-made backend services so that they can focus more on their frontend. Taking the above example, if we want to register those users, we need an OTP service to trigger OTPs, a database to store their details and probably some BLOB store to capture profile pictures. The BaaS family is pretty big and mainly comprises the following services.

    [Image: overview of common BaaS services]
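
    As a rough illustration of consuming such ready-made services, here is a sketch of the registration step above, assuming AWS managed services accessed via boto3; the table name, bucket name and field names are hypothetical.

    ```python
    # Hedged sketch: use a managed NoSQL table and a managed BLOB store
    # instead of self-hosted database and file servers. Names are placeholders.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    s3 = boto3.client("s3")

    def register_user(user_id, details, profile_picture_bytes):
        # Managed database: no DB server to install, patch or scale.
        dynamodb.Table("users").put_item(Item={"user_id": user_id, **details})
        # Managed BLOB store for the profile picture.
        s3.put_object(
            Bucket="profile-pictures-bucket",
            Key=f"{user_id}.jpg",
            Body=profile_picture_bytes,
        )
    ```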

    Most of the time, both these services are used together to build a complete end-to-end solution. Every organisation wants to retain its customers and keep them engaged with its platform. Processing files, storing them in a database, running analytics, generating reports, triggering campaigns and a lot more can be achieved with this powerful combo.

    The diagram below shows an overview of the infrastructure.

    [Diagram: overview of a serverless infrastructure combining FaaS and BaaS]

    Advantages of serverless architecture:

    It’s time to understand how these ready-made services help bring ideas to market quickly and save us from the hassle of managing anything other than our product.

    Setting up and managing the environment: This step is not needed here. Service providers usually offer consoles or SDKs for uploading a piece of code, which is then provisioned and deployed onto their servers. Other BaaS services like databases and BLOB storage can be seamlessly accessed via an authentication mechanism.
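
    For instance, a deployment can be as small as the following sketch, assuming AWS Lambda via boto3; the function name and zip path are hypothetical.

    ```python
    # Hedged sketch: pushing a new code package through the provider's SDK.
    # Function name and package path are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    with open("build/my-function.zip", "rb") as package:
        lambda_client.update_function_code(
            FunctionName="my-function",
            ZipFile=package.read(),
        )
    ```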

    Security: Unlike traditional approaches, the network configurations and security layer are inherently managed by the service providers. These configurations are well tested and not exposed to the end user, and any security flaws are patched without us even noticing.

    High level of granularity: With serverless architecture, applications are broken down into smaller pieces. This makes it easier to fix bugs and enables faster code deployments without downtime.

    With serverless, the organisation is not involved in the management of servers or databases, so it can save on the considerable investment it previously had to make in internal infrastructure administration.

    Scalability: Scalability is one of the most attractive features of the serverless stack. It scales dynamically with increased traffic by increasing the number of concurrently executing functions. This is possible because the units of work are smaller and can be loaded into any free executor. In contrast, this was difficult earlier, as the whole application had to be loaded into memory.

    Efficient usage: You pay only for the computing time you use, so you are not charged a penny if there was zero usage in a month.

    Candidates for migrating services into serverless architecture

    Though serverless seems very appealing, not all applications can run on a serverless architecture. Here are some parameters that will help us select which applications fit well on serverless.

    Short-running workloads: Functions that are small and expected to finish quickly can be deployed on serverless architecture. For example, an image processing application is a good fit: more computational power can be allocated for faster execution, and there is no need for a separate image processing server.
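
    A hedged sketch of such a short-running workload, assuming an AWS Lambda function triggered by an S3 upload, with Pillow bundled in the deployment package; the bucket names are hypothetical.

    ```python
    # Hedged sketch: thumbnail an image dropped into a BLOB store.
    # Assumes Pillow is packaged with the function; the event fields follow
    # the S3 notification format; bucket names are placeholders.
    import io
    import boto3
    from PIL import Image

    s3 = boto3.client("s3")

    def handler(event, context):
        record = event["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = record["object"]["key"]

        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original)).convert("RGB")
        image.thumbnail((256, 256))  # resize in place, keeping aspect ratio

        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        s3.put_object(Bucket="thumbnails-bucket", Key=key, Body=buffer.getvalue())
    ```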

    REST APIs: Basic CRUD applications can be built serverless. We can’t run a traditional web server on the serverless stack, but we can create API endpoints and map them to functions. These functions are triggered when users hit the endpoints, and we can also map the responses these functions return.
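
    A minimal sketch of a function wired behind such an endpoint, assuming an API Gateway proxy-style event; the in-memory data is a stand-in for a real BaaS table.

    ```python
    # Hedged sketch: a CRUD-style function mapped to an API endpoint.
    import json

    ITEMS = {"1": {"id": "1", "name": "sample item"}}  # stand-in data store

    def handler(event, context):
        method = event.get("httpMethod", "GET")
        item_id = (event.get("pathParameters") or {}).get("id")

        if method == "GET" and item_id in ITEMS:
            return {"statusCode": 200, "body": json.dumps(ITEMS[item_id])}
        if method == "GET":
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
        return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
    ```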

    Periodic triggers: Schedulers or CRON jobs can be run without a dedicated server. These schedulers are accurate and trigger functions at reliable intervals or at a specific time. An organisation that needs to generate a daily report at a particular time at night is a good example, as is running periodic health checks on servers.
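
    A sketch of the nightly-report case: the schedule itself lives in the provider's scheduler (for example a cron expression such as cron(0 2 * * ? *) on AWS EventBridge), and the function only runs the job. generate_daily_report() is a hypothetical helper.

    ```python
    # Hedged sketch: a function attached to a scheduled rule.
    import datetime

    def generate_daily_report(for_date):
        # Placeholder for the real reporting logic (queries, aggregation, export).
        return f"report for {for_date.isoformat()}"

    def handler(event, context):
        yesterday = datetime.date.today() - datetime.timedelta(days=1)
        report = generate_daily_report(yesterday)
        print(report)  # in practice this would be stored or emailed
        return {"status": "ok"}
    ```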

    Communication: Cloud providers offer notification services that can deliver event-based notifications to other applications, or messages to users via SMS, email and mobile push notifications. Applications that maintain their own notification servers can do away with them and use these services instead.
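
    A brief sketch of delegating delivery to such a service, assuming AWS SNS via boto3; the phone number and topic ARN are placeholders.

    ```python
    # Hedged sketch: sending messages through a managed notification service.
    import boto3

    sns = boto3.client("sns")

    def notify_user(phone_number, message):
        # Direct SMS to a single user.
        sns.publish(PhoneNumber=phone_number, Message=message)

    def broadcast(message):
        # Fan-out to every subscriber (email, SMS, mobile push) of a topic.
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:example-topic",
            Message=message,
        )
    ```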

    Storage: Storage services allow developers to store documents on demand. Developers can therefore tweak their applications to store documents using these services rather than keeping files on a dedicated server.
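
    For example, documents can be written to managed object storage and shared through time-limited links, so the application never serves files from its own disk. This sketch assumes AWS S3 via boto3; the bucket name is hypothetical.

    ```python
    # Hedged sketch: documents in managed object storage instead of local disk.
    import boto3

    s3 = boto3.client("s3")

    def save_document(key, data):
        s3.put_object(Bucket="documents-bucket", Key=key, Body=data)

    def share_document(key, expires_seconds=3600):
        # Time-limited download link generated by the storage service itself.
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "documents-bucket", "Key": key},
            ExpiresIn=expires_seconds,
        )
    ```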

    Drawbacks of serverless architecture:

    Long-running functions: Running long operations can be more costly on serverless. Cloud providers also impose a timeout on executing functions, forcing developers to write code that completes within a limited time window. With dedicated servers, developers have the flexibility to let tasks run as long as they need.
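
    One common workaround, sketched below, is to have the function watch its own remaining time (AWS Lambda exposes this on the context object) and stop cleanly before the hard timeout; process_item() and the progress-reporting convention are hypothetical.

    ```python
    # Hedged sketch: working within a provider-imposed timeout.
    def process_item(item):
        ...  # placeholder for real work

    def handler(event, context):
        items = event.get("items", [])
        processed = 0
        for item in items:
            # Leave a safety margin before the timeout kills the function.
            if context.get_remaining_time_in_millis() < 5_000:
                break
            process_item(item)
            processed += 1
        # Return progress so a retry or workflow step can pick up the remainder.
        return {"processed": processed, "remaining": len(items) - processed}
    ```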

    Dependencies: Today, we all depend on external libraries, but external libraries increase the package size, and cloud providers sometimes impose restrictions on this. Often we also want to communicate with a separate process and delegate work to it; this is not possible on FaaS but is easily achievable on a dedicated server.

    Platform lock-in: With serverless, we are more reliant on the provider. As mentioned previously, cloud providers supply SDKs to use these services, so it can be difficult to port from one provider to another if needed; things might have to be rewritten to run the application on the new provider. Also, not all service providers offer the same set of BaaS services. Choosing a programming language can also be problematic, because different providers support different application runtimes. Though most providers widely support the common programming languages, we never know when that support may be removed. With dedicated servers, developers are free to install any runtime of their choice, and if an organisation decides to move to another provider, the setup can be redone quickly and the application server can be up and running in a couple of days.

    Cold start: This is one of the most significant issues and arises when a function has not been invoked for some time. The delay occurs at the cloud provider’s end, as provisioning a runtime container takes time before the function can be executed. It can be avoided by periodically sending requests to keep the function warm, but this comes at the cost of increased bills.
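
    The keep-warm idea is simple enough to sketch: a scheduled rule invokes the function with a ping payload, and the handler short-circuits on it so the container stays provisioned. The "warmup" field here is our own convention, not a provider standard.

    ```python
    # Hedged sketch of the keep-warm pattern described above.
    def handler(event, context):
        if event.get("warmup"):
            return {"warmed": True}  # skip real work for keep-warm pings

        # ... normal request handling goes here ...
        return {"statusCode": 200, "body": "handled real request"}
    ```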

    Debugging: Debugging the code is complex, as replicating the serverless environment is difficult without visibility into the backend architecture, which makes it hard to assess how code will perform once deployed. Also, since the application is broken into many small functions, locating the right piece of code to debug can take time.


    Every architecture comes with its own set of pros and cons, and it’s an engineer’s job to select the right set of tools to make work easier, manageable and cost-effective. If we think of mills as servers, it would be incorrect to say that owning a mill is a bad idea; however, owning a fixed resource comes with the added responsibility of managing and maintaining it. It is probably a good idea for small farmers to go to a miller, whereas giants specialising in this chain would set up their own factories.

    Similarly, giants do outsource some of their workloads to smaller factories and just handle the packaging and distribution to the market. The same goes for serverless adoption: choosing the right set of applications that fit easily into this stack will help the product grow at a tremendous rate.

    About Neebal:

    Neebal, a technology solutions provider, has delivered top of the line solutions across Agro, Pharma, and BFSI verticals. Neebal aims to provide top tier services for API Integration, RPA, and advanced mobility with prime focus on Hyperautomation. Founded in 2010, Neebal is a proud recipient of the Deloitte Technology Fast 500 Award (APAC) and the Deloitte Fast 50 Award (India) for four consecutive years (2017-20).
