One of the questions I get asked most often is how to deploy a web application to production. In this three-part tutorial, we are going to take a look at one of the newer deployment options: Serverless Functions.

What are serverless functions?

Serverless functions are one of the newer ways to deploy cloud-based applications. Before we get into serverless functions, let us take a moment to understand traditional deployment methods.

Traditionally, deploying an app to production involves getting access to some part of a web server. That might mean buying and building your own hardware to put in a data center, or getting a virtual machine on hardware owned by a hosting provider. Either way, you now have a machine onto which you can deploy your app.

This model works pretty well, but it has two main downsides:

  1. The first downside is maintenance. You have to keep the machine up to date: applying the latest security patches, configuring firewalls, updating dependencies, and so on. This can quickly become time-consuming.
  2. The second downside is pricing. Once you have the machine, you pay for it for the full duration it is running, irrespective of whether it is serving traffic or lying idle.

So let's say you have a hobby project that only gets traffic for a few hours a day. Wouldn't it be nice to pay only for those hours instead of paying for the whole month?

Conversely, some apps see spikes and variations in traffic. An app that generates a monthly report is likely to see a spike in traffic on the days the report is released. The app might need more than one server to maintain performance during such spikes, but those extra servers will lie idle and incur cost for the rest of the month.

This is where the serverless functions deployment model comes in.

The name is a little misleading - serverless functions still need a server to run the application. The difference is that the deployment platform finds a server in its cluster with some spare resources and decides where to deploy the app. We don't need to own a server (real or virtual) to run the app - hence the name serverless.

Serverless solves the first downside of the traditional deployment model - we no longer need to worry about maintaining any server or doing any of the patching, upgrading and general sysadmin maintenance tasks.

What about the second downside - incurring cost while the app is idle?

That is the "functions" bit of serverless functions. You see, in the serverless functions model, the app is only deployed and running as long as requests are coming in for the app. When the app starts to idle, it is shut down and uninstalled. The next time a request comes in for the app, it gets re-deployed and it will handle the request. So your app behaves less like a traditional persistent server app, and more like a function that gets called, does some processing and then quits.

Consequently, we get billed only for the time the app is actually running. If we get a burst of traffic for an hour and then nothing for the next few hours, we only pay for that one hour. Not only does this make it much more economical for hobby projects that don't get a whole lot of traffic 24x7, but also for production apps with highly variable traffic patterns. When traffic spikes, the platform deploys multiple instances of the app only for the duration of the surge. You will pay extra during that time, but at least you don't need to keep multiple servers running for the whole month just to handle surges that happen now and then.
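As a rough back-of-the-envelope illustration (the rates below are entirely made up for the sake of the arithmetic and are not taken from any real provider):

```python
# Hypothetical, made-up rates - purely to illustrate the billing difference.
VM_HOURLY_RATE = 0.05          # $ per hour for an always-on virtual machine
SERVERLESS_HOURLY_RATE = 0.05  # $ per hour of actual execution time

hours_in_month = 30 * 24       # 720 hours
busy_hours = 40                # hours the app actually served traffic

vm_cost = hours_in_month * VM_HOURLY_RATE              # pay for every hour
serverless_cost = busy_hours * SERVERLESS_HOURLY_RATE  # pay only for busy hours

print(f"Always-on VM:         ${vm_cost:.2f}")          # $36.00
print(f"Serverless functions: ${serverless_cost:.2f}")  # $2.00
```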

There are some downsides too:

  1. If the app is actually running and serving traffic throughout the day for the whole month, the serverless functions model can end up being more expensive. In that case it is better to switch to a traditional deployment model.
  2. Because the app gets re-deployed from scratch to handle new requests after an idle period (a so-called cold start), the deployment process and app startup need to be fast - really fast, ideally not more than a second - otherwise performance will suffer.
  3. Finally, there are challenges when you need persistent connections - to a database, for example. Traditionally we create a connection pool on app startup and it persists for the lifetime of the server. In a model where the app is constantly shut down and redeployed, you end up constantly connecting to and disconnecting from the database. New database services are emerging to manage these scenarios - more on those in a future article. A common mitigation is sketched just after this list.
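That mitigation is to create expensive resources outside the handler function itself, so they are set up once on a cold start and then reused for as long as the instance stays warm. A minimal sketch, using SQLite from the standard library as a stand-in for whatever database you actually connect to:

```python
import sqlite3

# Module-level cache: created once on a cold start, then reused for every
# request the warm instance handles, instead of reconnecting each time.
_db_connection = None

def get_db_connection():
    global _db_connection
    if _db_connection is None:
        # Only pay the connection cost on a cold start. In a real app this
        # would be your actual database client, not an in-memory SQLite DB.
        _db_connection = sqlite3.connect(":memory:")
    return _db_connection

def handler(event, context):
    db = get_db_connection()
    (answer,) = db.execute("SELECT 1 + 1").fetchone()
    return {"statusCode": 200, "body": str(answer)}
```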

Now that we know what serverless functions are, in the next article we will walk through deploying an app to a serverless functions platform.

Did you like this article?

If you liked this article, consider subscribing to this site. Subscribing is free.

Why subscribe? Here are three reasons:

  1. You will get every new article as an email in your inbox, so you never miss one
  2. You will be able to comment on all the posts, ask questions, and so on
  3. Once in a while, I will post conference talk slides, longer-form articles (such as this one), and other content as subscriber-only