Going Serverless With Kubeless

Developers want to see their code running, so the easier it is to deploy, the better. The cloud lets you deploy your code without dealing with physical hardware, only virtual servers. The next step is not having to worry about the server at all and having an easy way to deploy your code in just one click. That's the starting point for serverless solutions.

In this article we’re going to show you how you can deploy your code in a serverless environment using Kubernetes as a backend, saving yourself a lot of time and resources when developing custom applications.

We’ll also show you how to set up some of the tools developed by Bitnami that make it simpler to deploy serverless code. These are:

  • Kubeless as the serverless backend for your code.
  • The Serverless framework and the Kubeless plugin as the CLI tool for deploying your code.
  • VSCode, a code editor with a Kubeless plugin that makes it even easier to develop your serverless functions.

Let's start with Kubeless

Serverless and Kubeless

Kubeless is a Bitnami serverless solution that allows you to run your code using Kubernetes as the backend. It provides a native way of storing your code using CustomResources and exposing it as a scalable Deployment, among other features.
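To give you an idea of what "storing your code using CustomResources" looks like, here is an illustrative sketch of a Function custom resource. The exact apiVersion and field names depend on your Kubeless version, so treat this as a rough shape rather than the authoritative schema (which lives in the official repository):

```yaml
# Illustrative sketch of a Kubeless Function custom resource.
# Field names and apiVersion are version-dependent; check the
# official Kubeless repository for the exact schema.
apiVersion: k8s.io/v1
kind: Function
metadata:
  name: hello
spec:
  handler: definition.hello   # file.exportedFunction
  runtime: nodejs6
  function: |                 # the code itself is stored inline in the resource
    module.exports = {
      hello: (req, res) => {
        res.end('Hello world!');
      },
    };
```

The point to notice is that the function source is stored inline in the resource, which is what lets the Controller build a Deployment out of it.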

In this article, we assume that you have a Kubernetes cluster set up and working. Not the case? Don't worry: you can set up a Kubernetes cluster locally using Minikube in a matter of minutes.

To get started with Kubeless, you’ll need to install two pieces of software — the Kubeless Controller and a CLI tool to contact the Controller from your local environment:

  1. The Kubeless Controller runs in your Kubernetes cluster and is responsible for allocating (and cleaning up) the resources you need to deploy the requested functions.
  2. The job of the CLI tool is to contact the Kubernetes API in order to upload the function content and details. The Kubeless project provides a native CLI tool to work with the Controller but we’re going to work instead with an alternative CLI that uses the Serverless framework. We’ll elaborate on that in the next section.

These are the basic commands you need to run to install Kubeless quickly:

$ export RELEASE=v0.2.1
$ kubectl create ns kubeless
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml

To find out more about installing and setting up Kubeless, you can check out the instructions in the official GitHub repository.

And that's it, you’re ready to use Kubeless! You just need to make sure the Kubeless Controller is ready before continuing:

$ kubectl get pods -n kubeless
NAME                                   READY     STATUS    RESTARTS   AGE
kafka-0                                1/1       Running   0          3m
kubeless-controller-1046320385-1wmdm   1/1       Running   0          3m
zoo-0                                  1/1       Running   0          3m

The Serverless Framework

Serverless is the most widely used CLI tool (+19K stars on GitHub) available to work as the gateway between developer code and backend infrastructure. It supports the major serverless platforms, including AWS Lambda, Azure, IBM OpenWhisk, Google Cloud and now Kubeless. One of the benefits of using Serverless is that it abstracts the backend platform, so you can easily switch between providers.

So let’s walk through the process of installing, configuring, and deploying your code using Serverless in your cluster.

Installation

The tool is written in Node.js, so you need node and npm on your system in order to use it. Once you have them, the installation is pretty simple:

$ npm install -g serverless
$ serverless help

Function code

Now that we have all the elements required to deploy our function, let's write some code. For this example, we’re going to deploy a simple Node.js function. If you’re not too familiar with Node.js, or you simply prefer another language, you can check the examples available in the plugin repository.

Go to your workspace and write the following file as definition.js:

module.exports = {
  hello: (req, res) => {
    res.end(`Hello ${res.body.user}`);
  },
};

Function metadata

Along with the code, we need to write some metadata. Serverless uses this metadata to know what to deploy and how to do it. It’s also very simple. The file should be named serverless.yml:

service: new-project
provider:
  name: kubeless
  runtime: nodejs6

plugins:
  - serverless-kubeless

functions:
  hello:
    handler: definition.hello

In a nutshell, we’re telling Serverless that we want to deploy a new service called new-project using Kubeless. Specifically, we’ll deploy a function named hello that is defined in the file definition.js and exported as hello. You can read more about the Serverless metadata and its possibilities here.
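To make the handler string concrete: definition.hello means "the hello export of the definition module". A simplified sketch of how a runtime could resolve such a string (this is illustrative, not the actual Kubeless runtime code):

```javascript
// Simplified sketch: resolving a "file.export" handler string.
// The real Kubeless runtime does the equivalent when it loads your code.
const handlerId = 'definition.hello';
const [moduleName, exportName] = handlerId.split('.');

// In a runtime this would then become: require(`./${moduleName}`)[exportName]
console.log(moduleName); // "definition"
console.log(exportName); // "hello"
```
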

Notice that in the serverless.yml file we are also setting, in the “plugins” section, the value serverless-kubeless. This is how easy it is to extend the Serverless code base to use different platforms, which lets you switch your function backend and use another provider if you want.

To use plugins, you need to install them locally:

$ npm install serverless-kubeless

That’s everything we need, so let’s work some magic!

Deployment

$ sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Deploying function hello...
Serverless: Function hello successfully deployed
NOTE: sls is an alias for the serverless command

Our function is alive! Now it’s time to invoke it. Serverless provides a command for doing just that:

$ sls invoke -f hello
Serverless: Calling function: hello...

  Error --------------------------------------------------

  Internal Server Error
       For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
  ...

Apparently, something went wrong. The error description advises us to run the same command again after setting the SLS_DEBUG=* environment variable, but that will only help if the error is on the client side, and this error seems to be related to the server side. Luckily, we can retrieve the logs of the function as easily as we called it:

$ sls logs -f hello
Loading /kubeless/handler.js
...
::ffff:172.17.0.1 - - [14/Sep/2017:15:01:03 +0000] "GET / HTTP/1.1" 500 21 "-" "-"
Function failed to execute: TypeError: Cannot read property 'user' of undefined
    at Object.hello (/kubeless/..9989_14_09_14_56_06.685614654/handler.js:3:30)
    at app.all.err (/kubeless.js:38:39)
...

That’s it. If we go back to our function, there is a bug in the handler. We are trying to access res.body.user, but the body object is defined on the request (req). Let’s fix the function and redeploy it:

module.exports = {
  hello: (req, res) => {
    res.end(`Hello ${req.body.user}`);
  },
};
$ sls deploy function -f hello
Serverless: Redeploying hello...
Serverless: Function hello successfully deployed

Our function should be ready now, so let’s try again:

$ sls invoke -f hello --log --data '{"user": "Andres"}'
Serverless: Calling function: hello...
--------------------------------------------------------------------
Hello Andres

Much better! Note that we are using the flag --data to specify the JSON object that will be sent and --log to print the function output.
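Bugs like this req/res mix-up can often be caught before deploying, because the handler is just a plain exported function. Here is a minimal local-testing sketch; the stub req and res objects below are hypothetical stand-ins for the Express-style objects the Kubeless runtime passes in:

```javascript
// Minimal local test for the handler, no cluster required.
// The handler is inlined here; in your project you would
// require('./definition.js') instead.
const handlers = {
  hello: (req, res) => {
    res.end(`Hello ${req.body.user}`);
  },
};

// Hypothetical stand-ins for the Express-style request/response objects
const req = { body: { user: 'Andres' } };
const res = {
  output: '',
  end(text) { this.output = text; },
};

handlers.hello(req, res);
console.log(res.output); // "Hello Andres"
```
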

Since we have access to the Kubernetes cluster, we can use all of its tools to debug our function if necessary. We could list the running pods, find the one executing our function and retrieve its logs, or open a shell on it and do some local debugging. If there were errors when starting the pod, we could use kubectl describe pod POD_NAME to retrieve the build log or check the Kubeless Controller logs.

If you want to know more about the Kubeless architecture for deeper debugging, you can find the documentation here.

VSCode Plugin

VSCode is becoming a popular open source code editor. If you are a VSCode user, you can use the Kubeless Plugin to write, deploy, and execute your function, all without switching applications.

This plugin does not use the Serverless CLI; instead, when working with your function, it creates a JSON file under the .vscode folder of your project. You can tweak this file to specify the different properties of the function and the deployment.

You can watch the demo and contribute!

Ready for more?

In this article, we’ve gone through the installation steps for Kubeless and Serverless, deployed and debugged our first serverless function, and discovered a useful plugin for VSCode that will help us to develop our functions.

This technology helps developers focus on their code as much as possible, making it really easy to deploy in a Kubernetes environment that can easily be configured to scale up. And thanks to Serverless, we’re not committed to a particular technology or provider as the backend.

That’s it—you’re now ready to start developing serverless code! Take a look at our examples to get more ideas and check out some different use cases.

Want to reach the next level in Kubernetes?

Contact us for a Kubernetes Training