Serverless service mesh with Kubeless and Istio

In the last few months you may have heard the term service mesh. A service mesh is nothing more than an infrastructure in which multiple microservices can interact with each other in a fast and reliable way. This architecture proposes a very interesting approach for designing new applications in cloud native scenarios. In this article, we will demonstrate how using Kubeless, a serverless framework for Kubernetes, and Istio, an open source platform to connect, manage and secure Kubernetes services, you can easily deploy your first service mesh in a matter of minutes.

At Bitnami, we have developed Kubeless: a serverless platform that allows you to easily create microservices (usually called functions) that run on a Kubernetes cluster. If this is the first time you’ve heard about Kubeless, and you’d like to learn more about the basics, see my previous post or just visit the official web page.

Istio, for its part, is an open source platform that provides networking for microservices, allowing you to manage, monitor and secure them in an easy way.

As a developer, you may know that maintaining services with different versions and authorization policies within a cluster can be difficult and prone to errors. You must carefully manage all possible routes between all of the services.

Creating a service mesh with Kubeless and Istio together greatly simplifies deployment and network management. Kubeless allows you to deploy functions with a single command, and Istio can manage request routing and policies with declarative files. In this article I am going to show how to do the following:

  • Set up the environment to deploy a service mesh.
  • Deploy several serverless functions that will form an application.
  • Route user requests to show different versions of the service.
  • Protect certain parts of the application from unauthorized sources.

Architecture goal

To demonstrate Kubeless and Istio, I am going to deploy an application that will simulate the temperature management of a cold room. See the diagram of the proposed architecture:

architecture diagram

The application will be composed of three different services:

  • A thermometer service (called temp) that measures and reports the temperature of the room. I will have two different versions of this service. Version v2 will also report the humidity of the room.
  • A thermostat service that, depending on the temperature of the room, increases or decreases the output of the system. This service will be protected and only the control service will be able to access it.
  • A control service that retrieves the temperature from the thermometer service and sends the result to the thermostat. In a pure service mesh architecture, this element would not be necessary because the thermostat would receive the temperature directly from the thermometer. In any case, to make this example a little more complex and interesting, I will include it.

Preparing the environment

For this exercise I will use a Kubernetes cluster hosted in the Google Cloud Platform (GCP) using the Google Kubernetes Engine (GKE). If you don't have a GCP account, or you prefer to work locally, you can complete this exercise using Minikube.

To deploy a cluster on GKE, I use the gcloud tool from the Google Cloud SDK. If you want to follow this guide using GKE, ensure you have this tool installed:

# "my-cluster" is a placeholder; use any cluster name you like
gcloud container clusters create my-cluster \
  --cluster-version=1.8.6-gke.0 \
  --zone us-east1-c \
  --num-nodes 5 \
  --enable-kubernetes-alpha

I will enable alpha features (causing the cluster to be marked for deletion after 30 days) since they are necessary to use the autoinjection feature. Autoinjection saves us from having to manually inject Istio details in the deployment and service manifests. Apart from that, to ensure that I don't run out of resources, I will use 5 nodes.

I will now deploy Istio, using the latest version from its release page (0.4.0 at the time of writing). Run the following commands:

tar zxf istio-0.4.0-osx.tar.gz
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)
kubectl apply -f istio-0.4.0/install/kubernetes/istio.yaml
kubectl apply -f istio-0.4.0/install/kubernetes/istio-initializer.yaml
NOTE: If you are following this guide in a Linux or Windows system, use a different tarball. If you are having trouble creating the ClusterRoleBinding, you can follow this guide.

Let's deploy Kubeless in this new cluster. If you are following this guide, you must perform the steps to deploy Kubeless in GKE and install the kubeless CLI tool.

At this point Kubeless and Istio should be deployed in the cluster:

$ kubectl get pods -n kubeless
NAME                                  READY     STATUS    RESTARTS   AGE
kafka-0                               2/2       Running   1          1m
kubeless-controller-151920402-3qtxz   2/2       Running   0          1m
zoo-0                                 2/2       Running   0          1m
$ kubectl get pods -n istio-system
NAME                                READY     STATUS    RESTARTS   AGE
istio-ca-1363003450-bdfrl           1/1       Running   0          2m
istio-ingress-1005666339-xchw9      1/1       Running   0          2m
istio-initializer-563520905-dmh55   1/1       Running   0          1m
istio-mixer-465004155-tk0x5         3/3       Running   0          2m
istio-pilot-1861292947-8rkh3        2/2       Running   0          2m

Deploy the service mesh

It's time to deploy the application services. All the files used in this guide are available in this GitHub repository. You can clone the repository so that the files are locally available:

git clone
cd kubeless-istio-sample

Let's deploy the two versions of the thermometer functions that will return the temperature (and humidity) of the room:

kubeless function deploy tempv1 \
  --from-file functions/tempv1.js \
  --handler temp.sample \
  --runtime nodejs6 \
  --label version=v1 \
  --label app=temp
kubeless function deploy tempv2 \
  --from-file functions/tempv2.js \
  --handler temp.sample \
  --runtime nodejs6 \
  --label version=v2 \
  --label app=temp

Note that I am setting the labels version=v1 and version=v2 so I can differentiate the functions by version. I am also setting the label app=temp so both functions share a common label.

In addition to these deployments, I will create a service that will be used to access both functions using a single endpoint:

kubectl create -f manifests/temp-svc.yaml
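The real file is in the repository, but manifests/temp-svc.yaml would look roughly like the sketch below: a Service whose selector matches only the shared app=temp label, so it load-balances across both versions. Port 8080 is the Kubeless default; treat the exact numbers as assumptions.

```yaml
# Approximate sketch of manifests/temp-svc.yaml; see the repository for the real file
apiVersion: v1
kind: Service
metadata:
  name: temp
spec:
  selector:
    app: temp        # matches both the v1 and v2 function pods
  ports:
  - name: http       # a port name Istio recognizes
    port: 8080
    protocol: TCP
    targetPort: 8080
```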

Finally, I am going to create the thermostat and control functions:

kubeless function deploy thermostat \
  --from-file functions/thermostat.js \
  --handler thermostat.handler \
  --runtime nodejs6
kubeless function deploy control \
  --from-file functions/control.js \
  --handler control.handler \
  --dependencies functions/package.json \
  --runtime nodejs6

After a few seconds, these services should be accessible using the kubeless CLI:

$ kubeless function call tempv1
$ kubeless function call tempv2
$ kubeless function call thermostat --data '{"temp": "10"}'
$ kubeless function call control
Temperature measured: 16 degrees. Action: Decreasing!. Humidity: 20%

Note that sometimes, when calling the control function, it returns the humidity and sometimes it doesn't: it depends on which version of the temp service was called internally.
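The gist of control.js can be illustrated with a small helper. This is hypothetical: the real function fetches the reading from the temp service over HTTP before calling the thermostat, and the 15-degree threshold is an assumption chosen to match the sample outputs above.

```javascript
// Hypothetical helper mirroring how control.js formats its reply.
// The humidity field is only present when the v2 temp service answered.
function report(reading) {
  const action = reading.temp > 15 ? 'Decreasing!' : 'Increasing!';
  let msg = 'Temperature measured: ' + reading.temp + ' degrees. Action: ' + action;
  if (reading.humidity !== undefined) {
    msg += '. Humidity: ' + reading.humidity + '%';
  }
  return msg;
}
```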

Adapting services to Istio

Kubeless automatically creates all the resources required to access a function, including the function's service. However, in order to use these services with Istio, the name of the service port must be one of the values Istio recognizes (such as http). Therefore, I must change the current value (function-port) to one of those well-known port names. I can do that by editing the existing services:

kubectl get svc -l created-by=kubeless -o yaml | sed 's/name: function-port/name: http/g' | kubectl apply -f -

Now that all the services have been updated, I need to restart the function pods: they register their service information in an initContainer, which runs only when a pod starts, so a restart is required for the updated information to be registered properly:

kubectl delete pods -l created-by=kubeless

After a couple of minutes the pods will be running again and registered properly in the Istio Mixer.

Balancing requests

In this step I am going to use the Request Routing configuration that Istio provides. This allows me to manage the requests for the different versions of the temp service. I am going to deploy a very simple rule that redirects 90% of the requests to version v1 of the service and the other 10% to version v2. This feature is very helpful when testing new features or doing A/B testing, since it allows you to run several versions of the same service without affecting the functionality for all users:


apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: temp-rule
spec:
  destination:
    ## Used by services inside the Kubernetes cluster
    name: temp
  route:
  - labels:
      version: v1
    weight: 90
  - labels:
      version: v2
    weight: 10

kubectl apply -f manifests/route-rule-10-v2.yaml

Now, if I execute the kubeless function call control command several times, it will only print the humidity a few times. You can change these numbers (for example redirecting all the traffic to the version v2) and see the changes when calling the control service.
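For example, a variant of the rule that sends all the traffic to version v2 would just change the weights. Since the rule name is the same, applying it with kubectl apply replaces the previous rule (the apiVersion shown is the one Istio 0.4 uses):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: temp-rule
spec:
  destination:
    name: temp
  route:
  - labels:
      version: v2
    weight: 100
```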

Basic Access Control

For this exercise I am going to restrict access to the thermostat service. This is just an example, so I am going to use basic access control. Note that if I had deployed Istio with TLS authentication, I could use service accounts and configure secure access control, but for simplicity I won't do that here.

As a rule, the thermostat should only be accessible through the control service. To ensure this, I am going to deploy the following manifest:


apiVersion: ""
kind: denier
  name: denyhandler
    code: 7
    message: Not allowed
apiVersion: ""
kind: checknothing
  name: denyrequest
apiVersion: ""
kind: rule
  name: denytemp
  match: source.labels["function"] != "control" && request.path != "/healthz" && destination.labels["function"] == "thermostat"
  - handler: denyhandler.denier
    instances: [ denyrequest.checknothing ]
kubectl apply -f manifests/deny-rule.yaml

What this rule does is block, at the thermostat, any request that doesn't come from the control function (unless the request is for the /healthz path, which Kubernetes uses to check whether the service is alive).

We can ensure that the rule is working by executing the following:

$ kubeless function call control
Temperature measured: 5 degrees. Action: Increasing!
$ kubeless function call thermostat
FATA[0000] PERMISSION_DENIED:denyhandler.denier.default:Not allowed (get services thermostat:http)

Note that this rule does not protect the thermostat service from cluster users with permission to create pods. Any of those users could create a pod with the label function=control, and it would have access to the thermostat service. To secure the service properly, I should use secure access control and assign a serviceAccountName (which other users cannot use) to the control deployment.
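As a sketch of that hardened setup, it would look roughly like the following. The account name control-sa is hypothetical, and with secure access control the deny rule would match on the pod's identity (source.user) instead of a label that anyone can copy:

```yaml
# Hypothetical sketch: a dedicated service account that only the control
# function's deployment is allowed to use
apiVersion: v1
kind: ServiceAccount
metadata:
  name: control-sa
---
# The control function's Deployment would then reference it in its pod template:
#   spec:
#     template:
#       spec:
#         serviceAccountName: control-sa
```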

Ready for more?

I’ve just covered a few basic features of Kubeless and Istio, but you should now be able to start your own service mesh using these two tools. The next step would be to use service accounts and set up a cluster with secure access control; that way I could provide tighter security for the functions.

Apart from that, there are many more features available: monitoring and metrics, custom routes, autoscaling… To discover more possibilities, take a look at the Istio and Kubeless websites and documentation.