Enterprise Software Development with Kubeless

August 07, 2020 / Bryan Reynolds

Reading Time: 10 minutes

Kubeless is a Functions-as-a-Service (FaaS) platform that runs with Kubernetes. It’s entirely open-source, so it has no affiliation with any commercial organization. Kubeless is a serverless framework that allows developers to deploy small units of code without considering the underlying infrastructure. This capability means that Kubeless can leverage Kubernetes resources to perform tasks vital to enterprise software development such as API routing, auto-scaling, monitoring and troubleshooting.

Kubeless fully supports Kubernetes beginning with version 1.9 and, according to GitHub, had been tested through Kubernetes version 1.15 as of June 2020. These tests used GKE 1.12 and Minikube 1.15; Kubeless also runs on other Kubernetes platforms, although those may not be completely compatible.

Serverless Computing

Serverless computing is a cloud computing model in which the provider operates the server by dynamically managing the allocation of computing resources such as processing, memory and storage. The provider bases the prices for these services on the resources that applications actually consume, as opposed to the traditional model of buying resources before using them. This model makes serverless computing a form of utility computing.
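The utility-computing point can be made concrete with a little arithmetic. The sketch below compares an always-on virtual machine against per-invocation FaaS billing; every price and volume in it is a hypothetical assumption for illustration, not a quote from any provider:

```python
# Hypothetical comparison of provisioned vs. per-use billing.
# Every number below is an assumption for illustration only.

vm_hourly_rate = 0.05        # assumed $/hour for an always-on VM
hours_per_month = 730
vm_cost = vm_hourly_rate * hours_per_month  # billed whether used or not

invocations = 2_000_000      # assumed function calls per month
price_per_million = 0.20     # assumed $ per million invocations
gb_seconds = invocations * 0.1   # assume 100 ms at 1 GB per call
price_per_gb_second = 0.0000166  # assumed compute rate

faas_cost = (invocations / 1_000_000) * price_per_million \
            + gb_seconds * price_per_gb_second

print("VM: $%.2f  FaaS: $%.2f" % (vm_cost, faas_cost))
```

Under these assumed numbers the per-use model charges only for compute actually consumed, which is what makes serverless a form of utility computing.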

Serverless computing also simplifies the process of deploying code by hiding tasks such as capacity planning, scaling and maintenance from developers. Developers can use serverless code in conjunction with code deployed via traditional methods like microservices, or they can write applications that are completely serverless, meaning they have no provisioned servers. This model is distinct from computing and networking models like peer-to-peer that don’t require a server at all.

Most vendors of serverless platforms include compute runtimes in their solutions, which are known more specifically as FaaS platforms. These platforms only execute application logic without storing data.

Zimki, released in 2006, was the first FaaS platform, although it was never commercially successful. Google released Google App Engine in 2008, which uses a custom Python framework and features metered billing. However, it can’t execute arbitrary code. PiCloud, released in 2010, provides FaaS support for Python. Kubeless, released in 2018, is an open-source FaaS platform that runs with Kubernetes.


Kubernetes

Kubernetes is an open-source container management system that automates the deployment, scaling and management of software applications. Kubernetes provides a platform for application containers across clusters of hosts and works with a variety of container tools such as Docker. Many cloud services now include a Kubernetes-based Platform-as-a-Service or an Infrastructure-as-a-Service (IaaS) on which Kubernetes can be deployed as a service. Cloud vendors also provide their own Kubernetes distributions as part of their platforms.

Google originally developed Kubernetes, but the Cloud Native Computing Foundation (CNCF) currently maintains it. The CNCF is the result of a partnership between Google and the Linux Foundation, to which Google offered Kubernetes as a seed technology. Google engineers announced Kubernetes in mid-2014. Google’s Borg system heavily influenced Kubernetes design and development, since many of the contributors to Kubernetes had previously worked on Borg. However, Borg was originally written entirely in C++, while Kubernetes is implemented in Go. The first version of Kubernetes was released in July 2015, and the Kubernetes project ranked ninth in GitHub commits as of March 2018.

Kubernetes Serverless Framework

The Kubernetes Serverless Framework (KSF) allows developers to build and execute applications without consideration for the servers they run on. It provides an abstraction of servers and operating systems, so developers don’t need to provision or even manage servers. This serverless framework thus allows developers to shift the focus from the server level to the task level. Developers can also use Kubeless to develop and deploy serverless applications. Kubeless has a command line interface (CLI) that provides the automation, best practices and structure required by serverless applications out-of-the-box.

The primary difference between KSF and other application frameworks is that it manages infrastructure as well as code. KSF also supports multiple languages, including Node.js, Python and Ruby. The most significant features of KSF include the following:


Fig 1: Features of Kubernetes Serverless Framework


Kubeless uses a Custom Resource Definition (CRD) that allows developers to create custom functions as Kubernetes resources. It also runs an in-cluster controller that watches those resources and launches runtimes as needed. This controller injects the function code into the runtimes dynamically and makes the functions available to users via HTTP or a PubSub mechanism.
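For illustration, a function registered through the CRD might look like the manifest below. This is a minimal sketch; the exact apiVersion, group and field names vary between Kubeless releases, so treat the specifics as assumptions rather than a reference:

```yaml
apiVersion: kubeless.io/v1beta1   # assumed API group/version
kind: Function
metadata:
  name: hello
  namespace: default
spec:
  runtime: python2.7       # language runtime to inject the code into
  handler: hello.hello     # file.function, as with the CLI's --handler flag
  function: |              # the function source itself, inlined
    def hello(request):
        return "Hello, world"
```

Applying a manifest like this is what the kubeless CLI does on your behalf when you deploy a function.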

Important features of Kubeless include support for runtimes such as Ballerina, Golang, .NET, Node.js, PHP, Python and Ruby. It also monitors function calls and latency by default. Furthermore, Kubeless has triggers for HTTP events as well as for messages published through the Kafka messaging system. Additional features of Kubeless include a CLI that’s compatible with the AWS Lambda CLI.

The major components of Kubeless include functions, events and services.


Functions

A Kubeless function is code deployed to the cloud that typically performs a single task. It’s similar to a microservice in that it’s an independent unit of deployment. Functions commonly perform scheduled tasks, process files, or save records such as users to a database.

You can perform multiple jobs in a single code unit, although this isn’t good practice. It’s best to maintain a separation of tasks, since KSF makes it easy to develop, deploy and manage functions in large numbers.


Events

A Kubeless event is anything that can trigger a response from KSF. Events include API Gateway HTTP endpoints such as a REST API, and Kafka messages are also a common event source for Kubeless. Scheduled timers that execute a function at predefined intervals are planned for future versions of Kubeless.


Services

A Kubeless service is KSF’s unit of organization and is distinct from Kubernetes Services. It’s like a project file, although an application can have multiple services. Developers define functions and the events that will trigger them within a service, which is a file with a default name such as serverless.yml, serverless.json or serverless.js. Developers can use the --config option to specify a non-default name for the service file. The following example shows a simple service:

# serverless.yml
service: users
functions: # Your "Functions"
  hello:
    handler: hello.hello # The code to call as a response to the event
    events: # The "Events" that trigger this function
      - http:
          path: /hello

Everything in a service like the example above is deployed at once when the developer runs serverless deploy.


Cloud computing is moving away from IaaS and PaaS and towards FaaS. While IaaS focuses on virtualized infrastructure and PaaS on providing the user with specific platform capabilities, FaaS platforms like Kubeless abstract both away, exposing only functions and the services that support them.

For example, Kubeless users don’t need to pay to develop or maintain virtual resources, whether those resources are hardware or software. Kubeless also scales its capacity automatically in response to business requirements and often does not require backup systems. The cost of these resources is on a per use basis, and the service level agreements (SLAs) are well-defined. Furthermore, developers can quickly build and deploy applications with Kubeless.

Operational management is also easier with Kubeless since it separates applications from the infrastructure they run on. The automatic scaling of FaaS reduces operational overhead as well as computational costs. Additionally, Kubeless allows system engineers to focus on managing the underlying infrastructure as well as core services like load balancers, allowing product engineers to focus on managing functions.

Product engineers can also innovate more quickly with Kubeless since they no longer have to perform system engineering. The reduction in operational overhead thus facilitates the adoption of Agile methodologies that blur the distinction between development and operations. Packaging and deploying applications is also easier with FaaS. A serverless architecture saves on staffing costs as well, since users pay for management of the application logic, databases and servers on a per-usage basis.

Implementing Kubeless

The process of implementing Kubeless begins with creating a namespace for its components. Download the latest version of Kubeless and deploy it into that namespace. Deploying the manifest creates the controller pod and the services that make up Kubeless. The kubectl proxy command then provides a way to access those services.

You can then simulate use cases with the Kubeless CLI. Once you download the files containing the requirements and worker code, use the CLI to easily create topics. You can view the data stream in your web browser while the main service listens for Kafka messages that trigger functions.

You can also interact with Kubeless easily through a Graphical User Interface (GUI). Retrieve the Kubernetes master URL from the Kube API and use it to access the UI. You can then use the Kubernetes cluster to create function services in Kubeless.

Best Practices in Kubeless

Best practices in Kubeless include the following:

  • Test in continuous integration (CI) and continuous deployment (CD) environments.
  • Ensure each function fulfills minimal roles.
  • Map applications to observe the flow of information.
  • Apply perimeter security at the function level.
  • Secure application dependencies.
  • Always refresh FaaS containers.
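The "perimeter security at the function level" practice above can be sketched as a small validation wrapper around each handler. The validated decorator and FakeRequest class below are hypothetical illustrations, not part of Kubeless:

```python
import json

def validated(handler):
    """Reject requests whose JSON body is missing the fields we expect."""
    def wrapper(request):
        payload = getattr(request, "json", None)
        if not isinstance(payload, dict) or "term" not in payload:
            return json.dumps({"error": "bad request"})
        return handler(request)
    return wrapper

@validated
def find(request):
    # A trivial handler; real search logic would go here.
    return json.dumps({"term": request.json["term"]})

class FakeRequest:
    """Minimal stand-in for the request object a runtime would pass in."""
    def __init__(self, payload):
        self.json = payload

print(find(FakeRequest({"term": "Elm"})))
print(find(FakeRequest({})))
```

Validating at the function boundary keeps each small unit of code defensible on its own, which matters when functions are exposed individually over HTTP.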

Using Kubeless

Cloud developers typically write code in their development environment, and then move it to a production environment to run. This procedure requires developers to create a virtual machine (VM) or container when deploying even a small amount of code to the cloud. Serverless computing allows developers to simply write the code and upload it wherever they want to run it. Developers no longer need to create containers, configure Kubernetes clusters or maintain them.

Getting started with serverless computing in Kubeless only requires a Kubernetes cluster, Kubeless itself and some code to deploy. The following procedure describes the process for deploying code with the Bitnami Kubeless platform running on top of a Kubernetes cluster created with Google Container Engine or Minikube. The Bitnami Kubernetes starter tutorial provides detailed instructions for accomplishing these tasks. Once the Kubernetes cluster is running, deploying code in Kubeless consists of the following steps:

  • Start the Kubernetes Dashboard
  • Install the Kubeless CLI tool
  • Deploy Kubeless
  • Write a function
  • Register the function with Kubeless
  • Call the function

1. Start the Kubernetes dashboard.

Start the Kubernetes Dashboard and make it available from port 8080 by entering the following command:

kubectl proxy --port=8080

Execute the above command from a separate shell to ensure the dashboard continues to run while you perform the remaining steps. Enter http://localhost:8080/ui in your browser’s URL bar to view the Kubernetes dashboard as shown below:

Fig. 2: Kubernetes Dashboard

2. Install the Kubeless CLI tool.

Install the Kubeless CLI on Linux with the following commands:

curl -L >
sudo cp bundles/kubeless_linux-amd64/kubeless /usr/local/bin/

Install the Kubeless CLI on Mac OS X with the following commands:

curl -L >
sudo cp bundles/kubeless_darwin-amd64/kubeless /usr/local/bin/

The Kubeless releases page provides more detailed instructions for this procedure.

3. Deploy Kubeless.

The Kubeless release package includes two YAML manifests that will allow you to deploy Kubeless to your Kubernetes cluster by creating a Kubeless namespace and a function Custom Resource Definition. Kubeless is deployed once the Kubeless controller and Kafka are running on the cluster. kubeless-rbac-$RELEASE.yaml is used for role-based access control (RBAC) Kubernetes clusters, while kubeless-$RELEASE.yaml is for non-RBAC Kubernetes clusters. The example below shows how to deploy Kubeless to a non-RBAC Kubernetes cluster:

export RELEASE=0.0.20
kubectl create ns kubeless
kubectl create -f$RELEASE/kubeless-$RELEASE.yaml

kubectl get pods -n kubeless
NAME                                  READY  STATUS   RESTARTS  AGE
kafka-0                               1/1    Running  0         1m
kubeless-controller-3331951411-d60km  1/1    Running  0         1m
zoo-0                                 1/1    Running  0         1m

kubectl get deployment -n kubeless
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kubeless-controller  1        1        1           1          1m

kubectl get statefulset -n kubeless
NAME   DESIRED  CURRENT  AGE
kafka  1        1        1m
zoo    1        1        1m

kubectl get customresourcedefinition
NAME   DESCRIPTION                                     VERSION(S)
       Kubeless: Serverless framework for Kubernetes   v1

kubectl get functions

Enter the following command to test the Kubeless deployment:

kubeless function ls

The above command should produce output similar to the following if Kubeless is working:

Figure 3: Successful Kubeless Deployment

The Kubernetes dashboard should show the Kubeless namespace and additional information confirming that Kubeless is running.

4. Write a Function.

Now that Kubeless is running, you no longer have to worry about containers, Kubernetes or VMs. All you need to do at this point is develop some code. The following example searches a list of Test Store locations for a search term and returns a list of matches:

import urllib2
import json

def find(request):
    term = request.json["term"]
    url = ""
    response = urllib2.urlopen(url)
    locations = json.loads(response.read())
    hits = []
    for location in locations["locationBeanList"]:
        if location["stAddress1"].find(term) > -1:
            hits.append(location)
    return json.dumps(hits)

Note that the find() function above has none of the input validation or error handling that a production function would include. It has an input parameter named request that Kubeless uses to pass a LocalRequest object from the Bottle Web Framework to find(). This object has a property called json that returns the POST request parameters as a JSON object, from which find() extracts the search term. Save the code in a file under /tmp/ so you can register it in the next step.
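Because the handler is just a function that takes a request object, you can exercise it locally before deploying. The sketch below substitutes a hypothetical FakeRequest object and an in-memory location list for the Bottle request and the remote URL, so it runs without Kubeless or network access:

```python
import json

class FakeRequest:
    """Stand-in for the Bottle LocalRequest object Kubeless passes in."""
    def __init__(self, payload):
        self.json = payload

def find(request):
    # Same search logic as the handler, run against in-memory test data.
    term = request.json["term"]
    locations = {"locationBeanList": [
        {"stAddress1": "12 Elm Street"},
        {"stAddress1": "99 Oak Avenue"},
    ]}
    hits = []
    for location in locations["locationBeanList"]:
        if location["stAddress1"].find(term) > -1:
            hits.append(location)
    return json.dumps(hits)

print(find(FakeRequest({"term": "Elm"})))
```

Testing the logic this way before registering it with Kubeless catches most bugs without a deploy cycle.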

5. Register the function with Kubeless.

The server-based method of adding this function to an application involves integrating it into the existing codebase, creating the required containers or VMs, and deploying the result. Serverless computing allows you to simply register the function and access it through the web as needed. This process requires you to provide Kubeless with the following:

  • Call name
  • Protocol
  • Runtime
  • File name
  • Function name

The Kubeless Function Deploy command provides all of this information as shown below:

kubeless function deploy storesearch --trigger-http --runtime python2.7 --handler teststore.find --from-file /tmp/

The storesearch keyword in the above command tells Kubeless to register a function call named storesearch, which users can access over the web. The name of the function call isn’t necessarily the same as the function’s name within the code, which is specified later in the command with the --handler option.

The --trigger-http option tells Kubeless that users will invoke the function over HTTP, which is only one of the ways to access a Kubeless function.

The --runtime python2.7 option instructs Kubeless to execute the code with the Python 2.7 runtime. Kubeless also supports other runtimes, such as Node.js.

The --handler teststore.find option tells Kubeless the function’s name within the code module: teststore is the name of the file, and find is the name of the function within that file.

The --from-file /tmp/ option provides Kubeless with the location of the file to use as the source for the function. Additional methods of passing a function to an application also exist in Kubeless.

The function deploy command can take a while to finish executing, depending on the size of the file and the speed of your internet connection. Once it’s finished, you can verify the function is registered with the following Kubeless command:

kubeless function ls

The results of the above command will show the functions currently registered in Kubeless, including the following information:

  • Call name
  • Handler
  • Runtime
  • Protocol
  • Topic
  • Namespace

You can delete the function with the Kubeless Function Delete command as follows:

kubeless function delete storesearch

6. Call the function.

Recall that the storesearch function expects to be called by a POST request that provides the search term as an input parameter. A curl command is one way of accomplishing this:

curl --data '{"term":"Elm"}' localhost:8080/api/v1/proxy/namespaces/default/services/storesearch/ --header "Content-Type:application/json"

The above command asks storesearch to return a JSON object containing a list of all stores with the string “Elm” in the address.
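The pieces of that curl invocation can also be assembled programmatically. The helper below is a hypothetical sketch that only builds the URL, body and headers; sending the request is left to whatever HTTP client you prefer:

```python
import json

# Path exposed by kubectl proxy for the registered function (assumed).
SERVICE_PATH = "/api/v1/proxy/namespaces/default/services/storesearch/"

def build_request(term, base="http://localhost:8080"):
    """Assemble the same POST that the curl command sends."""
    url = base + SERVICE_PATH
    body = json.dumps({"term": term})
    headers = {"Content-Type": "application/json"}
    return url, body, headers

url, body, headers = build_request("Elm")
print(url)
```

Wrapping the call this way makes it easy to invoke the function from application code rather than the shell.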

You can also accomplish the same thing directly from the Kubeless CLI as follows:

kubeless function call storesearch --data '{"term":"Elm"}'

The JSON object will contain a list of matches, including all the data fields associated with each match. This would likely include the street address, along with other store information such as city, geographic location and status.

In practice, this search function would be much more sophisticated, with multiple parameters and different types of output. For example, it could return a URL for a map showing the store’s location rather than a JSON object with text information.


The trend towards serverless computing is making it easier for developers to deploy code to cloud platforms. Cloud providers are also beginning to view their services in terms of functions rather than infrastructure. Kubeless is one of the latest FaaS platforms and runs with the popular Kubernetes container management system. Developers can use Kubeless to deploy their code to the cloud without concern for the infrastructure or operating environment. Enterprises also benefit from Kubeless’ ability to scale capacity automatically and monitor software activity.

About Us

Originally founded in 2007, Baytech has provided enterprise application development solutions for Fortune 500 companies in a wide range of industries, completing more than 100 separate projects.