Enterprise Software Development with Knative

August 07, 2020 / Bryan Reynolds

Reading Time: 10 minutes

Kubernetes can manage complex sets of software containers by itself, but it also creates its own configuration and management challenges. Knative is an extension of Kubernetes that eliminates the need for developers to perform these tasks and also adds serverless capabilities to Kubernetes. Knative runs on top of Kubernetes, which allows it to manage a large number of containers in enterprise software development. Google developed Knative as an open-source platform in collaboration with other companies such as IBM, Pivotal, Red Hat, and SAP.


Kubernetes and Knative are distinct software solutions, but they work closely together. Developers, therefore, need to understand the relationship between the two.


Kubernetes is an open-source container management system. Google originally developed Kubernetes, although the Cloud Native Computing Foundation (CNCF) currently maintains it. The purpose of Kubernetes is to provide software developers with a platform where they can automatically deploy and scale the operation of application containers across multiple hosts. Cloud service providers often offer Kubernetes-based infrastructure or platforms as Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) respectively. Some vendors also provide their own distributions of Kubernetes.

Google released the first public version of Kubernetes in July 2015. It also partnered with the Linux Foundation at that time to form the CNCF and provided Kubernetes as a seed technology for this partnership. As of March 2020, Kubernetes was second only to the Linux kernel among open-source projects in terms of authors and issues.

Kubernetes is a complex platform that needs a good deal of configuration and management to run properly. Developers have to log into their individual nodes to perform these tasks, which include configuring networking rules and installing dependencies. Furthermore, they must generate configuration files, log and trace activities, and write scripts to meet the continuous integration (CI) and continuous deployment (CD) requirements of Agile methodologies. Placing source code in the right containers also requires developers to perform multiple steps before they can deploy those containers.


Software deployment traditionally uses a server to host the software, which runs continually while awaiting new requests. Serverless computing is a deployment model that only instantiates the code when needed, saving compute resources and increasing productivity. This approach also means that serverless computing scales in response to demand for the software. Knative allows developers to build and run their containers as both services and functions, thus blurring the distinction between these two types of software. Developers spend much less time managing their containers, allowing them to focus on writing code.
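As a concrete sketch of what this looks like in practice, a minimal Knative Service manifest is all it takes to deploy a container this way; the service name, sample image, and environment variable below are illustrative placeholders, not part of any particular installation:

```yaml
# Hypothetical minimal Knative Service; the name and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```

Knative creates the routing, configuration, and revision objects for this service automatically, and it scales the container down to zero instances when no requests arrive.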

The following diagram illustrates the relationship between Kubernetes and Knative:


Fig. 1: Relationship between Kubernetes and Knative

The diagram above shows how Knative sits on top of Kubernetes in a serverless computing architecture. Contributors, users, systems and developers now go through Knative to access Kubernetes resources. For example, contributors use a tool like GitHub to develop and document code for the project. Developers also build and deploy their applications to Knative through an API, while users and systems like the internet of things (IoT) access those applications through Knative.

However, operators deploy and manage Knative instances through Kubernetes by using APIs and other tools. Platform providers like Google Cloud Platform provide the underlying infrastructure for Kubernetes.


The primary components of Knative include Build, Serve, and Event.


Knative’s Build component converts source code into containers or functions that will reside on a cloud platform. This process requires the developer to retrieve the source code from a repository and install the underlying dependencies the code needs to run, such as environment variables and software libraries. The next step is to build images for the containers and register the containers so other developers can use them. Knative uses Kubernetes’ underlying resources to perform this process, which requires Knative to know where those resources are. Developers still need to specify these resources, but Knative can automatically build the required containers once they do.


The Knative Serve component treats containers as scalable services. It’s highly scalable, ranging from no instances of the container at all to thousands of instances. This component defines objects as Kubernetes Custom Resource Definitions (CRDs) that define how a serverless workload behaves on a Kubernetes cluster. The following diagram illustrates the relationship between Knative service, routing, configuration, and revision resources:


Fig. 2: Knative Resources

The service resource automatically manages the application’s workload throughout its lifecycle. It ensures the application has a route, configuration, and revision each time the service is updated. Developers can also define a service so it routes traffic to the current revision.

The routing resource maps each network endpoint to one or more revisions. This capability becomes useful when a developer wants to provide some users with a new version of a software service without moving all users to it yet. A developer can use methods like fractional traffic and named routes to migrate a subset of users for testing purposes. The Knative routing resource can then send more users to the new service over time as the developer gains confidence in it.
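As an illustrative sketch, this kind of fractional traffic migration is expressed in a service’s traffic block; the revision names and percentages below are hypothetical:

```yaml
# Hypothetical traffic split between an existing and a new revision.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v2            # name of the new revision (placeholder)
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    - revisionName: hello-v1    # current revision keeps most traffic
      percent: 90
    - revisionName: hello-v2    # new revision receives a test slice
      percent: 10
      tag: canary               # named route for testing the new revision directly
```

Raising the second percentage over successive updates gradually migrates the full user base to the new revision.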

The configuration resource maintains the deployment in its desired state. It does this by creating different versions of the same service with a clear separation of code and configuration, based on the Twelve-Factor App methodology. Knative creates a new revision each time a developer modifies a configuration, allowing the different versions to run concurrently.

The revision resource provides snapshots of each workload modification, covering both code and configuration. Developers can retain revisions as long as needed, but revisions can’t be changed once created. Revisions can also scale automatically based on incoming traffic.
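Scaling behavior for revisions can be tuned through annotations on the service’s revision template; as a sketch (the bounds shown are arbitrary, not recommended values):

```yaml
# Hypothetical autoscaling bounds on a Service's revision template.
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # permit scale-to-zero
        autoscaling.knative.dev/maxScale: "100"  # arbitrary upper bound
```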


Knative’s Event component allows events to trigger container-based functions and services. Knative automatically places these events into a queue, also known as a channel, eliminating the need for developers to write scripts or use middleware to perform this task. The messaging platform, or bus, then delivers these events to containers. The Event component can handle multiple channels and buses for developers to choose from.

Event also allows developers to create feeds that connect events from event producers to actions for their containers to perform. Developers can express their interest in specific event types, which automatically creates this connection and routes the event to the appropriate service. This capability means that developers no longer need to program the process of creating a connection for each event producer. The following diagram illustrates this process:


Fig. 3: Broker Trigger Diagram

Knative Broker and Trigger objects allow developers to filter events based on their attributes. A Broker receives events and forwards them to one or more subscribers as defined by the Triggers matching those attributes. Event producers can also submit events to the Broker via HTTP POST at the Broker’s status.address.url. This action is possible because Brokers implement the Addressable interface.

A Trigger is a filter on event attributes that determines which events the Broker delivers to a subscriber, which can be any Addressable object such as a Knative Service. A single Broker for each namespace is usually sufficient, although multiple Brokers can simplify the architecture for some use cases. Assume for this example that an application handles some events that contain Personally Identifiable Information (PII) and others that don’t. Using a separate Broker for each of these event types can simplify audits and access control for this application.
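As a hedged sketch of this pattern, a Trigger that filters on the CloudEvents type attribute and delivers matching events from the default Broker to a Knative Service could look like the following; the trigger name, event type, and subscriber service are all hypothetical:

```yaml
# Hypothetical Trigger: route one event type from the default Broker
# to a subscribing Knative Service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: user-created-trigger         # placeholder name
spec:
  broker: default
  filter:
    attributes:
      type: com.example.user.created # placeholder CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display            # placeholder subscriber service
```

Events whose attributes don’t match the filter are simply not delivered to this subscriber, which is how a Broker fans the same event stream out to different services.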


Kubernetes typically handles a large number of containers for a cloud-native application infrastructure, requiring capabilities such as health monitoring, load balancing, and scaling. Knative adds services on top of these capabilities that let developers take their use of containers further. These services free developers to focus on writing code, iterate more quickly, and make a faster entry into serverless computing.

The DevOps methodology empowers developers to administer their programming environments, but they typically consider these tasks to be overhead. Building good code and fixing bugs is generally a better use of a developer’s time than configuring message queues to trigger events or managing container scalability. Knative helps developers by automating much of this work.

Developers can also implement new versions of containers more quickly with Knative. They can develop containers with a highly iterative approach, which is a requirement for Agile approaches to software development.

Serverless computing is often challenging to set up and manage. Knative allows developers to quickly implement serverless workflows because they’re just containers from the developers’ perspective. This benefit is a result of Knative’s automatic treatment of workflows as services or serverless functions.

Use Cases

Knative’s benefits can help solve several real-world challenges for developers, including CI/CD workflows and rapid deployment cycles.

CI/CD workflows are essential components of DevOps processes. Automated workflows for software deployments can reduce deployment times and improve quality, but they also require time and expertise to implement. Furthermore, these workflows often use many products that the developer needs to integrate together. A DevOps team can use Knative to automate these projects.

A rapid deployment cycle often exposes software defects to users, which can affect business processes. This practice essentially results in developers testing their software on users. Knative’s system of configuring and routing new revisions allows developers to expose a revision to only a subset of users. Developers can gradually increase that user base over time while retaining the ability to roll back to an older version if necessary.

Installing Knative

Installing Knative on a Kubernetes cluster involves installing the Serving and Event components, which you can do together or independently. The first stable release of the Serving component arrived in Knative v0.9; it provides an abstraction for stateless, request-based services. A stable Event component has been available since v0.2; it abstracts the binding of event sources to consumers. Event sources for Knative include GitHub, Kafka, and webhooks, while Knative consumers include Kubernetes and Knative Services.

The following procedure installs Knative on an existing Kubernetes cluster and may vary according to the specific distribution of Knative, as the platform’s growing popularity means that many vendors offer their own distribution. You’ll also need to ensure compatibility between the versions of Knative and Kubernetes. For example, Knative v0.15.0 or newer requires Kubernetes v1.15 or newer in addition to a compatible version of kubectl, the command-line interface (CLI) for managing Kubernetes clusters.

This procedure also assumes the shell is bash and the operating system (OS) is Linux or Mac. A Windows shell will require slightly different commands, although the general procedure is the same.

Serving Component

1. Install the CRDs with this command:

kubectl apply --filename

2. Install the Serving core components with this command:

kubectl apply --filename

3. Select a networking layer from the following list:

  • Ambassador
  • Contour
  • Gloo
  • Istio
  • Kong
  • Kourier

Assume for this example that you selected Istio. Install the Knative Istio controller with this command:

kubectl apply --filename

Fetch the external IP address or CNAME with this command:

kubectl --namespace istio-system get service istio-ingressgateway

Save the output from the above command, which you’ll need to configure the domain name server (DNS).

4. Configure the DNS.

The choices include Magic DNS, Real DNS, or temporary DNS. Assume for this example that you selected Real DNS, and your domain name is *

If the output from step 3 above provided an IP address, it should look like the following:

* == A

In this case, configure a wildcard A record for the domain *

If the output from step 3 above provided a CNAME, it should look like the following:

* == CNAME

In this case, configure a CNAME record for the domain *

Direct Knative to use that domain as follows:

kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"":""}}'

where is the domain suffix.
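For example, if the domain suffix were example.com (a placeholder, not a real deployment), the complete command would read:

```shell
# Assumes the placeholder domain example.com; substitute your own suffix.
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"example.com":""}}'
```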

5. Ensure the core Knative Serving components are running or completed with this command:

kubectl get pods --namespace knative-serving

The basic installation of Knative Serving is now complete. Optional Serving extensions include the following:

  • HPA autoscaling
  • TLS with cert-manager
  • TLS via HTTP01
  • TLS wildcard support

Event Component

1. Install the CRDs with this command:

kubectl apply --selector \


2. Install the core components of Eventing with the following command:

kubectl apply --filename

3. Install a default channel layer from the following list:

  • Apache Kafka
  • Google Cloud
  • In-memory
  • NATS

Assume for this example that you want to install an in-memory channel. This implementation is preferable for standalone use cases because it’s simple, although it’s unsuitable for a production environment. The following command installs an in-memory channel for the Knative event component:

kubectl apply --filename

4. Install a Broker, or event layer, from the following list:

  • Channel-based
  • MT-Channel-based

Execute the following command to install a channel-based implementation of Broker:

kubectl apply --filename

Customize the broker channel implementation by updating the ConfigMap. These changes will specify which configurations you wish to use for which namespaces. This example is for an in-memory channel-based (IMC) implementation, so the ConfigMap should look like the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: imc-channel
  namespace: knative-eventing
data:
  channelTemplateSpec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: InMemoryChannel
5. Ensure the core Knative Event components are running with this command:

kubectl get pods --namespace knative-eventing

The basic installation of Knative Event is now complete. Optional Event extensions include the following:

  • Enable Broker
  • GitHub Source
  • Apache Camel-K Source
  • Apache Kafka Source
  • GCP Sources
  • Apache CouchDB Source
  • VMware Sources and Bindings

Upgrading Knative

Use the kubectl apply command to upgrade Knative components and plugins. Perform these upgrades only one minor version at a time, ensuring each upgrade is stable before proceeding to the next. Assume for this example that you have v0.6.0 installed and wish to upgrade to v0.8.0. Upgrade to v0.7.0 and ensure that upgrade is stable before attempting the upgrade to v0.8.0.

Verify your version of the Serving component with the following command:

kubectl get namespace knative-serving -o 'go-template={{index .metadata.labels ""}}'

A similar command provides the version of the Event component as follows:

kubectl get namespace knative-eventing -o 'go-template={{index .metadata.labels ""}}'
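The empty label key in the commands above stands in for Knative’s release label. Assuming that label is serving.knative.dev/release for Serving (and eventing.knative.dev/release for Eventing), which you should verify against your own installation, the Serving check would read:

```shell
# Assumes the release label serving.knative.dev/release; verify on your cluster.
kubectl get namespace knative-serving \
  -o 'go-template={{index .metadata.labels "serving.knative.dev/release"}}'
```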

This procedure is for a manual installation of Knative. An installation that’s managed by the eventing-operator or serving-operator plug-ins will be different.

1. Identify breaking changes.

Review the release notes for the version of the Knative component you plan to install. This step will help you identify any additional changes you need to make to your Knative installation. Each Knative component has its own set of release notes published on the “Releases” page of its respective GitHub repository; for example, the v0.15.0 release notes for the Serving and Eventing components appear in the knative/serving and knative/eventing repositories respectively.

2. View the current pod status.

Record the pod status for the namespaces of your current installation. This step allows you to compare the namespace statuses before and after the upgrade. The following commands provide the current state of the namespace for the Serving and Eventing components respectively:

kubectl get pods --namespace knative-serving

kubectl get pods --namespace knative-eventing

3. Upgrade the plug-ins.

Upgrade any plug-ins you have installed at the same time you upgrade the Knative components. For example, the monitoring plug-in isn’t a core component of Knative, although it’s part of many installations.

4. Upgrade existing Kubernetes resources to the latest stored version.

The location of Kubernetes resources depends on the specific version, which may require you to migrate resources when upgrading Knative. The release notes will describe when this is necessary.

5. Perform the upgrade.

Apply the .yaml files for the next minor version of all the installed Knative components. For example, if you were running v0.13.1 of the Serving, Eventing and Monitoring components, you would install v0.14.0 of all these components with the following command:

kubectl apply --filename \

--filename \


6. Verify the upgrade.

Confirm that you have successfully upgraded the Knative components and plugins by viewing the status of their pods in the relevant namespaces. The upgrade will restart these pods, thus resetting their age. The following commands will obtain this information for the Knative Serving, Eventing and Monitoring plug-ins respectively:

kubectl get pods --namespace knative-serving

kubectl get pods --namespace knative-eventing

kubectl get pods --namespace knative-monitoring

The output from the above commands will provide information on the pods in each namespace, including the name of the pod, its status, the number of restarts, and the pod’s age. Review this information to ensure the pods are all up and running and that their ages have been reset. Some of the old pods may have a status of “Terminating,” meaning that Knative is still cleaning them up. This can occur if you execute the above commands shortly after performing the upgrades. Check the status of all terminating pods until they no longer appear before considering the upgrade complete. Repeat the upgrade process as needed to reach the desired minor version number.

About Us

Originally founded in 2007, Baytech has provided enterprise application development solutions for Fortune 500 companies in a wide range of industries, completing more than 100 separate projects.