Sustainability in software development is becoming increasingly important – especially in the context of application delivery in the cloud. Rising energy prices are motivating more and more companies to think about more sustainable software architecture and to monitor their computing-related CO2 emissions in more detail.
In the cloud-native environment, a growing number of tools and methods help improve the sustainability of software, for example through demand-based scaling. This article uses a concrete case study to show how developers can use the open source software Knative to convert a containerized application in Kubernetes into a dynamically scalable serverless service that even shuts down completely when there is no demand.
Scale as needed for more sustainability
Knative is based on Kubernetes, which has established itself as the backbone of modern containerized application landscapes. Kubernetes offers many ways to automate the deployment, scaling, and management of containerized applications. Despite its flexibility, however, Kubernetes on its own lacks the ability to run serverless applications. The Green Software Foundation considers the underlying deployment pattern, in which applications scale according to current demand and can be switched off completely when idle, to be sustainable.
When migrating a simple sample application from a standard Kubernetes deployment to a Knative service, the following quality criteria must be taken into account:
- The migration should require as few changes to the source code as possible. Only then can larger applications be migrated easily, without incurring excessive costs for restructuring the application landscape.
- The logging, monitoring, and tracing practices established in the existing application environment should remain usable. To enable an easy migration, Knative must therefore be compatible with the existing tools. The example outlined below is based on the widely used monitoring software Prometheus.
- Existing endpoints of the example application should remain available under the same routes after the migration. In larger application landscapes, routes often serve as static addresses of the services. If a service is suddenly reachable at a different URL after a migration, this can cause problems, especially when communication is not decoupled through an API gateway or service registry.
- The load pattern of the example application includes extended periods in which the service receives no requests at all. During these idle phases, scaling the application down to zero replicas yields a significant saving in resources.
The example application to be migrated consists of a simple, stateless Kubernetes deployment that exposes a REST API via an ingress. The Prometheus Operator collects the application's metrics for monitoring via the Kubernetes custom resource ServiceMonitor, as Figure 1 illustrates.
Initial situation for the example application to be migrated in the Kubernetes cluster (Fig. 1).
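The initial situation described above can be sketched with manifests along the following lines. Names such as `sample-app`, the image reference, the port, and the `/metrics` path are assumptions for illustration, not taken from the original setup:

```yaml
# Sketch of the starting point: a Deployment exposing a REST API,
# a Service in front of it, and a ServiceMonitor through which the
# Prometheus Operator discovers the metrics endpoint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: example.com/sample-app:1.0   # placeholder image
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  selector:
    app: sample-app
  ports:
    - name: http
      port: 80
      targetPort: http
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-app
spec:
  selector:
    matchLabels:
      app: sample-app
  endpoints:
    - port: http          # references the Service port by name
      path: /metrics      # assumed metrics endpoint
```

An ingress routing external traffic to the `sample-app` Service would complete the picture; its exact form depends on the ingress controller in use and is omitted here.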
Knative: Serverless services in Kubernetes
The open source project Knative aims to run and manage serverless workloads in Kubernetes. It does this by providing autoscaling capabilities, including the ability to scale applications down to zero pods to avoid idle pods. Even though the savings per application may be small, the effect adds up across thousands of pods and a multitude of applications, especially in large clusters. It is important to note that Knative can only handle stateless applications that communicate exclusively via HTTP.
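In Knative Serving, scale-to-zero behavior is controlled via autoscaling annotations on the revision template. The following minimal sketch shows how this could look; the service name and image are placeholders, and the annotation values shown are illustrative (`min-scale: "0"` is already the default):

```yaml
# Minimal Knative Service (sketch). With min-scale "0", Knative removes
# all pods of the application when no requests arrive and starts a new
# pod on the next incoming request.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-app
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # allow scaling to zero
        autoscaling.knative.dev/max-scale: "10"   # upper bound under load
    spec:
      containers:
        - image: example.com/sample-app:1.0       # placeholder image
```

The trade-off of scaling to zero is a cold-start delay on the first request after an idle period, which should be weighed against the resource savings.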
To create a serverless runtime environment within a Kubernetes cluster, several components of Knative Serving must work together, in particular Service, Route, and Revision:
- Service: A Knative Service defines a serverless application and should therefore not be confused with a Service in Kubernetes. When you create a Knative Service, a Route and a Configuration are created automatically.
- Route: A Route determines how requests are distributed among the different revisions of an application. For example, you can specify that 90 percent of requests should flow to the current version of the application and the remaining 10 percent to a new version.
- Revision: Every change to a Knative Service causes Knative to create a new revision. Each revision represents a snapshot of the code and its associated configuration.
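The 90/10 split described above is expressed in the `traffic` block of a Knative Service. In this sketch, the revision name follows Knative's default `<service>-<suffix>` naming scheme and is, like the image, an assumption for illustration:

```yaml
# Sketch of a traffic split across two revisions of a Knative Service:
# 90 percent of requests go to the existing revision, 10 percent to the
# revision created by the updated template below.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-app
spec:
  template:
    spec:
      containers:
        - image: example.com/sample-app:1.1   # new version (placeholder)
  traffic:
    - revisionName: sample-app-00001          # pinned current revision
      percent: 90
    - latestRevision: true                    # newly created revision
      percent: 10
```

Shifting the percentages step by step toward the latest revision enables a gradual, canary-style rollout without changing the service's route.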