Knative Tutorial: Your first step towards serverless application development

Kamesh Sampath

In this article, Kamesh Sampath shows us how to master the first steps on the journey towards a serverless application. He shows how to set up the right environment and takes us through its deployment.

In the first part of this article, we will deal with setting up a development environment that is suitable for Knative in version 0.6.0. The second part deals with the deployment of your first serverless microservice. The basic requirement for using Knative to create serverless applications is a solid knowledge of Kubernetes. If you are still inexperienced, you should complete the official basic Kubernetes tutorial [1].

Before we can get going, a few tools and utilities have to be installed:

 

  • Minikube [2]
  • kubectl [3]
  • kubens [4]

For Windows users, WSL [5] has proven to be quite useful, so I recommend installing that as well.

Setting up Minikube

Minikube is a single-node Kubernetes cluster that is ideal for everyday development with Kubernetes. After the installation, the following steps make Minikube ready for a deployment with Knative Serving. Listing 1 shows what this looks like in code.

 

Listing 1
minikube profile knative 

minikube start -p knative --memory=8192 --cpus=6 \
  --kubernetes-version=v1.12.0 \
  --disk-size=50g \
  --extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook" 

First, a Minikube profile is created; that is what the first line does. The second command then starts a Minikube instance with 8 GB of RAM, 6 CPUs and 50 GB of disk space. The start command also contains a few extra configuration flags for the Kubernetes cluster that are necessary to get Knative up and running. It is also important that the Kubernetes version used is not older than 1.12.0, otherwise Knative will not work. If Minikube doesn’t start immediately, that is completely normal; the initial startup can take a few minutes, so be a little patient during setup.
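The minimum-version requirement can be guarded with a small shell check before starting the cluster. This is a convenience sketch, not part of the official tooling; the `version_ge` helper and the `K8S_VERSION` variable are my own names:

```shell
#!/bin/sh
# version_ge VERSION MINIMUM -> exit 0 if VERSION >= MINIMUM
# (relies on GNU/BusyBox `sort -V` for version-aware ordering)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

K8S_VERSION="1.12.0"   # the version passed to --kubernetes-version above
if version_ge "$K8S_VERSION" "1.12.0"; then
  echo "ok: Kubernetes $K8S_VERSION is new enough for Knative 0.6.0"
else
  echo "error: Knative 0.6.0 needs Kubernetes >= 1.12.0" >&2
fi
```

The same check can be reused for any dotted version comparison, e.g. before upgrading the cluster.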

Setting up an Istio Ingress Gateway

Knative requires an Ingress Gateway to route requests to Knative Services. In addition to Istio [6], Gloo [7] is also supported as an Ingress Gateway. For our example we will use Istio, though. The following steps show how to perform a lightweight installation of Istio that contains only the Ingress Gateway:

curl -L https://raw.githubusercontent.com/knative/serving/release-0.6/third_party/istio-1.1.3/istio-lean.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply --filename -

Like the setup of Minikube, the deployment of the Istio pods takes a few minutes. You can follow the status with the command kubectl --namespace istio-system get pods --watch and exit the watch with Ctrl + C. Whether the deployment was successful can then be verified with kubectl --namespace istio-system get pods; if everything went well, the output should look like Listing 2.
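The sed filter in the install pipeline above rewrites every LoadBalancer service type to NodePort before the manifest reaches kubectl, because Minikube has no load balancer. A standalone sketch of what that substitution does; the snippet below is an illustrative stand-in, not the real istio-lean.yaml:

```shell
#!/bin/sh
# Illustrative stand-in for the relevant part of istio-lean.yaml:
cat > /tmp/istio-snippet.yaml <<'EOF'
kind: Service
metadata:
  name: istio-ingressgateway
spec:
  type: LoadBalancer
EOF

# The same substitution used in the install pipeline:
sed 's/LoadBalancer/NodePort/' /tmp/istio-snippet.yaml
```

The last line prints the snippet with `type: NodePort` instead of `type: LoadBalancer`, which is exactly what kubectl receives on stdin in the real pipeline.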

Listing 2
NAME                                     READY   STATUS    RESTARTS   AGE
cluster-local-gateway-7989595989-9ng8l   1/1     Running   0          2m14s
istio-ingressgateway-6877d77579-fw97q    2/2     Running   0          2m14s
istio-pilot-5499866859-vtkb8             1/1     Running   0          2m14s

Installing Knative Serving

The installation of Knative Serving [8] allows us to run serverless workloads on Kubernetes. It also provides automatic scaling and tracking of revisions. You can install Knative Serving with the following commands:

kubectl apply --selector knative.dev/crd-install=true \
--filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml --selector networking.knative.dev/certificate-provider!=cert-manager

Again, it will probably take a few minutes until the Knative pods are deployed; with the command kubectl --namespace knative-serving get pods --watch you can follow the status and, as before, exit the watch with Ctrl + C. With kubectl --namespace knative-serving get pods you can then check whether everything is running. If so, the output should look like Listing 3.
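Instead of eyeballing the listing, readiness can be checked mechanically. The helper below is a sketch of my own that parses the plain table output of kubectl get pods; for serious scripting, kubectl's --output jsonpath is more robust than parsing the table:

```shell
#!/bin/sh
# not_running: reads `kubectl get pods` table output on stdin and prints
# how many pods are NOT in the Running state (the header line is skipped;
# the STATUS column is the third field).
not_running() {
  awk 'NR > 1 && $3 != "Running" { n++ } END { print n + 0 }'
}

# Typical use (requires a running cluster):
#   kubectl --namespace knative-serving get pods | not_running
```

A result of 0 means every pod in the namespace reports Running.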

Listing 3
NAME                               READY   STATUS    RESTARTS   AGE
activator-54f7c49d5f-trr82         1/1     Running   0          27m
autoscaler-5bcd65c848-2cpv8        1/1     Running   0          27m
controller-c795f6fb-r7bmz          1/1     Running   0          27m
networking-istio-888848b88-bkxqr   1/1     Running   0          27m
webhook-796c5dd94f-phkxw           1/1     Running   0          27m

Deploy demo application

The application we want to create for demonstration is a simple greeting machine that outputs “Hi”. For this we use an existing Linux container image, which can be found on the Quay website [9].
The first step is to create a traditional Kubernetes deployment, which we then modify to use serverless functionality. This makes clear where the actual differences lie and how existing deployments can be made serverless with Knative.

Create a Kubernetes resource file

The following steps show how to create a Kubernetes resource file. To do this, you must first create a new file called app.yaml, into which the code in Listing 4 must be copied.

Listing 4
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter
spec:
  selector:
    matchLabels:
      app: greeter
  template:
    metadata:
      labels:
        app: greeter
    spec:
      containers:
      - name: greeter
        image: quay.io/rhdevelopers/knative-tutorial-greeter:quarkus
        resources:
          limits:
            memory: "32Mi"
            cpu: "100m"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: greeter-svc
spec:
  selector:
    app: greeter
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080


Create the deployment and service

By applying the previously created YAML file, we create the deployment and the service. This is done with the command kubectl apply --filename app.yaml. Here, too, kubectl get pods --watch can be used to follow the status of the application, and Ctrl + C exits the watch. If all went well, we should now have a deployment called greeter and a service called greeter-svc (Listing 5).

Listing 5
$ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
greeter   1         1         1            1           16s

$ kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
greeter-svc   NodePort    10.110.164.179   <none>        8080:31633/TCP   50s

To open the service, you can also use a Minikube shortcut like minikube service greeter-svc, which opens the service URL in your browser. If you prefer to call the same URL with curl, use the command curl $(minikube service greeter-svc --url). You should now see a text that looks something like this: Hi greeter => '9861675f8845' : 1
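For reference, minikube service greeter-svc --url simply prints a URL assembled from the Minikube node IP and the NodePort of the service. A sketch with placeholder values; the IP and port below are examples from this article, yours will differ:

```shell
#!/bin/sh
NODE_IP="192.168.99.100"   # placeholder; use $(minikube ip) on your machine
NODE_PORT="31633"          # placeholder; from the PORT(S) column 8080:31633/TCP
URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"
# curl "$URL"   # requires the running cluster from the steps above
```

This is also why a NodePort service type is enough on Minikube: the node IP plus the NodePort fully identify the endpoint.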

Migrating the traditional Kubernetes deployment to serverless with Knative

The migration starts by simply copying the app.yaml file to a new file called serverless-app.yaml and updating it to the lines shown in Listing 6.

Listing 6
---
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  template:
    metadata:
      labels:
        app: greeter
    spec:
      containers:
      - image: quay.io/rhdevelopers/knative-tutorial-greeter:quarkus
        resources:
          limits:
            memory: "32Mi"
            cpu: "100m"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
        readinessProbe:
          httpGet:
            path: /healthz

If we compare the traditional Kubernetes application (app.yaml) with the serverless application (serverless-app.yaml), we notice three things. Firstly, no separate Kubernetes Service is needed, as Knative creates and routes the service automatically. Secondly, since Knative takes care of wiring the service to the deployment, selectors are no longer needed, so the following lines are omitted:

selector:
  matchLabels:
    app: greeter

Lastly, under template.spec.containers the name: field is omitted because the name is generated automatically by Knative. In addition, no ports need to be defined for the liveness and readiness probes.
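The structural difference around the selector can be made visible with a plain diff. A minimal sketch; the two snippets below are trimmed stand-ins for the spec sections of Listings 4 and 6, not the full manifests:

```shell
#!/bin/sh
# Trimmed stand-in for the Deployment spec from Listing 4:
cat > /tmp/k8s-spec.yaml <<'EOF'
spec:
  selector:
    matchLabels:
      app: greeter
  template:
    metadata:
      labels:
        app: greeter
EOF

# Trimmed stand-in for the Knative Service spec from Listing 6:
cat > /tmp/knative-spec.yaml <<'EOF'
spec:
  template:
    metadata:
      labels:
        app: greeter
EOF

# diff exits non-zero when the files differ, hence the `|| true`:
diff /tmp/k8s-spec.yaml /tmp/knative-spec.yaml || true
```

The diff output shows exactly the selector/matchLabels block that the Knative manifest drops.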

Deploying the serverless app

The deployment follows the same pattern as before, using the command kubectl apply --filename serverless-app.yaml. After a successful deployment of the serverless application, the following objects should have been created: a new deployment (Listing 7); a few new services (Listing 8), including an ExternalName service that points to istio-ingressgateway.istio-system.svc.cluster.local; and a Knative service with a URL to which requests can be sent (Listing 9).


Listing 7
$ kubectl get deployments
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
greeter                    1         1         1            1           30m
greeter-bn8cm-deployment   1         1         1            1           59s

Listing 8
$ kubectl get services
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                           PORT(S)    AGE
greeter                 ExternalName   <none>           istio-ingressgateway.istio-system.svc.cluster.local   <none>     114s
greeter-bn8cm           ClusterIP      10.110.208.72    <none>                                                80/TCP     2m21s
greeter-bn8cm-metrics   ClusterIP      10.100.237.125   <none>                                                9090/TCP   2m21s
greeter-bn8cm-priv      ClusterIP      10.107.104.53    <none>                                                80/TCP     2m21s

Listing 9
$ kubectl get services.serving.knative.dev
NAME    URL                                LATESTCREATED   LATESTREADY     READY   REASON
greeter http://greeter.default.example.com greeter-bn8cm   greeter-bn8cm   True

Attention
In a Minikube deployment we will have neither LoadBalancer nor DNS to resolve anything to *.example.com or a service URL like http://greeter.default.example.com. To call a service, the host header must be used with http/curl.

To be able to call a service, the request must go through the ingress or gateway (in our case Istio). To find out the address of the Istio gateway we have to use in the http/curl call, the following command can be used:

IP_ADDRESS="$(minikube ip):$(kubectl get svc istio-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')"

The command reads the NodePort of the service istio-ingressgateway in the namespace istio-system. Once we have the NodePort of the istio-ingressgateway, we can call the greeter service via $IP_ADDRESS by passing the host header in http/curl calls.
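The jsonpath expression selects the nodePort of the port-80 entry from the Service spec. Equivalently, the port can be read from the PORT(S) column of the plain table output; the helper below is a parsing sketch of my own (jsonpath remains the more robust option for scripts):

```shell
#!/bin/sh
# node_port: reads a single-service `kubectl get svc` table on stdin and
# prints the NodePort from a PORT(S) value like 80:31380/TCP
# (5th column; the value before the first '/' and after the ':').
node_port() {
  awk 'NR > 1 { split($5, a, /[:\/]/); print a[2] }'
}

# Typical use (requires a running cluster):
#   kubectl --namespace istio-system get svc istio-ingressgateway | node_port
```

For services exposing several ports, only the first PORT(S) entry is parsed here; the jsonpath variant above selects the port-80 entry explicitly.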

 

curl -H "Host: greeter.default.example.com" $IP_ADDRESS

 

Now you should get the same answer as with the traditional Kubernetes deployment (Hi greeter => '9861675f8845' : 1). If you leave the deployment idle for about 90 seconds, its pods are terminated, i.e. scaled down to zero. On the next call, the deployment is reactivated and the request is answered.

 

Congratulations, you have successfully deployed and called your first serverless application!

 

Links & Literature


[1] https://kubernetes.io/docs/tutorials/kubernetes-basics/
[2] https://kubernetes.io/docs/tasks/tools/install-minikube/
[3] https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
[4] https://github.com/ahmetb/kubectx/blob/master/kubens/
[5] https://docs.microsoft.com/en-us/windows/wsl/install-win10
[6] https://istio.io
[7] https://gloo.solo.io
[8] https://knative.dev/docs/serving/
[9] https://quay.io/rhdevelopers/knative-tutorial-greeter

Author

Kamesh Sampath

Kamesh is a Principal Software Engineer at Red Hat. As part of his additional role as Director of Developer Experience at Red Hat, he actively educates on Kubernetes/OpenShift, Service Mesh, and serverless technologies. In a career spanning close to two decades, most of Kamesh’s time was spent in the services industry, helping various enterprise customers build Java-based solutions. Kamesh has contributed to open-source projects for more than a decade and now actively contributes to projects such as Knative, Quarkus, and Eclipse Che. As part of his developer philosophy, he strongly believes in: “Learn more, do more and share more!”

