Services

A Kubernetes Service object enables communication with components within, and outside of, the application. You can think of Services like doors; sure, there are other ways to get into the house (e.g. windows, trap doors, the chimney), but the front door is your best bet! In the diagram below, take note of the fact that Services really are the gatekeepers of communication. We see users outside of the node accessing a Pod; we see Pods talking to each other; and we see a Pod reach outside of the Node to an external database.

Services as Doors

There are three kinds of Service objects:

  1. LoadBalancer: Exposes Pods externally using a cloud provider’s load balancer.
  2. ClusterIP: Creates a virtual IP within the cluster to enable communication between different services.
  3. NodePort: Makes an internal port accessible through a port on the Node.

We will elaborate on the three Service types, and create a corresponding Service object for the following Pod config:

apiVersion: v1
kind: Pod
metadata:
   name: myapp-pod
   labels:
      app: myapp
      type: back-end
spec:
   containers:
   -  name: nginx-container
      image: nginx

LoadBalancer Service

To reiterate, a LoadBalancer Service exposes Pods externally using a cloud provider’s load balancer.

credit: Ahmet Alp Balkan

A LoadBalancer Service object will give you a single IP address that will forward all traffic to your service, exposing it to the BBI (Big Bad Internet). Your particular implementation of the LoadBalancer may vary depending on which cloud provider you use, so I’ll link you to the Kubernetes Documentation for more info.
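
For reference, here's a minimal sketch of a LoadBalancer Service for myapp-pod (the name myapp-loadbalancer is my own; on a cluster without a supported cloud provider, this Service simply behaves like a NodePort):

apiVersion: v1
kind: Service
metadata:
   name: myapp-loadbalancer
spec:
   type: LoadBalancer
   ports:
   -  targetPort: 80
      port: 80
   selector:
      app: myapp
      type: back-end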


ClusterIP Service

To reiterate, a ClusterIP Service object creates a virtual IP within the cluster to enable communication between different services.

Cluster IP

In the diagram above, we are routing front-end to back-end, and back-end to redis; all enabled with the use of services. Let’s create a ClusterIP Service object that routes front-end to back-end.

apiVersion: v1
kind: Service
metadata:
   name: back-end
spec:

   # 1. Specifies that we are creating a ClusterIP Service object.
   type: ClusterIP

   # 2. Port details.
   ports:
   -  targetPort: 80
      port: 80

   # 3. Use a selector to specify what Pods to target.
   selector:
      app: myapp
      type: back-end

  1. Tell K8s that you want to make a ClusterIP Service object. (ClusterIP is also the default type if none is specified.)
  2. Tell the ClusterIP Service what ports it should care about; port specifies the port that the Service is listening on, and targetPort specifies the port that the target Pod is listening on.
  3. Using a selector with the labels of the Pod definition from above, we tell the ClusterIP Service to target myapp-pod. This will select all Pods matching this selector, and randomly route traffic to one of them. Of course, to learn how to change this random behavior, see here.
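
Once created, other Pods in the cluster can reach the back-end through the Service's cluster-internal DNS name (this assumes your cluster runs a standard DNS add-on such as CoreDNS, which most do):

# From inside any Pod in the same namespace:
curl http://back-end:80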

NodePort Service

To reiterate, a NodePort Service object makes an internal port accessible through a port on the Node.

Node Port

In the diagram above, take notice of:

  1. targetPort: field that refers to the port of the Pod you wish to target.
  2. port: field that refers to the port on the Service itself. The Service acts a lot like a virtual server, having its own IP address for routing, which is the cluster IP of the Service.
  3. nodePort: the port being exposed on the Node. The default range is 30000 - 32767, though this range can be changed via the kube-apiserver's --service-node-port-range flag.

Let’s create a NodePort Service definition file for myapp-pod, referenced in the Services section.

apiVersion: v1
kind: Service
metadata:
   name: myapp-service
spec:

   # 1. Specifies that we are creating a NodePort Service object.
   type: NodePort

   # 2. Port details as defined above.
   ports:
   -  targetPort: 80
      port: 80
      nodePort: 30008
   
   # 3. Use a selector to specify what Pods to target.
   selector:
      app: myapp
      type: back-end

  1. Tell K8s that you want to make a NodePort Service object.
  2. Tell the NodePort Service what ports it should care about.
  3. Using a selector with the labels of the Pod definition from above, we tell the NodePort Service to target myapp-pod.

Now, from the command line:

kubectl create -f service-def.yaml
#> service "myapp-service" created

kubectl get services
#> NAME              TYPE        CLUSTER-IP        EXTERNAL-IP    PORT(S)           AGE
#> kubernetes        ClusterIP   10.96.0.1         <none>         443/TCP           22d
#> myapp-service     NodePort    10.106.127.123    <none>         80:30008/TCP      18m

curl http://<physical-ip-of-your-node>:30008

If there are multiple Pods that match the selector, Kubernetes will select one at random to route the traffic to. This behavior can be changed, as described here.


Ingress Controllers

Let’s say you own a website that has several applications accessible through different paths. For example, google.com/voice, google.com/hangouts, etc… Normally, your browser would perform a DNS lookup for google.com, and route all traffic to whatever IP resolves. You would need to utilize a NodePort to expose a port, and then a series of LoadBalancers to route traffic to Pods in a scalable fashion. If you are on GCP, Azure, or AWS, you must pay for each LoadBalancer, not to mention the fact that you have to implement SSL/TLS through each hop, configure firewall rules for each service, etc…

This is becoming a headache, but thankfully, you can manage all of this directly from the Kubernetes cluster with the use of an Ingress. An Ingress helps your users access your application through a single externally-accessible URL that you can configure to route to different services within your cluster. Oh, and you can configure it to use SSL!

Credit: Ahmet Alp Balkan

To begin setting up an Ingress, we must deploy an Ingress Controller, which is the application responsible for handling the proxying for us. You can use Nginx, Contour, HAProxy, Traefik, Istio, or some other application, but we will be using Nginx in this example. To configure an Ingress Controller, we require: a Deployment that abstracts interacting with the Nginx Pod, a NodePort Service to expose the Ingress Controller to the outside world, and a ServiceAccount to provide the Ingress Controller the ability to modify the internal K8s network. Let’s start by creating the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
   name: nginx-ingress-controller
spec:
   replicas: 1
   selector:
      matchLabels:
         name: nginx-ingress
   template:
      metadata:
         labels:
            name: nginx-ingress
      spec:
         containers:
         -  name: nginx-ingress-controller
            image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0

            # 1. Set environment variables for the container;
            env:
            -  name: POD_NAME
               valueFrom:
                  fieldRef:
                     fieldPath: metadata.name
            -  name: POD_NAMESPACE
               valueFrom:
                  fieldRef:
                     fieldPath: metadata.namespace

            # 2. Command to run the nginx server;
            args:
            -  "/nginx-ingress-controller"
            -  "--configmap=$(POD_NAMESPACE)/nginx-configuration"

            # 3. Specify the ports used by the ingress controller.
            ports:
            -  name: http
               containerPort: 80
            -  name: https
               containerPort: 443

  1. We create some environment variables that will be visible to the Ingress Controller container. We utilize the metadata from the Pods that will be created and assign them to POD_NAME and POD_NAMESPACE.

  2. We run /nginx-ingress-controller --configmap=$(POD_NAMESPACE)/nginx-configuration in the Pod to run the nginx server, using the environment variables we defined in the lines above. We'll define nginx-configuration in a second, but this is essentially a file that stores configuration information about the Ingress Controller.

  3. We specify the ports used by the Ingress Controller, which are your classic HTTP/S ports.

Now, we need to create a ConfigMap to pass information about how to configure the Nginx server. We won’t add much to it, but just know that if you ever need to configure your Ingress Controller, this is the place to do it:

apiVersion: v1
kind: ConfigMap
metadata:
   name: nginx-configuration 
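
If you do need to tweak the controller later, settings go under a data field. As an illustration (proxy-body-size is one of the nginx ingress controller's supported keys; the value here is just an example):

apiVersion: v1
kind: ConfigMap
metadata:
   name: nginx-configuration
data:
   proxy-body-size: "10m"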

Next, we need to create a NodePort Service object to route traffic that hits the node through TCP ports 443 or 80 to all Pods that match the given selector.

apiVersion: v1
kind: Service
metadata:
   name: nginx-ingress
spec:
   type: NodePort
   ports:
   -  port: 80
      targetPort: 80
      protocol: TCP
      name: http
   -  port: 443
      targetPort: 443
      protocol: TCP
      name: https
   selector:
      name: nginx-ingress

Finally, we need to create a ServiceAccount with the correct roles and role bindings. This will allow the Ingress Controller to monitor the Kubernetes cluster for Ingress Resources (which we will describe in the next section). A sketch of the accompanying RBAC objects follows the ServiceAccount definition below.

apiVersion: v1
kind: ServiceAccount
metadata:
   name: nginx-ingress-serviceaccount
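
On its own, a ServiceAccount grants nothing; the permissions come from a ClusterRole bound to it. Here is a minimal sketch (the object names below are my own, and the exact rules your controller version needs may differ; check the project's example manifests) giving the controller read access to Services, Endpoints, Secrets, ConfigMaps, and Ingresses:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
   name: nginx-ingress-clusterrole
rules:
-  apiGroups: [""]
   resources: ["configmaps", "endpoints", "secrets", "services"]
   verbs: ["get", "list", "watch"]
-  apiGroups: ["extensions"]
   resources: ["ingresses"]
   verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
   name: nginx-ingress-clusterrole-binding
roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: nginx-ingress-clusterrole
subjects:
-  kind: ServiceAccount
   name: nginx-ingress-serviceaccount
   namespace: default

Don't forget to wire the account into the Deployment by adding serviceAccountName: nginx-ingress-serviceaccount to its Pod spec.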

Ingress Resources

Now, it’s time to define Ingress Resource objects, which are sets of rules and configurations that are applied to the Ingress Controller. Through the use of Ingress Resources, we can: forward all incoming traffic to a single application, route traffic to different applications based on the URL, route users based on the domain name itself, etc… Let’s start by defining an Ingress Resource to serve the Google Voice application.

apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
   name: ingress-voice
spec:
   backend:
      serviceName: google-voice-service
      servicePort: 80

And now, we’ll create another Ingress object to describe the path to our Google Hangouts application.

apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
   name: ingress-hangouts
spec:
   backend:
      serviceName: google-hangouts-service
      servicePort: 80
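
As with Services, these are created and inspected via kubectl (the file name and output below are illustrative):

kubectl create -f ingress-voice.yaml
#> ingress "ingress-voice" created

kubectl get ingress
#> NAME               HOSTS   ADDRESS   PORTS   AGE
#> ingress-voice      *                 80      1m
#> ingress-hangouts   *                 80      1m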

Now that we’ve defined Ingress Resources for our Google Voice and Google Hangouts apps, we need to tell the Ingress Controller how to manage routing. Let’s build an Ingress Resource that routes traffic based on the URL; e.g. google.com/voice, google.com/hangouts, etc.:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: ingress-voice-or-hangouts
spec:
   rules:
   -  http:
         paths:
         -  path: /voice
            backend:
               serviceName: google-voice-service
               servicePort: 80
         -  path: /hangouts
            backend:
               serviceName: google-hangouts-service
               servicePort: 80

At this point, we can hit the specified backends via their serviceName and servicePort. But let’s say that instead, we wanted to be able to route traffic based on the domain. For example, we want to launch our Voice app through voice.google.com. Well, the good news is, we can simply add another Ingress Resource that has rules defined for host:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: ingress-voice-or-hangouts-2
spec:
   rules:
   -  host: voice.google.com
      http:
         paths:
         -  backend:
               serviceName: google-voice-service
               servicePort: 80
   -  host: hangouts.google.com
      http:
         paths:
         -  backend:
               serviceName: google-hangouts-service
               servicePort: 80

Tada! It’s that easy! If you go to your browser’s address bar right now and type https://google.com/voice, you’ll notice that you are redirected to https://voice.google.com. Rewrites like this can be accomplished with the use of the rewrite-target option. For more information, see here.
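
As a rough sketch of how that option is wired up with the nginx controller (the object name is my own; the annotation below rewrites the matched /voice prefix to / before the request reaches the backend):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: ingress-voice-rewrite
   annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
spec:
   rules:
   -  http:
         paths:
         -  path: /voice
            backend:
               serviceName: google-voice-service
               servicePort: 80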


Network Policies

A NetworkPolicy controls the inbound (ingress) and outbound (egress) traffic that a set of Pods can receive or send. Like other Kubernetes objects, a NetworkPolicy uses labels and selectors to determine which Pods to target.

Let’s say we have an API that sends logfiles to a database. Refer to the Pod specs below:

apiVersion: v1
kind: Pod
metadata:
   name: api-pod
   labels:
      role: api
spec:
   containers:
   -  name: python3
      image: python:3

apiVersion: v1
kind: Pod
metadata:
   name: log-db
   labels:
      role: db
spec:
   containers:
   -  name: mysql
      image: mysql

As you can see, the two Pods we are concerned about are named api-pod and log-db. Let’s create a NetworkPolicy object to only allow ingress (inbound) traffic to log-db from api-pod over port 3306/TCP.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
   name: db-policy
spec:

   # 1. Select what Pod(s) to apply the NetworkPolicy to.
   podSelector:
      matchLabels:
         role: db
   
   # 2. Specify that you want to define an ingress policy.
   policyTypes:
   -  Ingress

   # 3. Allow inbound from Pods matching the selector over port 3306/TCP.
   ingress:
   -  from:
      -  podSelector:
            matchLabels:
               role: api
      ports:
      -  protocol: TCP
         port: 3306

  1. Utilize selectors to select Pods with role=db.
  2. Specify that you want to define an ingress policy.
  3. Describe your ingress policy. You want to allow ingress traffic from all Pods matching the podSelector role=api, over port 3306 with protocol TCP.

Defining an egress (outbound) traffic rule is pretty much the same, as sketched below. The K8s docs describe this ad nauseam.
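
For instance, here is a minimal sketch of the mirror-image rule (the object name api-egress-policy is my own), allowing api-pod to send outbound traffic only to the database over 3306/TCP:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
   name: api-egress-policy
spec:
   podSelector:
      matchLabels:
         role: api
   policyTypes:
   -  Egress
   egress:
   -  to:
      -  podSelector:
            matchLabels:
               role: db
      ports:
      -  protocol: TCP
         port: 3306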

NOTE: Not all network solutions support NetworkPolicies. Kube-router, Calico, Romana, and Weave Net are a few network solutions that support K8s NetworkPolicy objects. Read the K8s docs for more info.