Kubernetes Ingress
Ingress is a major part of the discussion when we talk about Kubernetes' built-in options for load balancing.
Kubernetes port types
Limitation
Unless something seriously magical has happened with Docker for Mac, a Service of type: LoadBalancer is only designed for a cloud environment, where the cloud provider can provision an external load balancer for it (e.g. an ELB on AWS, or the equivalent on GKE).
That said, you can see from the output that Kubernetes has behaved as if the Service were type: NodePort (with this specific example showing that node port 32670 forwards to port 8080 on the Service). It's unclear whether you can just use that node port as-is, or whether the Service being stuck in the "pending" state means traffic will not route as expected; the quickest way to find out is simply to try it.
Or you can skip the pretense and create the Service legitimately as type: NodePort, so that you and Kubernetes are on the same page about what is happening.
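As a minimal sketch, such a NodePort Service could look like the following; the service name reuses hello-world-svc from later in this article, the ports come from the example output above, and the selector label is an assumption you would replace with your own:

apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  type: NodePort
  selector:
    app: hello-world   # assumption: the label used by the hello-world pods
  ports:
    - port: 8080       # port the Service exposes inside the cluster
      targetPort: 8080 # container port on the pods
      nodePort: 32670  # optional; omit it and Kubernetes picks a port in 30000-32767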
The other way you can choose to do things is to run an in-cluster Ingress controller, such as ingress-nginx, and use virtual hosting to expose all your Services on just one port. That can be far more convenient if you have a lot of Services to expose, but it is likely too big a headache to set up for just one or two of them.
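As a rough sketch of that virtual hosting, a single Ingress resource can fan traffic out by host name to several Services behind one controller port. The host names and service names below are purely illustrative, and on older clusters the apiVersion may be extensions/v1beta1 with a slightly different backend syntax:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-hosting-example
spec:
  ingressClassName: nginx
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 8080
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 8080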
Why endpoints and not services
The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
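You can inspect the pod IP:port pairs the controller picks up by listing the Endpoints object that backs a Service (hello-world-svc is used here as an assumed example name):

// List the endpoints (pod IP:port pairs) behind a service
$ kubectl get endpoints hello-world-svc
// Compare with the service's own ClusterIP
$ kubectl get service hello-world-svc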
A practical approach to deploying a Kubernetes ingress load balancer.
Generate an SSL certificate and configure Kubernetes secrets for it.
// Generate local root key with password
openssl genrsa -des3 -out rootCA.key 4096
// Generate local root key without password
openssl genrsa -out rootCA.key 4096
// Create the local Certificate Authority certificate
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -subj "/C=LK/ST=western/O=lsf, Inc./CN=local.com" -out rootCA.crt
// Update your local certificate store
mkdir /usr/local/share/ca-certificates/extra
cp rootCA.crt /usr/local/share/ca-certificates/extra/rootCA.crt
update-ca-certificates
// Generate domain key
openssl genrsa -out example.com.key 2048
// Generate a certificate signing request for the domain
openssl req -new -sha256 -key example.com.key -subj "/C=SL/ST=western/O=lsf, Inc./CN=example.com" -out example.com.csr
// Sign the request with the local CA to create the domain certificate
openssl x509 -req -in example.com.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out example.com.crt -days 500 -sha256
// Configure a kubernetes secret from the certificate
kubectl create secret tls tls-certificate --key example.com.key --cert example.com.crt

While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.

// Generate dhparam
openssl dhparam -out dhparam.pem 2048
// Configure a kubernetes secret for it
kubectl create secret generic tls-dhparam --from-file=dhparam.pem
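Assuming the commands above succeeded, both secrets should now be visible in the cluster:

// Verify the TLS certificate and dhparam secrets exist
$ kubectl get secret tls-certificate tls-dhparam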
Once the ingress controller is deployed, you can check its Service.
$ kubectl get service nginx-ingress
Map your test domain name to your localhost IP in the /etc/hosts file, so that the URL can be called from a browser.
$ sudo nano /etc/hosts
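For example, assuming the cluster is reachable on localhost, the added line could look like this:

127.0.0.1    example.com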
Then check it using a browser or the following curl commands.
// Check normally (the root CA was added to the local certificate store above)
$ curl https://example.com
// Check by explicitly providing the CA certificate
$ curl https://example.com --cacert ./tls/rootCA.crt
// If you'd like to turn off curl's verification of the certificate,
// use the -k (or --insecure) option.
$ curl https://example.com --insecure
Check the services currently running in the Kubernetes cluster.
$ kubectl get services
Finally, add the ingress rules as in the following YAML.
Here a minor modification has been made to the original source: the domain name is changed to example.com, which I have used throughout this article.
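A sketch of such an ingress rule is shown below; it assumes the hello-world-svc Service listens on port 8080 and reuses the tls-certificate secret created earlier, so adjust the names and ports to match your own deployment (on older clusters the apiVersion may be extensions/v1beta1 with a slightly different backend syntax):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: tls-certificate   # the TLS secret created earlier
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world-svc
                port:
                  number: 8080      # assumption: the port the service exposes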
View the hello-world-svc page by URL through the ingress.
View the hello-world service directly through the NodePort.
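Assuming the node is reachable on localhost and the NodePort from the earlier example (32670), the two checks could look like this:

// Through the ingress (TLS terminated by nginx), providing the local root CA
$ curl https://example.com --cacert ./tls/rootCA.crt
// Directly through the NodePort, bypassing the ingress
$ curl http://localhost:32670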