
How to add arbitrary records to Kube-DNS

Everything started the other day when I wanted to overwrite some DNS records inside my Kubernetes cluster, but, at least at first glance, I could not find a straightforward way to make that happen.
As they say, the best way to learn something is to get your hands dirty with it. So I took matters into my own hands and decided to dig deeper; I found some really interesting stuff and decided to write about it.

I found two possible solutions to this problem:

  1. Pod-wise: adding the new records to every Pod that needs to resolve these domains
  2. Cluster-wise: adding the records to a central place that all Pods have access to, which in our case is Kube-DNS

Pod-wise solution

As of Kubernetes 1.7, it’s possible to add entries to a Pod’s /etc/hosts directly using .spec.hostAliases.

For example, to resolve foo.local, bar.local to 127.0.0.1 and foo.remote, bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under .spec.hostAliases:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
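
To check the result, you can create the Pod and read the container's output, since the container simply cats /etc/hosts and exits (hence restartPolicy: Never). This assumes the manifest above is saved as hostaliases-pod.yaml:

```shell
# Create the Pod from the manifest above
kubectl apply -f hostaliases-pod.yaml

# The container's log is the contents of /etc/hosts,
# so the injected entries should appear at the bottom
kubectl logs hostaliases-pod
```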

Easy enough, but the problem with this approach is that you have to add the hostAliases to every resource that needs access to the custom entries, and that doesn't scale at all.


Cluster-wise solution

DNS-based service discovery has been part of Kubernetes for a long time with the Kube-DNS cluster addon. This has generally worked pretty well, but there have been some concerns around reliability, flexibility, and security.

As of Kubernetes v1.11, CoreDNS is the recommended DNS server, replacing Kube-DNS. If your cluster was originally set up with Kube-DNS, you may still have it deployed rather than CoreDNS.
I’ll assume that you’re using CoreDNS as your Kubernetes DNS.
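
If you're not sure which one your cluster runs, the Pod labels in kube-system give it away; note that CoreDNS deployments keep the historical k8s-app=kube-dns label for compatibility, so the Pod names are what to look at:

```shell
# CoreDNS pods are typically named coredns-...,
# Kube-DNS pods kube-dns-...
kubectl get pods -n kube-system -l k8s-app=kube-dns
```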

In CoreDNS, it’s possible to add arbitrary entries to the cluster DNS so all pods will resolve these entries directly from DNS, without changing every /etc/hosts file in each pod.

First, edit the CoreDNS ConfigMap and add your changes:

kubectl edit cm coredns -n kube-system

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        hosts /etc/coredns/customdomains.db example.org {
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  customdomains.db: |
    10.10.1.1 mongo-en-1.example.org
    10.10.1.2 mongo-en-2.example.org
    10.10.1.3 mongo-en-3.example.org
    10.10.1.4 mongo-en-4.example.org

What we did:

  1. Added the hosts plugin before the kubernetes plugin with the fallthrough option.
    The fallthrough option allows the request to continue down the plugin chain when a record isn’t found.
  2. Added a customdomains.db file with our custom domains.

Then ensure the new file is mounted into CoreDNS pods:

kubectl edit -n kube-system deployment coredns

Add the customdomains.db key to the config-volume definition (under spec.template.spec.volumes):

volumes:
  - name: config-volume
    configMap:
      name: coredns
      items:
      - key: Corefile
        path: Corefile
      - key: customdomains.db
        path: customdomains.db
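
To confirm the edit took, you can read the volume definition back; the field path below assumes the volume is named config-volume, as in the default CoreDNS deployment:

```shell
# Lists the ConfigMap keys projected into the CoreDNS pods;
# both Corefile and customdomains.db should appear
kubectl get deployment coredns -n kube-system \
  -o jsonpath='{.spec.template.spec.volumes[?(@.name=="config-volume")].configMap.items}'
```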

Finally, restart the CoreDNS pods so they mount the new file:

kubectl rollout restart -n kube-system deployment/coredns
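
To verify the new records, resolve one of them from a throwaway Pod; busybox's nslookup is enough for this:

```shell
# One-off Pod that queries the cluster DNS and is removed afterwards
kubectl run dnstest --rm -it --image=busybox --restart=Never -- \
  nslookup mongo-en-1.example.org
```

If everything is wired up correctly, the answer should be 10.10.1.1, served straight from the hosts plugin.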

This post is a small improvement over my Stack Overflow answer to the same question.