
DNS Testing in Azure Kubernetes Service

To better understand the implications of DNS resolution inside a cluster, we captured packets on the deployed resources and analyzed them with Wireshark.

For testing purposes, we used the following resources:

- 1 Node – AKS Public Cluster
- kubectl krew – with the sniff plugin, used to deploy a statically compiled tcpdump to the target Pod (installation command shown after this list)
- Wireshark
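
Provided that krew itself is already installed, the sniff plugin can be added with the following commands (a minimal sketch):

kubectl krew install sniff      # install the sniff (ksniff) plugin from the krew index
kubectl sniff --help            # verify the plugin is available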

The Pod used for running the queries was created with the following manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  dnsPolicy: Default
  dnsConfig:
    options:
      - name: single-request-reopen
      - name: ndots
        value: "5"
  containers:
    - name: dnsutils
      image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3
      command:
        - sleep
        - "360000"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
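
Assuming the manifest above is saved locally as dnsutils.yaml (a placeholder filename), the Pod can be deployed and verified with:

kubectl apply -f dnsutils.yaml         # create the Pod from the manifest
kubectl get pod dnsutils -n default    # confirm the Pod is Running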

To have an automated way of running multiple queries against the DNS server, we used a simple bash script that performs a number of queries in a row.

#!/bin/bash
array=( ovidiu.systems zoso.ro xbox.com hotnews.ro adevarul.ro instagram.com
facebook.com apple.com bmw.de tesla.com adobe.com virtual7.de )
for i in "${array[@]}"
do
  nslookup "$i"
done
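
One way to run this script from the workstation inside the dnsutils Pod (assuming it is saved locally as dnstest.sh, a placeholder filename) is to stream it into a shell in the container:

# pipe the local script into a bash session running in the Pod
kubectl exec -i dnsutils -n default -- bash < dnstest.sh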
After deploying the dnsutils Pod on our managed cluster, we have the following content in the /etc/resolv.conf file of the respective Pod:

search czbnmn152xmubi511aotcw5rgh.fx.internal.cloudapp.net

nameserver 168.63.129.16

options single-request-reopen ndots:5
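
The content above can be read directly from the running Pod, for example with:

# print the resolver configuration generated for the Pod
kubectl exec -it dnsutils -n default -- cat /etc/resolv.conf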

We will start the tcpdump capture for the dnsutils Pod with the following command:

kubectl sniff dnsutils -n default -o /tmp/dnstest02_withConfigDNS.pcap

After executing the test script, we stop the capture (Ctrl-C) in the kubectl sniff session and the pcap file is saved in the desired folder (/tmp).
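
For a quick look at the DNS traffic without opening the Wireshark GUI, the saved capture can also be filtered with tshark (the command-line tool shipped with Wireshark):

# list only the DNS requests and responses from the capture file
tshark -r /tmp/dnstest02_withConfigDNS.pcap -Y "dns"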

As we can observe in the Wireshark capture, for every domain where a DNS request is initiated, there are 10 requests/responses in total, as follows:

1. Request: ovidiu.systems.default.svc.cluster.local
2. Response: No such name… A Record
3. Request: ovidiu.systems.svc.cluster.local
4. Response: No such name… A Record
5. Request: ovidiu.systems.cluster.local
6. Response: No such name: SOA Record
7. Request: ovidiu.systems.czbnmn152xmubi511aotcw5rgh.fx.internal.cloudapp.net
8. Response: No such name… A Record
9. Request: ovidiu.systems
10. Response: 51.138.178.xyz

As we can see, eight requests/responses are fired before the final request and response for the public DNS record of the associated domain. The reason for this behavior is the default Kubernetes configuration for ndots, whose default value is set to "5".

Following the FQDN definition provided by https://en.wikipedia.org/wiki/Fully_qualified_domain_name:

A fully qualified domain name, or absolute domain name, is the domain label terminated with a dot (.), in the form of ovidiu.systems. (with the trailing dot).

By setting the FQDN in the application in this standard format (ovidiu.systems.), the internal cluster search process is skipped and, according to our Wireshark capture, there is only one request and one response for it.
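
A quick way to confirm this from inside the Pod is to query the absolute name, including the trailing dot, directly:

# the trailing dot marks the name as absolute, so the search list is not applied
kubectl exec -it dnsutils -n default -- nslookup ovidiu.systems.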

What ndots represents:

The ndots value is the threshold for the number of dots a query name must contain before the resolver sends it as an absolute query; names with fewer dots are first resolved through the internal cluster search domains. Below is an example with an ndots value configured to "2".
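
As a sketch (reusing names that appear elsewhere in this article), with ndots:2 the resolver counts the dots in the query name before deciding how to resolve it:

nslookup ovidiu.systems            # 1 dot, fewer than 2 -> internal search domains are appended first
nslookup mysql.cluster.svc.local   # 3 dots, not fewer than 2 -> sent as an absolute query to the nameserver first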

Would reducing the value of ndots in a Kubernetes cluster have a negative impact on my application?

If your application uses a configured value of "2" for ndots and you connect, for example, your backend service to another intra-cluster service by using the internal domain name (e.g. mysql.cluster.svc.local), it will not resolve due to the ndots configuration mismatch:

mysql.cluster.svc.local = 3 dots

configured ndots = 2

Since the name contains 3 dots, which is not fewer than the configured ndots of 2, the resolver skips the internal search domains and addresses the external DNS service directly.

If there is no successful response for that name from the DNS service outside the cluster, the application will likely fail.

The configuration change for the ndots value at the workload level is a straightforward process documented in the official Kubernetes documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

By adding an entry under spec.dnsConfig.options with name: ndots and the desired value, the setting can be changed for a specific Pod.

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  dnsPolicy: Default
  dnsConfig:
    options:
      - name: single-request-reopen
      - name: ndots
        value: "2"
  containers:
    - name: dnsutils
      image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3
      command:
        - sleep
        - "360000"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
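
Because the dnsConfig of a running Pod cannot be edited in place, a simple way of rolling out the new value (assuming the manifest above is saved as dnsutils-ndots2.yaml, a placeholder filename) is to recreate the Pod and check the generated options:

kubectl delete pod dnsutils -n default
kubectl apply -f dnsutils-ndots2.yaml
kubectl exec -it dnsutils -n default -- grep options /etc/resolv.conf   # should now show ndots:2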

Race Condition on Conntrack:

The connection tracking mechanism is implemented through the Linux netfilter framework. There are documented issues with DNAT and conntrack when two or more UDP packets are sent at the same time through the same socket.
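
One way to check whether a node is affected is to look at the conntrack statistics on that node (this assumes the conntrack command-line tool is installed there); a growing insert_failed counter is the typical symptom of this race:

# per-CPU conntrack statistics; insert_failed increments when the race is hit
conntrack -S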

There is a configuration option that addresses the port allocation issue: the single-request-reopen option in dnsConfig, which makes the resolver close the socket and open a new one before sending the second (parallel) request.

Another way of addressing these issues is migrating DNS queries from UDP to TCP, which results in a smaller number of dropped packets.

This configuration can be achieved with the use-vc option in the resolv.conf / dnsConfig settings:

apiVersion: v1
kind: Pod
spec:
  dnsConfig:
    options:
      - name: use-vc
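
With use-vc in place, the queries in a new capture should show up over TCP instead of UDP; one possible way to confirm this (the capture filename below is a placeholder) is:

# show only DNS traffic carried over TCP from the new capture
tshark -r /tmp/dnstest_usevc.pcap -Y "dns && tcp"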
https://pracucci.com/kubernetes-dns-resolution-ndots-options-and-why-it-may-affect-application-performances.html
