Debugging KubeVPN's DNS resolution failures

Sep 30, 2025 | 3 min read
The other day, I was trying to access the Go documentation at go.dev, but the page wouldn't load. Not on Chrome, not in incognito mode, not even on Safari. The error was simple but frustrating: "Could not resolve hostname".

This is the debugging story of how I solved the issue and learned an important lesson about computer networking.

Problem

The site loaded fine on my phone and on my personal laptop - just not on my work machine.

I tried the usual suspects:

  • Cleared DNS cache with sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

  • Checked for ad blockers (none installed)

  • Tested different browsers

Nothing worked. Even running curl go.dev inside a Docker container failed, which was strange since containers typically use isolated network stacks.

Investigation

At One2N, we have a “30-minute rule”: if you’ve been stuck on a problem for more than 30 minutes, you’re expected to reach out for help. So I reached out to my colleague Saurabh, and we started digging deeper.

The browser showed a hostname-resolution error, and curl failed the same way with a "Could not resolve host" message. Even ping couldn’t resolve go.dev, so we ran nslookup as well to see what the resolver itself was returning.

Another thing we checked was the /etc/resolv.conf file, to see if it had been modified, but there was nothing unusual there.

After a lot of back and forth, we found that the issue was caused by invalid DNS resolver entries in scutil (a tool to manage system configuration parameters on macOS).

We ran scutil --dns, which showed something odd - there were DNS resolver entries with domains ending in .dev.
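
If you want to check for the same thing, scutil can show every resolver entry the system knows about. A minimal check (the grep patterns are just what I'd look for; your resolver numbers and nameserver IPs will differ):

# Show resolver, search-domain, and nameserver lines from the system DNS configuration
scutil --dns | grep -iE 'resolver|domain|nameserver'

# Narrow it down to anything that looks like Kubernetes cluster DNS
scutil --dns | grep -i 'cluster.local'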

Here's what we found:

  • curl go.dev failed with hostname resolution errors.

  • dig go.dev returned correct DNS records.

  • The issue persisted even inside Docker containers.

This told us the problem wasn't with external DNS servers - something on my local machine was intercepting DNS queries before they reached the real DNS servers.
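
A handy way to demonstrate this split is to compare a direct query against a public DNS server, which bypasses the macOS resolver configuration, with a query through the system resolver, which is the path browsers and curl take. On a machine in this state, the first should succeed and the second should fail (1.1.1.1 here is just an example upstream):

# Ask a public resolver directly - ignores the local resolver configuration
dig +short go.dev @1.1.1.1

# Ask the system resolver (mDNSResponder) - what the browser and curl actually use
dscacheutil -q host -a name go.dev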

Root cause: KubeVPN's DNS hijacking

A few days earlier, I had installed KubeVPN using brew install kubevpn. I thought it was magic - instead of port-forwarding Kubernetes services to localhost, I could directly access them using service-name.namespace.svc.cluster.local.

What I didn't realize was that KubeVPN works by hijacking DNS resolution. It modifies your system's DNS configuration so that queries for .cluster.local domains get routed to your Kubernetes cluster's DNS server.

Now, the problem was that my Kubernetes cluster had a namespace called dev. So when I tried to access go.dev, the system was looking for:

  • go.dev.dev.svc.cluster.local

  • go.dev.svc.cluster.local

  • go.dev.cluster.local

  • go.dev

Since there was no Kubernetes service named "go" in my cluster, DNS resolution failed completely.
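
This is standard DNS search-domain expansion: the resolver appends each configured search domain before (or instead of) trying the name as-is. A rough sketch of what the resolver was effectively doing, assuming the search list KubeVPN installed was dev.svc.cluster.local, svc.cluster.local, cluster.local:

# Illustrative only - mimics search-domain expansion for a short name
name="go.dev"
for suffix in dev.svc.cluster.local svc.cluster.local cluster.local; do
    echo "trying: ${name}.${suffix}"
done
echo "trying: ${name}"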

Solution: resetting network-specific DNS

Here's where it gets interesting. On macOS, there are two levels of DNS configuration:

  1. System-wide resolvers (controlled by /etc/resolv.conf)

  2. Per-network-adapter resolvers (controlled by network settings)

KubeVPN had messed up the per-adapter settings. Simply updating /etc/resolv.conf wouldn't fix this - we needed to reset DNS servers for each network interface.

The fix was this small script:

# Find the network services whose DNS settings we want to reset
services=$(networksetup -listallnetworkservices | grep -E 'Wi-Fi|Ethernet|USB')

# Point each of them at Cloudflare's public DNS servers (IPv4 and IPv6)
while read -r service; do
    echo "Setting DNS for $service"
    networksetup -setdnsservers "$service" 1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001
done <<< "$services"

This loops through all network services and sets them to use Cloudflare's public DNS servers. The moment we ran it, go.dev started working again.
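
networksetup can also be used to verify the change, or later to hand DNS back to DHCP by clearing the manual entries. A couple of follow-up commands worth knowing ("Wi-Fi" is just an example service name):

# Check which DNS servers a service is currently using
networksetup -getdnsservers "Wi-Fi"

# Clear the manual entries and fall back to DHCP-provided DNS
networksetup -setdnsservers "Wi-Fi" Empty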

What I learned

  1. DNS on macOS is more complex than it appears. There are multiple layers where DNS resolution can be modified.

  2. Tools that seem like magic are usually doing something complex behind the scenes. KubeVPN's convenience came at the cost of modifying system networking in ways I didn't understand.

  3. Read the docs before running commands. I installed KubeVPN without fully understanding what it would do to my system.

  4. Container networking isn't always isolated. Docker containers inherit DNS configuration from the host in many scenarios, as the quick check below shows.
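
For point 4, here's a minimal way to see it with Docker Desktop on macOS (the alpine image is just a convenient example):

# On Docker Desktop, the container's DNS ultimately forwards to the host's resolver setup
docker run --rm alpine cat /etc/resolv.conf
docker run --rm alpine nslookup go.dev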

The right way forward

If you're using tools like KubeVPN, make sure you understand:

  • What system changes they make

  • How to properly connect and disconnect

  • What the cleanup process looks like

For KubeVPN specifically, kubevpn disconnect should properly revert DNS changes. But if you're in a broken state like I was, the network adapter DNS reset approach will get you back to a clean slate.
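
A quick sanity check after disconnecting (or after the reset above) is to confirm the cluster resolver entries are gone:

# Should print nothing once KubeVPN's resolver entries have been removed
scutil --dns | grep -i 'cluster.local'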

Takeaway

Sometimes the most frustrating debugging sessions teach you the most. This incident helped me understand how DNS resolution actually works on macOS and reminded me that convenience tools often make system-level changes that aren't immediately obvious.

The next time a website mysteriously stops working on just one machine, dig deeper into the DNS configuration. The answer might be hiding in your system's network settings.
