Kubernetes ingress pattern for my Proxmox homelab

Posted on: January 18, 2026

For years, my homelab was a mess of HTTP services running on random ports. I’d access my wiki at http://wiki.home:8080, my music server at http://music.home:4533, and metrics at http://metrics.home:9090. Those hostnames only resolved at all because the pfSense/OPNsense DNS resolver handled them locally.

Then I got tired of remembering/bookmarking ports and installed nginx on every VM. At least everything was on port 80 now. I could access services at http://wiki.home/ instead of remembering port numbers. Progress!

But I still didn’t have HTTPS. Managing certificates on each VM felt like too much work. I kept telling myself HTTP was “fine.”

The Kubernetes pattern

Then I started working more with Kubernetes at my job. I kept seeing this pattern: instead of exposing each service directly, everything went through an Ingress controller. One entry point, one place for TLS termination, one configuration for routing.

I realized I could apply the same pattern to my homelab:

  1. Have a single gateway that receives all incoming traffic
  2. Route requests based on domain name to the appropriate IP address and port (each VM needs a static IP)
  3. Handle TLS certificates in one place
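
For reference, this is roughly what the pattern looks like on the Kubernetes side. The manifest below is only an illustration (the names and hosts are made up), but it shows the core idea: one Ingress resource declares the hostname, the TLS secret, and the backend service behind it.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wiki
spec:
  tls:
    - hosts:
        - wiki.example.com
      secretName: wiki-tls        # certificate lives at the ingress, not in the app
  rules:
    - host: wiki.example.com      # route by hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wiki        # ...to the backing service
                port:
                  number: 80      # which itself speaks plain HTTP

The application behind the Service never sees TLS; the ingress controller terminates it. That is exactly the split I wanted at home.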

The solution

I created a dedicated VM running Caddy as my ingress gateway. Caddy is perfect for this because it handles automatic HTTPS certificates via Let’s Encrypt and has a dead-simple configuration syntax.

Here’s the architecture:

Router (Local Network / Internet / VPN)
  |
Ingress VM (Caddy)
  |
  +---> 1055 wiki VM    (192.168.1.55)
  +---> 1056 music VM   (192.168.1.56)
  +---> 1059 metrics VM (192.168.1.59)
  +---> 1062 git VM     (192.168.1.62)
  ...

Notice that the Proxmox VM IDs correspond to the last two digits of each IP address, which makes it easy to remember which VM is which.

All DNS records for my internal domain point to the ingress VM. Caddy then reverse proxies to the actual services. I don’t expose my real domains here, so let’s use home.example.com as the internal domain.

The Caddyfile

I use the DNS challenge for certificates, since my services aren’t publicly accessible (so the HTTP challenge can’t reach them). Caddy supports various DNS providers through plugins; I use Cloudflare:

# TLS configuration using DNS challenge
(tls_dns) {
    tls admin@example.com {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}

# Internal services
wiki.home.example.com {
    import tls_dns
    reverse_proxy http://192.168.1.55
}

music.home.example.com {
    import tls_dns
    reverse_proxy http://192.168.1.56
}

metrics.home.example.com {
    import tls_dns
    reverse_proxy http://192.168.1.59
}

git.home.example.com {
    import tls_dns
    reverse_proxy http://192.168.1.62:3000
}

That’s it. Caddy automatically obtains and renews certificates for each domain. The backend services still run plain HTTP - they don’t need to know about TLS at all. No per-VM nginx needed; each VM runs only its own service.
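
One practical note: the Cloudflare DNS module isn’t included in the stock Caddy binary, and {env.CLOUDFLARE_API_TOKEN} has to be visible to the Caddy process. Assuming Caddy runs as the standard systemd service, something like the sketch below is one way to wire that up (paths and token are placeholders):

# Build Caddy with the Cloudflare DNS plugin
xcaddy build --with github.com/caddy-dns/cloudflare

# /etc/systemd/system/caddy.service.d/override.conf
# (created with: systemctl edit caddy)
[Service]
Environment=CLOUDFLARE_API_TOKEN=your-cloudflare-api-token

Prebuilt Caddy binaries with the Cloudflare module selected are also available from the official download page, so xcaddy is optional.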

DNS setup

All subdomains point to the ingress VM:

wiki.home.example.com    A    192.168.1.52
music.home.example.com   A    192.168.1.52
metrics.home.example.com A    192.168.1.52
git.home.example.com     A    192.168.1.52

I manage DNS as code using dnscontrol, which makes it easy to add new services - just add a DNS record and a Caddy block.
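
For illustration, a dnscontrol config for the records above might look something like this. It’s only a sketch: the registrar and provider names have to match whatever is defined in your creds.json, and the ingress IP is the one from my example.

// dnsconfig.js - sketch only
var REG_NONE = NewRegistrar("none");
var DSP_CLOUDFLARE = NewDnsProvider("cloudflare");

var INGRESS = "192.168.1.52";

D("example.com", REG_NONE, DnsProvider(DSP_CLOUDFLARE),
    A("wiki.home", INGRESS),
    A("music.home", INGRESS),
    A("metrics.home", INGRESS),
    A("git.home", INGRESS)
);

Adding a new service is then two small diffs: one A record here and one site block in the Caddyfile.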

Public vs private services

This setup also gives me clean control over which services are internet-accessible.

I have two domains: home.example.com for internal-only services (reachable on the LAN or over VPN) and public.example.com for services I expose to the internet.

The same Caddy instance handles both:

# Internal - only accessible via VPN
wiki.home.example.com {
    import tls_dns
    reverse_proxy http://192.168.1.55
}

# External - accessible from internet
music.public.example.com {
    import tls_dns
    reverse_proxy http://192.168.1.56
}

uptime.public.example.com {
    import tls_dns
    reverse_proxy http://192.168.1.64:3001
}

The internal domain’s DNS records point directly to the internal ingress IP with no Cloudflare proxying. The external domain routes through Cloudflare to my home public IP, which port-forwards to the ingress VM.
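
In dnscontrol terms (continuing the earlier sketch), the only difference between the two is whether the record is a plain A record pointing at the LAN ingress IP or a Cloudflare-proxied record pointing at the public IP, using dnscontrol’s Cloudflare-specific CF_PROXY_ON modifier. Here 203.0.113.10 stands in for my real public address:

// Inside the D("example.com", ...) block from the earlier sketch
A("wiki.home", "192.168.1.52"),                  // internal: DNS only, LAN IP
A("music.public", "203.0.113.10", CF_PROXY_ON),  // external: proxied by Cloudflare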

This way, I can expose my music server so I can access it on the go, while keeping metrics and the wiki strictly internal.

I think this pattern is really easy to understand and implement. It centralizes TLS management, cleans up access URLs, and scales well as I add more services to my homelab.