TL;DR

The first seven parts of this series answered “how do I reach a service I’m allowed into?” — from trusted wires through VPNs, WireGuard, identity, and Zero Trust. This epilogue answers the inverse question: how does my service reach the internet without opening a port? Reverse tunnels — ngrok, Cloudflare Tunnel, Tailscale Funnel, and self-hosted options like frp — are the unsung half of the story. Some are demo-grade, some are production primitives, all of them fit cleanly into the Part 6 Application/Policy/Tunnel/Route model once you know where to look.

This is an unplanned Part 8 — the epilogue that readers asked for after Part 7. The series is still done; this is the bonus chapter that closes the loop.

The inverse problem

Every preceding post in this series assumed a topology: the service already has an address somewhere — on a leased line, inside a VPN, behind a proxy, or fronted by an identity-aware gateway. The hard part was getting to it.

Flip the assumption. You’re running a service on your laptop, or in a home lab, or on an on-prem appliance sitting behind a NAT you don’t control. There is no public IP. There is no port-forward. You want the internet — or a specific client — to reach it anyway.

Conventional wisdom says: punch a hole in the firewall, get a static IP, put something in a DMZ, call the network team. Reverse tunnels say: don’t. Have the service dial out to a trusted edge; the edge publishes a URL; traffic flows back through the outbound connection the service already opened. No inbound port, no public IP, no NAT fight.


How they actually work

Strip the branding and all four major players use the same shape. An agent runs next to the service. It opens an outbound, long-lived connection to a provider edge (a globally distributed fleet of PoPs). Over that connection, the provider publishes an HTTPS URL — either on its own domain or on yours. Inbound requests hit the edge; the edge multiplexes them back over the already-open tunnel to the agent; the agent forwards them to the local service.
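This shape predates every product on the list — OpenSSH's `-R` flag has performed the same inversion for decades. A minimal sketch, assuming a VPS you can SSH into (the hostname and ports here are illustrative):

```shell
# Dial OUT to a box you control, and ask IT to listen on a public port,
# relaying everything back over this already-open connection.
ssh -R 0.0.0.0:8080:localhost:8000 user@vps.example.com

# While the session is up, http://vps.example.com:8080 reaches the service
# on localhost:8000 here — without opening a single inbound port locally.
# (Binding the remote side to 0.0.0.0 requires "GatewayPorts yes" in the
# server's sshd_config; the default only binds to the VPS's loopback.)
```

The commercial tools add what this sketch lacks: automatic TLS, stable URLs, reconnection, and multiplexing many requests over one tunnel.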

A minimal cloudflared run looks like this:

# 1. Authenticate once — this opens a browser to your Cloudflare account.
cloudflared tunnel login

# 2. Create a named tunnel; you get back a UUID + credentials file.
cloudflared tunnel create demo-app

# 3. Route a DNS name on your zone at Cloudflare to that tunnel.
cloudflared tunnel route dns demo-app demo-app.example.com

# 4. Run the agent. It dials out to the Cloudflare edge over QUIC (UDP/7844).
#    The origin service is plain HTTP on localhost — TLS terminates at the edge.
cloudflared tunnel --url http://localhost:8000 run demo-app

ngrok http 8000 is the same shape with a different operator and a different edge. Tailscale Funnel reuses the existing WireGuard mesh as the transport. frp is the self-hosted equivalent: you run frps on a VPS you own, the frpc agent dials out to it, your VPS publishes the URL. Same inversion, different trust boundary.
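The frp pairing is two small config files — a sketch assuming frp's modern TOML format, with placeholder hostnames and ports (both files shown in one block for brevity):

```toml
# --- frps.toml (runs on the VPS you control) ---
bindPort = 7000           # frpc dials out to this port
vhostHTTPPort = 8080      # frps publishes proxied HTTP here

# --- frpc.toml (runs next to the NAT-ed service) ---
serverAddr = "vps.example.com"
serverPort = 7000

[[proxies]]
name = "demo-app"
type = "http"
localPort = 8000
customDomains = ["demo-app.example.com"]
```

Start `frps -c frps.toml` on the VPS, `frpc -c frpc.toml` next to the service, and the VPS answers for `demo-app.example.com` — same inversion, but the rendezvous point is yours.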

The landscape

The market splits cleanly along two axes: who operates the edge and what’s on top of the tunnel.

  • ngrok — the original. Optimized for developer experience: one command, one URL, done. Paid tiers add custom domains, IP policies, and OIDC-gated edges. Often the fastest way to get a webhook endpoint into Stripe or GitHub.
  • Cloudflare Tunnel (cloudflared) — the production end of the spectrum. Paired with Cloudflare Access it becomes an identity-aware proxy with Zero Trust policy on every request. A fair number of teams now use it as their only ingress for internal apps, with no public IP on the origin at all.
  • Tailscale Funnel — a WireGuard-substrate tie-back to Part 4. Exposes a Tailscale-native node to the public internet via Tailscale’s edge. Currently best understood as demo/dev; the product still carries explicit “not for high-traffic production” language.
  • Self-hosted (frp, inlets, Pangolin) — the no-third-party option. You run the rendezvous server on your own VPS; the agent connects to you. Operational burden is yours, but the data path never leaves infrastructure you control. This is the choice compliance-sensitive shops land on when a SaaS edge isn’t acceptable.

Secure by default? Almost — but the default is not enough

Transport-layer security is the easy part. Every listed tool terminates TLS at its edge with a valid cert, and the tunnel back to the origin is authenticated and encrypted. That’s not where mistakes happen.

The mistakes happen at authorization. A fresh ngrok http 8000 or cloudflared tunnel gives you a public URL. Anyone on the internet who guesses or observes that URL can reach your service. If your service has no auth of its own — a dev dashboard, a webhook receiver, a Kubernetes API behind kubectl proxy, a local development database UI — you’ve just made it public.

The failure mode I see most often in consulting: a developer exposes a local dashboard via ngrok for a quick demo, forgets to stop the tunnel, and walks away. The URL stays live. Bots scan the ngrok and Cloudflare tunnel address spaces continuously, and a random-looking hostname is not security. Within hours, that dashboard is logging requests from half of Eastern Europe. Nothing was breached; nothing was authenticated either. The tool did exactly what it promised, which is publish a URL. The question is whether you layered policy in front of it.

Turning that into a safe deployment requires layering identity policy at the edge:

  • ngrok: paid edges with Google/GitHub/OIDC, IP CIDR allowlists, custom domain + Zero Trust policies.
  • Cloudflare Tunnel: Cloudflare Access policies on the published hostname. Every request evaluates identity, device posture, geography, and session age before the tunnel is even consulted. This is Part 6’s Application/Policy/Tunnel/Route model made literal — Cloudflare is the Policy and Tunnel pieces; your service is the Application; DNS is the Route.
  • Self-hosted: you bring your own policy layer (oauth2-proxy, Pomerium, nginx auth_request).
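For the self-hosted path, the nginx auth_request pattern is only a few lines. A sketch assuming oauth2-proxy listening on 4180 and the app on 8000 — hostnames and ports are illustrative, and TLS cert directives are omitted:

```nginx
server {
    listen 443 ssl;
    server_name demo-app.example.com;

    # Every request is first sub-requested to oauth2-proxy; an
    # unauthenticated user is redirected to the IdP, not the app.
    location / {
        auth_request /oauth2/auth;
        error_page 401 = /oauth2/sign_in;
        proxy_pass http://127.0.0.1:8000;
    }

    # oauth2-proxy's own endpoints (sign-in, callback).
    location /oauth2/ {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header X-Forwarded-Uri $request_uri;
    }

    # The internal auth check: headers only, no request body.
    location = /oauth2/auth {
        internal;
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Content-Length "";
        proxy_pass_request_body off;
    }
}
```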

One supply-chain note that belongs in every post on this topic: public TLS terminates at the provider's edge, and the tunnel's own encryption terminates inside the agent's process, so both the provider's runtime and the provider's edge handle your application traffic in cleartext. For most use cases that's fine; for regulated traffic, that's the architecture diagram that the auditor circles.

DNS, one more time

DNS has threaded through the whole series. Reverse tunnels close the loop on it.

In a cloudflared deployment the DNS record is literally everything you expose to the internet. demo-app.example.com is a Cloudflare-managed CNAME to <tunnel-UUID>.cfargotunnel.com; there is no public IP on your origin. This is the same pattern from Part 6 but shipped as a managed service: DNS is the Route, the edge is the Policy, and the tunnel is the Tunnel. Your compliance story becomes “the only internet-reachable surface is a DNS record and Cloudflare’s edge,” which is an easier sentence than most infrastructure diagrams allow.
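A quick way to check what the internet actually sees (hostname illustrative — substitute your own published name):

```shell
# Because the record is proxied, resolution returns Cloudflare anycast
# addresses — never an address of your origin, which has no public IP.
dig +short demo-app.example.com
```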

Production or testing? An honest matrix

  Tool                          Production?           Main caveat
  ngrok (paid tiers)            Yes                   SaaS edge sits in the data path
  Cloudflare Tunnel + Access    Yes                   Cleartext at Cloudflare’s edge
  Tailscale Funnel              Dev / demo only       Explicit low-traffic guidance
  Self-hosted (frp, inlets)     Yes, if you own it    Ops burden is entirely yours

The short version: ngrok is production-grade on paid tiers but data-path-sensitive shops will still flag it. Cloudflare Tunnel paired with Cloudflare Access is a legitimate primary ingress for internal apps — I have clients running it that way today. Tailscale Funnel is best kept to dev and internal demos. Self-hosted is production if you’re willing to own it; if you already operate a VPS and an nginx, frp is a surprisingly small additional burden.

When I reach for which tool

  • Stripe / GitHub webhook into my laptop — ngrok http 8000. Every time. Ten-second round trip from Slack request to working callback.
  • Demo a half-built app to a stakeholder for thirty minutes — ngrok or Tailscale Funnel, depending on whether the stakeholder is already in my tailnet.
  • Internal Grafana / Argo CD / Hashi-stack dashboard exposed to staff without a VPN — Cloudflare Tunnel + Cloudflare Access. The policy layer does the heavy lifting; the tunnel just removes the public IP.
  • On-prem HIPAA-adjacent app exposing a reporting endpoint to a SaaS data plane — self-hosted frp on a VPS in the same compliance boundary, with oauth2-proxy in front. No third-party edge in the PHI data path.
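The first two scenarios really are one-liners. Ports are illustrative, and `tailscale funnel <port>` assumes a recent client with Funnel enabled on the tailnet:

```shell
# Webhook or quick demo via ngrok — a public HTTPS URL prints on start.
ngrok http 8000

# Same shape over Tailscale's edge, for stakeholders outside the tailnet.
tailscale funnel 8000
```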

Where this leaves the series ⁉️

Parts 1 through 7 described how trust moved from the cable to the identity. Reverse tunnels are the piece that, quietly, made the rest of that story practical. They’re the reason a startup with no static IPs can ship a production ingress in an afternoon, the reason the last client I helped turn off their on-prem VPN concentrator didn’t have to replace it with a hyperscaler box, and the reason “I’ll get it online for you” is a five-minute promise instead of a five-day ticket.

The series is still done. But if you’ve been reading from Part 1, you now have the tool for the second-most-common remote-access question you’ll get this year — and hopefully a clearer sense of when to reach for which one.

👐 Companion lab(s)

This part ships with three reference setups, one per topology. All three live in the series repo, so you can compare them against the same k3d backend.

📖 Further Reading