Why TUN stack and strict-route deserve their own checklist

Windows 11 does not negotiate politely when multiple programs want to steer packets. TUN mode sits much closer to the real routing table than system proxy tricks, which means tiny mismatches explode into “half my tabs work” symptoms that waste evenings. Searchers land here with a specific intent: they already installed Mihomo Party, imported a profile, and now need the advanced toggles that prevent split routing weirdness and lingering interface metrics after exit.

Two knobs dominate that story on mihomo-class cores. First, the TUN stack choice—usually presented as gVisor versus system—changes how packets are handed off between userspace and the OS. Second, strict-route tightens whether traffic is allowed to “fall through” to paths outside the tunnel’s routing discipline. Understanding both keeps policy groups trustworthy: you are not debugging subscription merges anymore; you are debugging who owns the default route at breakfast.

If vocabulary still feels abstract, read Clash TUN mode deep dive once, then return—this article stays practical and Mihomo-Party-shaped instead of re-deriving the entire networking syllabus.

Lawful use: employers and campuses may forbid tunneling or mandate vendor VPNs. Elevated TUN mode on a managed PC can violate policy even when installers succeed. Obtain written approval where required.

Pair this guide with the Windows 11 Mihomo Party install walkthrough

This page assumes you completed the first-run ladder in Install Mihomo Party on Windows 11: Import Subscription Step by Step—trusted installer, working mihomo core path, refreshed subscription, active merged profile, confirmed mixed-port, and a deliberate system proxy test. If any of those still wobble, fix upstream first; transparent capture magnifies configuration errors instead of hiding them.

Readers comparing desktop GUIs should still skim how to choose a Clash client so expectations about menus stay aligned with the program you actually launched. Mihomo Party wording and placement differ from Clash Verge Rev; muscle memory is not portable field-for-field.

Prerequisites: admin rights, route owners, DNS posture

TUN mode on Win11 routinely asks for elevation. Run from an account that can approve UAC prompts cleanly; broken admin workflows produce “works once, never again” tunnel interfaces. Inventory who else participates in routing: corporate VPN, “gaming accelerators,” hypervisors, and second copies of Clash-class apps each register filters or routes. Pause competitors during experiments so you attribute behavior to Mihomo Party alone.

DNS deserves prior thought. If your profile uses fake-ip, browsers, Windows resolver caches, and ephemeral WSL guests can disagree about what local name resolution means until you align bypass lists and interface binding. Before chasing stack switches, make sure DNS fallback and fake-ip filters are not fighting your TUN assumptions.
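For orientation, a fake-ip DNS block in mihomo-style YAML typically looks like the sketch below. The range and resolvers are illustrative placeholders, not recommendations; compare against your exported profile rather than pasting blindly.

```yaml
dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16     # illustrative pool; keep whatever your export uses
  nameserver:
    - https://1.1.1.1/dns-query    # placeholder resolver
  fallback:
    - https://8.8.8.8/dns-query    # placeholder fallback; mind your fallback-filter
```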

Keep clock sync automatic. Skewed time manifests as spooky TLS failures exactly when you escalate privileges and blame the wrong toggle. Confirm one stable baseline: same profile, same DNS mode, reproducible browsing—then layer TUN.

TUN mode versus system proxy: when each one lies

System proxy persuades well-behaved WinINet consumers to talk to your mixed-port. It is an excellent teaching mode because logs stay readable and you can retreat instantly. It is also incomplete: CLI tools, some Electron apps, and quirky UDP workloads may ignore those hints entirely. TUN mode enforces policy closer to the IP layer, which is why gamers, chat apps, and developers reach for it after baseline success.

The trap is flipping both philosophies without a checklist. Stacking confused metrics—proxy on while TUN imagines it owns defaults—creates “DNS works in Edge but not ping” ghost stories. Pick a verification ladder: either prove system proxy first, then replace its scope with TUN, or commit to TUN but disable redundant steering so a single mechanism controls routing.

TUN stack on Windows 11: gVisor versus system

Labels vary slightly across builds, but the underlying choice is stable. A gVisor-style (gvisor) stack keeps more of the TCP/IP path in userspace. That isolation helps when another program has already installed hooks into the Windows Filtering Platform (WFP) or when you need predictable overlap behavior with finicky drivers. It is not free: CPU overhead can rise on heavy streams, and some edge protocols expose bugs sooner in userspace stacks.

The system stack delegates more work to the OS path. When the machine is comparatively clean—no second VPN, no aggressive third-party firewall filter—this often yields lower overhead and fewer surprises for throughput-sensitive workloads. The failure mode is different: conflicts become OS-level contention—duplicate filters, broken bindings after sleep/resume, or adapters that refuse to recycle until reboot.

Treat the choice as empirical. Start with the stack your operator documents; if startup fails, toggles hang, or partial routing persists after resume, switch once, capture logs, and reboot between attempts so you are not stacking stale metrics atop fresh hypotheses.

Strict-route in plain language

Strict Route (often strict-route: true in exported YAML) narrows how packets may leave the box relative to the tunnel’s routing instructions. Practically, it reduces accidental “leaks” where some flows still prefer interfaces you did not intend after TUN comes up. That is valuable when split routing symptoms look like random policy bypass even though YAML looks sane.

The trade-off is conservatism. Home LAN printers, corporate split-tunnel destinations, and captive portals sometimes require crisp DIRECT slices or interface-specific rules. Turning strict-route on without revisiting those paths can masquerade as “everything broke” when in reality the tunnel is doing what you asked—excluding shortcuts your previous lax table allowed.

Illustrative YAML flags (compare with your exported profile)
tun:
  enable: true          # create the virtual adapter on start
  stack: gvisor         # or "system"; choose empirically, per the ladder in Step 3
  strict-route: true    # tighten fall-through paths once DIRECT slices are proven

Exact keys evolve with kernels; trust your active export plus release notes. GUI toggles in Mihomo Party should mirror these semantics—if they disagree, re-merge profiles deliberately rather than editing fragments in two places.

Single-owner debugging: change only one variable per reboot cycle—stack or strict-route or DNS mode—not all three while tired. Windows route tables forgive nothing when you move fast.

Step 1 — Prove mixed-port and rules before touching TUN

Re-open Mihomo Party, select the production profile, and confirm listeners match expectations: mixed-port bound, subscription timestamps fresh, policy groups populated. Browse with system proxy temporarily if that remains your known-good path. Watch the connections panel for sensible group names; mysterious blank hops usually mean DNS or fake-ip inconsistencies—not TUN stack selection.

Run a coarse IP check from a single browser profile, then disable steering and confirm the reading changes. If that differentiation fails, fix listeners or merges before elevating privileges. For port squats, see mixed-port conflict troubleshooting; TUN will not rescue an occupied baseline.
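Before elevating, it can also help to eyeball the listener fields in the merged export. A minimal sketch with illustrative values — the port number is a placeholder and must match whatever the GUI actually reports:

```yaml
mixed-port: 7890     # HTTP and SOCKS on one listener; must match the GUI's reading
allow-lan: false     # keep capture local while you are still proving the baseline
```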

Step 2 — Enable TUN inside Mihomo Party with elevation

Inside the GUI, locate the TUN / virtual adapter controls—wording shifts slightly by version, but the workflow is consistent: enable the feature, allow Windows to create or recycle the tunnel adapter, then wait until the Mihomo status surface reports a healthy start. If UAC prompts appear, approve from the same security context you intend to keep; denying once and approving later yields inconsistent adapter permissions that are tedious to unwind.

After start, glance at the Windows adapter list: a dedicated tunnel interface should appear without immediate error state. Sleep and resume once during testing weeks; some filter drivers resume in odd orders. If adapters linger disabled after resume, a reboot plus a documented stack switch is cheaper than mysticism.
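For orientation, the GUI toggle corresponds to exported fields along these lines. Key names follow mihomo conventions, but verify each one against your own export before editing anything:

```yaml
tun:
  enable: true
  stack: gvisor
  auto-route: true               # let the core install routes for the tunnel
  auto-detect-interface: true    # bind to the real uplink automatically
  dns-hijack:
    - any:53                     # capture plain-text DNS so fake-ip stays coherent
```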

Step 3 — Pick gVisor or system stack with a decision ladder

Use this ladder instead of forum roulette. First, if another always-on VPN or security suite installs WFP filters, bias toward gVisor first and document the outcome. Second, on clean home PCs where latency matters and CPUs are modest, try system stack and measure CPU during a heavy tab session. Third, if either stack shows intermittent start failures, swap once with a full quit—not a frenzied toggle burst—and clear stale routes by rebooting before declaring victory.

Match the stack choice to documented operator guidance when it exists. Some upstream bundles assume a specific behavior for UDP-heavy rules; ignoring that guidance trades minutes now for hours during game patches or voice calls later.

Step 4 — Turn strict-route on and retest DIRECT slices

Enable strict-route after the tunnel establishes cleanly. Immediately revisit destinations you expect to go DIRECT: LAN administration pages, intranet hosts, NAS subnets, and printer IPs. If any disappear, refine rule order—rule precedence hygiene matters more once strict discipline arrives.
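A hedged sketch of what crisp DIRECT slices can look like in rule order — the subnets and the corp.example.com suffix are placeholders, and Proxy stands in for whatever your profile names its policy group:

```yaml
rules:
  - IP-CIDR,192.168.0.0/16,DIRECT,no-resolve   # LAN admin pages, NAS, printers
  - IP-CIDR,10.0.0.0/8,DIRECT,no-resolve       # intranet ranges, if you use them
  - DOMAIN-SUFFIX,corp.example.com,DIRECT      # placeholder split-tunnel destination
  - MATCH,Proxy                                # everything else follows policy groups
```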

Captive portals deserve an explicit plan. Hotel Wi-Fi often needs short-lived DIRECT allowances, not heroic proxying. Attempting strict capture before acknowledging portal flows looks like “TUN broke coffee shop internet” when the real issue is policy scope.

Step 5 — Align DNS, fake-ip filters, and bypass lists

TUN amplifies DNS ambiguity. When fake-ip answers differ from what the OS cache remembers, you see single-domain failures that feel personal. Reconcile nameserver policies with your tunnel: ensure local domain bypasses include the subnets you genuinely need to resolve without fake answers. Cross-check against Fake-IP LAN bypass guidance if LAN devices oscillate between reachable and not.
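When LAN names flap between reachable and not, the fake-ip-filter list is the usual lever. A hedged fragment — patterns are illustrative, keys follow mihomo conventions:

```yaml
dns:
  fake-ip-filter:           # names listed here get real answers, not fake-ip
    - "*.local"             # mDNS and printer discovery
    - "+.lan"               # local suffixes your router hands out
    - "time.windows.com"    # Windows time sync prefers a real address
```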

If you operate WSL2 or VMs on the same host, remember those environments maintain distinct resolver expectations. A fix on bare-metal Win11 may still miss nested guests; document which interface each workload uses instead of assuming global miracles.

Step 6 — Verify with connection logs and route print

Verification should be boring. With TUN active, open the live connections view inside Mihomo Party and load a mix of DIRECT and proxy destinations. Groups should match mental models; anything recurrently mis-tagged signals rules—not mystic packet loss. Pair GUI signals with an elevated command prompt: route print should reflect reasonable defaults for the tunnel experiment you think you are running. Sudden duplicate defaults point to overlapping VPNs or half-applied strict settings.

For stubborn timeouts, combine with timeout and TLS log interpretation so you separate remote flake from local resolver recursion. Keep log level at info unless you are actively capturing a repro; trace floods hide the single line that matters.
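In exported YAML the logging knob is a single top-level key; an illustrative setting:

```yaml
log-level: info   # raise to debug or trace only while actively capturing a repro
```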

  1. TUN adapter shows up without error state in Windows networking.
  2. Policy groups classify predictable domains during casual browsing.
  3. DIRECT paths still reach LAN and portal pages after strict-route.
  4. route print does not list contradictory default gateways without explanation.
  5. Sleep/resume leaves the tunnel in a recoverable state, or you have documented that a reboot is required.

Step 7 — Clean disconnect ritual (no orphaned routes)

Exiting badly teaches Win11 bad habits. Disable TUN inside Mihomo Party first; wait until the tunnel adapter drops cleanly. Then turn off system proxy if you enabled it earlier, and clear stray environment variables such as HTTPS_PROXY from developer shells. If metrics still look wrong, reboot once—many “no internet after VPN” reports are simply waiting for the stack to relearn an honest default route.

For structured recovery when things go sideways, bookmark proxy and TUN exit cleanup guidance. The steps overlap regardless of which GUI you used to enable capture.

Troubleshooting: common Win11 failure modes

“Partial internet” after TUN: almost always duplicate owners, such as a corporate VPN, a second tunnel, or aggressive antivirus injection. Remove overlaps methodically before rewriting remote YAML.

Works until resume: driver resume order. Try system stack once, gVisor once, and document which survives sleep on your hardware.

DNS-only failures: revisit fake-ip alignment before touching node lists.

Everything slow: confirm you did not choose a userspace stack on a CPU already saturated by streaming and antivirus scans.

Executable-level policies still matter. When policy-by-app granularity becomes necessary, pair TUN fundamentals with process-based routing on Windows rather than hammering global toggles harder.

Frequently asked questions

Should I pick gVisor or system stack for Mihomo Party TUN on Windows 11?

gVisor-class stacks trade some CPU for isolation and often behave better when other WFP participants exist. System stacks can be leaner on clean PCs. If either misbehaves on your machine, switch once with a rebooted baseline rather than stacking attempts mid-session.

What does strict-route do in practice?

It tightens how flows follow the tunnel’s routing discipline, which reduces accidental bypasses that look like split routing bugs. You may need crisper DIRECT paths for LAN and captive portals afterward.

Why does only part of my traffic use TUN after I enable it?

Many apps ignore OS proxy hints by design, bind interfaces explicitly, or ship bundled DNS. Use connection logs to see which executables remain outside the tunnel, then adjust rules or consider whether another VPN still owns pieces of the table.

I lost internet after disabling TUN—what should I do first?

Follow a cleanup sequence: quit Mihomo Party, disable Windows proxy settings, verify adapters, reboot if routes look stale. Detailed sequencing lives in the exit-cleanup article linked above—Win11 symptoms rhyme across clients even when the GUI differs.

Wrap-up

Stable Mihomo Party TUN mode on Windows 11 is less about secret keywords and more about sequencing: prove listeners and rules with mixed-port, elevate deliberately, choose a TUN stack with evidence instead of folklore, apply strict-route only after DIRECT slices still work, align DNS with your fake-ip story, verify with connection panels and route print, then exit with a ritual that leaves no orphaned metrics. Capture which stack you chose—future you will forget before the next cumulative update.

Conceptual anchoring remains useful alongside this checklist: our FAQ answers broad “why,” while specialized articles cover DNS, fake-ip edge cases, and port collisions without duplicating every paragraph here.

Generic “ultimate proxy” blogs often mix outdated Clash for Windows screenshots with modern mihomo fields, which sends readers toggling strict-route on kernels that never matched the article—or worse, downloading abandonware with stale stack defaults. The pain is real: you burn hours on routes that were never yours to fix. Clash V.CORE takes the opposite stance: acquisition and setup guides that track real mihomo semantics, diagnostics that separate DNS from stack conflicts, and teardown playbooks that still read true after Win11 patches. If fragmented tutorials have stopped being funnier than they are helpful, download Clash for free from this hub and keep Mihomo Party paired with documentation that treats TUN stack and strict-route as measurable choices—not vibes.