Why Notion Sync Is Not a Figma, Copilot, or OpenAI Runbook
A popular shortcut is to paste a “global AI” or “developer CDN” list and hope Notion rides along. That often fails, because Notion’s sync and attachment paths lean heavily on AWS: Amazon S3 and CloudFront host patterns are shared across half the commercial internet. If you apply DOMAIN-SUFFIX,amazonaws.com,PROXY with a wide brush, you may drag unrelated tools through the same node; if you under-cover cloudfront.net while the app shell is proxied, you get a split brain where the first navigation succeeds but long polls and file transfers die—exactly the “stuck on sync” feeling users describe. The problem is not the same as routing a single vendor like OpenAI for chat, nor the same as Figma and its font edges, which use a different domain map entirely.
This guide stays on the Notion plus AWS track: it explains how to think in traffic families (app and API on notion.so / notion.com, CDN and large objects, regional API names you must confirm from logs) rather than recycling a generic “AI split routing” article. The order and precedence discipline in Clash rules best practices still governs how you layer these lines with domestic shortcuts, corporate intranets, and blocklists. If you also need Microsoft or GitHub for other workflows, add separate rules—the GitHub Copilot and VS Code marketplace split is a different host list, by design.
What “Syncing” Spinners Usually Mean in HTTPS Terms
On the product side, Notion shows one “syncing” or updating workspace banner. On the network side, the client is interleaving short API calls and longer transfers: content lists may resolve quickly, while a page with embedded files or a database with heavy rows waits on Amazon S3 or CloudFront for blob delivery. The first GET to notion.so can succeed while a parallel request to an s3.…amazonaws.com or …cloudfront.net host never completes its TLS or stalls mid-transfer, which the UI flattens into a spinner that never ends. That is a different failure shape from “one chat host times out” or “video rebuffers,” and it is why recycling only OpenAI-style lists is insufficient for knowledge-base tools.
Clash matches the first rule in order. A missing DOMAIN-SUFFIX for a secondary AWS hostname is not a minor gap—it is a split where half the workspace loads through your chosen exit and half sits on DIRECT until a timeout fires, which the app reports as sync failure. Rule sets can track moving CDN names, but you still need to read logs to confirm the hostname that actually failed. Large-file CDN patterns that overlap other products (model hubs, static apps) are covered conceptually in Hugging Face and large-file CDN split routing—the overlap discipline is the same even though Notion is not a model registry.
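A minimal sketch makes that first-match failure mode concrete. The group and intranet names below are placeholders, not a complete profile:

```yaml
rules:
  # Matched first: intranet stays off the proxy
  - DOMAIN-SUFFIX,corp.example.com,DIRECT
  # Notion baseline; a forgotten AWS hostname never reaches this group
  - DOMAIN-SUFFIX,notion.so,NOTION_AWS
  # Everything unmatched, including that forgotten S3 or CloudFront host,
  # falls through here and stalls on DIRECT until the app reports sync failure
  - MATCH,DIRECT
```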
Traffic Families: Notion, AWS S3, CloudFront, and APIs
Notion app, web, and API on notion.so / notion.com
The Notion product surface—pages, comments, and workspace metadata—generally lives under notion.so and notion.com, with many subdomains for API, internal services, and regional endpoints. Split routing should cover those suffixes as a baseline so the shell and sync handshakes share one policy. Exact subdomains evolve; treat the baseline as necessary but not sufficient, and re-check after a major client update or when a new API path appears in your logs.
Amazon S3 and object storage legs
Uploads, exports, and large attachments often touch *.s3…amazonaws.com or regional AWS bucket hostnames. These labels are shared across millions of services—blunt DOMAIN-SUFFIX,amazonaws.com rules can over-capture unrelated work traffic. A pragmatic pattern is: log the exact host first, add DOMAIN lines for proven hosts, then consider tighter suffixes or curated rule sets that your team reviews. A profile that only tags notion.so and forgets the bucket sync path is a classic source of “text loads, file never does.”
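The narrowing progression described above can be sketched as follows; the bucket name and region are placeholders, and the commented lines show what to avoid:

```yaml
rules:
  # Step 1: exact host proven in a trace (safest)
  - DOMAIN,example-bucket.s3.ap-northeast-1.amazonaws.com,NOTION_AWS
  # Step 2: only if many sibling buckets appear in logs, widen carefully
  # - DOMAIN-SUFFIX,s3.ap-northeast-1.amazonaws.com,NOTION_AWS
  # Avoid: captures every AWS-hosted service on the machine
  # - DOMAIN-SUFFIX,amazonaws.com,NOTION_AWS
```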
CloudFront and CDN edges
CloudFront distributions frequently appear as *.cloudfront.net (and sometimes AWS CloudFront custom domains that still resolve through shared edges). A wide DOMAIN-SUFFIX,cloudfront.net line may fix Notion and break something else, or the opposite. Prefer observation: pull failing hosts from the browser or app network trace, then add surgical matches. If you need a mental model for “one product, many CDN names,” the design-tooling article on Figma, Adobe, and shared CDN split routing is parallel reading—it is a different vendor, but the same CDN caution applies.
Identity, sessions, and third-party auth
Sign-in, OAuth, and account recovery may use hosts outside the narrow notion namespace (identity providers, email links, captcha, or Apple / Google buttons). If those legs route inconsistently, you can get “logged in in the browser, desktop still confused.” Keep identity-related split routing coherent with the same policy group you use for the Notion shell while testing, or document explicit exceptions (for example, corporate SSO on DIRECT) so you do not fight yourself during sync.
Observing Hostnames: Browser, App, and Clash Logs
Reproduce the stall: open developer tools in the web app, filter by “blocked” or slow requests, and list hosts for the failing sync or attachment action. For the desktop client, use OS network monitors or the client’s own diagnostics if available, because not every leg uses the system proxy table the same way. In Clash, line up timestamps: a burst to notion.so with no follow-up to the AWS host you expected suggests a rule gap, not a random node. If the log already shows the right group but the TCP or TLS phase still hangs, rotate nodes or try another uplink before you assume the YAML is wrong.
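If the default log level hides the hostname detail this correlation needs, raising verbosity helps. The top-level fields below follow the common Clash/Meta config schema; the listen address is an assumption you should adapt:

```yaml
# 'debug' is verbose; drop back to 'info' once the failing host is identified
log-level: debug
# RESTful API used by dashboards to show live connections and their matched rules
external-controller: 127.0.0.1:9090
```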
Use a client with readable connection logs and policy visibility; choosing a capable Clash client matters because you will return to the profile every time Notion or AWS names shift in production.
A Dedicated NOTION / AWS Policy Group
Name a group such as NOTION_AWS or NOTION_SYNC and route both Notion product suffixes and the AWS legs you have proven in traces through it. Order matters: LAN and domestic shortcuts, enterprise VPN exceptions, and blocklists should sit before the catch-all for Notion + AWS so you do not leak private ranges or trap yourself in a proxy loop. After those lines, add your Notion baselines, then the observed S3 and CloudFront hosts, then GEOIP or domestic MATCH defaults as your policy requires. A permissive MATCH,DIRECT is comfortable day to day but bites when a new CDN name appears; a one-size global proxy is simpler but may be heavier than you want for local services. Clash works best with explicit, reviewed lists for the software you actually run.
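One way to declare such a group is sketched below; the node names are hypothetical, and the rules that reference NOTION_AWS live in the rules section, ordered as described above:

```yaml
proxy-groups:
  - name: NOTION_AWS
    type: select        # manual choice; use url-test for latency-based auto-picks
    proxies:
      - AUTO_BEST       # hypothetical url-test subgroup
      - HK-01           # hypothetical single node
      - DIRECT          # escape hatch while debugging
```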
Illustrative DOMAIN-SUFFIX and Match Order
The following YAML fragment is illustrative only. You must add AWS hostnames you observe in your logs, and you may need to avoid over-broad DOMAIN-SUFFIX lines (for example, all of amazonaws.com or cloudfront.net) if they capture non-Notion work traffic. Replace group names, domestic shortcuts, and MATCH with your policy. Keep comments that explain why a line exists.
Example rules fragment (illustrative)
rules:
# Notion product namespaces (verify subdomains in your own traces)
- DOMAIN-SUFFIX,notion.so,NOTION_AWS
- DOMAIN-SUFFIX,notion.com,NOTION_AWS
- DOMAIN-SUFFIX,notion.site,NOTION_AWS
# Amazon S3 — prefer DOMAIN lines from logs; suffix is often too wide
# - DOMAIN,example-bucket.s3.ap-northeast-1.amazonaws.com,NOTION_AWS
# CloudFront / CDN edges (high overlap; tighten to observed hosts)
# - DOMAIN-SUFFIX,cloudfront.net,NOTION_AWS
- GEOIP,CN,DIRECT
- MATCH,DIRECT
DOMAIN-KEYWORD is a last resort: short tokens that appear inside unrelated hostnames create false positives. Prefer Notion DOMAIN-SUFFIX baselines, then DOMAIN lines for proven AWS hosts, then remote rule sets you review after each update.
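A concrete illustration of why the keyword form over-captures (the bad line is commented out on purpose; do not enable it as-is):

```yaml
rules:
  # BAD: the token "notion" also matches hosts like promotion.example.com,
  # so unrelated traffic would ride the NOTION_AWS group
  # - DOMAIN-KEYWORD,notion,NOTION_AWS
  # BETTER: a suffix match only fires on the Notion namespaces themselves
  - DOMAIN-SUFFIX,notion.so,NOTION_AWS
```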
Rule Sets, Shared CDNs, and Over-Capture Risk
Public rule sets can track new AWS and CDN labels without you editing YAML daily. The trade is trust: a set built for “proxy everything” may steal traffic you wanted on DIRECT, or duplicate your own baselines. A practical split: maintain a short, hand-reviewed list for Notion and the exact S3 and CloudFront hosts you have seen, then layer remote providers with a diff habit—if a rule set auto-refreshes the night your sync fails, read the change before you roll back your entire split routing design.
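Layering a remote provider under hand-reviewed lines might look like the fragment below; the URL and file path are placeholders, and the field names follow the rule-provider schema in Clash Premium/Meta:

```yaml
rule-providers:
  notion-aws:
    type: http
    behavior: domain                            # a list of domains/suffixes
    url: https://example.com/notion-aws.yaml    # placeholder; use a source you trust
    path: ./ruleset/notion-aws.yaml
    interval: 86400                             # refresh daily; diff changes before trusting them
rules:
  # Hand-reviewed baselines first, provider second, so your lines win on conflict
  - DOMAIN-SUFFIX,notion.so,NOTION_AWS
  - RULE-SET,notion-aws,NOTION_AWS
```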
CloudFront and generic Amazon suffixes cover enormous surface area. When over-capture breaks another team’s stack, log the exact failing host, replace the suffix with a surgical DOMAIN line, and iterate. The same “shared CDN, surgical rules” story appears when large downloads span many hosts—compare with the workflow in Hugging Face CDN split routing for a different product with similar caution on breadth.
DNS, fake-ip, and Sync Timeouts
Under fake-ip, the client may “resolve” immediately while the real name resolution and policy choice happen on the proxy path. If a Notion or AWS flow does not follow the policy you expect, you can see “DNS OK” in one tool yet stalled TCP in Clash—a recipe for spurious sync failure reports. Align nameserver order, fake-ip filters, and the same rules you use for Notion and AWS hostnames. A full Meta-oriented walkthrough is in Clash Meta DNS, fallback, and fake-ip; apply the same rigor to workspace sync, not only generic browsing.
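A fake-ip DNS block consistent with that advice might look like the following; the resolver choice is an example rather than a recommendation, and the field names follow the Clash Meta DNS schema:

```yaml
dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  fake-ip-filter:
    - '+.lan'                      # keep local names on real IPs
  nameserver:
    - https://1.1.1.1/dns-query    # example DoH resolver; pick per your policy
```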
When multiple resolvers (browser DoH, OS, Clash) disagree, timeout behavior can look random. For stack-level DNS leaks and dual-path confusion that mirror “fast resolve, slow connect,” also read IPv6 dual-stack and DNS direct rules in Clash alongside your log review.
System Proxy, Desktop App, and TUN
System proxy mode is convenient when every process honors the OS settings. Notion’s desktop and mobile apps may use libraries that do not follow the system proxy the same way a browser tab does, especially for background sync or large uploads. TUN raises routing to the stack so more traffic shares one split routing story, at the cost of kernel permissions and possible friction with other VPNs or corporate clients. For stack-level behavior, read TUN mode in depth before you enable TUN on a work machine. The practical test: if the web app syncs but the desktop client does not, compare policy coverage, process model, and logs before you keep stacking random rules.
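If you do enable TUN after reading the deep dive, a minimal Meta-style fragment looks like this; the stack choice varies by OS and kernel, so treat it as a starting point rather than a recommendation:

```yaml
tun:
  enable: true
  stack: system                # or 'gvisor'; behavior differs per platform
  auto-route: true             # installs routes so non-proxy-aware apps are captured
  auto-detect-interface: true  # bind to the real uplink, not the TUN device itself
```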
Separating Stalled TLS From Wrong Policy in Logs
Not every hang is a “bad node.” Stalls before TCP completes often implicate routing or DNS misalignment. Failures in the TLS handshake can point to middleboxes, old ciphers, or a tired exit. Clash log lines tell you which phase died; for vocabulary and method, follow Clash connection logs: timeouts and TLS while you correlate timestamps with a failing Notion save or file upload. Knowing whether you are fighting policy selection or node quality keeps you from deleting good split routing out of panic.
Subscription Refreshes and Stale Endpoints
Remote rule sets and airport subscriptions must actually refresh. A stale CDN or node list is a common reason sync “worked until Wednesday.” Make sure the URLs that update your profile are not themselves trapped in a proxy loop, and re-check node health the same way you do for any long-running Clash install. The maintenance rhythm in Clash subscription and node maintenance applies: distinguish dead exits from a missing DOMAIN line before you reinstall Notion for the fourth time.
Checklist Before You Blame the Node or Reinstall
- Confirm you are allowed to use Clash and to use Notion on this account, device, and network.
- Reproduce in devtools or app diagnostics: list every slow or failing host and phase (TLS alert, timeout, reset, and so on).
- Map each host to a Clash log line: does it hit NOTION_AWS (or your chosen group) as expected?
- Close gaps with DOMAIN-SUFFIX or DOMAIN lines, then re-test sync, comments, and attachment flows—not only the first page load.
- Align DNS and fake-ip with those suffixes; look for “resolved but never connected” on AWS names.
- Audit rule order: domestic, LAN, and block rules must not starve a required API or CDN host.
- Decide system proxy vs TUN for the desktop client under test; keep one primary story per experiment.
- Verify subscription and rule set update URLs work without proxy loops.
- Only then rotate nodes or check vendor status; note timestamps for each change.
Wrap-Up: Observable Notion and AWS Pipelines in Clash
A knowledge workspace in Notion is a multi-host pipeline: API sync, page data, and large binaries that often ride AWS CDN and storage patterns. Clash makes that pipeline observable: policy groups, ordered DOMAIN-SUFFIX rules, curated rule sets for moving edges, and explicit DNS so fake-ip and split routing stay aligned. The failure mode in 2026 is still rarely “Notion is globally broken” and often “a new S3 or CloudFront host you never added to the profile,” which reads as sync failure or endless spinners until you fix the domain map.
If your machine runs several hot tools at once, keep the narrative coherent: the match order and logging habits in Clash rules best practices scale from a single app to a whole studio profile—including Notion next to, but not confused with, ChatGPT, GitHub Copilot, or design stacks you route separately.
→ Download Clash for free and experience the difference—so your workspace syncs on the first try, not after the fifth toggle that was only a missing AWS suffix.