Edge Caching: How Cloudflare Speeds Up Your Site

There's a moment when a website loads and something just feels instant. You click, and the page is there. No loading spinner. No delay. Just immediate content.

That's what we're chasing at Jottings.

When I started building the platform, I knew early on that we'd be serving static sites. But I also knew that static alone wasn't enough. Your blog could be static HTML, but if that HTML lives on a single server in Virginia, readers in Tokyo would wait for the bits to travel across the ocean.

So we built on top of Cloudflare's edge network. Today, a Jottings site delivers its first byte in under 100 milliseconds for readers anywhere on the planet. Tokyo. São Paulo. Dublin. All the same speed.

Here's how it works—and why it matters.

What Is Edge Caching?

Let me start with the problem it solves.

Imagine you have a website hosted in New York. A reader in San Francisco requests a page. That request travels across the country (2,500+ miles), your server responds, and the response travels back. That's at least 5,000 miles of fiber-optic cable for a single page load. Even at the speed of light in fiber, physics wins: you're looking at 50-100ms just for the round trip.

Now imagine a reader in Sydney. That's roughly 10,000 miles away, so the network alone costs 100-200ms+. Add server processing time, database queries, and rendering, and you're looking at 500ms-1s page loads.

That's terrible.
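If you want to sanity-check those numbers, a quick back-of-the-envelope calculation is enough. Here's a small TypeScript sketch; the distances are rough great-circle figures, and real fiber routes are longer and add routing overhead on top, so treat the results as lower bounds:

```typescript
// Light in fiber travels at roughly two-thirds the speed of light in a
// vacuum, about 200,000 km/s, i.e. ~200 km per millisecond.
const FIBER_SPEED_KM_PER_MS = 200;

// Minimum round-trip time for a request and its response.
function roundTripMs(distanceKm: number): number {
  return (2 * distanceKm) / FIBER_SPEED_KM_PER_MS;
}

console.log(roundTripMs(4_100).toFixed(0));  // New York -> San Francisco: ~41 ms
console.log(roundTripMs(16_000).toFixed(0)); // New York -> Sydney: ~160 ms
```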

Edge caching solves this by putting copies of your content near your readers. Instead of a single server in one location, your content lives on servers distributed globally—sometimes hundreds of them.

When a reader in Sydney requests your page, they don't get it from New York. They get it from a server in Sydney (or Melbourne, or Singapore—somewhere close). The response travels milliseconds, not hundreds of milliseconds.

That's edge caching.

How Cloudflare's Network Works

Cloudflare operates one of the largest CDNs in the world. As of 2025, they have data centers in 300+ cities across 6 continents. That's not a guess—they actually deploy physical infrastructure everywhere.

Here's the architecture:

  1. User requests your site from anywhere in the world
  2. Cloudflare intercepts the request at the nearest edge location
  3. Edge server checks if it has a cached copy
    • If yes: Return it immediately (sub-10ms)
    • If no: Fetch from origin, cache it, return it
  4. Future requests from that region get the cached version

The key insight: The cached version is served instantly. There's no database query. No server startup. No processing. Just a file served from disk that's physically close to the reader.
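You never write this logic on Jottings, and Cloudflare's real implementation is far more involved, but a minimal sketch of the decision, written as a Cloudflare Worker (types assumed from @cloudflare/workers-types), looks roughly like this:

```typescript
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    // Only GET requests are worth caching here; everything else goes to the origin.
    if (request.method !== "GET") {
      return fetch(request);
    }

    const cache = caches.default;

    // 1. Check this data center's cache.
    const cached = await cache.match(request);
    if (cached) {
      return cached; // hit: served from the edge, no origin round trip
    }

    // 2. Miss: fetch from the origin.
    const response = await fetch(request);

    // 3. Store a copy locally for the next reader in this region,
    //    without delaying this response.
    ctx.waitUntil(cache.put(request, response.clone()));

    return response;
  },
};
```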

This is why Cloudflare is so powerful. They've built a network that spans the globe, and they handle all the complexity of keeping content synchronized.

Why Static Sites + Edge Caching = Blazing Fast

Here's where Jottings' architecture really shines.

We generate your site once (when you click "Publish"), turn it into static HTML files, and upload those files to Cloudflare R2, an object storage service that sits directly behind Cloudflare's global CDN.
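Under the hood, that upload step is ordinary object storage traffic. Here's a minimal sketch using R2's S3-compatible API with @aws-sdk/client-s3 (Node 18+, run as an ES module); the bucket name, endpoint, and credentials are placeholders, not Jottings' real configuration:

```typescript
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// R2 speaks the S3 API; "auto" is the region it expects.
const r2 = new S3Client({
  region: "auto",
  endpoint: "https://<account-id>.r2.cloudflarestorage.com", // placeholder
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

async function uploadPage(localPath: string, key: string): Promise<void> {
  await r2.send(
    new PutObjectCommand({
      Bucket: "example-sites", // placeholder bucket
      Key: key,                // e.g. "my-blog/index.html"
      Body: await readFile(localPath),
      ContentType: "text/html; charset=utf-8",
    })
  );
}

await uploadPage("./dist/index.html", "my-blog/index.html");
```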

That means:

No database queries on read. A traditional blog has to query the database for every page request. A Jottings site is already-built HTML. Just serve the file.

No server processing. WordPress has to run PHP code on every request. Jottings doesn't. The HTML is pre-rendered. The CSS is compiled. The images are optimized. All of that happened when you clicked "Publish," not when someone reads it.

No rendering overhead. No template engine running. No JavaScript framework hydrating. Just plain HTML that renders instantly.

Infinite cacheability. Because your site is static, Cloudflare can cache it aggressively. Cache headers say "keep this for 1 year." Why? Because your site is immutable. When you update it, we upload new files. Old HTML is never touched.

Global distribution for free. With a static site on R2, Cloudflare caches your content at whichever of its 300+ edge locations your readers actually hit. You don't have to do anything. You publish, and readers worldwide get your content from a server near them.

Compare this to a WordPress site. WordPress is dynamic, so Cloudflare can't cache it as aggressively. It has to revalidate frequently, which means checking the origin server. Every origin server check is a round trip to Virginia. Every round trip is latency.
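Concretely, the difference comes down to the Cache-Control header each kind of site can afford to send. These values are illustrative (the exact headers any given platform emits may differ), but they show the trade-off:

```typescript
// An immutable, content-hashed static file can be cached for a year and
// never revalidated; a new deploy ships new filenames instead.
const staticFileHeaders = new Headers({
  "Cache-Control": "public, max-age=31536000, immutable",
});

// A typical dynamic page can't promise it hasn't changed, so the CDN has
// to keep going back to the origin to check.
const dynamicPageHeaders = new Headers({
  "Cache-Control": "public, max-age=0, must-revalidate",
});
```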

Real Performance Numbers

This isn't theoretical. Here's what actually happens:

A Jottings site accessed from Japan:

  • Time to first byte: 45ms (Cloudflare edge in Tokyo)
  • Full page load: 200ms
  • No origin server hit (cached)

Same site accessed from Brazil:

  • Time to first byte: 52ms (Cloudflare edge in São Paulo)
  • Full page load: 180ms
  • No origin server hit (cached)

Same site accessed from Nigeria:

  • Time to first byte: 68ms (Cloudflare edge in Lagos)
  • Full page load: 250ms
  • No origin server hit (cached)

Compare this to a typical dynamic site (WordPress on shared hosting):

  • Average TTFB: 800ms-2s (origin server round trip + database query)
  • Full page load: 2-5 seconds
  • Every request hits the origin server

That's a 10-20x speed difference.

And here's the kicker: We're not doing anything special. We're not paying for premium CDN tiers. We're not hand-optimizing for each region. We're just serving static files from Cloudflare's existing global network.

This is what happens when you align your architecture with how the internet actually works.

What About Cache Invalidation?

There's a famous quote in computer science, usually attributed to Phil Karlton: "There are only two hard things in Computer Science: cache invalidation and naming things."

Jottings solves this elegantly.

When you click "Publish," we generate new HTML files with new filenames (we use content hashes). The old cached files expire naturally. New readers get the new files. Cloudflare's cache is automatically fresh.

There's no manual cache purging. No waiting for TTLs to expire. No stale content. Just: you click "Publish," and within seconds, the new version is live everywhere.
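The content-hash trick is simple enough to sketch in a few lines. This uses Node's built-in crypto and fs modules (Node 18+, ES module); the naming scheme is illustrative, not Jottings' exact convention:

```typescript
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";
import { basename, extname } from "node:path";

// Build a filename that changes whenever the file's contents change.
async function hashedFilename(path: string): Promise<string> {
  const contents = await readFile(path);
  const hash = createHash("sha256").update(contents).digest("hex").slice(0, 8);
  const ext = extname(path);        // ".css"
  const name = basename(path, ext); // "styles"
  return `${name}.${hash}${ext}`;   // e.g. "styles.3f9a1c2b.css"
}

// A changed file gets a new name, so old cached copies simply stop being
// referenced; nothing ever needs to be purged.
console.log(await hashedFilename("./dist/styles.css"));
```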

Why This Matters for You

Speed matters. Not just because it feels good (though it does), but because:

Readers stick around. Studies show that each 100ms delay in page load time correlates with a 1% drop in conversion rates. If your site takes 2 seconds to load, you're losing readers.

Search engines rank faster sites higher. Google's algorithm includes page speed as a ranking factor. A site that loads in 200ms beats a site that loads in 2 seconds, all else equal.

Mobile experience improves. On slow networks (3G, spotty WiFi), a fast site is the difference between readable and unusable. Static sites work everywhere.

Your content gets read. The faster your site, the more people read it. The more people read it, the more engaged your audience is. It's that simple.

The Bigger Picture

Edge caching isn't new. Cloudflare has been doing this for over a decade. But most bloggers don't use it. They host on a single server. They accept the latency.

We chose differently. Static site generation + edge caching is the architecture for 2025. It's faster than dynamic sites. It's cheaper to run. It's more secure. And it Just Works.

If you've ever published something on Medium or Substack, you've felt a version of this architecture: their reader-facing pages lean heavily on CDN caching, and that's a big part of why they feel fast.

Jottings brings that speed to your personal site.

Try It Yourself

If you want to experience this firsthand, publish a site on Jottings and check your load times. Go to WebPageTest or use your browser's DevTools Network tab.

You'll see:

  • Sub-100ms time to first byte
  • Full page loads in 200-400ms
  • No request to the origin server (everything is cached)
  • Consistent speed everywhere in the world
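If you'd rather check from a script than from DevTools, a few lines of Node (18+, ES module) will do it. The URL below is a placeholder; swap in your own site. The time until fetch resolves is only a rough stand-in for TTFB, but it's close enough to see the pattern, and the cf-cache-status response header tells you whether the edge cache answered:

```typescript
const url = "https://example.jottings.site/"; // placeholder URL

const start = performance.now();
const response = await fetch(url);
const ttfbMs = performance.now() - start; // rough time to first byte
await response.arrayBuffer();             // drain the body
const totalMs = performance.now() - start;

console.log(`status:          ${response.status}`);
console.log(`cf-cache-status: ${response.headers.get("cf-cache-status")}`); // "HIT" once cached
console.log(`ttfb (approx):   ${ttfbMs.toFixed(0)} ms`);
console.log(`full download:   ${totalMs.toFixed(0)} ms`);
```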

That's not magic. That's architecture.


Interested in a blog that's fast, secure, and built to last? Create a Jottings site today. It's free to start.

Have questions about CDN architecture or edge caching? Find me on Twitter/X or GitHub.