How Does a CDN Handle Content Replication?

Arpit Bhayani

Content replication in Content Delivery Networks (CDNs) is far more nuanced than most engineers realize. Let’s dive deep into how CDNs actually replicate, distribute, and manage content across their global networks.

Why Content Replication Matters

When a user in Tokyo requests a cat video from a server in Virginia, they’re looking at 150-200ms of latency just from the round-trip time. Add TCP handshakes, TLS negotiations, and actual data transfer, and you’re easily hitting 500ms+ for initial content delivery.

CDNs solve this by placing copies of content geographically closer to users, but the challenge is determining what to replicate, where to place it, and when to update it.

Push vs. Pull

CDNs use two primary approaches for content replication, and most modern CDNs actually use a hybrid of both.

Push-Based Replication (Origin Push)

In push-based replication, your origin server proactively sends content to CDN edge nodes. Think of it like Amazon distributing books to warehouses before customers order them.

When you upload a new JavaScript bundle to your origin, you trigger an API call to your CDN (say, Cloudflare or Akamai). The CDN’s control plane receives this request and initiates a replication job. This job creates a directed acyclic graph (DAG) of distribution tasks, often using a hierarchical or mesh topology.

Let’s say you have 200 edge locations. The CDN doesn’t push directly from origin to all 200; that would crush your origin’s bandwidth. Instead, it:

  1. Pushes to 5-10 regional “parent” nodes (tier-1 caches)
  2. The parents distribute to 20-30 “mid-tier” nodes (tier-2 caches)
  3. Mid-tier nodes replicate to edge nodes in their region

This hierarchical push typically borrows BitTorrent-style principles at scale: files are chunked and transferred in parallel. Fastly, for example, builds its delivery tier on a heavily customized Varnish, and on Akamai you can use NetStorage to configure push-based replication.
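
Here’s a toy sketch of the fan-out math, assuming a hypothetical plan where each node in one tier pushes to an even share of the next tier (not any real CDN’s API):

def plan_fanout(origin, tiers):
    """Compute who pushes to whom in a tiered fan-out.

    tiers: list of node lists, e.g. [parents, mid_tiers, edges];
    each node in tier N pushes to an even share of tier N+1.
    """
    plan, senders = [], [origin]
    for receivers in tiers:
        for i, node in enumerate(receivers):
            plan.append((senders[i % len(senders)], node))
        senders = receivers
    return plan

parents = [f"parent-{i}" for i in range(5)]
mids = [f"mid-{i}" for i in range(25)]
edges = [f"edge-{i}" for i in range(200)]

plan = plan_fanout("origin", [parents, mids, edges])
print(sum(1 for sender, _ in plan if sender == "origin"))  # 5, not 200

The origin makes only 5 transfers; the remaining 225 ride the CDN’s internal backbone.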

When to use push replication

  • Large files that won’t change often (firmware updates, game patches)
  • Content with predictable high demand (major software releases)
  • Time-sensitive content that needs to be everywhere immediately (livestreams and news broadcasts)
  • When you have the bandwidth at the origin to support the initial distribution

Pull-Based Replication (On-Demand/Lazy Loading)

Pull-based replication is reactive; content is only replicated to an edge node when a user requests it from that location. Here’s how the request flow looks:

  1. User in Mumbai requests example.com/bundle.js
  2. Request hits CDN’s Mumbai edge node
  3. Edge checks its local cache, cache miss
  4. Edge makes a request to its parent node or directly to the origin
  5. Edge receives content, stores it locally, and serves it to the user
  6. Next user in Mumbai gets a cache hit, no origin request needed

The “cache miss goes to origin” explanation is simplified. In reality, there are usually 2-3 cache tiers:

  • Edge cache (L1): Thousands of these, closest to users, smallest storage
  • Regional cache (L2): Fewer nodes, more storage, aggregates requests from multiple edges
  • Origin shield (L3): Optional layer that sits in front of your origin to prevent stampedes

Here’s what a cache miss actually looks like in a multi-tier system:

User → Edge (miss) → Regional (miss) → Origin Shield (miss) → Your Origin

Each tier caches the response on the way back, so subsequent requests don’t need to go as far up the chain.
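
A minimal sketch of that lookup chain, using plain dicts as stand-in tiers:

def tiered_get(url, tiers, fetch_origin):
    """Walk the cache tiers (edge -> regional -> shield); if every tier
    misses, fetch from origin and populate each tier on the way back."""
    missed = []
    for tier in tiers:
        if url in tier:
            value = tier[url]
            break
        missed.append(tier)
    else:
        value = fetch_origin(url)  # every tier missed: hit the origin
    for tier in missed:
        tier[url] = value          # fill the tiers we passed through
    return value

edge, regional, shield = {}, {}, {}

def get(url):
    return tiered_get(url, [edge, regional, shield],
                      lambda u: f"<contents of {u}>")

get("/bundle.js")  # misses everywhere, goes all the way to origin
get("/bundle.js")  # edge hit, no upstream traffic at all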

Cache stampede protection

One critical aspect: when content isn’t cached and suddenly gets 10,000 requests (maybe a tweet went viral), you don’t want 10,000 requests hammering your origin. CDNs use request coalescing or request collapsing, something like this…

import threading
from typing import Any, Callable, Dict

class CoalescingCache:
    def __init__(self, fetch_from_upstream: Callable[[str], Any]):
        self.fetch_from_upstream = fetch_from_upstream
        self.cache: Dict[str, Any] = {}
        self.inflight_requests: Dict[str, threading.Event] = {}
        self.inflight_results: Dict[str, Any] = {}
        self.lock = threading.Lock()

    def get(self, url: str) -> Any:
        # Check cache first
        if url in self.cache:
            return self.cache[url]

        # Check if someone else is already fetching this URL
        with self.lock:
            if url in self.cache:  # re-check under the lock
                return self.cache[url]
            if url in self.inflight_requests:
                # Someone else is fetching - get their event to wait on
                event = self.inflight_requests[url]
                is_first_request = False
            else:
                # We're the first - create an event for others to wait on
                event = threading.Event()
                self.inflight_requests[url] = event
                is_first_request = True

        if is_first_request:
            # We're the first request - actually fetch from upstream
            try:
                response = self.fetch_from_upstream(url)
                self.cache[url] = response
                self.inflight_results[url] = response
                return response
            finally:
                # Signal waiters even if the fetch raised, so they never
                # block forever, then clean up the in-flight bookkeeping.
                # Waiters read from self.cache first, so popping
                # inflight_results here is safe on the success path.
                event.set()
                with self.lock:
                    self.inflight_requests.pop(url, None)
                    self.inflight_results.pop(url, None)
        else:
            # We're NOT the first - wait for the first request to finish
            event.wait()  # Block until the first request completes

            # Return the result that the first request fetched
            return self.cache.get(url) or self.inflight_results.get(url)

Only one request actually goes upstream; the other 9,999 wait for that result and share it.

The Hybrid Approach

Modern CDNs don’t strictly use push or pull; they use hybrid strategies with predictive intelligence.

Predictive pre-positioning

CDNs analyze traffic patterns using machine learning to predict what content should be where. If analytics show that a particular video always gets requested in Brazil on Friday evenings, the CDN proactively replicates it to Brazilian edge nodes on Friday afternoon, even though it’s technically a “pull” CDN.
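
A crude sketch of the idea, with an invented request-log shape and threshold (real systems use far richer models than simple counting):

from collections import defaultdict

def prewarm_candidates(request_log, threshold=500):
    """request_log: iterable of (object_id, region, weekday, hour) tuples
    observed over past weeks. Returns the (object, region, weekday, hour)
    slots that are historically hot enough to justify pre-positioning."""
    counts = defaultdict(int)
    for obj, region, weekday, hour in request_log:
        counts[(obj, region, weekday, hour)] += 1
    return [slot for slot, count in counts.items() if count >= threshold]

# Replicate each candidate object into its region shortly before the hot slot.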

Adaptive replication based on popularity

Content might exist in only 10 edge locations when it’s new, but if it suddenly gets popular, the CDN’s orchestration layer notices the high request rate and automatically replicates it to 50 more locations. Conversely, unpopular content gets evicted from edge caches and might only live in regional caches or the origin.
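
That decision loop might look something like this (the thresholds are invented for illustration):

def adjust_replication(region_rps, locations, expand_rps=100, evict_rps=1):
    """region_rps: observed requests/sec per region for one object.
    locations: set of regions currently holding a copy. Returns the new set."""
    total = sum(region_rps.values())
    if total >= expand_rps:
        # Hot: replicate into every region that is generating traffic
        return locations | {region for region, rps in region_rps.items() if rps > 0}
    if total <= evict_rps:
        # Cold: drop the proactive copies; edges will lazily re-pull on demand
        return set()
    return locations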

Geographic targeting

We can also configure replication rules per content path. A hypothetical configuration:

{
  "replication_rules": [
    {
      "path_pattern": "/api/v1/*",
      "strategy": "pull",
      "cache_tier": "regional_only",
      "reason": "API responses are user-specific, low cache hit rate"
    },
    {
      "path_pattern": "/static/fonts/*",
      "strategy": "push",
      "target_regions": ["all"],
      "reason": "Fonts are cacheable and requested everywhere"
    },
    {
      "path_pattern": "/videos/*.mp4",
      "strategy": "hybrid",
      "initial_regions": ["us-east", "us-west"],
      "auto_expand_threshold": "100_requests_per_hour",
      "reason": "Video popularity varies; start regional, expand if needed"
    }
  ]
}

The Data Structure For Replication

How does a CDN know if it has the right version of content? Most modern CDNs use content-addressed storage.

Instead of storing files by their URL alone, CDNs compute a hash (like SHA-256) of the content and use that hash as part of the cache key. This means:

Cache key = hash(URL + hash(content) + Vary headers + query params)
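
In code, that might look roughly like this (the exact ingredients and normalization vary by CDN):

import hashlib

def cache_key(url, content_hash, vary_headers, query_params):
    # Sort the variable parts so logically identical requests
    # always hash to the same key
    parts = [
        url,
        content_hash,
        "&".join(f"{k}={v}" for k, v in sorted(vary_headers.items())),
        "&".join(f"{k}={v}" for k, v in sorted(query_params.items())),
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

key = cache_key(
    "https://example.com/bundle.js",
    content_hash="33a64df551425fcc",
    vary_headers={"Accept-Encoding": "br"},
    query_params={"v": "1234567890"},
)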

When your origin serves content, it includes an ETag header:

HTTP/1.1 200 OK
ETag: "33a64df551425fcc55e4d42a148795d9f25f89d4"
Cache-Control: public, max-age=31536000, immutable
Content-Type: application/javascript

The CDN edge stores this with the content. Later, when checking if cached content is still valid, it can send:

GET /bundle.js HTTP/1.1
If-None-Match: "33a64df551425fcc55e4d42a148795d9f25f89d4"

If the content hasn’t changed, the origin responds with 304 Not Modified and no body is transferred.
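
That revalidation round-trip, sketched with the requests library:

import requests

def revalidate(url, cached_body, cached_etag):
    """Ask the origin whether our cached copy is still current."""
    resp = requests.get(url, headers={"If-None-Match": cached_etag})
    if resp.status_code == 304:
        return cached_body, cached_etag  # unchanged: keep what we have
    return resp.content, resp.headers.get("ETag")  # changed: replace it

This is why you see URLs like: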

/static/bundle.a7f3d92b.js
/images/hero.png?v=1234567890

The hash or version in the filename/query param becomes part of the cache key. When you deploy new code, the hash changes, so it’s effectively a different object in the CDN’s eyes. The old version can stay cached (maybe someone is on an old app version), and the new version gets replicated independently.
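
A build step can compute such a fingerprint in a few lines; here is one way (real bundlers may use different hash functions and digest lengths):

import hashlib
from pathlib import Path

def hashed_filename(path):
    p = Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}.{digest}{p.suffix}"

# bundle.js -> bundle.a7f3d92b.js; a new build produces a new digest,
# so the CDN sees it as a brand-new object and replicates it independently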

Consistency Challenges

CDNs face a distributed systems problem: how do you ensure content is consistent across 200+ globally distributed nodes?

Most CDNs are eventually consistent by design. When you push an update or purge content, it doesn’t happen atomically everywhere. You might see:

  • 80% of edges updated in 10 seconds
  • 95% updated in 30 seconds
  • 99.9% updated in 2 minutes
  • Stragglers taking up to 5-10 minutes

This is a fundamental trade-off. CDNs choose availability over strong consistency because:

  1. Network partitions happen (undersea cable cuts, regional outages)
  2. Users care more about fast responses than perfect consistency for static assets
  3. The alternative (locking all edges during updates) would be catastrophically slow

But if you cannot tolerate inconsistency, here’s what you can do.

Version your API responses

// Cacheable API response without version awareness
app.get('/api/config', (req, res) => {
  res.setHeader('Cache-Control', 'public, max-age=3600');
  res.json({ feature_flags: getFeatureFlags() });
});

// Include version/timestamp so clients know if data is stale
app.get('/api/config', (req, res) => {
  const config = getFeatureFlags();
  res.setHeader('Cache-Control', 'public, max-age=3600');
  res.setHeader('X-Config-Version', config.version);
  res.json({
    version: config.version,
    generated_at: Date.now(),
    feature_flags: config.data
  });
});

Use purges sparingly

Purging is expensive and creates thundering herd problems. When you purge, thousands of edge nodes might simultaneously request fresh content from the origin. Instead of purging, you can use short TTLs for content that changes. Something like this…

# Instead of purging, use short TTLs for content that changes
def set_smart_cache_headers(content_type, mutability):
    if mutability == 'immutable':
        # Content with hash in URL, never changes
        return 'public, max-age=31536000, immutable'
    elif mutability == 'occasional':
        # Changes weekly/monthly (pricing pages, marketing content)
        return 'public, max-age=3600, stale-while-revalidate=86400'
    elif mutability == 'frequent':
        # Changes daily (blog homepage, news feed)
        return 'public, max-age=300, stale-while-revalidate=600'
    else:
        # User-specific or real-time data
        return 'private, max-age=0, must-revalidate'

The stale-while-revalidate directive is particularly clever: it lets the CDN serve stale content immediately while fetching fresh content in the background, avoiding both latency spikes and origin load spikes.
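
From the cache’s point of view, the behavior is roughly this (a simplified sketch; the entry fields and the refresh callable are invented, and refresh is assumed to re-fetch from upstream and update the entry):

import threading

def serve(entry, now, refresh):
    """Serve a cache entry under stale-while-revalidate semantics.
    entry: dict with value, fetched_at, max_age, swr (ages in seconds)."""
    age = now - entry["fetched_at"]
    if age <= entry["max_age"]:
        return entry["value"]  # fresh: serve directly
    if age <= entry["max_age"] + entry["swr"]:
        # Stale but inside the SWR window: serve the stale copy now,
        # refresh in the background so the next request sees fresh data
        threading.Thread(target=refresh, daemon=True).start()
        return entry["value"]
    return refresh()  # too stale: block on a synchronous fetch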

Replication Protocols

CDNs use optimized internal protocols for node-to-node transfer:

  • Consistent hashing: Determines which nodes should store which content (a minimal sketch follows this list)
  • Gossip protocols: For propagating metadata about what content exists where
  • Custom UDP-based protocols: For low-latency health checks and coordination
  • Proprietary compression: Beyond gzip/brotli, optimized for internal transfers
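
Here’s a minimal consistent-hash ring; adding or removing a node only remaps a small fraction of keys, which is exactly what you want when edge nodes come and go:

import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Place several virtual nodes per physical node for an even spread
        self.ring = sorted(
            (self._hash(f"{node}-{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def node_for(self, content_key):
        # Walk clockwise to the first virtual node at or after the hash
        idx = bisect.bisect(self.hashes, self._hash(content_key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["edge-1", "edge-2", "edge-3"])
ring.node_for("/static/bundle.a7f3d92b.js")  # same key -> same node, every time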

Multi-region replication architecture

Here’s a simplified view of Cloudflare’s architecture:

                [Origin Server]
                       |
                [Origin Shield]
                 /     |     \
          [Colo-1]  [Colo-2]  [Colo-3]  (Regional hubs)
           /  \       /  \       /  \
      [Edge] [Edge] [Edge] [Edge] [Edge] [Edge]  (Edge nodes)

Each “Colo” (colocation facility) contains multiple servers. Cloudflare routes requests with Anycast: the same IP addresses are announced from every colo, so BGP delivers each request to the nearest one, and load balancing inside the colo spreads it across servers.

Transferring petabytes between nodes is expensive. CDNs optimize:

  1. Delta encoding: Only transfer the diff between versions (sketched after this list)
  2. Chunked transfer: Break large files into chunks, transfer chunks in parallel
  3. Peer-to-peer between edges: Edges can fetch from nearby edges, not just from parent nodes
  4. Compression: Use algorithms optimized for specific content types
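
A naive fixed-size-chunk delta, just to show the shape of the idea (real systems use rolling hashes, rsync-style, so inserted bytes don’t shift every later chunk):

def chunk_delta(old: bytes, new: bytes, size: int = 1024):
    """Return (offset, new_bytes) pairs for chunks that changed."""
    delta = []
    for i in range(0, max(len(old), len(new)), size):
        if old[i:i + size] != new[i:i + size]:
            delta.append((i, new[i:i + size]))
    return delta

old = b"A" * 4096
new = old[:1024] + b"B" * 1024 + old[2048:]
patch = chunk_delta(old, new)
len(patch)  # 1 -> only 1 of 4 chunks is transferred, not the whole file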

Handling Dynamic Content

Modern CDNs don’t just cache static files; they run code at the edge using edge workers and functions. Services like Cloudflare Workers, Fastly Compute@Edge, and AWS Lambda@Edge let you run JavaScript/WebAssembly at edge nodes.

This means the CDN isn’t just replicating static content; it’s replicating code execution capabilities. Your logic runs in 200+ locations simultaneously.

Implications for replication:

  • Code updates must be replicated (usually within seconds)
  • Code can generate responses dynamically, so cache hit rates drop
  • You need strategies like edge-side includes (ESI) to cache fragments

Real-World Performance Considerations

If a CDN powers the most critical piece of your product (live streaming, for example), monitor its replication lag. Here’s roughly how that code would look…

import asyncio
import time

class CDNReplicationMonitor:
    def __init__(self, cdn_client, edge_locations):
        self.cdn_client = cdn_client
        self.edge_locations = edge_locations

    async def check_edge_etag(self, url, edge):
        # CDN-specific: request the URL from one particular edge
        # (e.g., via a debug/trace header or by resolving the edge's
        # IP directly) and return the ETag it currently serves
        ...

    async def verify_replication(self, url, expected_etag, timeout=300):
        """
        Verify that the content has replicated to all edge locations
        """
        start_time = time.time()
        unsynced_edges = set(self.edge_locations)

        while unsynced_edges and (time.time() - start_time) < timeout:
            for edge in list(unsynced_edges):
                # Make request from specific edge location
                etag = await self.check_edge_etag(url, edge)

                if etag == expected_etag:
                    unsynced_edges.remove(edge)
                    print(f"✓ {edge} synced")
                else:
                    print(f"✗ {edge} still has old version")

            if unsynced_edges:
                await asyncio.sleep(5)  # Wait before rechecking

        sync_time = time.time() - start_time
        success_rate = (len(self.edge_locations) - len(unsynced_edges)) / len(self.edge_locations)

        return {
            'success_rate': success_rate,
            'sync_time_seconds': sync_time,
            'unsynced_edges': list(unsynced_edges)
        }

By the way, replication isn’t free, so configure it only where you really need it. The costs:

  • Storage costs: 200 locations = 200x storage cost
  • Bandwidth costs: Inter-node transfers are usually free within CDN, but origin → CDN bandwidth costs you
  • Request costs: Cache misses mean more origin requests = higher origin bandwidth bills

Footnotes

CDN replication is a fascinating way to understand and appreciate distributed systems. It trades strict consistency for availability and performance at massive scale. Some key takeaways:

  • Choose eventual consistency wherever possible
  • URLs with hashes/versions make replication predictable
  • Longer TTL isn’t always better
  • Don’t assume purges happen instantly or completely
  • Not everything needs aggressive replication
