Cloudflare’s shift to AMD EPYC Turin reads less like a routine upgrade and more like a reset of its fleet economics: each server refresh now has to justify itself in denser output per rack.
The sharper story sits in the software and platform mix. A rewritten Rust request path reshapes how content delivery workloads run, and memory, power, and I/O choices tuned for data-center efficiency let Turin outrun Genoa-X on requests per CPU across the fleet. With latency trimmed as well, the headline figures stop looking like the whole point.
Why Gen 13 pulls ahead of Genoa-X on throughput and rack density
Cloudflare frames Gen 13 as a platform win rather than a mere CPU swap, with AMD’s EPYC 9965 anchoring a redesigned server. In Cloudflare’s comparison against its Genoa-X machines, the newer servers deliver more requests per socket, which helps explain why the company cites a 2x throughput jump on production traffic rather than a narrow lab result.
The rack story matters just as much, because edge deployments live on density. Cloudflare says rack-level throughput rises by as much as 60%, letting Gen 13 carry far more useful capacity within the same footprint.
FL2 cuts latency while Turin lifts requests per CPU
Latency looked less flattering in early Turin tests, so Cloudflare paired the hardware change with a software rewrite. That work, FL2, moved proxy logic to Rust, tightened memory access patterns, and, by Cloudflare’s account, raised requests per CPU by up to 50%, a gain in the request path itself rather than a benchmarking artifact.
The payoff showed up where users notice it first. Cloudflare reports latency cuts of up to 70%, with tighter tail latency making loaded edge services feel steadier during sharp traffic bursts.
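Cloudflare has not published FL2’s internals, but the kind of win a Rust request path enables is easy to illustrate. The sketch below is a hypothetical example, not FL2 code: it parses an HTTP request line into borrowed string slices, so the hot path does no heap allocation and no copying, which is exactly the sort of tightened memory access the rewrite is credited with.

```rust
// Hypothetical illustration only; this is not Cloudflare's FL2 code.
// A parsed request line that borrows from the original buffer:
// every field is a view into `buf`, so no allocation or copy occurs.
struct RequestLine<'a> {
    method: &'a str,
    path: &'a str,
    version: &'a str,
}

// Split the request line into its three parts, returning None if malformed.
fn parse_request_line(buf: &str) -> Option<RequestLine<'_>> {
    let mut parts = buf.trim_end().splitn(3, ' ');
    Some(RequestLine {
        method: parts.next()?,
        path: parts.next()?,
        version: parts.next()?,
    })
}

fn main() {
    let raw = "GET /index.html HTTP/1.1\r\n";
    let req = parse_request_line(raw).expect("malformed request line");
    // All three fields still point into `raw`; nothing was copied.
    println!("{} {} {}", req.method, req.path, req.version);
}
```

Multiplied across millions of requests per second, avoiding per-request allocations like this is one plausible way a rewrite lifts requests per CPU without touching the hardware.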
Power, memory, and I/O choices that support the new platform
Power sits at the center of Cloudflare’s Gen 13 story, not at its margins. The company says the move to EPYC Turin delivers 50% better performance per watt than Gen 12, while the chosen balance of memory per core keeps capacity aligned with edge workloads.
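The two public figures can be combined into a back-of-the-envelope check, with the caveat that this pairing is an assumption on our part: Cloudflare quotes the throughput and efficiency numbers separately, and they may not describe exactly the same workload.

```rust
// Back-of-the-envelope sketch using Cloudflare's published figures.
// Assumption (ours): both ratios apply to the same production workload.
fn main() {
    let throughput_gain = 2.0; // ~2x requests served per server vs Gen 12
    let perf_per_watt_gain = 1.5; // ~50% better performance per watt

    // If a server does 2x the work at 1.5x the efficiency, its power
    // draw rises by roughly 2.0 / 1.5 ≈ 1.33x.
    let implied_power_ratio = throughput_gain / perf_per_watt_gain;
    println!("implied power per server: {:.2}x", implied_power_ratio);
}
```

Under that assumption, each Gen 13 server would draw roughly a third more power while doing twice the work, which is why the per-watt framing matters more to fleet economics than the raw throughput number.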
I/O choices complete the picture. Cloudflare pairs 100 GbE networking with ample PCIe 5.0 lanes, giving NICs and storage headroom and reducing the chance that faster CPUs end up waiting on the platform.