DX Today | No-Hype Podcast & News About AI & DX

Rick Spair

TurboQuant: Google's 6x KV Cache Compression, the Pied Piper Moment, and the New Inference Cost Math - May 7, 2026

Google Research dropped TurboQuant at ICLR 2026: a two-stage vector quantization algorithm that compresses LLM key-value (KV) caches to roughly three bits per coordinate while delivering an 8x attention speedup on H100 GPUs. The economic ripple is enormous: inference now accounts for 85% of enterprise AI spend, and TurboQuant's 6x memory cut could halve that bill, which is exactly why Micron and SK Hynix took a hit when the news broke.

Hosted by Chris and Laura. The DX Today Podcast brings you daily deep dives into the most consequential stories in the AI ecosystem.

Send us fan mail: https://dxtoday.com/contact

#AI #LLMInference #GoogleResearch #AIInfrastructure #TechNews
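The headline compression number can be sanity-checked with back-of-envelope arithmetic. The sketch below (the model dimensions are illustrative assumptions, not figures from the episode or the paper) compares an fp16 KV cache against a ~3-bit-per-coordinate one; the raw bit ratio is 16/3 ≈ 5.3x, in the ballpark of the reported ~6x, which presumably also accounts for implementation details not captured here.

```python
# Back-of-envelope KV cache math. All model dimensions below are
# illustrative assumptions, not taken from the episode or the paper.

FP16_BITS = 16       # baseline: fp16 cache, 16 bits per coordinate
TURBOQUANT_BITS = 3  # reported: roughly 3 bits per coordinate

def kv_cache_gib(seq_len, n_layers, n_kv_heads, head_dim, bits_per_coord):
    """KV cache size in GiB: one K and one V tensor per layer."""
    coords = 2 * n_layers * n_kv_heads * head_dim * seq_len
    return coords * bits_per_coord / 8 / 2**30

# Hypothetical 70B-class config: 80 layers, 8 KV heads, head dim 128,
# 128k-token context.
baseline = kv_cache_gib(128_000, 80, 8, 128, FP16_BITS)
compressed = kv_cache_gib(128_000, 80, 8, 128, TURBOQUANT_BITS)
print(f"fp16 KV cache:  {baseline:.1f} GiB")   # ~39.1 GiB
print(f"3-bit KV cache: {compressed:.1f} GiB")  # ~7.3 GiB
print(f"compression:    {baseline / compressed:.1f}x")  # 16/3 ~ 5.3x
```

At these assumed dimensions the fp16 cache alone approaches the capacity of a single accelerator, which is why a 5-6x cut translates so directly into serving cost.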
