NVIDIA’s new B200 GPU vs last year’s H200

Hey there!

Welcome back to The Pulse, where we dive into interesting AI stories and trends backed by data, all presented through simple visuals.

> the benchmark tests real-world cluster behavior under load

> at 1K requests: B200 outputs ~39K tokens/s vs H200’s ~13K

> B200 is ~1.3× faster per query at low load and ~3.5× at high load, while the H200 chokes under pressure

> higher throughput and faster responses, with no tradeoff between the two

> i.e., it handles 3× more users while sustaining 3.5× faster responses - no latency collapse at scale, ideal for agent-heavy use

> major cost advantage per inference - more capacity, better $/compute efficiency
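The headline numbers above reduce to simple ratios; a minimal sketch using only the figures quoted (~39K vs ~13K tokens/s at ~1K concurrent requests) and a hypothetical equal hourly GPU price, not real vendor pricing:

```python
# Headline numbers quoted above (approximate, at ~1K concurrent requests)
b200_tps = 39_000   # tokens/s, B200 cluster
h200_tps = 13_000   # tokens/s, H200 cluster

throughput_ratio = b200_tps / h200_tps
print(f"throughput advantage: ~{throughput_ratio:.1f}x")  # ~3.0x

# At an equal (hypothetical) hourly GPU price, cost per token falls
# in direct proportion to throughput:
cost_per_token_ratio = h200_tps / b200_tps  # B200 $/token relative to H200
print(f"B200 cost per token: ~{cost_per_token_ratio:.2f}x of H200's")  # ~0.33x
```

The same ratio drives the "$/compute efficiency" point: tripling tokens/s at constant hardware cost cuts cost per token to roughly a third.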

> largest datacenter M&A in history - a ~$40B deal for Aligned Data Centers and its ~5 GW of capacity

> Aligned platform spans 80+ AI-optimized, liquid-cooled campuses across US + LATAM

> BlackRock’s GIP now majority owner/operator; Microsoft, NVIDIA, and xAI hold minority stakes

> 1st datacenter M&A explicitly structured around AI-specific compute demand

> consortium move to lock in GPU rack & power supply through 2028

> OpenEvidence (AI-driven medical search) raised ~$485M in 2025 across 3 up-rounds:

  • $75M in Feb at ~$1B

  • $210M in Jul at ~$3.5B

  • $200M in Oct at ~$6B (Google Ventures, Sequoia, Blackstone, Thrive, Coatue)

> valuation climbed ~6× in 8 months - one of the fastest valuation ramps for a non-frontier AI startup

> GV’s repeat lead (Series B & C) signals long-term strategic confidence in the AI + healthcare play
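The funding arithmetic above checks out directly; a minimal sketch using only the round figures quoted (amounts in $M, valuations in $B):

```python
# Rounds as quoted above: (amount raised $M, post-round valuation $B)
rounds = {
    "Feb": (75, 1.0),
    "Jul": (210, 3.5),
    "Oct": (200, 6.0),
}

total_raised = sum(amount for amount, _ in rounds.values())
print(f"total raised in 2025: ~${total_raised}M")  # ~$485M

multiple = rounds["Oct"][1] / rounds["Feb"][1]
print(f"valuation step-up Feb->Oct: ~{multiple:.0f}x")  # ~6x
```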