Google's Gemma 3: LMSYS Elo score vs. parameter size

Hey there!

Welcome back to The Pulse, where we dive into interesting AI stories and trends backed by data, all presented through simple visuals.

> Google's new Gemma 3 released last week

> 4 sizes out: 1B, 4B, 12B, 27B; multimodal (4B and up) & supports 140+ languages

> optimized to run on phones & laptops

> 27B model scores an Elo of 1338, ~98% of DeepSeek R1's score (see the quick arithmetic after these notes) + can run on a single NVIDIA H100 GPU

> higher score than o3-mini (high), Claude 3.7 Sonnet, o1-preview, etc.

> multimodal + multilingual, yet even more compact and still high-performing

> can also be deployed on consumer hardware (see the sketch after these notes) + licensed for commercial use
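
Two quick back-of-envelope checks on the numbers above, as a Python sketch: the ~98% figure follows from comparing 1338 against DeepSeek R1's LMArena score (assumed here to be roughly 1363 around the Gemma 3 release; that value is not from the chart itself), and the single-H100 claim follows from a rough 2-bytes-per-parameter bf16 weight estimate that ignores KV cache and activations.

```python
# Back-of-envelope checks for the notes above.
gemma3_27b_elo = 1338
deepseek_r1_elo = 1363  # assumed reference value around the Gemma 3 release

print(f"Gemma 3 27B reaches {gemma3_27b_elo / deepseek_r1_elo:.1%} of R1's score")
# -> ~98.2%

# Rough weight-only memory estimate: 2 bytes per parameter in bf16,
# ignoring KV cache and activation overhead.
def bf16_weight_gb(params_billion: float) -> float:
    return params_billion * 2  # billions of params x 2 bytes ~= GB

for size in (1, 4, 12, 27):
    print(f"Gemma 3 {size}B: ~{bf16_weight_gb(size):.0f} GB of bf16 weights")
# 27B -> ~54 GB, which fits on a single 80 GB H100;
# the 1B/4B variants are small enough for laptops and phones, especially quantized.
```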
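
And a minimal sketch of what "deployed on consumer hardware" can look like in practice, assuming the Hugging Face model id google/gemma-3-1b-it and the standard transformers text-generation pipeline; the exact model id, the required transformers version, and the license-gating steps are assumptions, not from the notes above.

```python
# Minimal local-inference sketch. Assumes: a transformers release with
# Gemma 3 support (>= 4.50), accelerate installed for device_map="auto",
# and that you have accepted the Gemma license on Hugging Face and are logged in.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",  # smallest, text-only variant; fine for a laptop
    torch_dtype=torch.bfloat16,
    device_map="auto",  # falls back to CPU if no GPU is available
)

messages = [{"role": "user", "content": "Summarize Gemma 3 in one sentence."}]
out = pipe(messages, max_new_tokens=64)
print(out[0]["generated_text"])  # the reply is the last message in the returned chat
```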

> Anthropic's revenue today is at the same level OpenAI's was in Nov 2023

> puts the company 17 months behind OpenAI

> likeliest revenue projection for 2025: $2B; optimistic projection: $4B

> most revenue likely coming from API

> Claude is suspected to power the AI tool Manus, which reportedly pays Anthropic ~$2 per task on average