At Cerebras AI Day, we unveiled the next chapter of the Cerebras AI platform, new state-of-the-art AI models, and our latest AI supercomputers.
> Cerebras announces CS-3, the world’s fastest AI chip with a whopping 4 trillion transistors
> Cerebras selects Qualcomm to deliver unprecedented performance in AI Inference
> Cerebras and G42 break ground on Condor Galaxy 3, an 8 exaFLOPs AI Supercomputer
138. Cerebras AI Applications & Research Panel
Panelists:
> Praneetha Elugunti, Mayo Clinic
> Jim Culver, GSK
> Tim Bishop, Mayo Clinic
> Irina Rish, University of Montreal
> Andy Hock, Cerebras
141. Cerebras x Qualcomm Technology Partnership: Reducing Inference Cost by 10x
> Cerebras CS-3: AI training
> Qualcomm Cloud AI 100 Ultra: AI inference
142. Jointly optimized software stack for cost-efficient LLMs

| Cerebras Stack | Qualcomm Stack |
| --- | --- |
| Sparse training | Sparse inference |
| Train in FP16 | Compile & run in MX6 |
| Train large + small models | Apply speculative decoding |
| Network architecture search | Compile & run on Cloud AI 100 Ultra |
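The "train large + small models" pairing is what enables speculative decoding at inference time: a small draft model cheaply proposes a run of tokens, and the large target model verifies them, producing the same output as decoding with the large model alone. A minimal greedy-decoding sketch, using toy stand-in "models" (plain functions, not the Cerebras or Qualcomm APIs; all names here are illustrative):

```python
def speculative_decode(target, draft, prompt, num_tokens, k=4):
    """Greedily generate num_tokens from `target`, using `draft` to propose k-token runs.

    `target` and `draft` are toy callables: token sequence -> next token.
    Output is identical to decoding with `target` alone; the draft model
    only changes how much work the target model does per accepted token.
    """
    seq = list(prompt)
    while len(seq) - len(prompt) < num_tokens:
        # 1. Draft model proposes up to k tokens autoregressively (cheap).
        ctx = list(seq)
        proposal = []
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target model verifies the run; in a real engine this is a single
        #    batched forward pass rather than a Python loop.
        for t in proposal:
            if len(seq) - len(prompt) >= num_tokens:
                break
            expected = target(seq)
            seq.append(t if t == expected else expected)
            if t != expected:
                break  # first disagreement: discard the rest of the draft run
    return seq


# Toy models: target counts mod 10; draft agrees except right after a 5.
def target_model(seq):
    return (seq[-1] + 1) % 10

def draft_model(seq):
    return 0 if seq[-1] == 5 else (seq[-1] + 1) % 10

print(speculative_decode(target_model, draft_model, [0], 9))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Because the target model corrects every draft disagreement, acceptance rate only affects speed, never output quality, which is why a jointly optimized large/small model pair pays off.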
145. G42 across the Entire AI Value Chain
> Customer & industry tailored solutions
> Data centers
> Compute infrastructure
> Cloud platforms
> AI model development
> Cloud & enterprise AI deployment
> Application development
146. The world’s largest open-source Arabic LLM
> 30B-parameter bilingual Arabic-English model
> 476B Arabic tokens; 1.63T total tokens
> Trained on the Condor Galaxy 1 and 2 AI supercomputers