mlperf

Nvidia's B200 GPU and Google's Trillium TPU debut on the MLPerf Training v4.1 benchmark charts; the B200 roughly doubled the H100's performance on some tests

Nvidia, Oracle, Google, Dell and 13 other companies reported how long it takes their computers to train the key neural networks in use today. Among those results was the first glimpse of Nvidia's next-generation GPU, the B200, and of Google's upcoming accelerator, called Trillium. The B200 posted a…




mlperf

NVIDIA: Blackwell Delivers Next-Level MLPerf Training Performance

Nov. 13, 2024 — Generative AI applications that use text, computer code, protein chains, summaries, video and even 3D graphics require data-center-scale accelerated computing to efficiently train the large language models […]





mlperf

MLPerf Releases Latest Inference Results and New Storage Benchmark

MLCommons this week issued the results of its latest MLPerf Inference (v3.1) benchmark exercise. Nvidia's accelerators were again the top performers, but Intel (Xeon CPU) and Habana (Gaudi1 and 2) […]





mlperf

MLPerf Training 4.0 – Nvidia Still King; Power and LLM Fine Tuning Added

There are really two stories packaged in the most recent MLPerf Training 4.0 results, released today. The first, of course, is the results. Nvidia (currently king of accelerated computing) wins […]
