Graphics chip maker Nvidia has once again smashed performance records for its AI computing platform in the latest round of MLPerf benchmark tests, extending its performance lead in AI hardware, software, and services.
The company won every test across all six application areas for data center and edge computing systems in the second version of MLPerf Inference.
The MLPerf benchmarks are administered by an industry consortium of the same name.
Organizations across a wide range of industries are already tapping into the Nvidia A100 Tensor Core GPU’s exceptional inference performance to take AI from their research groups into daily operations.
“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, general manager and vice president of Accelerated Computing at Nvidia.
“The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”
Nvidia extended its lead on the MLPerf benchmark, with its A100 chip delivering AI inference up to 237 times faster than CPUs, enabling businesses to move AI from research into production.
For the first time, NVIDIA GPUs now offer more AI inference capacity in the public cloud than CPUs.
Total cloud AI inference compute capacity on NVIDIA GPUs has been growing roughly 10x every two years.
The benchmarks also showed that the NVIDIA T4 Tensor Core GPU remains a solid inference platform for mainstream enterprise servers, edge servers, and cost-effective cloud instances.
“Financial institutions are using conversational AI to answer customer questions faster; retailers are using AI to keep shelves stocked; and healthcare providers are using AI to analyze millions of medical images to more accurately identify disease and help save lives,” the company said.