Nvidia is using deceptive practices and abusing its dominant market position to eliminate competition, according to Andrew Feldman, CEO of Cerebras Systems, who spoke out after Nvidia unexpectedly announced its latest GPU product roadmap in October 2023.
Nvidia introduced new graphics cards scheduled for annual release between 2024 and 2026, adding to the industry-leading A100 and H100 GPUs currently in high demand, with organizations across the industry gobbling them up for generative AI workloads.
But Feldman called the news a “pre-announcement” when speaking to HPCWire, emphasizing that the company has no obligation to go through with releasing any of the components it has announced. He argued that the move only sows confusion in the market, especially given that Nvidia was roughly a year late with the H100 GPU. And he doubts that Nvidia can carry out this strategy, or even wants to.
Nvidia is just “throwing sand in the air”
In its announcement, Nvidia laid out annual progress on a single architecture, with the Hopper-Next GPU following the Hopper GPU in 2024, and the Ada Lovelace-Next GPU, successor to the Ada Lovelace graphics card, scheduled for release in 2025.
“Companies have been making chips for a long time, and no one has ever been able to be successful on a one-year pace because manufacturing plants don’t change on a one-year pace,” Feldman told HPCWire.
“In many ways, this has been a terrible time for Nvidia. Stability AI announced that they were going to go with Intel. Amazon said Anthropic was going to work with them. We announced a monster deal that would produce enough compute to make it clear that you could build… large clusters with us.
“The answer (from Nvidia), which does not surprise me, in the area of strategy, is not a better product. It’s… throwing sand in the air and moving your hands a lot. And you know, Nvidia was a year late with the H100.
Feldman’s company designed the world’s largest AI chip, the Cerebras Wafer-Scale Engine 2 processor, which measures 46,225 mm² and contains 2.6 trillion transistors across 850,000 cores.
He told the New Yorker that massive chips are better than smaller ones because cores communicate faster when they sit on the same piece of silicon rather than being scattered across a server room.