Taipei/Hong Kong
CNN
—
Nvidia, AMD and Intel have separately launched the next generation of their artificial intelligence (AI) chips in Taiwan, as a three-way race intensifies.
Jensen Huang, CEO of Nvidia (NVDA), said on Sunday that the company would roll out its most advanced AI chip platform, called Rubin, in 2026.
The Rubin platform will succeed Blackwell, which supplies chips for data centers and was announced only in March. Nvidia dubbed it at the time “the most powerful chip in the world.”
The Rubin will feature new graphics processing units (GPUs), a new central processing unit (CPU) called Vera and advanced networking chips, Huang said in a speech at National Taiwan University in Taipei.
“Today we are on the cusp of a major change in computing,” Huang told the audience before the opening of Computex, a technology fair held annually in Taiwan. “The intersection of AI and accelerated computing is poised to redefine the future.”
He revealed a roadmap for new semiconductors that will arrive at a “one-year pace.”
Investors have driven up shares of microchip companies amid the boom in generative AI. Shares of market leader Nvidia have more than doubled over the past year.
“Nvidia clearly intends to maintain its dominance for as long as possible and in the current generation there is nothing on the horizon to challenge that,” said Richard Windsor, founder of Radio Free Mobile, a research firm focused on the digital and mobile ecosystem.
Nvidia accounts for about 70% of AI semiconductor sales. But competition is increasing, with major competitors AMD (AMD) and Intel (INTC) introducing new products in an effort to challenge Nvidia’s dominance.
On Monday, AMD CEO Lisa Su unveiled the company’s latest AI processors and a plan to develop new products over the next two years in Taipei.
Its next-generation MI325X accelerator will be available in the fourth quarter of this year, she said.
A day later, Intel CEO Pat Gelsinger announced the sixth generation of the company’s Xeon data center chips and its Gaudi 3 AI accelerator chips. He touted the latter, which competes with Nvidia’s H100, as being a third cheaper than its rivals.
Global competition to create generative AI applications has led to high demand for cutting-edge chips used in data centers to support these programs.
Nvidia and AMD, which are both led by Taiwan-born American CEOs who are related, were once best known among gamers for selling GPUs that render the visuals in video games, helping them come to life.
Although the two still compete in this area, their GPUs are now also used to power generative AI, the technology behind newly popular systems such as ChatGPT.
“AI is our number one priority, and we are at the start of an incredibly exciting time for the industry,” Su said.
“We launched the MI300X last year with industry-leading inference performance, memory size and compute capabilities, and we have now expanded our roadmap to an annual cadence, which means a new family of products every year,” she said.
The new chip will succeed the MI300 and offer more memory, faster memory bandwidth and better computing performance, Su added. The company will launch a new family of products each year, with the MI350 planned for 2025 and the MI400 a year later.