AI memory is sold out, causing an unprecedented surge in prices

All computing devices require a part called memory, or RAM, for short-term data storage, but this year, there won’t be enough of these essential components to meet worldwide demand.

That’s because companies like Nvidia, Advanced Micro Devices and Google need so much RAM for their artificial intelligence chips, and those companies are the first ones in line for the components.

Three primary memory vendors — Micron, SK Hynix and Samsung Electronics — make up nearly the entire RAM market, and their businesses are benefitting from the surge in demand.

“We have seen a very sharp, significant surge in demand for memory, and it has far outpaced our ability to supply that memory and, in our estimation, the supply capability of the whole memory industry,” Micron business chief Sumit Sadana told CNBC this week at the CES trade show in Las Vegas.

Micron’s stock is up 247% over the past year, and the company reported that net income nearly tripled in the most recent quarter. Samsung this week said that it expects its December quarter operating profit to nearly triple as well. Meanwhile, SK Hynix is considering a U.S. listing as its stock price in South Korea surges, and in October, the company said it had secured demand for its entire 2026 RAM production capacity.

Now, prices for memory are rising.

TrendForce, a Taipei-based researcher that closely covers the memory market, this week said it expects average DRAM memory prices to rise between 50% and 55% this quarter versus the fourth quarter of 2025. TrendForce analyst Tom Hsu told CNBC that type of increase for memory prices was “unprecedented.”

Three-to-one basis

Chipmakers like Nvidia surround the part of the chip that does the computation — the graphics processing unit, or GPU — with several blocks of a fast, specialized component called high-bandwidth memory, or HBM, Sadana said. Producing HBM consumes silicon wafers at roughly a three-to-one rate versus traditional memory chips, which is why booming AI demand squeezes the supply of every other kind of RAM. HBM is often visible when chipmakers hold up their new chips. Micron supplies memory to both Nvidia and AMD, the two leading GPU makers.

Nvidia’s Rubin GPU, which recently entered production, comes with up to 288 gigabytes of next-generation HBM4 memory per chip. The HBM is installed in eight visible blocks above and below the processor, and the GPU will be sold as part of a single server rack called the NVL72, which fittingly combines 72 of those GPUs into one system. By comparison, smartphones typically come with 8GB or 12GB of lower-power DDR memory.
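
For a sense of scale, here is a back-of-the-envelope sketch, using only the figures above, of how much HBM a single rack consumes compared with a phone:

```python
# Back-of-the-envelope math using the figures cited above.
HBM_PER_GPU_GB = 288   # HBM4 per Rubin GPU, per the article
GPUS_PER_RACK = 72     # NVL72 combines 72 GPUs in one rack
PHONE_RAM_GB = 12      # a typical high-end smartphone

rack_hbm_gb = HBM_PER_GPU_GB * GPUS_PER_RACK
print(f"HBM per NVL72 rack: {rack_hbm_gb} GB (~{rack_hbm_gb / 1024:.1f} TB)")
print(f"Equivalent smartphones: {rack_hbm_gb // PHONE_RAM_GB:,}")
# HBM per NVL72 rack: 20736 GB (~20.2 TB)
# Equivalent smartphones: 1,728
```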

[Image: Nvidia founder and CEO Jensen Huang introduces the Rubin GPU and the Vera CPU during Nvidia Live at CES 2026, ahead of the annual Consumer Electronics Show in Las Vegas, Nevada, on Jan. 5, 2026. Patrick T. Fallon | AFP | Getty Images]

‘Memory wall’

AI researchers started to see memory as a bottleneck just before OpenAI’s ChatGPT hit the market in late 2022, said Majestic Labs co-founder Sha Rabii, an entrepreneur who previously worked on silicon at Google and Meta.

Prior AI systems were designed for models like convolutional neural networks, which require less memory than the large language models, or LLMs, that are popular today, Rabii said.

While AI chips themselves have been getting much faster, memory has not, he said, which leads to powerful GPUs waiting around to get the data needed to run LLMs.

“Your performance is limited by the amount of memory and the speed of the memory that you have, and if you keep adding more GPUs, it’s not a win,” Rabii said.

The AI industry refers to this as the “memory wall.”
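
A toy model makes the bottleneck concrete. The sketch below uses hypothetical round numbers for a modern GPU, not specs from the article, to compare how long the math for one inference step takes against how long the chip waits for the model's weights to arrive from memory:

```python
# Illustrative "memory wall" arithmetic. All hardware and model numbers
# below are hypothetical round figures, not specs cited in the article.
COMPUTE_TFLOPS = 1000        # peak math throughput, trillions of ops/sec
MEM_BANDWIDTH_TBPS = 4       # HBM bandwidth, terabytes/sec

# Generating one token of a large language model touches every weight once.
model_params = 70e9          # a hypothetical 70B-parameter model
bytes_per_param = 2          # 16-bit weights
flops_per_token = 2 * model_params  # ~2 operations per parameter per token

compute_time = flops_per_token / (COMPUTE_TFLOPS * 1e12)
memory_time = (model_params * bytes_per_param) / (MEM_BANDWIDTH_TBPS * 1e12)

print(f"compute: {compute_time*1e3:.2f} ms, memory: {memory_time*1e3:.2f} ms")
# compute: 0.14 ms, memory: 35.00 ms -- the GPU idles, waiting on memory
```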


“The processor spends more time just twiddling its thumbs, waiting for data,” Micron’s Sadana said.

More and faster memory means that AI systems can run bigger models, serve more customers simultaneously and extend the “context windows” that allow chatbots and other LLMs to remember previous conversations with users, adding a touch of personalization to the experience.
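
One reason context windows are so memory-hungry: during inference, an LLM keeps a “KV cache” holding every token of the conversation for each active user. A rough sizing sketch, with hypothetical model dimensions rather than figures from any vendor:

```python
# Rough KV-cache sizing for one user of a chatbot. All model dimensions
# here are hypothetical, chosen only to show the shape of the math.
layers = 80               # transformer layers
kv_heads = 8              # key/value heads (grouped-query attention)
head_dim = 128            # dimension per head
bytes_per_value = 2       # 16-bit precision
context_tokens = 128_000  # a long context window

# Each token stores one key and one value vector per layer.
bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
cache_gb = bytes_per_token * context_tokens / 1e9
print(f"{bytes_per_token/1024:.0f} KiB per token -> {cache_gb:.1f} GB per user")
# 320 KiB per token -> 41.9 GB per user: more RAM means more concurrent users
```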

Majestic Labs is designing an AI system for inference with 128 terabytes of memory, about 100 times more than some current AI systems, Rabii said, adding that the company plans to eschew HBM in favor of lower-cost options. Rabii said the additional RAM, and the architecture built around it, will let its computers serve significantly more users at once than other AI servers while using less power.

Sold out for 2026

Wall Street has been asking companies in the consumer electronics business, like Apple and Dell Technologies, how they will handle the memory shortage and whether they might be forced to raise prices or cut margins. These days, memory accounts for about 20% of the hardware cost of a laptop, Hsu said. That’s up from between 10% and 18% in the first half of 2025.

In October, Apple finance chief Kevan Parekh told analysts that his company was seeing a “slight tailwind” on memory prices, but he downplayed it as “nothing really to note there.”

But in November, Dell said it expected its cost basis for all of its products to go up as a result of the memory shortage. COO Jeffrey Clarke told analysts that Dell planned to change its mix of configurations to minimize the price impact, but he said the shortage will likely affect retail prices for devices.

“I don’t see how this will not make its way into the customer base,” Clarke said. “We’ll do everything we can to mitigate that.”

Even Nvidia, which has emerged as the biggest customer in the HBM market, is facing questions about its ravenous memory needs — in particular, about its consumer products.

At a press conference Tuesday at CES, Nvidia CEO Jensen Huang was asked whether he was concerned that the company’s gaming customers might grow resentful of AI because the memory shortage is driving up prices for game consoles and graphics cards.

Huang said Nvidia is a very large customer of memory and has long-standing relationships with the companies in the space, but that, ultimately, more memory factories will be needed because the needs of AI are so high.

“Because our demand is so high, every factory, every HBM supplier, is gearing up, and they’re all doing great,” Huang said.

Micron can meet at most two-thirds of some customers’ medium-term memory requirements, Sadana said. But the company is currently building two large factories, or fabs, in Boise, Idaho, that will start producing memory in 2027 and 2028, he said. Micron also plans to break ground on a fab in the town of Clay, New York, that he said is expected to come online in 2030.

But for now, “we’re sold out for 2026,” Sadana said.


