“The industry-wide memory shortage and price increases are likely to define the overall scale of the handset industry through the fiscal year. Several handsets [manufacturers], especially in China, are taking a cautious approach in reducing their chipset inventory.”
Qualcomm Inc.
Semiconductor and telecom|Feb. 2026
“In automobile business, in addition to the volume decline mainly in Asia, we reflect a volume reduction by 110,000 units [cars] in the North American region due to the impact of the semiconductor shortages.”
Honda Motor Co.
Automobiles|Nov. 2025
“Like others, we are seeing increased input costs driven primarily by the rising prices of DRAM and NAND. We expect this volatility to remain throughout fiscal '26 and likely into fiscal '27.”
HP Inc.
Computer hardware|Feb. 2026
“The business environment has changed drastically, from US tariffs and soaring material prices to semiconductor shortages. In terms of profits, we are facing headwinds.”
Yamaha Motor Co.
Automobiles|Feb. 2026
“This is the most significant disconnect between demand and supply in terms of magnitude as well as time horizon that we’ve experienced in my 25 years in the industry.”
Micron Technology Inc.
Semiconductor|Dec. 2025
“This reflects about a 5% decrease in revenues year-over-year driven by the current global semiconductor shortage.”
Corsair Gaming Inc.
Consumer electronics|Feb. 2026
“We are operating at a high level of proficiency of changing our price as our input costs are rapidly changing.”
Dell Technologies Inc.
Computer hardware|Feb. 2026
“There’s no relief as far as I know. There’s no relief until 2028.”
Intel Corp.
Semiconductor|Feb. 2026
“We do continue to see market pricing for memory increasing significantly.”
Apple Inc.
Consumer electronics|Jan. 2026
“We’ve got two choices: hit the chip wall or make a fab.”
Tesla Inc.
Automobiles|Jan. 2026
Sources: inSpectrum Tech Inc.; Bloomberg
Why the AI Boom Will Make Phones, Cars and Electronics More Expensive
AI demand is triggering a historic memory-chip shortage. Meeting exponential demand for chips will be expensive and maybe even impossible.
To secure capacity for AI systems, tech giants are buying up memory chips like never before — and paying a premium for multiyear contracts that guarantee supply in the future.
That’s prompted chipmakers to allocate more of their production to these higher-margin orders, leaving fewer memory chips available for things like consumer devices and cars. So prices have surged. The most dramatic spike has been in DRAM, short-term memory used in data centers, PCs, smartphones and even vehicles. In some cases, spot prices have jumped nearly 700% in the past year.
[Chart: DRAM spot and contract prices]
The price of NAND storage — the flash memory that stores photos, games and files on everyday devices — is rising quickly as well.
Companies have said the crunch is already inflating the cost of AI infrastructure and everything else that relies on memory: laptops, gaming consoles, smartphones and more. It threatens to squeeze margins, delay product launches and potentially render some devices too unprofitable to make.
Companies that make memory chips have always had to deal with periods of oversupply and undersupply. These manufacturers plan years in advance for expected demand and, inevitably, they sometimes bet wrong. What’s happening in the industry today, though, goes far beyond the usual whiplash.
With the AI boom pressuring supply, the memory chip crunch is “a crisis like no other,” the market research firm IDC said. And the AI buildout is only accelerating; big tech companies are on track to spend a staggering $650 billion in 2026, up about 80% from last year’s record. So even if chipmakers ramp up production, potential relief from the shortage is more than a year away — if not longer.
Already, leaders at tech companies including Apple Inc., Alphabet Inc., and Tesla Inc. have been speaking about the impact of the shortage on profitability and even timelines for AI progress. Google DeepMind’s Demis Hassabis called it a “choke point” for the industry. On Tesla’s earnings call in late January, Chief Executive Officer Elon Musk even raised the idea of Tesla producing its own memory chips. But production of the chips that are especially needed for AI use requires skills only three companies have.
Why are memory chips so important?
Memory chips are essential to modern computing. They don’t perform calculations themselves, but they store data and feed it to a central processing unit (CPU) — the “brain” of a device. These chips are embedded in smartphones, gaming consoles, cars and home electronics — and, increasingly, AI data centers.
Without them, digital systems would grind to a halt. Apps and computer programs would take a long time to load, videos would buffer endlessly, and there’d be no perky replies from Siri or Alexa.
For decades, computers have relied on two main types of memory. The first is NAND (an abbreviation for Not And), a form of flash storage that retains data even when a device is powered off. It is used for long-term storage in products such as solid-state drives (SSDs) and USB drives.
The second is DRAM (dynamic random-access memory), the most common working memory in computers. It temporarily stores data that the CPU is actively using. In a personal computer, DRAM typically sits on removable modules attached to the motherboard.
This is DDR5, a standard DRAM.
Think of a memory chip as the parking lot at a mall. The mall is like the CPU where all of the activity happens, and cars are like packets of data — they have to wait somewhere until they’re needed.
The rise of artificial intelligence has driven a new way of packaging DRAM chips known as high-bandwidth memory, or HBM. It involves stacking multiple memory dies — individual slices of silicon that store data — vertically and positioning them close to the processor, dramatically increasing data-transfer speeds compared with conventional memory.
HBM is like an upgraded multi-story parking garage. It has many ramps, so many cars can get in and out at the same time — and much faster.
Thousands of microscopic holes, called through-silicon vias (TSVs), drilled down through the stack allow data to travel directly and simultaneously — and therefore more quickly — between layers.
Transferring 1 terabyte can take more than 10 seconds with a DDR5 chip — a widely deployed type of conventional DRAM. With a single HBM3 stack, it’s about 10 times faster.
That speed is critical for AI systems, which must move enormous amounts of data without bottlenecks. By cutting transfer times, HBM helps models load and process data more quickly, making it one of the most sought-after components in the AI supply chain.
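The transfer-time gap above is simple arithmetic: time equals data volume divided by bandwidth. The sketch below uses illustrative peak-bandwidth figures (roughly 76.8 GB/s for dual-channel DDR5-4800 and roughly 819 GB/s for one HBM3 stack — assumed round numbers, not exact specifications for any product) to show where the roughly tenfold difference comes from.

```python
# Back-of-the-envelope transfer-time comparison for 1 TB of data.
# Bandwidth figures are illustrative assumptions, not exact specs:
#   dual-channel DDR5-4800: ~76.8 GB/s peak
#   one HBM3 stack:         ~819 GB/s peak
DATA_GB = 1000  # 1 terabyte, expressed in gigabytes

bandwidth_gbps = {
    "DDR5 (dual-channel)": 76.8,
    "HBM3 (one stack)": 819.0,
}

for name, bw in bandwidth_gbps.items():
    seconds = DATA_GB / bw  # time = data volume / bandwidth
    print(f"{name}: {seconds:.1f} s to move {DATA_GB} GB")

speedup = bandwidth_gbps["HBM3 (one stack)"] / bandwidth_gbps["DDR5 (dual-channel)"]
print(f"HBM3 is roughly {speedup:.0f}x faster under these assumptions")
```

Under these assumed numbers, the DDR5 transfer takes about 13 seconds versus just over 1 second for HBM3 — consistent with the “more than 10 seconds” and “about 10 times faster” figures cited above.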
What sparked the memory chip squeeze?
As AI models grow larger and more complex, servers are being designed with far more HBM than earlier generations. They also require increasing amounts of conventional DRAM and NAND to handle training data and support cloud workloads.
Since 2023, technology companies such as Amazon.com Inc., Alphabet Inc., Microsoft Corp., and Meta Platforms Inc. have collectively committed hundreds of billions of dollars to expand data centers and computing capacity, intensifying the race to build ever-larger facilities to power AI applications.
Strong Demand From Artificial Intelligence
AI-centric firms are becoming an increasingly important source of revenue for SK Hynix and Micron Technology, two of the world’s three major memory chip manufacturers
Source: Supply-chain data compiled by Bloomberg, based on company disclosures or other publicly available sources
Notes: The graphic captures only a subset of publicly disclosed customers for the two companies. 2022 data captures customers as of March 1, 2022, while 2026 data captures customer information as of March 1, 2026. Total revenue reflects the four fiscal quarters ending closest to March 1 each year for SK Hynix and Micron — about $66 billion in the 2022 chart and more than $100 billion in 2026. Samsung, one of the three major memory-chip makers, has been excluded as the company has a broader business outside of memory products.
Data center demand for DRAM surged to around 50% of global consumption in 2025, up sharply from 32% five years earlier, according to Bloomberg Intelligence.
That share is expected to climb further. By 2030, AI servers are projected to account for more than 60% of global consumption.
Part of that surge is being driven by the rise of so-called AI agents — software designed to run continuously and complete tasks with limited human supervision.
What’s the fallout?
Companies building AI systems are willing to pay a premium and sign longer-term supply agreements to secure chips. In response, memory-chip makers are steering capital and new production capacity toward making these higher-margin HBM chips, and away from the conventional DRAM used in mainstream devices.
Manufacturers of smartphones, PCs, gaming consoles and other devices are now competing for a much tighter supply of memory, even to meet routine demand. Many have effectively been pushed to the back of the line as AI customers get priority access.
The result is a sharp rise in what it costs to manufacture consumer electronics.
HP Inc., one of the world’s three largest PC makers, says memory now accounts for roughly 35% of the cost of materials needed to build a laptop, up from about 15% to 18% just a quarter earlier.
It’s now raising the price of its computers to offset the increase. It is also changing some product configurations — offering models with lower memory capacity — which could affect the performance and longevity of the devices.
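The effect of memory’s rising share on a laptop’s total bill of materials can be estimated with a little algebra: if non-memory costs hold steady while memory’s share jumps from about 16% to 35%, the whole bill rises by nearly 30%. The sketch below works through that math; the $500 starting figure is hypothetical, and only the percentage shares come from HP’s reported range.

```python
# Rough illustration of how memory's growing share inflates a laptop's
# bill of materials (BOM). The $500 starting BOM is a hypothetical figure;
# the 16% and 35% memory shares reflect HP's reported before/after range.
old_bom = 500.0          # hypothetical total cost of materials
old_memory_share = 0.16  # midpoint-ish of the prior 15%-18% range
new_memory_share = 0.35  # memory's share after price increases

# Assume every non-memory component cost stays flat.
other_costs = old_bom * (1 - old_memory_share)

# Solve for the new total: other_costs must equal 65% of the new BOM.
new_bom = other_costs / (1 - new_memory_share)

increase = new_bom / old_bom - 1
print(f"New BOM: ${new_bom:.0f} (up {increase:.0%})")
```

Under these assumptions the bill of materials climbs from $500 to about $646 — a roughly 29% increase driven entirely by the memory line item.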
Dell began raising prices for its servers in the middle of December and did the same for PCs in January.
The strain extends beyond PCs. Counterpoint Research estimates that higher memory prices could lift the cost of materials required for making smartphones by 15% or more in coming quarters.
To cope, handset makers are also reducing memory in some of their models and reconsidering low-margin entry-level devices altogether. IDC projects the global smartphone market will shrink by 12.9% in 2026, the sharpest drop on record for the industry.
The impact may be particularly acute in China — where competition in lower-priced smartphones is most intense — according to Qualcomm Inc., the largest supplier of smartphone processors.
Gaming console makers are facing similar constraints. Companies including Sony Group Corp. and Nintendo Co. have warned that tighter component supply and higher input costs could influence the price of their products and even delay future launches.
Can’t factories just churn out more chips?
The global memory market is dominated by just three companies — Samsung Electronics Co. and SK Hynix Inc. in South Korea, and Micron Technology Inc. in the US — a concentration shaped by decades of volatile profits and the soaring cost of building and equipping factories.
Although these companies are racing to expand capacity through new or upgraded manufacturing and advanced packaging facilities, such projects require years and billions of dollars before they generate significant output.
HBM chips pose an additional challenge: they are unusually difficult to manufacture at scale. They are built by stacking multiple memory dies, each thinner than a human hair, with microscopic precision. A single defect can compromise an entire stack, making production slower and yields lower than for conventional DRAM.
Why HBM Is So Hard to Manufacture
[Scale comparison: human hair, 60 microns; pollen, 25 microns; wafer height, 20 microns; TSV column, 5 microns]
1 Thinning
For high-density vertical stacking, wafers are thinned to less than the thickness of a sheet of paper, a process that increases the risk of warping and cracking.
2 Drilling
“Drilling” (etching with special chemicals) through as many as 16 wafers in one stack demands extreme precision. Each stack needs thousands of these tiny channels — any slight defect can spoil the whole unit.
3 Electroplating
The TSV holes are filled — or electroplated — with copper to create conduits for both data and electricity. If there are gaps around the copper, transmission may fail.
4 Bonding
Each stack is bonded together with thousands of droplets of solder as small as 10 microns. If these “micro drops” aren’t bonded correctly, the transmission of data or electricity can fail.
Some versions of HBM also integrate small logic chips to help manage and route data, adding further complexity and consuming a disproportionate share of manufacturing capacity.
So, what happens now?
The memory chip industry has long struggled to match new production with swings in demand. Even as manufacturers expand capacity, they remain wary of repeating the boom-and-bust cycles that have previously wiped out profits and driven weaker players into bankruptcy.
As recently as 2023, Micron and SK Hynix lost billions of dollars during a prolonged industrywide glut after overestimating how long pandemic-era demand would last. While they are eager to capture AI-driven orders, they have little appetite for another supply glut — and the losses that come with it. Capacity additions are therefore likely to proceed cautiously, or at least more cautiously than many customers would prefer.
Making Memory Chips Is a Business of Ups and Downs
Gyrations in revenue can produce similar volatility in capital spending
Sources: Company filings; Bloomberg
What remains unclear is whether the industry is heading toward another familiar downturn or whether AI represents a structural shift that will keep memory demand elevated for many years, forcing chipmakers to commit to sustained expansion.
For now, companies tied to the data-center buildout are securing the memory they need to keep expanding. Their revenue and profits are climbing alongside those of the memory manufacturers benefiting from the surge.
For consumer electronics companies, however, the supply crunch could mean more expensive products, tighter margins and slower product upgrades.