In recent months, a remarkable shift has swept the stock market, fueled by specialized AI chips. The AI chip industry is experiencing a dramatic resurgence, marked by a series of compelling stories of wealth created by the AI boom.
On December 5, 2024, Marvell Technology, a key player in semiconductor solutions, saw its market valuation surge past $100 billion, overtaking the beleaguered Intel. Then, on December 14, Broadcom reached an astounding $1 trillion valuation, making it the eighth-largest publicly traded company in the U.S., behind only giants like Apple, Microsoft, Nvidia, Alphabet (Google's parent company), Amazon, Meta, and Tesla.
Furthermore, on December 18, 2024, Cambricon Technologies' share price soared past 600 yuan, lifting its market capitalization beyond 250 billion yuan and ranking it third on the STAR Market, behind only SMIC and Haiguang Information Technology.
These three chip companies have seen astronomical growth in 2024. Marvell's stock is up more than 80% this year, Broadcom's shares have risen over 70% since August, and Cambricon's price has skyrocketed more than 560% since February, in what analysts are calling these companies' "Nvidia moment."
The commonality among these companies is their focus on developing ASIC chips tailored for massive AI infrastructures.
Interestingly, while the specialized AI chip market surged, Nvidia, which dominates the GPU market, suffered consecutive declines in its stock price.
This contrast has reverberated throughout the AI chip sector: dedicated AI chips are opening new opportunities, advancing aggressively in both data center markets and on stock exchanges, and crafting a compelling narrative for AI's future.
01.
Broadcom: Leading in Customized AI Chips
Teaming Up with Giants like Google and Apple
In its fiscal year 2023, Broadcom reported $3.8 billion in AI-related revenue, almost double the previous year's figure.
In fiscal year 2024, which ended this November, its AI business segment experienced explosive growth, with AI revenue reaching $12.2 billion, a staggering 220% increase year-on-year.
And this is just the beginning.
Broadcom attributes its robust AI revenue growth to its unique blend of customized AI chips and networking products, which are core components of two key AI market segments.
The company specializes in crafting customized AI chips for hyperscale enterprises, tailoring each design to a specific customer's needs, in contrast to general-purpose GPUs.
Compared to GPUs, customized ASICs contribute greatly to cost reduction and efficiency in targeted application scenarios.
Currently, cloud computing giants and AI firms are fervently building AI computing clusters. Broadcom stands to gain significantly: these enterprises tailor AI chips to their specific business needs, and Broadcom integrates its components around their core computing engines.
"We believe that there is tremendous opportunity in the AI space over the next three years. Specific hyperscale companies have begun their journey to develop custom AI accelerators, or XPUs," stated Hock Tan, Broadcom's CEO. Tan had earlier put the target market for AI compute and networking chips among the company's three key AI partners at around $15 to $20 billion, and predicted that by 2027 these firms will deploy 500,000 to 1 million custom AI chips in their data centers.
He now predicts that Broadcom's three major tech clients will spend $60 billion to $90 billion on ASIC chips and networking components by fiscal year 2027, well above the earlier $15-to-$20-billion expectation.
The three clients are speculated to be Google, Meta, and ByteDance. Among these, Google’s upcoming TPUv6 is poised to be a 3nm dedicated AI chip, forecasted to generate billions in revenues for Broadcom.
Broadcom is also collaborating with two other companies on custom AI chips, likely to be Apple and OpenAI.
Reports indicate that Apple is partnering with Broadcom to develop an AI server chip, code-named "Baltra," which is expected to enter mass production in 2026. Industry insiders speculate that Apple will lead the chip design while Broadcom provides key IP or support for modules and other components.
On another front, escalating AI computational demands require connecting vast numbers of computing chips into a unified system.
Nvidia has repeatedly emphasized the significance of advanced networking technologies in AI infrastructure, and communication and networking have always been Broadcom's strongholds. Furthermore, Broadcom's dedicated AI chips use co-packaged optics (CPO) devices to improve energy efficiency and scalability for complex computational demands.
Tan highlighted that within AI chip-related expenditures, the proportion of network chips would expand from the current 5% to 10% to around 15% to 20%.
Broadcom's multifaceted investments in AI infrastructure solidify its crucial position in the market.
As a top-tier network chip supplier and custom AI chip partner, Broadcom has chips embedded in nearly every data center supporting AI applications, providing connectivity and acceleration across the board.
02.
Marvell: Winning a Massive Five-Year Contract with Amazon
Collaborating with Three Storage Giants to Customize HBM
Marvell, too, is riding the wave of customized AI chips to reap benefits.
While Broadcom is ahead with its 3nm ASIC technology, Marvell follows about a year behind, leveraging its burgeoning portfolio of custom accelerator products and optical chips to capitalize on immense AI chip opportunities.
In early December, Marvell announced impressive third-quarter results: AI-related sales drove its custom chip and data center revenue up an astonishing 98% year-on-year, to more than 70% of total third-quarter revenue, compared with about 40% in the same quarter last year.
Marvell's CEO, Matt Murphy, attributed the stellar performance to custom AI chip projects entering production and strong demand for cloud interconnect products. He projected that the company will exceed its initial $1.5 billion AI revenue target for the fiscal year, and that the momentum will carry into fiscal year 2026, with total revenue forecast to soar by 40%.
Analysts predict that Marvell’s AI revenue could reach between $1.8 billion and $2 billion in fiscal year 2025, and potentially reach $5 billion in fiscal year 2026, far surpassing the conservative estimate of $2.5 billion set by the management.
In the realm of customized ASICs, Marvell partners with tech titans like Microsoft and Google. It has secured a five-year contract with Amazon to customize AI chips for AWS (Amazon Web Services), solidifying future revenue growth prospects for Marvell.
Analysts believe that AWS's Trainium AI chips are expected to help Marvell double its custom AI revenue by the fiscal year ending January 2026.
Reports from the data analytics platform Visible Alpha forecast that this partnership could propel Marvell's annual revenue in fiscal year 2026 beyond $8 billion, a 40% increase over current projections; the following year is expected to bring a further 20% rise as Marvell partners with another tech giant on custom chip development.
Analysts speculate this client could be Microsoft.
Investment bank Evercore ISI analyst Mark Lipacis predicts that by 2030, sales in the custom AI chip industry could reach between $30 billion and $50 billion. His report indicates that Marvell has the potential to capture one-third of this market.
Marvell's goal is to capture a 20% share of the custom chip total addressable market (TAM), which it expects to exceed $40 billion by fiscal year 2029. If successful, this would push its annual revenue beyond $16 billion.
Similar to Broadcom, Marvell is positioning itself to succeed in the AI market by enhancing data connectivity and AI performance.
The company is launching cutting-edge technologies targeting AI data centers, including a new 1.6Tbps optical chip set and customized HBM to accelerate data transmission and enhance AI performance, all while meeting the increasing demands for efficient computing.
Marvell is collaborating with Micron, Samsung, and SK Hynix to develop customized HBM for their XPUs, which is touted as an advancement in the design and delivery paradigm for AI accelerators.
When integrated into their XPU AI chips, customized HBM boosts available memory by up to 33% and performance by more than 25%, while also improving energy efficiency.
This customized HBM architecture optimizes the XPU's performance, power, die size, and cost by serializing and accelerating the I/O interfaces between the AI compute die and the HBM base dies.
03.
Cambricon: The New King of Chips in A-shares
A Staggering Increase of 400% This Year
While Nvidia and Broadcom are buzzing with excitement across the Pacific, Cambricon is enjoying a sustained rise in the A-share market.
Unlike its counterparts, Cambricon relies primarily on sales of its own AI chips and supporting infrastructure, with little collaboration with larger firms on custom chip designs.
From the beginning of the year to December 19, Cambricon’s stock price surged nearly 400%, with its market valuation skyrocketing from around 50 billion yuan to 265.5 billion yuan.
While the company has a robust presence, its underlying fundamentals are relatively weak.
Currently, Cambricon's revenue stems primarily from its cloud-based intelligent chips and acceleration card product lines, and the company has endured losses since its inception over eight years ago. Its performance reports indicate that for the first three quarters of 2024, revenue stood at 185 million yuan, against a staggering net loss of 728 million yuan and R&D expenditures of 659 million yuan.
This year, several of Cambricon's competitors have also begun their initial public offerings.
On September 11, Biren Technology filed for its initial public offering. Then, on November 13, Moore Threads completed its IPO registration with the Beijing Securities Regulatory Bureau.
The year 2025 will bring more variables to the AI chip market.
04.
Conclusion: 2024, the Year Specialized AI Chips Took Off
For Broadcom, Marvell, and Cambricon, 2024 is set to be an extraordinary year.
Thanks to an explosion of AI demand, they are viewed with unprecedented optimism within the AI chip sector, as reflected by record-high stock prices and market valuations.
Cloud computing behemoths and leading AI enterprises are increasingly striving to reduce their dependence on Nvidia, the reigning AI chip supplier, while exploring self-designed AI chips to optimize data center energy use. This trend benefits the U.S. leaders in AI and customized computing, Broadcom and Marvell, which are disrupting the competitive landscape of AI chips through collaborations with cloud companies and innovative networking solutions.
The optimistic sentiment surrounding AI is on the rise, fueled by relentless demand for computational power and the growth of the custom chip market; enthusiasts are eagerly anticipating the emergence of the next AI chip victor following Nvidia.
However, displacing Nvidia is no easy task.
Broadcom's AI business still constitutes just a fraction of Nvidia's overall revenue.