Whether AI ASICs can become a viable alternative to Nvidia's GPUs has been a major global debate for some time. We delve into how big ASICs could be in future AI computing, energy consumption, and AI capex budgets. We also identify the winners and least preferred stocks in the global ASIC supply chain.
- We size the cloud AI semi TAM into 2027e. Base case – US$238bn (demand-driven). Bull case – US$405bn, based on TSMC’s CoWoS capacity (supply-driven).
- Nvidia's GPU and CSPs' ASICs will both grow given global AI capex expansion. We forecast AI ASIC TAM to grow from US$12bn in 2024 to US$30bn in 2027.
- Within ASIC service providers, we expect Asian suppliers such as Alchip and MediaTek to gain market share from a low base.
- TSMC, Cadence, and Advantest should benefit from intensified competition in AI chip design.
Cloud service providers (CSPs) need alternatives even though Nvidia's GPU is performing well: We see common reasons for continued ASIC development: 1) optimization for internal workload; and 2) better price-performance. Over the long term, if they execute well, CSPs may enjoy greater bargaining power in AI semi procurement with their own custom silicon.
While application-specific integrated circuit (ASIC) investment is poised to continue, we remain bullish on Nvidia: We see its leadership position as sustainable due to: 1) continued systems advantage; 2) extensive R&D budget and leading scale; and 3) ubiquity across clouds.
Who will win in the cloud AI semi market in future? By segment:
- AI GPU: OW Nvidia (Top Pick)
- Global AI ASIC vendors: OW Broadcom, Alchip (Top Pick), Socionext
- EDA: OW Cadence
- Foundry: OW TSMC and its CoWoS subcontractor (ASE) and tool supplier (AllRing)
- Test: OW Advantest (Top Pick) and the testing ecosystem (e.g., KYEC)
- HBM: OW Samsung
Winners in AI ASIC? We believe major CSPs will continue to use Broadcom and Marvell to stay competitive in high-speed interface technology, but they also seek more efficient and cheaper alternatives. We see Asian vendors gaining share in 2026 – e.g., Alchip (AWS 3nm project) and MediaTek (Google 3nm TPU v7 for training). Socionext is gaining multiple CSP and automotive ASIC projects.
Who are least preferred? Merchant GPU alternatives (e.g. AMD), and mature node foundries that don't really have big AI exposure (e.g. UMC).
02 Six key charts on GPU vs. ASIC debates
Exhibit 1: Cloud AI semi TAM to grow at a 34% CAGR over 2023-2030. Custom AI chips (ASIC) to outgrow general-purpose chips over the long term.

Source: Gartner, Morgan Stanley Research (e) estimates
Exhibit 2: P/E multiples – the ASIC group once surpassed Nvidia, then de-rated following Nvidia's acceleration

Source: Company data, Morgan Stanley Research; Note: Alternative AI semi group: AMD, Alchip, GUC, Andes, Marvell, Broadcom. AI semi enabler group: TSMC, Synopsys, Cadence, ASML, BESI, ASMPT, Ibiden, KYEC, Advantest.
Exhibit 3: Global CoWoS allocation by customer. 27% reserved for ASIC in 2025.

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 4: Custom AI ASIC chip value expected to reach US$22bn in 2025. AWS Trainium2 contributes most of the growth.

Source: Company data, Morgan Stanley Research
Exhibit 5: Performance-to-cost ratio for AI inference. ASIC leads Nvidia.

Source: Company data, Morgan Stanley Research
Exhibit 6: Under our bull case (supply-driven) assumptions, the cloud AI semi TAM grows to US$235bn in 2025, vs. US$174bn in our base case (demand-driven).

Source: Company data, Morgan Stanley Research (e) estimates
03 Overview
Sizing up the cloud AI semi TAM in 2027e
We believe we underestimated the total cloud AI semi market size when we analyzed global AI semi trends back in 2023. Generative AI applications have been developing faster than we had expected, while the arms race across global AI capex has been much more intense than we had anticipated.
We now see global cloud AI capex in 2024 reaching US$150bn – 3x the 2023 value.
In this report, we estimate the growth of the cloud AI semi TAM into 2027e using scenario analysis.
- Our base case (demand-driven) TAM is US$238bn for cloud AI semis (including both GPU and ASIC), which we believe is broadly conservative as we await more long-term spending indications from customers. In the base, we forecast AI ASIC TAM to grow from US$12bn in 2024 to US$30bn in 2027.
- Our bull case is based on TSMC’s CoWoS capacity (supply-driven), implying US$405bn in 2027e.
Cloud service providers (CSPs) need alternatives even though Nvidia's GPU is performing well
Since Nvidia introduced Blackwell with an NVL72 server rack system design in March 2024, it has prompted a debate as to whether CSPs need to continue their own ASIC designs. We update Exhibit 10 to understand CSPs' strategic thinking, and see common reasons for continued ASIC development:
- Optimization for internal workload, or "core silicon to end-service" integration; and
- Better price-performance, echoing our TCO (total cost of ownership) analysis that chip cost matters a great deal if we assume four-year depreciation periods for AI data centers, meaning that internal chips are likely to bring lower costs after reaching scale.
Over the long term, if they execute well, CSPs may enjoy greater bargaining power in AI semi procurement with their own custom silicon. We estimate ASIC revenue size in 2025 at around US$22bn (almost doubling from US$12bn in 2024) based on CoWoS capacity booking at TSMC, which is mostly for CSPs. In addition, according to our US utilities analyst, Stephen Byrd, ASIC is likely to consume 22% of total power consumed for AI computing in 2025 – a segment that global investors cannot ignore.
While ASIC investment is poised to continue, we remain bullish on Nvidia
We see its leadership position as sustainable for several reasons:
- Continued systems advantage relative to more point product-oriented competitors;
- Extensive R&D budget iterating what is already the best design, with increasing complexity/integration challenges favoring the most scaled player;
- Ubiquity across clouds isn't an option for ASICs and should help Nvidia to remain the platform of choice in the rental market. According to Joe Moore’s forecasts, Nvidia’s data center revenue CAGR will likely be 26% over 2024-2027e. We expect ASIC CAGR to be 34% over 2024-2027e (derived from TSMC's CoWoS capacity), indicating that ASIC could gain market share from its very low base.
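The share-shift arithmetic above can be sanity-checked in a few lines. This is an illustrative sketch using figures cited in this report (Nvidia data center revenue of ~US$98bn and AI ASIC revenue of ~US$12bn in 2024); the `project` helper is our own:

```python
# Compound the 2024 revenue bases forward at the forecast CAGRs
# (26% for Nvidia data center, 34% for AI ASIC) to 2027e.

def project(base_usd_bn: float, cagr: float, years: int) -> float:
    """Compound a revenue base forward at a constant annual growth rate."""
    return base_usd_bn * (1 + cagr) ** years

nvda_2027 = project(98, 0.26, 3)   # Nvidia data center, 2024 -> 2027e
asic_2027 = project(12, 0.34, 3)   # AI ASIC, 2024 -> 2027e

share_2024 = 12 / (98 + 12)
share_2027 = asic_2027 / (nvda_2027 + asic_2027)
print(f"ASIC 2027e revenue: ~US${asic_2027:.0f}bn")
print(f"ASIC share of the combined pool: {share_2024:.1%} -> {share_2027:.1%}")
```

Compounding at these rates lands ASIC revenue near the US$30bn 2027 forecast and adds roughly two percentage points of share, consistent with share gain from a very low base.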
There are going to be winners and non-beneficiaries in the global ASIC supply chain – our job is to deduce which semi stocks will fall into which category.
Who will win in the cloud AI semi market in future?
We identify six ways to play:
- AI GPU:
- OW Nvidia (NVDA.O, covered by Joe Moore, Top Pick) – large language model enthusiasm is transforming cloud capex
- Global AI ASIC vendors: Highlighted OW plays:
- Broadcom (AVGO.O, covered by Joe Moore) – growth in AI, recovery in core semis, and potential VMW upside
- Alchip (3661.TW, covered by Charlie Chan) – we see a continued re-rating in view of the enlarged 3nm opportunity and our reaffirmation of the 3nm project win.
- Socionext (6526.T, covered by Kazuo Yoshikawa) – we expect growth to reaccelerate in 2026
- EDA:
- OW Cadence (CDNS.O, covered by Lee Simpson) – should enjoy strong structural growth with its new AI-driven EDA tools
- Foundry: Highlighted OW plays:
- TSMC (2330.TW, covered by Charlie Chan) – AI demand to drive margin upside
- TSMC's CoWoS subcontractor, ASE (3711.TW, covered by Charlie Chan) – secular growth of AI chip packaging and testing should drive a re-rating
- Its tool supplier, AllRing (6187.TWO, covered by Ray Wu) – a key beneficiary of CoWoS capex expansion
- Test: Highlighted OW plays:
- Advantest (6857.T, covered by Tetsuya Wadaki, Top Pick) – earnings are solid for the company, with its strong focus on AI device testing
- The testing ecosystem, e.g., KYEC (2449.TW, covered by Charlie Chan) – AI testing business upside is still underappreciated
- HBM:
- OW Samsung (005930.KS, covered by Shawn Kim) – key beneficiary of the trend of growing demand for ASICs as the share leader in the non-NVDA HBM market
Winners in AI ASIC?
We compare US and Asian design service competitors, and offer our views as to which major CSP ASIC projects could be awarded to which vendors.
For example, we believe major CSPs will continue to use Broadcom and Marvell to stay competitive in high-speed interface technology: As noted above, we are OW on Broadcom; as for Marvell (MRVL.O, covered by Joe Moore), we view its growth capability as attractive, but valuation & potential headwinds within Amazon ASIC leave us EW.
We expect Asian vendors to gain share in 2026: The major CSPs are also looking for more efficient and cheaper alternatives.
- We cite Alchip... AWS Trainium 3 presents a sizable opportunity.
- ...and MediaTek (2454.TW, covered by Charlie Chan): Here we point to winning Google 3nm TPU v7 for training. Alchip and MediaTek are taking market share from existing dominant players Broadcom and Marvell.
- We also see Socionext gaining multiple CSP and automotive ASIC projects: We expect growth to reaccelerate in 2026, as these 3nm projects ramp in volume.
Who are least preferred?
Merchant GPU alternatives – if ASICs become successful, where does that leave AMD and Intel? We think the market dynamic would be quite challenging to AMD and Intel.
Mature node foundries that don’t really have AI exposure.
- EW on UMC and GlobalFoundries
What are the key risks for our AI ASIC 2.0 thesis?
1. Quantum computing to disrupt AI semi chip demand? Google has developed a new quantum chip called Willow, which significantly reduces errors as it scales up. Long term, it is indeed suitable for LLM training, which demands high performance. However, there are still reliability (or duration) and scalability (topology for quantum entanglement) issues. We also don't think quantum computing is suitable for inference workloads, as the performance requirement is much lower.
2. ASIC to compete with retired GPUs? We believe one real risk to ASICs in the near term is retired AI GPUs (e.g. Hopper) being used for inference, as these are fully depreciated in accounting terms. We put together data to compare GPU and ASIC hourly rental costs (Nvidia Hopper vs. AWS Inferentia 2) to provide objective information on the commerciality of ASICs. We believe the cadence of ASICs needs to accelerate to stay competitive with GPUs, which is structurally positive for TSMC and ASIC design services.
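The rental-cost comparison behind this risk can be illustrated with a stylized hourly-cost model. All inputs below (chip prices, power draw, PUE, electricity rate, depreciation period) are our own placeholder assumptions for illustration, not figures from this report:

```python
# Stylized hourly cost of an accelerator: straight-line depreciation of the
# purchase price plus electricity (grossed up by data-center PUE).
HOURS_PER_YEAR = 365 * 24
ELECTRICITY_USD_PER_KWH = 0.08   # assumed power price
PUE = 1.3                        # assumed power usage effectiveness

def hourly_cost(chip_price_usd: float, depreciation_years: float,
                power_kw: float) -> float:
    """Depreciation plus electricity per operating hour."""
    capex = (chip_price_usd / (depreciation_years * HOURS_PER_YEAR)
             if depreciation_years > 0 else 0.0)
    opex = power_kw * PUE * ELECTRICITY_USD_PER_KWH
    return capex + opex

# A retired GPU is fully depreciated, so only operating cost remains.
retired_gpu = hourly_cost(0, 0, power_kw=0.7)
# A new inference ASIC still carries its purchase price (assumed US$8,000).
new_asic = hourly_cost(8_000, 4, power_kw=0.4)

print(f"retired GPU: ~${retired_gpu:.3f}/hr, new ASIC: ~${new_asic:.3f}/hr")
```

Under these placeholder inputs the written-off GPU is several times cheaper per hour, which is the crux of risk #2: a new ASIC must beat a near-zero-capex incumbent on energy efficiency and utilization, hence our view that the ASIC cadence needs to accelerate.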
3. CSP customers reduce their dependency on design service vendors? After AI chip design technology matures, CSPs may indeed rely less on design service vendors to save costs. However, in the coming years, we think design services can continue to add value as the cadence and complexity of new chip designs grow. Coupled with more companies likely doing custom chips, we think the design service TAM will increase.
04 Key Investment Debate: Will AI ASIC make it this time?
Nvidia continues to trade at a premium to alternative chips given its technological leadership
Since March 2024, when Nvidia announced its new Blackwell AI GPU with a server rack system design, investors have believed Nvidia's dominance will continue, and the stock has traded at a premium to other GPU and ASIC vendors.
We plot the P/E multiple expansion trend to see whether there are still opportunities to play the alternatives or laggards in the AI semis market:
- Pioneer – Nvidia (NVDA.O, rated OW, covered by Joe Moore) is the pioneer of AI computing and the current major benchmark stock for global AI semis.
- Alternatives – AMD, ASIC chips, and AI semi start-ups, which are essentially competing with Nvidia in cloud AI semis.
- Enablers – TSMC and the leading-edge chip supply chain, for example KYEC, Advantest, Besi, Synopsys.
Back in 2023, Nvidia was trading near 40x – a clear premium to its competitors averaging 30x – on the strength of its “CUDA” (Compute Unified Device Architecture) SDK (Software Development Kit) that is widely used by AI developers. This, coupled with Nvidia’s ongoing improvement in its architecture and chip performance, all helped the company attain >90% share in the AI training market in 2023. Given the significant upward revision of Nvidia's earnings (hence a falling P/E multiple at the same share price), and the launch of competitors' AI accelerator products (e.g. AMD's MI300 and CSPs' ASICs), in late 2023 the "AI alternatives" group traded at a higher premium (40x-45x) to Nvidia (25x-30x).
Key foundry enabler TSMC has shown a significant re-rating, from its previous 15-20x P/E range, to now 20x-25x, as investors have recognized that TSMC will be a long-term winner no matter who else is going to win.
Exhibit 7: P/E multiples – Nvidia re-rated in 2023

Source: Company data, Morgan Stanley Research; Note: Alternative AI semi group: AMD, Alchip, GUC, Andes, Marvell, Broadcom. AI semi enabler group: TSMC, Synopsys, Cadence, ASML, BESI, ASMPT, Ibiden, KYEC, Advantest.
Exhibit 8: P/E multiples – the ASIC group once surpassed Nvidia, then de-rated following Nvidia's acceleration

Source: Company data, Morgan Stanley Research; Note: Alternative AI semi group: AMD, Alchip, GUC, Andes, Marvell, Broadcom. AI semi enabler group: TSMC, Synopsys, Cadence, ASML, BESI, ASMPT, Ibiden, KYEC, Advantest.
Is it already end-game for AI ASIC, or will we see ASIC 2.0 coming?
CSPs still need custom chips even with Nvidia providing powerful AI GPUs
We were too optimistic on ASIC’s share gain last year, and wrongly predicted that AI ASIC would gain share from AI GPU (namely Nvidia) in 2024. Put simply, AI accelerators are difficult to design, while Nvidia is a fast-moving target. However, we don't see CSPs such as Google, AWS, Microsoft or Meta stopping their ASIC design attempts. Microsoft announced its Maia 100 at Microsoft Ignite (Sept 2023), while Meta announced its MTIA v2 in April 2024. Google and AWS have also both introduced their own generations of AI accelerators. Last but not least, we believe OpenAI is designing its own custom chip in tandem with Broadcom.
As for AI start-ups, both Tenstorrent and Rain AI have introduced AI accelerators based on RISC-V. From an Asia supply chain perspective, we are seeing TSMC reserve 20% of its scarce CoWoS capacity for AI ASIC production.
Exhibit 9: Global CSPs' cloud AI custom chips

Source: Google, AWS, Meta, Intel, Tesla
Why would CSPs need alternatives (either their own ASIC, or a second source GPU like AMD or Habana) when Nvidia's GPU is performing so well? We update Exhibit 10 to understand CSPs’ strategic thinking, and see common reasons for ASIC development:
- Optimization of internal workload, or "core silicon to end-service" integration;
- Better price-performance. We believe AI inference is the key area where ASIC can gain share relative to AI training, as AI inference depends less on Nvidia’s CUDA. We estimate 2025e ASIC revenue at c.US$22bn based on CoWoS capacity booking at TSMC. We don't rule out ASIC market share expanding in 2026.
On point #2, our TCO (total cost of ownership) analysis in the following section suggests semiconductor chip cost matters. If we assume four-year depreciation for AI data centers, the training cost of an ASIC (e.g. Google TPU) could be lower than that of Nvidia's GPU. One key factor is that the purchase cost is still very high, as buyers effectively pay Nvidia's 75% gross margin on GPUs, while internal chips can enjoy lower variable costs after reaching scale. So it is still a "chicken and egg" debate, but we see CSPs continuing with ASICs.
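The gross-margin point can be made concrete with a stylized per-chip cost comparison. The manufacturing cost, NRE (non-recurring engineering) budget, and volumes below are illustrative assumptions of ours, not report data:

```python
# Merchant chip: the buyer pays manufacturing cost marked up to the vendor's
# gross margin. Custom ASIC: the buyer pays manufacturing cost plus a one-off
# design (NRE) cost amortized over unit volume.

def merchant_price(manufacturing_cost: float, gross_margin: float) -> float:
    """Selling price implied by a target gross margin."""
    return manufacturing_cost / (1 - gross_margin)

def custom_unit_cost(manufacturing_cost: float, nre_usd: float,
                     volume: int) -> float:
    """Effective cost per custom chip after spreading NRE over volume."""
    return manufacturing_cost + nre_usd / volume

COGS = 7_500   # assumed manufacturing cost per accelerator
gpu_price = merchant_price(COGS, 0.75)   # 75% gross margin -> US$30,000

for volume in (50_000, 200_000, 1_000_000):
    asic_cost = custom_unit_cost(COGS, nre_usd=500e6, volume=volume)
    print(f"{volume:>9,} units: ASIC ~${asic_cost:,.0f} vs. GPU ${gpu_price:,.0f}")
```

This is the "chicken and egg" in numbers: at low volume the amortized NRE keeps the custom chip expensive, while at CSP-scale volume the unit cost converges toward bare manufacturing cost, far below a 75%-gross-margin merchant price.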
Exhibit 10: CSPs' and Tesla's comments on ASIC strategy and benefits

Source: Company data, Morgan Stanley Research
Exhibit 11: Timeline of CSP ASIC project announcements

Source: Company data, Morgan Stanley Research
Latest evidence of AI ASIC: AWS Trainium3
AWS re:Invent
AWS held its latest re:Invent event on December 3 and announced the general availability of Trainium2 (‘Trn2’) chips with 30-40% better price/performance, with Trainium3 slated for 2H25 with 2x more computing performance and 40% more energy efficiency. Trn2 beta is now testing with Adobe, Qualcomm and Databricks.
- Amazon is building a massive ‘Project Rainier’ AI supercomputer powered by ‘hundreds of thousands’ of Trn2 chips.
- Amazon has continued emphasizing its relationship with Nvidia even as it builds in-house chips, with Blackwell P6 instances coming early next year. Matt Garman, CEO of AWS, said that ‘the vast majority’ of GenAI workloads are running on Nvidia, which will likely continue in future. However, Amazon also wants to offer alternative solutions that are cheaper for certain workloads.
- Amazon introduced its new in-house ‘Nova’ Foundation Models in several flavors – Nova Micro, Lite, Pro, Premiere, Canvas and Reels for text, image and video – while Nova Premiere's training won’t be completed until 1Q25.
- Apple was on stage and said that it is using Trainium and Graviton chips to serve search services and is also evaluating whether Trainium2 chips can be used to pre-train AI models, while it has been using Google TPUs for training.
Exhibit 12: AWS announced that Trainium3 will arrive later in 2025

AI semi TAM hinges on AI capex sustainability
We size the AI semi TAM into 2027e. Base case – US$238bn (demand-driven). Bull case – US$405bn, based on TSMC’s CoWoS capacity (supply-driven). Within the cloud AI semis, we forecast AI ASIC TAM to grow from US$12bn in 2024 to US$30bn in 2027.
Cloud AI semi TAM growing rapidly: The introduction of bigger LLMs (e.g. GPT-4 and Llama 3) is further boosting AI computing demand. According to our US semi analyst Joe Moore’s latest update, Nvidia’s data center GPU revenue alone could reach US$98bn in 2024. Adding sales of alternative cloud AI semis (e.g., ASIC), the cloud AI semi TAM is already >US$100bn in 2024e, based on our base case scenario. If we also include related system hardware, AI capex could reach US$170bn in 2024e (vs. our previous estimate of US$80bn) and US$240bn in 2025e, according to our base case.
If we look at the cloud AI semi market and global capex based on our supply-driven (TSMC CoWoS) bull case scenario, we think the cloud AI semi TAM could reach ~US$143bn in 2024e and US$235bn in 2025e. Adding the related system hardware, AI capex could reach ~US$187bn in 2024 and US$300bn in 2025, according to our bull case.
Budget constraints may force a value shift to ASIC by 2030: One major constraint for AI hardware is AI chip buyers' budgets, and that depends on AI investment ROI, which is tied to successful launches of killer apps and AI hardware cost reductions. In our base case, we project cloud AI hardware capex rising to US$103bn by 2030, with cloud AI semis capturing US$343bn (implying only 34% AI semi sales CAGR over 2023-30e). For the bull case, we expect cloud AI semis to capture US$549bn in 2030e (implying only 43% AI semi sales CAGR over 2023-30e). We also expect the AI semi profile to change: inference computing demand may surpass training demand in the cloud, and most cloud inference (along with some training chips) may shift to lower-cost ASIC designs.
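As a quick consistency check, compounding the US$44bn 2023 cloud AI semi base cited in this report at each scenario's CAGR roughly reproduces the 2030 figures; the small gaps versus the stated US$343bn and US$549bn reflect rounding of the quoted CAGRs:

```python
# Consistency check: compound the 2023 cloud AI semi base at each
# scenario CAGR (2023-30e) to approximate the 2030 TAM figures.

BASE_2023_USD_BN = 44  # 2023 cloud AI semi market, per this report

scenarios = {"bear": 0.19, "base": 0.34, "bull": 0.43}

for name, cagr in scenarios.items():
    tam_2030 = BASE_2023_USD_BN * (1 + cagr) ** 7  # 7 years: 2023 -> 2030
    print(f"{name:>4}: ~US${tam_2030:.0f}bn cloud AI semi TAM in 2030e")
```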
We use scenario analysis because it is too early to say whether the AI industry can generate sufficient returns and killer apps to maintain the current intensity of AI semi investment. According to our US hardware team, led by Erik Woodring, total hyperscaler capex is now expected to hit US$261bn in 2024, up 51% Y/Y, and 5ppt faster than they had expected, ahead of 3Q earnings (link). Total cloud capex for CY25e now points towards US$289bn, up 11% Y/Y (vs. +8% Y/Y prior to earnings). Overall, our cloud capex tracker now points to the top 10 global cloud players collectively spending nearly US$550bn on capex in 2024e and 2025e – 8% more than they spent across 2021-2023 combined.
Exhibit 13: Scenario summary of cloud AI semi TAM into 2027

Source: Morgan Stanley Research (e) estimates
Longer-term perspective, 2023-2030 CAGR scenarios
In our base case, we see cloud AI computing accounting for a major share of AI semi consumption, and we forecast this cloud AI computing market will grow from US$44bn in 2023 to US$343bn in 2030e (34% CAGR over 2023-30e), outperforming the entire AI semi market (which also counts Edge AI). In the near term, we expect general-purpose AI (including GPU, FPGA and merchant AI chips from vendors like Nvidia, Intel and AMD) to enjoy most of the share inside the AI computing market, with around 90% market share in 2024e.
Our bull case (supply-driven model) indicates 2027e cloud AI semi revenue of US$405bn (similar to AMD CEO's forecast), based on TSMC's 2026 preliminary CoWoS capacity. AMD (covered by Joe Moore) forecasts the AI accelerator (cloud AI semis) market to total US$400bn in 2027 and US$500bn in 2028. We see room for our bull case to play out if increasing numbers of companies acquire a competitive edge and continue investing heavily in AI computing to enhance their productivity. A 43% CAGR over 2023-30e is hence possible, in our view, implying a US$549bn market for cloud AI semis in 2030e.
In our bear case, we see a lack of business models that can effectively monetize generative AI. It also assumes the cost-benefit of custom chips tracks below expectations and hence the return on AI investment does not improve effectively. We also bake in potential supply chain risk (e.g., foundry capacity shortage, energy consumption issues of AI computing), and assume a 19% CAGR over 2023-30e for our bear case of cloud AI semi growth.
Exhibit 14: Based on our AI capex assumptions, cloud AI semis to grow at a 34% CAGR over 2023-2030

Source: Morgan Stanley Research estimates. Note: CAGR refers to growth over 2023-27.
05 Global ASIC design service landscape
We were too optimistic on ASIC share gain last year
In Tech Diffusion – Fulfilling the surge in AI demand with custom chips (11 Jun 2023), we predicted that AI ASICs would gain share from AI GPUs (namely Nvidia products). Eighteen months later, the cloud AI semi TAM estimate for 2025 is expected to double to US$174bn in our base case (vs. our previous forecast of US$80bn), while Nvidia’s market has grown >80% since then.
We think ASICs have continued to improve, but Nvidia's strong execution continues to raise the bar for its competitors. The move to a one-year product cadence and broadening out of Nvidia's offering to complete systems only adds to the challenge of developing a competing offering from scratch. When it comes to AI ASIC competition, we have seen developments for years, yet, outside of Google, none have managed to see broad deployments.
Maybe this time is different, but history is certainly on Nvidia's side. The only at-scale ASIC competitor to Nvidia today is Google's TPU, a project that began back in 2013. That year Nvidia's market cap finished below US$9bn (Google's was above US$300bn) and its R&D budget was a little more than US$1bn annually. By contrast, today Nvidia spends more than US$10bn a year on R&D, which goes towards improving what is already the industry-leading solution. Google has also had the benefit of inventing the transformer, and could design hardware optimized for those architectures well before they became the most important class of AI model. Competitors like AMD and Intel have been developing GPUs via acquisitions and internally for decades, yet have struggled to make a dent in Nvidia's market share. We have seen several start-ups launch with initial enthusiasm only to generate minimal revenue. Since then, the importance of managing costs around AI infrastructure has grown considerably, and the scope of Nvidia's ambitions, along with its gross margins, certainly leaves room for competitive encroachment. But our point is that Nvidia's success has not been achieved in a vacuum: the company has executed very well against smaller, more nimble start-ups and much larger, deep-pocketed competitors for many years. Something will need to look different for ASIC penetration to increase meaningfully from here, in our view. One such example is the TCO analysis in our next section.
AI ASIC market share breakdown
Although GPUs will likely still account for the majority of cloud semis as far as we can see, we believe the ASIC proportion will keep gaining share, from around 11% in 2024e to 15% in 2030e in our cloud AI semi base case. The major driver behind this, in our view, is still capex from major CSPs, which are unwilling to commit to Nvidia as their single, and most expensive, source for AI applications.
If we further break down AI ASIC market share, we expect Google will still be the leader in 2025, with nearly 55% market share, based on its long history of TPU development, which has helped it accumulate a larger customer base around Google Cloud and develop an advanced ecosystem around its TPU. Google's latest-generation TPU will be the 7th generation (not officially announced yet, so we call it TPU v7), and is currently under development with Broadcom and MediaTek for inference and training purposes, respectively, according to our industry checks in Taiwan.
For AWS, we expect its AI ASICs to hold the second-largest position in the custom AI ASIC market, with around 38% market share, thanks to its well-rounded custom ASIC build-up spanning CPU, AI training, and AI inference. AWS's custom CPU, Graviton, is already in its 4th generation and provides significant complementarity to its Trainium and Inferentia chips. The second generations of both AWS Trainium and Inferentia have already been announced. From our industry checks, we believe Alchip is highly likely to win the design contract for AWS's third generation of Trainium, with TSMC executing 3nm production (likely to enter mass production in 2026).
For other CSPs, we expect a combined market share of around 7% in 2025. Notable projects include Intel's Habana Gaudi 3, Meta's MTIA v2, and Microsoft's Maia 200. We do not think these volumes are comparable to those at AWS and Google yet, given their later development starts and smaller customer bases.
Exhibit 15: Custom AI chips to outgrow general-purpose chips (base case)

Source: Gartner, Morgan Stanley Research (e) estimates
Exhibit 16: Custom AI ASIC chip value expected to reach US$22bn in 2025

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 17: Estimated ASIC design service market share distribution in 2027

Source: Morgan Stanley Research (e) estimates
Exhibit 18: Design services: custom chip project mapping table

Source: Company data, Morgan Stanley Research
ASIC design service comparison between Asian and US vendors
With more than three decades of ASIC design experience, a hierarchical design methodology, and a rich IP portfolio including leading-edge SerDes cores, Broadcom has been the industry leader in providing complex ASIC solutions with time-to-market and performance advantages for HPC, networking and storage applications for over a decade. Several CSPs and notable startups like OpenAI have chosen Broadcom as their ASIC service provider.
However, Broadcom's service charges are also the highest among its ASIC peers, which has prompted a number of CSPs and other cost-sensitive companies to look for other ASIC design options. In terms of cost, Asian ASIC service providers such as MediaTek, Alchip, and GUC charge lower service fees than their US competitors, due to their lower labor costs, shorter track records of ASIC successes, and their intention to gain market share from Broadcom. Besides cost, a close relationship with TSMC, especially membership in TSMC's VCA (Value Chain Alliance), has induced customers to choose Asian ASIC service providers to secure CoWoS capacity, one of the most precious strategic resources amid the AI chip demand upturn.
Exhibit 19: ASIC comparison of Asian and US vendors

Source: Morgan Stanley Research (e) estimates; Note: revenue estimates for Asian companies are by Charlie Chan and for US companies by Joe Moore.
AI ASIC wins differ between US and Asia (technology vs. cost-saving)
We all agree that ASICs will likely start to gain market share after 2026e within a further-expanding cloud AI semi TAM. This informs our AI ASIC TAM of US$30bn in 2027e. We expect an ASIC CAGR of 34% over 2024-2027 (derived from TSMC's CoWoS capacity), while Joe estimates Nvidia's revenue CAGR at 24% over the same period, suggesting ASICs will likely gain some market share from their very low base.
Overall, we think ASICs remain an important and strategic investment to keep ROI high on future AI capex. Instead of comparing paper roadmaps of ASICs (in some cases overstated), we think it is worth analyzing why CSPs are not keen to abandon ASICs, and we see the following reasons:
- CSPs want a second source, internally or externally, to have bargaining power with Nvidia;
- If cost and performance can be matched, AI capex savings can be made;
- Some workloads, such as inference, can be made significantly more efficient using internal chips; and
- Nvidia chips are fundamentally very expensive, regardless of their strong performance.
However, Joe's and Charlie's market share views differ considerably: Joe believes US ASIC service providers will continue to dominate the market, despite higher fees, given their own IP portfolios and numerous track records of helping with CSPs' ASIC projects. From Charlie's checks in Asia, however, we are more optimistic about the success of Asian ASIC service providers, based on multiple channel checks with both TSMC and ASIC design service vendors.
Exhibit 20: Scenario analysis around the cloud AI semi TAM and ASIC outsourcing strategies

Source: Morgan Stanley Research (e) estimates
In our cloud AI semi TAM x AI ASIC vendor matrix, our forecasts can be separated into four scenarios:
- Aggressive Diversification: This scenario is mostly based on TSMC CoWoS capacity allocation forecasts from our Asia semi team (Charlie et al.), suggesting larger cloud AI semi TAM. This scenario also indicates Asia ASIC service providers winning market share against US names.
- Budget Control: As Asian ASIC service providers charge lower service fees vs. US providers, in this scenario the lower cloud AI semi TAM and the increasing market share of Asian ASIC service providers both suggest potential budget controls from customers, mostly CSPs.
- Conservative Move: This scenario is mostly based on the demand forecasts from our US semi team (Joe et al.). Our base case for cloud AI semi TAM bakes in our conservatism, which we hope to reserve until we have more long-term spending indications from customers.
- Determinative Compete: In this scenario, both Nvidia and US ASIC service providers enjoy AI proliferation, while in Asia only AI enablers such as TSMC and KYEC can share in the growth.
06 Our TCO analysis shows ASICs are still competitive vs. GPUs
Marvell CEO's recent comment on ASIC TCO
On its latest earnings call, Marvell's CEO, Matthew J. Murphy, indicated that, besides optimization between computing chips and the system, the reason ASIC adoption is happening and growing right now is all about TCO: "It's not a zero-sum game that if somebody implements a custom silicon design, it's going to just completely usurp and take over whatever the merchant offering is. These are going to coexist. Where there's workloads that are big enough to get the bang for the buck on the optimization, it makes a ton of sense to go to custom from a TCO basis."
Refining our proprietary TCO analysis
The Marvell CEO's comment echoes our view that ASIC TCO is still competitive with GPUs, which is the main driver for major CSPs to develop their own ASICs.
We performed a total cost of ownership (TCO) analysis comparing AI GPUs and AI ASICs for LLM training. Our findings indicate that while ASIC chips such as AWS Inferentia and Google TPU might not match Nvidia’s GB200 in energy efficiency (performance per watt), they generally have lower initial costs, assuming no markup is added to chip costs.
Our model is based on the following key assumptions:
- Training a 400B Llama-3 model, requiring a total computing capacity of 17,418 ZetaFLOPs.
- A budget of US$500mn, which can build a 24,000-chip cluster of H100 GPUs.
- We use the same budget of US$500mn for other AI hardware, with the chip units (or cluster size) based on the chip cost for hyperscalers.
- AI GPU costs based on sales prices, and AI ASIC costs estimated using manufacturing costs from foundries.
- We assume a 40% utilization rate, and similar reliability.
- Four years for hardware depreciation.
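As a rough cross-check, the training-time and cost math implied by these assumptions can be sketched as follows. The ~990 TFLOPS effective peak per H100 and the electricity inputs (700 W per chip, PUE of 1.5, US$0.08/kWh) are our illustrative assumptions, not company-disclosed figures:

```python
# Sketch of the training TCO math under the assumptions above.
# Illustrative inputs (assumed, not disclosed): ~990 TFLOPS per H100,
# 700 W per chip, PUE of 1.5, US$0.08/kWh electricity.

CLUSTER_BUDGET_USD = 500e6       # builds a 24,000-chip H100 cluster
NUM_CHIPS = 24_000
TRAINING_FLOPS = 17_418e21       # 17,418 ZetaFLOPs for a 400B Llama-3
PEAK_FLOPS_PER_CHIP = 990e12     # assumed effective peak per H100
UTILIZATION = 0.40
DEPRECIATION_YEARS = 4

# Wall-clock training time
effective_flops = NUM_CHIPS * PEAK_FLOPS_PER_CHIP * UTILIZATION
seconds = TRAINING_FLOPS / effective_flops
days = seconds / 86_400

# Hardware depreciation charged to this training run
depreciation_usd = CLUSTER_BUDGET_USD * days / (DEPRECIATION_YEARS * 365)

# Electricity (assumed 700 W/chip, PUE 1.5, US$0.08/kWh)
kwh = NUM_CHIPS * 0.700 * 1.5 * (seconds / 3600)
electricity_usd = kwh * 0.08

print(f"training time: {days:.0f} days")
print(f"depreciation:  US${depreciation_usd / 1e6:.1f}mn")
print(f"electricity:   US${electricity_usd / 1e6:.1f}mn")
```

With these inputs the sketch lands near the figures we cite later for the H100 cluster (~21 days, ~US$7mn of depreciation, ~US$1mn of electricity); swapping in per-chip cost, power, and throughput for other hardware reproduces the rest of the comparison.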
Given the high computational capability of Nvidia's Blackwell series, a GB200 and B200 cluster with 24,000 GPUs could complete Llama-3 (400B) model training in under 10 days. In contrast, AI ASIC chips, such as Google’s TPU v5p and AWS’s Trainium 2, would require more than 40 days to achieve the same result. However, in terms of TCO, AI ASICs are competitive with Nvidia's Blackwell series for Llama-3 training due to their lower chip costs. Intel's Habana Gaudi 3 also shows a low TCO, estimated at US$3.5mn for 400B Llama-3 model training.
We also anticipate opportunities for AI ASICs and alternative AI GPUs to diversify the vendor landscape. This suggests there will likely be sustained demand for ASIC solutions, as well as Habana Gaudi 3 AI accelerators.
Why are we using four years for depreciation?
According to our Microsoft analyst, Keith Weiss, and his team, Microsoft has said its AI data center capex is split 50/50 between server assets with a 6-year useful life and other data center assets with a 15-year life, i.e., a 10.5-year weighted average life (although this will admittedly decline over time as the percentage of capex on servers increases).
In Asia, we also consulted AI datacenter operators (e.g., Coreweave's industry peers) about their payback periods. Basically, they hope to cover the cash investment in the first two years and achieve their IRR target in the third year, so they would use three or four years to depreciate. We think three is too aggressive, so we choose four years, making the judgment call that today's AI GPU will be useless four years later, given that chip performance improves 4-5x every year; three years out, that compounds to roughly a 100x gap. Given electricity supply constraints, we think those old chips will be retired sooner than their physical useful life. Hence, we use four years as the depreciation base for AI computing chips, whether ASIC or GPU.
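The performance-gap arithmetic behind that judgment call is simple compounding:

```python
# If chip performance improves 4-5x per year, a chip three generations old
# trails the state of the art by roughly two orders of magnitude.
for annual_gain in (4, 5):
    gap_after_3_years = annual_gain ** 3
    print(f"{annual_gain}x per year -> {gap_after_3_years}x gap after 3 years")
# 4x/year compounds to 64x, 5x/year to 125x - a ~100x gap in round numbers
```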
If we refer to crypto mining ASICs, their realistic lifetime is only 2-3 years, given the very quick refresh cadence of new ASICs. This is similar to the fast cadence of AI computing chip designs we are seeing today.
Conclusion of our TCO analysis for AI training - ASIC can be competitive
- Based on a 24,000-chip cluster of H100 GPU (costing US$500mn to build), we estimate around US$8mn would be spent to train a 400B Llama-3 model. That includes around US$1mn of electricity cost, and US$7mn of hardware depreciation cost for 21 days.
- If we use the same budget of US$500mn to build a cluster of AWS Trainium2, it should cost around US$4.8mn to train a 400B Llama-3 model. That includes around US$1mn of electricity cost and US$4mn of hardware depreciation cost for 11 days. Of note, the Trainium2 chip cost is calculated based on foundry and HBM costs with a mark-up from the design service vendor.
Exhibit 21: TCO summary (AI training)

Source: Company data, Morgan Stanley Research
Exhibit 22: Cadence comparison of Google TPU, Nvidia, and AMD

Source: Company data, Morgan Stanley Research
Exhibit 23: Google TPU keeps pace with Nvidia GPU

Source: Company data, Morgan Stanley Research estimates
Exhibit 24: Performance-to-cost ratio of ASIC designs remains competitive

Source: Company data, Morgan Stanley Research estimates
Exhibit 25: Specification and cost comparison of major AI GPUs and ASICs (training)

Source: Company data, Morgan Stanley Research
Caveats of our TCO analysis
Our analysis is not without limitations, and real-world TCO is subject to a number of factors. For years we have seen studies suggesting that a much lower TCO is achievable by leveraging ASICs or cheaper alternatives to Nvidia for both training and inference (here, here, here), and our math once again supports that conclusion. Yet Nvidia remains firmly ahead of the competition from a market share perspective, and ASICs have yet to proliferate broadly outside of Google's internal use cases.
A few things to keep in mind when doing this kind of analysis:
1. TCO is not limited to one or two workloads; flexibility (or being "general purpose") is an important factor in deciding hardware usage. Nvidia consistently demonstrates the highest level of performance across workloads (training, inference, HPC, vision, LLMs, etc.), which allows CSPs more flexibility in how they use the hardware over its lifetime. Nvidia's default status within the rental market also allows transferability between internal and external workloads. Ease of use when handling software changes and running different workloads matters as well, and minimizing "software developer hours" is an often overlooked aspect of TCO math. That said, the industry continues to work towards true hardware interoperability, and in a world where any GPU can be used in any cloud, upfront costs will become more of a consideration.
2. We lean on published performance specs, which are an unreliable guide to real-life performance. Reported FLOPS metrics in company data sheets are typically a best case based on ideal conditions; actual operating conditions may be much different and can depend greatly on the associated software stack. To give a sense of how material those software optimizations can be, between MLPerf 2.1 and 3.0, H100 saw up to a 54% performance improvement depending on the workload, and in the latest set of training benchmarks from mid-November, H100 showed a 30% performance improvement on GPT-3 175B training vs. June 2023.
3. Stability is the key advantage of Nvidia's NVL system when conducting frontier LLM training.
The scale of growing workloads means that other components and large-cluster scaling play a larger and larger role in performance. During Meta's 54-day Llama 3 405B pre-training, it experienced 419 unexpected interruptions, of which the split was about 60/40 GPU vs. non-GPU issues (see: here). In the past, training jobs could be done on dozens of GPUs or fewer, but as we expand to clusters in the tens of thousands and maybe even hundreds of thousands of GPUs, the cost associated with a hardware failure continues to grow.
And given that a single failure can require an entire restart of a training workload, minimizing component overheads, and thus maximizing uptime, becomes more of a consideration for customers when they make hardware choices. Nvidia, with its NVL systems, is operating at a scale that will make it difficult for ASIC and merchant competitors to compete: with integrated CPU/GPU/networking in a liquid-cooled form factor, that kind of computing density is not possible to recreate with off-the-shelf parts today. The industry is working to bridge that gap, but only time will tell how successful those efforts become.
Introducing our new TCO analysis for AI inference
Similar to the previous analysis, we conducted a TCO calculation comparing AI ASICs and AI GPUs for AI inference. In terms of cost-performance, although Nvidia's Blackwell series still holds a significant advantage due to its powerful computational performance, from a TCO perspective the AI ASIC's TCO is much lower than that of AI GPUs, primarily due to the lower production costs of ASIC chips.
For the inference TCO model, our key assumptions include:
- A 24,000-chip cluster with a 20% utilization rate.
- All devices function for one entire year.
- AI GPU costs based on sales prices, and AI ASIC costs estimated using manufacturing costs from foundries.
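The annual inference TCO calculation can be sketched with a helper like the one below. The chip cost and power inputs shown are purely hypothetical placeholders, since our actual ASIC costs are derived from foundry manufacturing costs not reproduced here; utilization affects delivered throughput rather than the cost side, so it is omitted:

```python
# Sketch of the annual inference TCO calculation: one year of hardware
# depreciation (four-year schedule) plus electricity for a 24,000-chip
# cluster. Chip cost and power draw below are hypothetical placeholders,
# not our model's actual inputs; electricity price and PUE are assumed.

HOURS_PER_YEAR = 24 * 365
ELECTRICITY_USD_PER_KWH = 0.08   # assumed
PUE = 1.5                        # assumed datacenter power overhead

def annual_inference_tco(chip_cost_usd: float, chip_power_w: float,
                         num_chips: int = 24_000,
                         depreciation_years: int = 4) -> float:
    """One year of depreciation plus electricity for the whole cluster."""
    depreciation = num_chips * chip_cost_usd / depreciation_years
    kwh = num_chips * (chip_power_w / 1000) * PUE * HOURS_PER_YEAR
    return depreciation + kwh * ELECTRICITY_USD_PER_KWH

# Hypothetical example: a US$25k merchant GPU vs. a US$5k custom ASIC
gpu_tco = annual_inference_tco(chip_cost_usd=25_000, chip_power_w=700)
asic_tco = annual_inference_tco(chip_cost_usd=5_000, chip_power_w=400)
print(f"ASIC TCO as % of GPU TCO: {asic_tco / gpu_tco:.0%}")
```

With these placeholder inputs the ASIC lands at roughly a quarter of the GPU's annual TCO, in the same ballpark as the 15-26% ratios we describe below; the conclusion is driven mainly by the chip-cost gap, since depreciation dominates electricity at these prices.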
In terms of AI inference, we found that AI ASIC chips, such as Google's next-generation TPU v6, require only 26% of the annual TCO needed for Nvidia's H100 and 15% of that for the B200. We believe that for tasks not requiring rapid inference response, ASIC chips can perform well – and their high cost-effectiveness may be particularly attractive to CSPs.
Exhibit 26: TCO summary (AI inference)

Source: Company data, Morgan Stanley Research
Exhibit 27: Specification and cost comparison of major AI GPUs and ASICs (inference)

Source: Company data, Morgan Stanley Research
Exhibit 28: Performance-to-cost ratio for AI inference

Source: Company data, Morgan Stanley Research
We put together data to compare GPU and ASIC hourly rental costs (Nvidia Hopper vs. AWS Inferentia 2) to provide objective information on the commerciality of ASICs. We believe the cadence of ASICs needs to accelerate to stay competitive with GPUs, which is structurally positive for TSMC and ASIC design services.
Exhibit 29: AI GPU H100 hourly rental cost per GPU - as of early November

Source: Company data, Morgan Stanley Research
Exhibit 30: AI ASIC equivalent computing power - 12x Inferentia 2 per hour - ~US$4.50

Source: Company data, Morgan Stanley Research
07 Global semi implications and our key action points
Who are the winners in the future cloud AI semi TAM expansion?
We identify six ways to play the semiconductors market expansion:
- Nvidia (Top Pick)
- Global AI ASIC vendors – e.g., Broadcom, Alchip, Socionext
- EDA
- TSMC and its CoWoS subcontractor (ASE) and tool supplier (Allring)
- Advantest (Top Pick) and the testing ecosystem
- HBM
Potential read-across to other sectors
US Internet (covered by Brian Nowak)
AMZN: AWS has been developing its own custom silicon since 2018 with the launch of Graviton (CPU); today Graviton is used by almost every AWS customer (90% of the top 1,000 EC2 customers use Graviton) and delivers 40% better price performance vs. x86 instances. Customers such as PINS (covered by Brian Nowak) saw 47% compute cost savings after switching thousands of instances from x86 to Graviton. In 2019, AWS began developing custom silicon purpose-built for AI with the launch of Inferentia (custom silicon for inference workloads). In 2022, AWS launched Trainium (custom silicon for training workloads), and in 2023 began building Trainium2 ("Trn2") for both training and inference, which recently became generally available. Trn2 instances offer 30-40% better price performance than current-generation GPU-based instances, and customers such as AAPL, ADBE, and Databricks have beta tested them with promising early results. We will be monitoring whether Trn2 gains adoption as it rolls out. In our view, adoption of Trn2 would provide evidence that it is meeting customer performance needs and potentially capturing share in the vast majority of GPU workloads that run on Nvidia chips today. AWS/AMZN remains committed to a custom silicon strategy, as further evidenced by the recent announcement that Trainium3 is coming in late 2025.
GOOGL: Google began working on its first purpose-built AI accelerator, TPU v1, in 2013, and it designed, verified, built and deployed the processor in its data center in just 15 months. Since then, Google has continued to innovate on its custom ASICs, and, in May of this year, launched its sixth generation TPU, called Trillium. Trillium TPUs achieve a ~5x increase in peak compute performance per chip compared to TPU v5e, and have both double the High Bandwidth Memory capacity and double the Interchip Interconnect bandwidth. The chips are also 67% more energy efficient and can scale up to 256 TPUs in a single pod. Beyond this pod-level scalability, Trillium TPUs can scale to hundreds of pods, connecting tens of thousands of chips. Since its creation, Google has used TPUs for many of its core services, including real-time voice search, photo object recognition, and interactive language translation. This innovation also supports a number of innovations core to Google's AI initiatives, including Google's Gemini, Imagen and Gemma models. Customers of Google can access TPUs through Google cloud, where companies like Cohere have used TPUs to accelerate LLM training. We will continue to monitor improvements across Google's TPUs and importantly look to evaluate how they compare on a price/performance basis to state-of-the-art Nvidia GPUs.
US software (covered by Keith Weiss)
Across our US software coverage, Microsoft remains the primary AI infrastructure-as-a-service (IaaS) provider with a material custom silicon strategy, while other AI IaaS providers, such as Oracle, Digital Ocean (covered by Josh Baer), and Cloudflare (covered by Hamza Fodderwala), remain focused primarily on procuring Nvidia GPUs.
While we believe it is still early for Microsoft’s AI ASICs strategy, we see a promising path for the company to:
- Lower vendor lock-in to Nvidia (and/or AMD);
- Create higher switching costs for cloud customers, as software applications will be (re-)written to perform best on the custom silicon; and
- Lower the total cost of ownership and improve the price-performance of internal applications (e.g., Copilot).
As such, over a multi-year horizon, we believe this model could allow Microsoft to deliver its own AI SKUs at a more attractive gross margin as both the cost of model inference comes down and the hardware becomes more optimized for the specific workload, while at the same time creating greater stickiness with custom AI applications being built on top of Maia/Cobalt. In the near term, we believe that this shift will play out gradually with Nvidia remaining the preference for Microsoft, followed by AMD, before custom silicon moves into greater focus.
US utilities (covered by Stephen Byrd)
According to our US utilities analyst Stephen Byrd, custom silicon (i.e., ASIC) will likely consume 26% of total power consumed for AI computing in 2024 before decreasing to 22% in 2025. This trend is in line with our Greater China Semiconductor Team's estimates of 24% in 2024 and 20% in 2025.
Exhibit 31: US utilities: custom AI silicon power consumption estimates

Source: Company data, Morgan Stanley Research
Exhibit 32: Greater China semiconductor team forecasts AI ASICs to consume 20% of total AI computing power in 2025

Source: Company data, Morgan Stanley Research
Global semiconductors implications
1. Nvidia (covered by Joe Moore)
We are still bullish on Nvidia (Top Pick) given its continued earnings momentum as Blackwell begins to ramp. At the peak of this cycle, we see earnings power that should make the stock cheaper than many other AI beneficiaries, while other semis players grapple with cyclical pressures. ASIC penetration remains a risk to Nvidia's high margins and market share; we expect some share loss over time but continue to see Nvidia maintaining a strong lead over the next few years.
2. Global AI ASIC vendors
We identify the most preferred and least preferred names within ASIC design services:
US ASIC vendors: AVGO and MRVL opportunity (Joe's view)
One of the most successful custom accelerators, and Broadcom's largest ASIC customer, is the tensor processing unit (TPU) from Google. In our base case, we assume that Broadcom retains this core TPU business, with some smaller projects possibly going to MediaTek. The success of the TPU is unique, in our view, given the difficulties of launching custom silicon, and can be attributed to Google's long history of AI development. Google invented the transformer and has been running its internal workloads on its machine learning library since 2016. The TPU has been in development for over ten years, and Google is currently on its seventh generation. Given that Google pioneered the machine learning model and has spent 10+ years on the development process, we hesitate to use the TPU as a replicable model for other ASIC opportunities. In general, we approach custom silicon as highly uncertain. So many attempts have come up short that we tend to take a "show-me" view towards any new ASIC, and this also applies to anyone looking to replace Broadcom.
In terms of numbers, we forecast Broadcom's ASIC revenue to grow from US$8.9bn in CY24 to US$11.4bn in CY25 (link). There is more uncertainty in CY2026, but we think that with the addition of new customers and the retention of TPU, ASIC revenue could grow to ~US$14.2bn. We expect new customers, Meta and Bytedance, to start ramping in 2H25, in addition to Google TPU. Further new customers are expected in the 2027 time frame, and while these opportunities are more distant, they give us enthusiasm for the long-term trajectory.
Custom silicon has large growth prospects in Marvell's data center segment, but it is an opportunity with significant long-term optionality. We find this business very difficult to forecast beyond the shorter-term horizon, but there is clearly significant near-term potential. Multiple checks point to very strong Trainium builds next year, including 3x y/y CoWoS builds at TSMC, Amazon's public comments on increasing production, and Marvell commentary. In addition, the recent warrant agreement with AWS gives us more confidence that MRVL will have content with Amazon in 2026. However, we think it is possible that Alchip will win the compute die, with MRVL keeping the networking chip. This would mean a significant step down in Trainium content in 2026, putting the burden of growth on projects with new customers.
We expect at least US$3bn in ASIC revenue next year, exceeding Marvell's guidance of US$2.5bn total AI revenue in CY2025. This assumes roughly 1mn units of ASICs and an ASP of US$3k, but we would caution that there are many unknowns. We would also caution against using total Amazon CoWoS numbers as a direct read-through to Marvell; CoWoS allotment is for Annapurna Labs, and they can choose to allocate to either Marvell or Alchip. Predicting who will win future ASIC projects is difficult, and ASICs are frequently late and take longer to ramp than expected. Multiple scenarios are possible, and the recently announced strategic agreement gives us confidence that they will keep some type of ASIC engagement with AWS, but the nature of it remains uncertain.
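The unit-times-ASP arithmetic above can be sanity-checked with a short sketch. The ~1mn-unit and US$3k-ASP figures are the report's stated rough assumptions, not firm data:

```python
# Sanity check of the implied ASIC revenue from the report's assumptions.
units = 1_000_000      # roughly 1mn ASIC units assumed for next year
asp_usd = 3_000        # assumed average selling price per unit (US$3k)

asic_revenue = units * asp_usd
print(f"Implied ASIC revenue: ${asic_revenue / 1e9:.1f}bn")  # -> $3.0bn

guidance = 2.5e9       # Marvell's CY2025 total AI revenue guidance
print(f"Upside vs. guidance: ${(asic_revenue - guidance) / 1e9:.1f}bn")  # -> $0.5bn
```

Small changes in either assumption move the estimate materially, which is why we caution that there are many unknowns.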
The main debate for Marvell is in CY26, when we should see revenue contribution from MSFT and possibly the loss of Trainium 3 to Alchip. We forecast ASICs conservatively at US$3.5bn in CY26, modeling a more modest sequential y/y increase as we see a wide range of outcomes. Custom silicon has significant optionality and in general is very difficult to forecast beyond the shorter-term time horizon. The success rate of custom silicon in AI has been mixed at best so far, in our view, and we are reluctant to assume that every project can meet its potential.
Taiwan's ASIC vendors (Charlie's view)
We do not believe there are many players that can design 3nm or 2nm AI accelerators and networking chips. Based on our observations, major US hyperscalers will likely want to keep working with US design service vendors such as Broadcom and Marvell, while adding Asian vendors into the mix to keep costs competitive. We expect Alchip to secure AWS’s 3nm project at the expense of Marvell in 2025. On the Google TPU competition, we think MediaTek has secured Google's v7 at 3nm, which focuses largely on training, while we believe Broadcom could do another generation of TPU focusing on inference during 2026-2027. On the other hand, GUC/Egis/Novatek/Realtek have relatively less experience and could be considered new to this field; we will continue to monitor their progress.
Japanese ASIC vendor - Socionext (covered by Kazuo Yoshikawa)
During its October earnings call, Socionext management stated that they had formally signed contracts and had started developing a 3nm ASIC for a North American data center operator as well as an HPC 5nm ASIC for an Indian C-DAC (Centre for Development of Advanced Computing). Management stated that the lifetime value (LTV) of the North American data center contract is more than US$500mn but less than US$1bn (note: LTV is calculated at US$1 = ¥100), and mass production is expected to start in FY3/27 after about one year of development. While some sales decline is likely in 2025, we expect re-acceleration of growth in 2026, as volume production of automotive 5nm ASICs as well as 3nm ASIC for the North American data center operator ramp up. We also believe the North American data center design win could lead to recognition of Socionext's track record by other customers and SoC ecosystem partners, which could open up increased opportunities to participate in future data center design wins.
Exhibit 33: Competition between US/Asia ASIC service providers on major ASIC projects

Source: Company data, Morgan Stanley Research
3. EDA (covered by Lee Simpson)
The electronic design automation (EDA) market is dominated by two US-listed players – Cadence and Synopsys – which together hold 70%+ share. They supply the chipmaking industry with the design tools (software and hardware) necessary for managing the chip design flow and are an integral part of the design and manufacture process. Their sales growth has been driven in recent years by strong momentum in AI chip development both in the cloud and now at the edge. The companies’ tools are becoming popular with leading GPU makers (Nvidia, Intel, AMD) and the new AI ASICs makers (including AWS, META, Google and Microsoft) alike. Indeed, the latter have been heavy users of software tools from both players as they lean into making chips for AI/HPC. We believe AI-related design projects including Rubin and Rubin Ultra at Nvidia, the recent Artemis project at Meta and work on companion CPUs (including Microsoft Cobalt and Google Axion) have been recent enablers of EDA sales growth.
Our EDA names saw volatile share price reactions this year as the strong top-line momentum seen across 2022-23 slowed into 1H24, not because design momentum in AI had stalled but because of cyclical headwinds in the broader semis market. Recovery in core EDA alongside momentum from the fast-emerging systems and analysis market, IP and even new hardware offerings (including into China) are key growth drivers for 2025 at both Cadence (CDNS) and Synopsys (SNPS).
Recovering into 2025: Core EDA visibly slowed through 1H24 from a y/y growth rate of c.15% in 2023 to just c. 3% in 1H24. Momentum had slowed and the market fretted about slowing design activity, single customer vulnerability and weak edge AI traction, yet we think much of this was cyclical. The market slowdown in semis was driven mainly by an inventory correction in industrials and a weak consumer spend environment. Together, this brought about a notable dip in R&D as chipmakers sought to match costs to declining sales – somewhat typical at a cycle trough. As such, we saw a pullback in spend at the EDA names where sales are often a function of R&D. SNPS weathered the storm better given its new product cycles (Zebu, Haps and synopsys.AI), yet the signs of slowing spending in the underlying business (core EDA) were evident at both businesses.
Exhibit 34: Y/Y growth of total EDA revenue, SNPS core EDA revenue, CDNS core EDA revenue, and semiconductor R&D

Source: Refinitiv Eikon, company data, and Morgan Stanley Research
Exhibit 35: CDNS quarterly Y/Y growth by segment

What has been popular with AI chip designers:
- We think with Synopsys there has been broad take-up of late of its Fusion compiler for verification work and use of the suite of Synopsys.ai tools as well (in the synthesis stage mainly), particularly where we have seen faster time-to-market gains. At CDNS, we have seen a lot of momentum in their systems business (SD&A) driven by broad take-up of their tools for AI chip designs – both with GPU makers and elsewhere. This includes use of the Allegro X advanced packaging platform and the Integrity systems planner for use with leading edge AI chips.
- We have also seen growing interest in the use of new hardware offerings from both players for emulation and prototyping. From SNPS we have seen growing use of their HAPS and ZeBu products. ZeBu (upgraded in April 2023), in particular, has seen growing use given its ability to stand up (validate) new software running on large complex chip designs. We also note growing take-up of Cadence’s new Palladium Z3 emulator and Protium X3 prototyping tools (launched April 2024) by AI/HPC chipmakers, and we expect momentum to continue deep into next year at least for these.
- We also note the adoption of Cadence’s new Millennium platform by lead partner Nvidia. The large GPU maker was lead development partner on this new mixed hardware/software tool design, and we see first systems shipping in 2H24 with designs for other GPU makers to follow next year. This tool specializes in handling the multi-physics in system design and analysis – especially those arising from computation fluid dynamics issues in 3DIC designs. Our note from May 2024 explains more about the emerging systems design challenges and opportunities for the EDA names (link).
- Elsewhere, we have seen growing use of IP from the EDA names, particularly IP relevant in memory interface technologies and for networking across servers in data centres (typically PCIe).
Exhibit 36: A simple chip design flow

Source: Morgan Stanley Research
4. Wafer and CoWoS supply chain (covered by Charlie Chan)
We expect TSMC to re-rate further, and if strong AI demand is sustained, we cannot rule out potential revenue upside for the company in 2024. We think alternative AI semis (such as AI ASICs) will also proliferate, which positions TSMC well in terms of future AI semi demand and competition. Whether ASICs or GPUs win, the competition will drive AI chip cadence and spec enhancement, which increases demand from leading edge production. We believe AI ASICs will become a more significant revenue contributor, at nearly 25% of TSMC's AI revenue by 2027e.
We believe AI semi revenue in 2025 should be supply-driven, as TSMC’s CoWoS capacity is tight. We are OW ASE on the oS (on Substrate) and chip probing opportunity. CoW is a minor growth driver in 2025. ASE's management restated its guidance for advanced packaging and testing revenue to exceed US$1bn in 2025, or around 10-15% of 2025 ATM revenue.
We are OW AllRing, as CoWoS accounts for 60-70% of its equipment revenue in 2024 and 2025. TSMC has not yet finalized its 2026 CoWoS capacity plan, as it is too early to judge demand. But if capacity exceeds 120kwpm by 2026, that should make 2026 another strong growth year for AllRing.
5. Testing
Based on the current AI ASIC and GPU business environments, we update our view on the tester market, which we now expect to grow 24% in 2025, outpacing the WFE growth rate. We focus on Micronics Japan, the world’s top DRAM probe card maker, and Advantest, which has a high market share in GPU testers.
In Semiconductor Production Equipment: Still Bullish on the Test Market (2 Dec 2024), we addressed two market debates: "Will computing test growth slow down given pull-forwards in 2025?" and "Will DRAM test grow in 2025 given the market slowdown?". While we acknowledge that 2024 has seen some pull-forward of demand from 2025, especially for GPU testers, we maintain our positive view on both computing and DRAM tests in 2025. We forecast computing test to grow 36% y/y and GPU test to grow 33% in 2025, following 233% y/y growth in computing test in 2024, and we forecast DRAM test to grow 12% as test intensity per GB decline is offset by total industry GB growth.
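To make the base effect behind these growth rates explicit, a short sketch indexing computing-test demand to 2023 = 100 (all growth figures are the report's forecasts):

```python
# Index computing-test demand to 2023 = 100 to show the base effect of
# 233% y/y growth in 2024 followed by a forecast 36% y/y in 2025.
base_2023 = 100
computing_2024 = base_2023 * (1 + 2.33)       # +233% y/y in 2024
computing_2025 = computing_2024 * (1 + 0.36)  # +36% y/y forecast for 2025

print(f"2024 index: {computing_2024:.0f}")    # -> 333
print(f"2025 index: {computing_2025:.0f}")    # -> 453
```

In other words, the slower 2025 percentage still sits on a base more than three times the 2023 level, which is why we maintain our positive view despite some pull-forward.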
KYEC (OW) is the key proxy for AI in our coverage. We believe the whole final testing for Blackwell will stay at KYEC in 2025 with B200/300 (dual-die version) shipments potentially reaching ~5mn units in 2025 based on TSMC's CoWoS-L capacity. We note that Blackwell (3x longer than Hopper) and MI355 (2x) all require longer testing times. The new Advantest tester's lead time has expanded from 3 to 6 months owing to the strong demand. Longer term, some AI ASICs, such as AWS's 3nm AI accelerators, may start burn-in tests, according to our supply chain checks.
6. HBM
The memory sector faces: 1) the memory cycle entering a downturn; 2) HBM competition increasingly likely from 2025; and 3) Chinese competition already affecting mainstream DDR5 pricing. We cannot see any scenario where all three negatives would play out and the stocks still outperform (Memory – Addressing Three Key Debates).
We prefer Samsung Electronics over SK hynix in the ASIC secular growth trend given that perceptions of "HBM shortages" are backward-looking: Clearly, SK hynix is in a league of its own today and showing the best idiosyncratic growth/returns, but can this last? In our view, competition is set to heat up throughout 2025 with significant excess capacity already built. Samsung's ability to qualify for Nvidia is no longer a binary event; it is catching up and will be highly significant in 2025 given the significant implications for both the AI supply chain and commodity DRAM markets. We prefer Samsung Electronics as a key beneficiary of the trend of growing demand for ASICs as the share leader in the non-NVDA HBM market, with AMD and AWS its key customers.
Who are least preferred then?
1) AMD and Intel if their merchant AI accelerators get marginalized.
2) Mature node foundries without significant AI exposure.
Exhibit 37: Order of preference – global AI semiconductors

Source: FactSet, Morgan Stanley Research (e) estimates
08 Our base case for global cloud AI semi TAM
Cloud capex remains robust from major CSPs
Based on our US Hardware team's (led by Erik Woodring) research, total hyperscaler capex is now expected to hit US$261bn in 2024, up 51% Y/Y, 5 points faster than our growth expectations ahead of 3Q24 earnings season, with total cloud capex for CY25 now pointing to US$289bn of spending, up 11% Y/Y (vs. +8% Y/Y prior to earnings). Overall, our cloud capex tracker now points to the top 10 global cloud players collectively spending nearly US$550bn on capex in 2024 and 2025, 8% more than what this cohort spent across the prior three years combined.
During the 3Q24CY earnings call, Microsoft guided for capex in the December quarter to increase sequentially and for FY25 capex to be above FY24 levels, driven by cloud and AI demand. Google guided for capex in the December quarter to be flat Q/Q, with CY25 capex increasing Y/Y but the growth magnitude not as significant as in CY24 (which is expected to grow 59% Y/Y). Meta narrowed its 2024 capex guidance range to US$38-40bn (vs. US$37-40bn prior), up 39% Y/Y at the midpoint, despite September quarter capex coming in below consensus estimates, while expecting a "significant" acceleration in infrastructure spend in CY25. Lastly, Amazon guided CY24 capex of US$75bn and signaled continued growth in 2025 to support the growing need for AWS infrastructure driven by strong demand for Gen AI and non-Gen AI workflows. Based on all the recent comments, AI remains the key focus area for these hyperscalers, which we think could support our theses on the AI semi market, and also the nonstop expansion in the overall semi supply chain.
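The tracker figures quoted above can be cross-checked with simple arithmetic (all inputs are the report's estimates):

```python
# Hyperscaler capex tracker cross-check (US$bn, report estimates).
capex_2024 = 261
capex_2025 = 289

yoy_2025 = capex_2025 / capex_2024 - 1
print(f"2025 Y/Y growth: {yoy_2025:.0%}")    # -> 11%

combined = capex_2024 + capex_2025
print(f"2024+2025 combined: ${combined}bn")  # -> $550bn
```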
Exhibit 38: Our cloud capex tracker now points to C3Q24 capex growth of 60% Y/Y, above the 58% Y/Y expected before last week's 3Q earnings, while 4Q24 capex growth of 57% Y/Y is notably above the 41% Y/Y expected before the 3Q earnings season

Source: Company data, Morgan Stanley Research estimates. Note: cloud capex includes capex from Alphabet, Amazon, Microsoft, Meta Platforms, Tencent, Baidu, Apple, IBM, and Oracle. Forward estimates are Morgan Stanley estimates for Tencent and Baidu and consensus estimates for the rest.
Exhibit 39: Our cloud capex tracker now points to 51% Y/Y growth in 2024, above the 46% Y/Y previously forecast, with 2025 capex growth also accelerating to 11% Y/Y (vs. 8% Y/Y previously)

Source: Company data, Refinitiv, Morgan Stanley Research estimates, irs.gov. Note: cloud capex includes capex from Alphabet, Amazon, Microsoft, Meta Platforms, Tencent, Baidu, Apple, IBM, and Oracle. Forward estimates are Morgan Stanley estimates for Tencent and Baidu and consensus estimates for the rest.
Exhibit 40: Our cloud capex tracker estimates 2024-2025 cloud capex of nearly US$550bn, roughly US$40bn more than the prior three years combined

Source: Company data, Refinitiv, Morgan Stanley Research. Note: cloud capex includes capex from Alphabet, Amazon, Microsoft, Meta Platforms, Tencent, Baidu, Apple, IBM, and Oracle. Forward estimates are Morgan Stanley estimates for Tencent and Baidu and consensus estimates for the rest.
Exhibit 41: Our supply chain data-driven bull case assumes the cloud AI semi TAM could grow to US$235bn in 2025

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 42: Hyperscaler commentary in 3Q24

Source: Company data, FactSet, Morgan Stanley Research.
Health checks on top four CSPs' AI capex sustainability
We think cash flow analysis could support our expectations for ongoing rising capex from large cloud service providers. Morgan Stanley Research forecasts that the top four US hyperscalers will generate US$168bn in operating cash flow in calendar 2025. Thus, we believe these companies have the capability to keep investing in data centers for AI purposes.
We also look at depreciation as a percentage of total expenses. In past years, given heavy investment in data centers, depreciation as a share of these hyperscalers' total expenses has risen substantially (from 3-7% in 2012 to 5-10% in 2023). We expect depreciation to continue to grow amid rising capex from the top four US hyperscalers, reaching 6-14% in 2025e.
We also see average AI capex/EBITDA at around 40% in 2024, suggesting that hyperscalers' pockets are still deep enough for further spending.
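A minimal sketch of the sustainability check described above. The aggregate cash flow figure and capex/EBITDA ratio are the report's estimates; the US$120bn capex input and the helper function are purely illustrative assumptions:

```python
# Illustrative capex-sustainability check for the top four US hyperscalers.
op_cash_flow_2025e = 168e9   # forecast aggregate operating cash flow, CY2025 (US$)
capex_to_ebitda = 0.40       # ~40% average AI capex/EBITDA in 2024 (report estimate)

def cash_flow_cover(ocf: float, capex: float) -> float:
    """Operating cash flow divided by capex; >1x means capex is self-funded."""
    return ocf / capex

# Hypothetical example: US$120bn of aggregate AI capex against that cash flow pool
print(f"OCF/capex cover: {cash_flow_cover(op_cash_flow_2025e, 120e9):.2f}x")  # -> 1.40x
```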
Exhibit 43: Capex-to-EBITDA ratios should stay stable at around 40%

Source: Company data, Morgan Stanley Research (E) estimates
Exhibit 44: Capex intensity of the top four US hyperscalers remains stable

Source: Company data, Morgan Stanley Research (E) estimates
Exhibit 45: Depreciation has become a larger portion of expenses in recent years

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 46: Rising operating cash flow should continue to support hyperscalers' capex growth

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 47: Our cloud capex tracker estimates capital intensity will remain at a 10-year-plus high of 12.0% in 2025, in line with 2024 levels

Source: Company data, Refinitiv, Morgan Stanley Research estimates. Note: cloud capex includes capex from Alphabet, Amazon, Microsoft, Meta Platforms, Tencent, Baidu, Apple, IBM, and Oracle. Forward estimates are Morgan Stanley estimates for Tencent and Baidu and consensus estimates for the rest.
Exhibit 48: At the company level, Meta, Oracle, Microsoft, and Google are expected to spend the most on capex as a percentage of revenue in 2024 and 2025

Source: Company data, Refinitiv, Morgan Stanley Research estimates, irs.gov. Note: cloud capex includes capex from Alphabet, Microsoft, Meta Platforms, Tencent, Baidu, Apple, IBM, and Oracle. Forward estimates are Morgan Stanley estimates for Tencent and Baidu and consensus estimates for the rest.
Based on the latest assumptions from SIA, our base case assumption expects total semiconductor revenue to reach US$1 trillion by 2030e, of which we think AI will account for 48%, growing from 15% in 2023 and 26% in 2024e. (Exhibit 52)
Breaking down the details, we see cloud AI computing accounting for the majority of consumption, with the market growing from US$44bn in 2023 to US$343bn in 2030e, a 34% CAGR over 2023-30e that would outpace the entire AI semi market. In the near term, we expect general-purpose AI (including GPU, FPGA, and merchant AI, from vendors such as Nvidia, Intel, and AMD) to hold most of the AI computing market, at around 90% share in 2024e. Ongoing cost reduction is also important to raising the ROI of AI data centers; we therefore think custom AI chips will bring better efficiency and lower costs over the long term. Although custom ASICs will only grow from 11% market share in 2024e to 15% in 2030e, we think their growth rate will outpace GP GPU solutions.
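The stated 34% CAGR can be verified directly from the endpoints (both TAM figures are the report's estimates):

```python
# CAGR check for the cloud AI computing TAM forecast.
tam_2023 = 44e9      # US$44bn in 2023
tam_2030 = 343e9     # US$343bn in 2030e
years = 2030 - 2023  # 7 compounding periods

cagr = (tam_2030 / tam_2023) ** (1 / years) - 1
print(f"Implied CAGR, 2023-30e: {cagr:.0%}")  # -> 34%
```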
On the other hand, within cloud AI computing, inference AI is the natural next step after big models are trained, so we believe this segment's growth rate will improve continuously into the next few years. In 2024, we expect training to account for around 60% of the total AI semi market, while inferencing is likely to increase from 40% in 2024e to 80% in 2030e, as we believe there will be more AI applications across different end-sectors and markets.
Exhibit 49: Cloud AI semi TAM grows at a 34% CAGR over 2023-2030. Custom AI chips (ASICs) to outpace general-purpose chips over the long term.

Source: Gartner, Morgan Stanley Research (e) estimates
Exhibit 50: Although custom ASICs' market share only grows from 11% in 2024 to 15% in 2030, we think their growth rate will outpace GP GPU solutions

Source: Gartner, Morgan Stanley Research
Exhibit 51: Our base case forecast for AI semi revenue (including cloud and edge AI), a 30% CAGR over 2023-2030e

Source: Gartner, Morgan Stanley Research
Exhibit 52: AI semis in 2030: 48% of global semiconductors

Source: Gartner, Morgan Stanley Research
09 What AI semis mean for TSMC's long-term revenue growth and capex
TSMC should re-rate further, and if strong AI demand is sustained, we cannot rule out potential revenue upside for the company in 2024. We think alternative AI semis (such as AI ASICs) will also proliferate, which positions TSMC well for future AI semi demand and competition.
Compared to Nvidia and AMD, which focus on the “chip idea,” enablers (or the so-called AI semi supply chain) provide the tools and manufacturing support required for AI chips. Enablers can support both the pioneer (i.e., Nvidia) and its challengers, so if the AI semi pie gets bigger and more competition arises, the enablers may benefit.
TSMC is the major enabler here, but we also wanted to include the surrounding AI semi supply chain. Today, most AI computing semis are manufactured at TSMC's 4nm or 7nm nodes. Moreover, TSMC is the dominant supplier of AI semi advanced packaging (CoWoS, or chip-on-wafer-on-substrate), which can significantly enhance AI semis' performance by connecting computing chips with HBM chips.
How could TSMC benefit from accelerating AI semi demand?
TSMC defines AI semis as AI server CPUs, along with AI accelerators performing training and inferencing; this does not include networking, edge, or on-device AI. AI semis should account for a mid-teens percentage share of its total 2024 revenue.
Many investors believe most of TSMC's AI semi foundry service caters to Nvidia, but we estimate that Nvidia's AI GP GPU revenue will only actually contribute ~70% of TSMC's AI semi revenue in 2025e. We believe AI ASICs will become a more significant revenue contributor, at nearly 25% of TSMC's AI revenue, by 2027e, vs. ~65% from Nvidia GPUs. Our analysis is as follows:
1. AI GPU revenue contribution to TSMC
Last year, we forecast that only around 25% of Nvidia's revenue contribution to TSMC would come from AI. This year, the rising percentage mostly reflects strong demand for both Hopper and Blackwell, as well as higher-than-expected pricing for CoWoS and testing (including the probe pad cost from TSMC). We therefore believe TSMC can generate US$6.8bn of revenue from Nvidia's AI GPUs in 2024e, accounting for 8% of TSMC's overall revenue this year.
2. AI ASIC revenue contribution
For AI ASICs, we expect them to account for 4% of TSMC's 2024e total revenue (including CoWoS and testing), equating to ~US$3.5bn, mostly from Google TPU v5 and AWS Trainium/Inferentia 2. Into 2025, we expect this revenue to grow 70%, to around US$6.7bn of revenue contribution to TSMC.
3. CoWoS revenue contribution
Advanced packaging (CoWoS and SoIC, excluding InFO), which is heavily used by AI semis, may generate more than US$6.5bn revenue for TSMC in 2024e based on 32kwpm of CoWoS capacity and more than double in 2025. Please see Exhibit 56 for a detailed explanation of advanced packaging later in this report.
4. AI server CPU
For AI server CPUs, Nvidia's Grace CPU accounts for the majority of TSMC's AI CPU production and will make a larger contribution in 2025. We expect it to contribute US$8bn/114bn of front-end wafer revenue in 2024 and 2025, respectively. We then expect the revenue contribution from AI CPUs to double in 2026, accounting for ~1% of TSMC's total AI-related revenue that year.
Exhibit 53: We estimate AI semis will account for ~34% of TSMC's 2027e revenue

Source: Company data, Morgan Stanley Research estimates
2024 AI semi foundry wafer revenue calculation
Exhibit 54: Leading-edge wafer consumption by customer, 2024e

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 55: Leading-edge wafer consumption by customer, 2025e

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 56: AI computing wafer consumption, 2024e – up to US$5.1bn in revenue

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 57: AI computing wafer consumption, 2025e – up to US$14bn in revenue

Source: Company data, Morgan Stanley Research (e) estimates
Given the growth from all AI semi segments, we estimate cloud AI semis will account for ~34% of TSMC's 2027e revenue, assuming a 24% revenue CAGR for TSMC over 2023-27e. We believe HPC (including AI and CPU computing) will increase to 55-60% of TSMC's total revenue (currently 51%). Thanks to AI and HPC, we believe TSMC's revenue growth will hold at 22-27% Y/Y in 2025e and 2026e.
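To make the compounding explicit, a sketch under stated assumptions: the ~US$69bn 2023 revenue base is an illustrative assumption (not from this report), while the CAGR and AI share are the report's estimates:

```python
# Implied 2027e TSMC revenue and AI portion under the report's assumptions.
rev_2023 = 69e9          # assumed 2023 revenue base (illustrative, ~US$69bn)
cagr = 0.24              # 2023-27e total revenue CAGR (report estimate)
ai_share_2027 = 0.34     # cloud AI semis as a share of 2027e revenue (report estimate)

rev_2027 = rev_2023 * (1 + cagr) ** 4
print(f"2027e revenue: ${rev_2027 / 1e9:.0f}bn")                  # -> $163bn
print(f"AI portion:    ${rev_2027 * ai_share_2027 / 1e9:.0f}bn")  # -> $55bn
```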
Exhibit 58: Custom chips (ASICs) will be the major growth driver of TSMC's AI revenue

Source: Company data, Morgan Stanley Research estimates
Exhibit 59: TSMC: long-term growth drivers – AI and HPC to surpass mobile computing

Source: Company data, Morgan Stanley Research estimates
Exhibit 60: TSMC: HPC becomes TSMC's largest revenue contributor

Source: Company data, Morgan Stanley Research estimates
Exhibit 61: AI semis could be TSMC's next growth driver

Source: Company data, Morgan Stanley Research estimates
Exhibit 62: TSMC's one-year forward P/E: not overvalued as a key AI semi enabler

Source: Company data, Refinitiv, Morgan Stanley Research estimates
AI semis consume significant CoWoS and testing capacity while extending Moore's Law
Chip scaling (Moore’s Law), advancements in packaging (CoWoS, SoIC), faster memory interface (HBM 3), and design customization (ASIC design service) will together help generative AI’s proliferation.
We think fast-growing demand for computing (e.g., GPT model training) and TSMC's good execution of process improvement are the key reasons why Moore's Law (chip scaling) can be pushed further.
Dr. Lisa Su, CEO of AMD, thinks that Moore's Law is still relevant (link). She has stated, “I think Moore’s Law has slowed down. We have to do other things to continue to get that performance and that energy efficiency. We developed chiplets – that was a big step. Now we’ve done 3-D packaging. We believe there are a number of other innovations to come.”
Jensen Huang, CEO of Nvidia, stated during GTC 2013 that "Moore's Law is dead." Then, on October 23, 2018, John Hennessy, Chairman of Alphabet, said, "It's the end of Moore's Law as we know it." Yet a decade later, in 2023, Nvidia's latest H100 GPU migrated to 4nm, from the A100 at 7nm. We expect Nvidia to speed up and tape out its 3nm Rubin chip by 2025 (with mass production in mid-2026), as custom AI chip (ASIC) vendors will migrate to 3nm in 2026 to pursue better performance than Nvidia. AI semis are helping to extend Moore's Law, which is positive for TSMC's pricing power.
Exhibit 63: Future transformer AI models may require much stronger computing performance

Exhibit 64: Breakdown of TCO components for a typical rack in a high-availability 2N data center

Source: Tri-Paragon, Morgan Stanley Research
Exhibit 65: TSMC delivers 15-20% power efficiency gains per node generation, helping save electricity costs

Source: Company data, Morgan Stanley Research estimates
Exhibit 66: TSMC's 3nm and 2nm lead industry peers in logic transistor density

Source: Company data, Morgan Stanley Research estimates
Appendix 1: ASIC contribution to global AI computing capacity
ASIC as a percentage of global AI computing capacity and power consumption in 2025
With our latest AI chip shipment forecasts, we also estimate the share of additional computing power contributed by AI ASIC chips in 2025 and the corresponding energy consumption. We expect ASIC chips to account for 11% of the additional computing power in 2025 globally, vs. 9% in 2024.
In terms of power consumption, however, the share of AI computing energy consumed by AI ASIC chips will decrease slightly in 2025, to 20% from 24% in 2024. ASIC chips may consume more power than their computing contribution would suggest. But while ASIC chips still lag AI GPUs in performance-per-watt, we believe the gap is narrowing.
Our power consumption calculation rests on these assumptions: 1) all chip shipments happen at the end of the year; 2) a 20% utilization rate; and 3) all chips operate for the entire year.
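As a minimal sketch, the assumptions above can be turned into a back-of-the-envelope energy estimate. The unit counts and TDP figures below are illustrative placeholders, not our published estimates:

```python
# Hypothetical fleet: units shipped (millions) and TDP (watts) per chip family.
fleet = {
    "ai_gpu":  {"units_m": 6.0, "tdp_w": 1000},
    "ai_asic": {"units_m": 3.0, "tdp_w": 400},
}

UTILIZATION = 0.20     # assumption 2: 20% average utilization
HOURS_PER_YEAR = 8760  # assumption 3: chips run for the entire year

# Annual energy per family in TWh: units * TDP * utilization * hours.
energy_twh = {
    name: c["units_m"] * 1e6 * c["tdp_w"] * UTILIZATION * HOURS_PER_YEAR / 1e12
    for name, c in fleet.items()
}
asic_share = energy_twh["ai_asic"] / sum(energy_twh.values())
print({k: round(v, 1) for k, v in energy_twh.items()})
print(f"ASIC share of AI compute energy: {asic_share:.0%}")
```

With these placeholder inputs, ASICs contribute a smaller share of energy than of unit shipments because of their lower per-chip TDP; the same mechanics drive the shares shown in Exhibits 69 and 70.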
Exhibit 67: Additional computing power by vendor, 2024

Source: Company data, Morgan Stanley Research estimates
Exhibit 68: AI ASICs expected to account for 11% of total additional computing power in 2025

Source: Company data, Morgan Stanley Research estimates
Exhibit 69: AI computing (CPU+GPU) power consumption by vendor (MW)

Source: Company data, Morgan Stanley Research estimates
Exhibit 70: AI ASICs' share of power consumption expected to decline in 2025

Source: Company data, Morgan Stanley Research estimates
Exhibit 71: ASIC as a percentage of the global cloud AI semi budget, 2025e (applying a 50% markup to ASIC revenue)

Source: Gartner, Morgan Stanley Research (e) estimates
Appendix 2: Cloud AI semi Bull Case based on TSMC's CoWoS capacity
TSMC expanding CoWoS capacity to 80kwpm by 2025, implying larger Nvidia GPU shipments
We believe AI semi revenue in 2025 should be supply-driven, as TSMC's CoWoS capacity is tight. Our recent industry checks suggest that cloud AI semi customers such as Nvidia are willing to prepay to secure TSMC's CoWoS capacity in 2025; prepayment is a signal of customers' confidence in demand. During its 3Q24 earnings call, TSMC assured investors that AI demand is real, citing the benefits of adopting AI in its own fab operations and R&D: a 1% productivity gain is worth almost US$1bn to TSMC, a tangible ROI benefit. In its view, AI demand is real and is just at the beginning. Almost all AI innovators are working with TSMC, and it believes it has the deepest and widest understanding of demand of any player in the industry. TSMC suggested that it may double, or more than double, its CoWoS capacity Y/Y again in 2025, after more than doubling it Y/Y in 2024.
It is difficult to pinpoint when AI capex might reach an inflection point, but supply chain data points are still positive, in our view. 2026 may bring a demand/supply balance, which we believe is why TSMC has yet to finalize its 2026 CoWoS capacity plan (it is too early to judge demand). TSMC has not issued a concrete forecast for CoWoS capacity beyond 90kwpm in 1Q26, so we expect only minor equipment supply schedule changes given TSMC's limited clean room space. We still assume around 80kwpm of CoWoS capacity will be built by 4Q25. We expect mid-2025 to be when TSMC places new 2026 orders with equipment vendors, subject to the sustainability of AI capex.
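To illustrate how a supply-driven estimate of this kind is built, the sketch below converts CoWoS wafer capacity into an implied annual GPU output (the logic behind Exhibit 75). The capacity-share and packages-per-wafer figures are hypothetical placeholders, not our estimates:

```python
# Supply-driven GPU shipment sketch from CoWoS capacity.
cowos_wpm = 80_000     # CoWoS wafers per month by 4Q25 (from the text)
nvidia_share = 0.6     # hypothetical share of capacity booked by Nvidia
pkgs_per_wafer = 29    # hypothetical good CoWoS packages per 12" wafer
months = 12

# Implied annual GPU output in millions of units.
implied_gpus_m = cowos_wpm * nvidia_share * pkgs_per_wafer * months / 1e6
print(f"Implied annual GPU output: ~{implied_gpus_m:.1f}m units")
```

Because each package carries one GPU die plus HBM stacks on an interposer, CoWoS capacity, GPU shipments, and HBM demand (Appendix 3) all scale off the same wafer count.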
Exhibit 72: Global CoWoS capacity expansion by year and vendor

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 73: Global CoWoS consumption by customer

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 74: What is CoWoS? Chip-on-Wafer (interposer)-on-Substrate

Exhibit 75: CoWoS-implied Nvidia AI GPU shipments and their revenue contribution to TSMC

Source: Company data, Morgan Stanley Research (e) estimates
Framework of our Bull Case for cloud AI semis (supply-driven)
Previously, our global cloud AI semi bull case revenue was US$153bn for 2025e; based on current supply chain visibility, we raise our bull case forecast to US$235bn for 2025e. Our bull case now reflects the cloud AI semi market growing at a 43% CAGR during 2023-2030e. Highlights of our bull case assumptions for cloud AI semi capex:
- 2025 global cloud capex to reach US$400bn, including spending from both hyperscalers and sovereign AI projects.
- AI capex to be US$300bn.
- Breaking this down, US$235bn comes from AI semi chips, while the remaining US$65bn of AI capex goes to hardware components (e.g., thermal, power supply) and system integration, where we expect a relatively slower CAGR (Exhibit 62).
- Within the US$235bn of cloud AI semis for 2025e, around US$205bn is AI GPU production value, with c.US$30bn from merchant ASICs and other GPGPU solutions. This is also in line with our bottom-up analysis based on current CoWoS and AI wafer bookings for 2025e. (See bottom-up details in Exhibit 57.)
Exhibit 76: Our supply chain data-driven bull case suggests the cloud AI semi TAM could grow to US$235bn in 2025

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 77: Of the US$300bn in global AI capex, we expect US$235bn from cloud AI semi spending and the remaining US$70bn from system hardware spending

Source: Gartner, Morgan Stanley Research
Longer term, in our bull case, we expect the global cloud AI semi market to further reach US$405bn in 2027e and US$498bn in 2028e, in line with AMD's expectations at its recent Advancing AI event (see Joe's takeaway report). We see room for our bull case to play out if an increasing number of companies acquire a competitive edge and continue investing heavily in AI computing to enhance productivity, and if the schedule for AGI applications such as humanoid robots is pulled in early. Thus, a 43% CAGR over 2023-30e is possible, in our view, implying a >US$500bn market for cloud AI semis in 2030e. In our bull case scenario, we also see the global semi market expanding to US$1.2 trillion, with cloud AI semis accounting for ~45% of total spending; the incremental increases are also driven by AI investments.
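The CAGR arithmetic behind these scenario figures can be checked in a few lines. The 2023 base below is back-solved from the stated 43% CAGR and the ~US$550bn 2030 bull case in Exhibit 78; it is an inference, not a figure quoted in the text:

```python
def cagr(begin: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values, `years` apart."""
    return (end / begin) ** (1 / years) - 1

# Bull-case figures stated in the text (US$bn): 235 (2025e), 405 (2027e), 498 (2028e).
print(f"2025-27e implied CAGR: {cagr(235, 405, 2):.1%}")
print(f"2025-28e implied CAGR: {cagr(235, 498, 3):.1%}")

# Back-solve the 2023 base implied by a 43% CAGR to ~US$550bn in 2030.
base_2023 = 550 / 1.43 ** 7
print(f"Implied 2023 base: ~US${base_2023:.0f}bn")
```

The implied 2025-28e growth rate is slower than the headline 2023-30e CAGR, consistent with the front-loaded expansion the supply chain data suggest.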
In our bear case, we assume a lack of business models that can effectively monetize generative AI. It also assumes the cost-benefit of custom chips tracks below expectations, such that the ROI of AI investment cannot improve effectively, leading hyperscalers to slow or cease their investments in this market. We also bake in potential supply chain risks (e.g., component yield issues or shortages, and the energy consumption issues of AI computing), and assume a 19% CAGR, 2023-30e, in our bear case for cloud AI semi growth.
Exhibit 78: Bull case: custom AI ASIC design service revenue to grow further to ~US$22bn in 2025 (of US$235bn total cloud AI semis) and to ~US$50bn in 2030 (with total cloud AI semis growing to US$550bn)

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 79: Cloud AI semis: general-purpose AI vs. custom ASIC (bull case)

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 80: Our bull case expects the global cloud AI semi market to grow at a 43% CAGR over 2023-2030

Source: Gartner, Morgan Stanley Research
Appendix 3: Implied HBM consumption by TSMC's CoWoS capacity
We introduced 2025 AI semi wafer consumption forecasts in Global Technology: AI Supply Chain: dynamics ahead of Nvidia's print (18 Nov 2024).
We expect total HBM consumption by GPGPUs and ASICs to rise to 18bn Gb in 2025e, from 8bn Gb in 2024. Nvidia remains the largest consumer of HBM, accounting for 76% of total usage, up from 44% in 2024. Google remains the second-largest consumer of HBM, with AMD staying the third-largest in 2025e.
AI chip HBM demand calculation
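A minimal sketch of how such a bottom-up HBM demand estimate works: per-chip HBM capacity (in GB) times unit shipments, converted to Gb. The chip names, capacities, and volumes below are hypothetical placeholders, not our actual estimates:

```python
# Hypothetical inputs: per-chip HBM capacity (GB) and unit shipments (millions).
chips = {
    "vendor_A_gpu":  {"hbm_gb_per_chip": 192, "units_m": 6.0},
    "vendor_B_asic": {"hbm_gb_per_chip": 96,  "units_m": 2.0},
}

GB_TO_Gb = 8  # 1 GB = 8 Gb; HBM demand is quoted in Gb

total_gb_bn = 0.0
for name, c in chips.items():
    # Demand per chip family in billions of Gb.
    demand_gb_bn = c["hbm_gb_per_chip"] * c["units_m"] * GB_TO_Gb / 1000
    total_gb_bn += demand_gb_bn
    print(f"{name}: {demand_gb_bn:.2f}bn Gb")

print(f"Total HBM demand: {total_gb_bn:.2f}bn Gb")
```

Summing each vendor's shipments weighted by per-chip HBM content is how the consumption splits in Exhibits 81-82 are derived; rising per-chip capacity explains why 2025 demand roughly doubles even as unit growth is slower.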
Exhibit 81: HBM consumption, 2024e - Nvidia seen as the largest customer

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 82: HBM consumption, 2025e - Nvidia remains the largest customer

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 83: HBM TSV capacity looks set to double in 2025

Source: TrendForce, Morgan Stanley Research (e) estimates
Exhibit 84: 2025 HBM demand is nearly double the 2024 level

Source: TrendForce, Morgan Stanley Research (e) estimates
Exhibit 85: HBM consumption, 2024 - up to 8bn Gb

Source: Company data, Morgan Stanley Research (e) estimates
Exhibit 86: HBM consumption, 2025 - up to 19bn Gb

Source: Company data, Morgan Stanley Research (e) estimates