By 2030, the bull case is not "more PCs." It is AMD proving it can deliver a full, production-standard AI rack alternative at scale: silicon cadence (Instinct + EPYC), cluster-level reference designs, and enough software maturity and tooling that large buyers can multi-source without paying a large productivity tax. If that conversion happens, the revenue mix shifts structurally toward the data center (higher ASPs, longer programs, stickier platform decisions), and AMD earns a more durable premium than a typical cyclical CPU vendor. The additive opportunities (OPEX pods, enterprise marketplace/clearing layer, chiplet/IP licensing, security/verification attach) matter mainly as mix and predictability improvements rather than standalone giants, because they stabilize demand signals and improve customer time-to-scale. Against comparables, AMD's multiple plausibly remains below the category leader's but above "mid-cycle semis" if it becomes the default second platform for large-scale AI infrastructure procurement.