Thesis
The default AI compute, systems, and networking stack keeps compounding into a rack‑scale platform (CUDA/NVLink + Blackwell/Rubin + Spectrum‑X/Photonics + NIM/DGX Cloud). Demand continues to expand with power‑constrained, giga‑scale AI factories, but today's size and valuation premium compress the upside, leaving room for disciplined, non‑linear yet ultimately moderate multiple expansion by 2030.