If it keeps converting power and GPU supply into high-utilization, contracted AI cloud revenue, and then attaches higher-stickiness layers (managed inference outcomes, trusted workloads, and partner distribution), it can grow from selling "GPU-hours" into a platform-like neocloud, with its balance-sheet scale funding the compute cycle.