Comment Despite persistent worries that vast spending on AI infrastructure may not pay for itself, cloud providers, hyperscalers, and datacenter operators have continued to shovel billions of dollars into ever-larger GPU clusters.
Those who worry the world is spending too much, too fast on AI typically argue there is thin evidence that machine-learning investments in the LLM era are producing substantial revenue or profit, and they point to cautious corporate adoption. They also highlight DeepSeek's claim that it could have trained its V3 model in the cloud for a few million dollars, thanks to its efficient design, news that sent the value of AI-centric stocks slumping. In reality, the Chinese lab spent a pretty penny on its own on-prem GPU cluster to build the model, though it still appears to be a more lightweight operation than its Western rivals while being roughly as capable.
That all said, plenty of investors remain optimistic.
OpenAI’s $500 billion Stargate tie-up with SoftBank, Oracle, MGX, and others was a massive vote of confidence in demand for AI infrastructure.
In a release, Together AI claimed it had secured 200 megawatts of datacenter capacity, which it intends to fill with yet more of Nvidia’s flashy new Blackwell GPUs. Earlier this month the startup announced availability of Nvidia’s B200-based systems, and it is working to deploy a cluster of 36,000 GB200 GPUs in partnership with Hypertech.