It’s not electricity. It’s lifespan. If you train on the GPUs and peg them at 100% for weeks and months on end, as /u/fredandlunchbox said, physics starts to take over and degrade the device. Think of it like the rusting process.
In the non-GPU world, data center hardware sold by commodity vendors like Dell, HPE, etc. would come with contracted enterprise support and warranties of roughly 2 years, and that was for infrastructure just running databases, app servers, and web servers.
What companies currently appear to be doing is a stepped-wedge adjustment procedure: they start the GPUs off at extremely high loads (i.e. 90-100% utilization for 3-4 weeks at a time on >1 trillion parameter training runs). One batch of the supply is segmented off for pure inference (serving user requests and inputs), and the other batches are older hardware. Basically, after beating the *ish out of a GPU until it's about to start degrading, they rotate it down to a throttled inference-only workload running in a grid/array of other GPUs, each at ~80-90% capacity and systematically pulled off the grid for cooldown periods.
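For illustration, here's a minimal Python sketch of that rotation logic, treating the fleet as a simple list of cards and cycling roles each scheduling period. Every number and name in it (the 4-week training stint, the ~85% inference cap, the two cooldown slots) is an assumption made up for the sketch, not anything a vendor has published.

```python
from dataclasses import dataclass

# Sketch of the lifecycle described above: run cards hot for a long training
# stint, then demote them to a throttled inference grid that cycles individual
# cards out for cooldown. All thresholds below are illustrative assumptions.

TRAIN_WEEKS = 4          # ~3-4 week training runs at 90-100% load (assumed)
INFER_UTIL_CAP = 0.85    # inference grid throttled to ~80-90% (assumed)
COOLDOWN_SLOTS = 2       # cards pulled off the grid per period to rest (assumed)

@dataclass
class Gpu:
    gpu_id: str
    role: str = "training"       # "training" -> "inference" <-> "cooldown"
    weeks_trained: int = 0
    weeks_since_rest: int = 0

def rotate(fleet: list[Gpu]) -> None:
    """Advance one scheduling period and rotate roles across the fleet."""
    for gpu in fleet:
        if gpu.role == "training":
            gpu.weeks_trained += 1
            if gpu.weeks_trained >= TRAIN_WEEKS:
                gpu.role = "inference"       # demote after the training stint
                gpu.weeks_since_rest = 0
        elif gpu.role == "inference":
            gpu.weeks_since_rest += 1
        elif gpu.role == "cooldown":
            gpu.role = "inference"           # rested card rejoins the grid
            gpu.weeks_since_rest = 0

    # Pull the cards that have gone longest without a rest off the grid.
    serving = sorted(
        (g for g in fleet if g.role == "inference"),
        key=lambda g: g.weeks_since_rest,
        reverse=True,
    )
    for gpu in serving[:COOLDOWN_SLOTS]:
        gpu.role = "cooldown"

fleet = [Gpu(f"gpu-{i}") for i in range(8)]
for _ in range(8):
    rotate(fleet)
print({g.gpu_id: g.role for g in fleet})
```

The point of the sketch is just the shape of the lifecycle: hot training, then demotion to throttled serving with systematic rest periods, rather than running any one card flat-out until it dies.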
To be sure, though, a GPU that is trained on constantly (or even used for crypto mining) has, in the recent past, tended to start degrading within its first year.
From an investor perspective, Blackwells are an extremely fast-depreciating asset. It's like paying for a Ferrari to drive you across the country as many times as possible before you dispose of it. The idea is that whatever models the Blackwells ultimately train have greater value than the GPUs themselves. Also, the overall infra of the cabling, racks, networking, etc. adds operational capacity and scale for the firm (in essence a bet that their data center investments will continue to be necessary at that scale for the foreseeable future). Lastly, the thing I'm watching is innovation in the networking space. It's highly likely that within the next 2-3 years we see a major revolution in data center networking with photonics-based networks that can integrate with existing hyper-converged infra investments, likely being deployed within ~5 years across tier 1 and 2 DCs.
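To make the "fast-depreciating asset" point concrete, here's a back-of-the-envelope Python sketch comparing straight-line with double-declining-balance depreciation. The purchase price, salvage value, and 3-year useful life are made-up numbers for illustration, not actual Blackwell pricing or anyone's accounting policy.

```python
# Back-of-the-envelope depreciation sketch. All inputs are hypothetical.

PRICE = 40_000.0        # hypothetical per-GPU cost, USD (assumed)
SALVAGE = 4_000.0       # hypothetical secondary-market value (assumed)
LIFE_YEARS = 3          # short useful life if run hot on training (assumed)

def straight_line(price: float, salvage: float, life: int) -> list[float]:
    """Equal write-down each year."""
    annual = (price - salvage) / life
    return [annual] * life

def double_declining(price: float, salvage: float, life: int) -> list[float]:
    """Accelerated write-down: most of the value is gone in the first years."""
    rate = 2.0 / life
    book, charges = price, []
    for _ in range(life):
        charge = min(book * rate, book - salvage)  # never dip below salvage
        charges.append(charge)
        book -= charge
    return charges

print("straight-line:   ", straight_line(PRICE, SALVAGE, LIFE_YEARS))
print("double-declining:", double_declining(PRICE, SALVAGE, LIFE_YEARS))
```

With those assumed numbers, the accelerated schedule writes off roughly two-thirds of the card's value in year one, which is the flavor of depreciation curve the Ferrari analogy is getting at.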
HPE, the silent AI sleeper: $26-$28 🧲 price target
📍Major Government Contract: In late November 2025, HPE was awarded a US$931 million, 10-year contract by the US Defense Information Systems Agency (DISA) to modernize its data centers using the HPE GreenLake solution.
📍Expansion of Existing Partnerships: In late 2024, Barclays expanded its private cloud contract with HPE, making GreenLake Cloud a core pillar of its hybrid cloud strategy.
📍New Product Availability and Integrations: HPE recently announced a wave of new product integrations and availability timelines, which often precede or enable new customer contracts.
🌟HPE Alletra Storage MP X10000 Data Intelligence Nodes will be available in January 2026.
🌟HPE Zerto Software integration and Compute Ops Management will also be available in January and December 2025, respectively.
📍Focus on AI and Networking: HPE's recent acquisition of Juniper Networks and numerous AI-related product announcements, such as the HPE Private Cloud AI solution with Nvidia, position GreenLake for significant growth in the high-demand AI and networking sectors, which should drive future deals and growing Annual Recurring Revenue.
💵 Overall, with strong recent contract wins and a clear roadmap for new AI and networking offerings, the outlook suggests a high likelihood of additional new deals being announced in the near term.