AMD's ROCm support may soon extend beyond Linux to Windows, a move that would greatly benefit users and developers.
AMD's Vice President of AI Software Replies in the Affirmative When Asked About ROCm Support on Windows
The request for ROCm support on Windows has been pending for years, and users, especially developers, have been eagerly waiting for AMD to make its move. Several years ago, AMD promised to bring ROCm support to Windows, but progress has been limited: official support on Windows 10 and 11 began with ROCm 5.5.1.
The currently supported release is 6.2.4, but it covers only select models from a few product families: the AMD Instinct accelerators and a handful of Radeon GPUs, such as the Radeon RX 7900 XT and XTX. As a result, not everyone with a Radeon GPU can run ROCm on Windows, and the Radeon RX 9000 series isn't supported yet.
Yes
— Anush Elangovan (@AnushElangovan) March 7, 2025
Thankfully, Anush Elangovan, AMD's Vice President of AI Software, has replied in the affirmative when asked to bring ROCm support on Windows to more GPUs. His reply was brief, but it does indicate AMD's willingness to extend the software stack to more Radeon GPUs. At the moment, only a few Radeon GPUs are supported for ROCm on Windows, while on Linux you can easily use any RDNA 2 GPU.
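For readers wondering whether their machine has any ROCm tooling installed at all, a minimal sketch is to look for the stack's info utilities on the PATH. This assumes the commonly shipped tool names, `rocminfo` on Linux and `hipInfo` from the Windows HIP SDK; treat those names as assumptions for your particular install:

```python
import shutil

def rocm_tools_present() -> bool:
    # Look for ROCm/HIP info utilities on the PATH. "rocminfo" ships
    # with ROCm on Linux; "hipInfo" ships with the HIP SDK on Windows.
    # These tool names are assumptions and may vary by ROCm version.
    return any(shutil.which(tool) for tool in ("rocminfo", "hipInfo", "hipinfo"))

if __name__ == "__main__":
    if rocm_tools_present():
        print("ROCm/HIP tooling found on PATH")
    else:
        print("No ROCm/HIP tooling found on PATH")
```

This only detects the tooling, not whether a given GPU is actually on AMD's supported list for Windows.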
While GPUs slower than the RX 7900 GRE can sometimes be made to work on Windows, compatibility and performance issues are almost guaranteed. With the RX 7900 GRE being the cheapest officially supported GPU on Windows, not many users can leverage the powerful tools ROCm has to offer. Even with officially supported GPUs, you may see crashes, driver timeouts, script hangs, or app freezes in various applications.
The issues aren't limited to one or two; they are numerous and can be quite complex at times. If AMD resolves them and improves ROCm support on Windows, owners of older GPUs will be able to run deep-learning workloads, and support for the latest RDNA 4 GPUs would unlock considerable potential for users and developers on Windows. Still, full ROCm support on Windows may take quite some time, especially since AMD hasn't announced much about it.
CUDA isn’t really the moat people think it is, it is just an early ecosystem. tiny corp has a fully sovereign AMD stack, meaning we have rewritten the full stack from the hardware to PyTorch (with the exception of LLVM), and soon we’ll port it to the MI300X. You won’t even have to use tinygrad proper to use it, tinygrad has a torch frontend now.
Either NVIDIA is super overvalued or AMD is undervalued. If the petaflop gets commoditized (tiny corp’s mission), the current situation doesn’t make any sense. The hardware is similar, AMD even got the double throughput Tensor Cores on RDNA4 (NVIDIA artificially halves this on their cards, soon market pressure will force them not to).
I’m betting on AMD being undervalued, and that the demand for AI has barely started. With good software, the MI300X should outperform the H100.
In related news, tinygrad will be receiving two MI300X boxes directly from AMD, a notable development for the AI segment.
AMD 💕 @__tinygrad__
we are looking forward to working closely with @__tinygrad__ to help commoditize the petaflop https://t.co/LEjsUaPWHV
— Anush Elangovan (@AnushElangovan) March 7, 2025
The developer states that if AMD can nail the software side, there should be no reason for NVIDIA to be worth 16 times more than AMD.