
AI Blue Book: Adding AMD Graphics Card Support to Ollama
Hey everyone!
While Ollama on Windows now officially supports AMD's ROCm framework, some newer AMD graphics cards (like the latest 90-series) might not be supported right out of the box. AMD has stated that full ROCm coverage for Windows is on its way, but if you're like me and want to get your shiny new GPU crunching AI models right now, then follow along with this guide!
1. Install Ollama
This step is pretty straightforward. I've written a tutorial on this before, which you can jump to here to get set up and learn some AI basics along the way.
2. Check if Your Graphics Card is Officially Supported
Head over to the official Ollama website. They usually list the AMD graphics cards that are currently supported. If your model is on that list, fantastic! You don't need to do anything extra. Just follow my Ollama tutorial to add the models you want to run, and you should be good to go.
3. Add ROCm for Unsupported Graphics Cards
If your graphics card isn't on the officially supported list, don't worry! Here’s how you can manually add support:
Find the ROCm Build:
Go to the GitHub project that provides community ROCm builds for Ollama on Windows: [link to the specific repository].
On the right side of the page, look for "Releases" and click the latest version (usually marked "Latest"). Keep in mind that this release is updated over time, so the exact version number may change.

Identify Your Card's gfx Number:
You need to find the gfx (Graphics IP version) number for your specific AMD graphics card. The easiest way is usually to search Google for "[Your Graphics Card Model] + rocm gfx" (e.g., "RX 7900 XTX rocm gfx"). If a simpler method becomes available in the future, I'll be sure to update this post. You can also try reading it out of Ollama's own log, as sketched below.
As an example, for many of the newer 90-series cards that are currently unsupported, you'll find (by checking the GitHub project's documentation or community discussions) that the RX 9070 XT and RX 9070 use the gfx number 1201, i.e., gfx1201.
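If searching feels hit-or-miss, there's another angle: on Windows, Ollama keeps a server log (per the Ollama docs, under %LOCALAPPDATA%\Ollama), and at startup it typically records the GPU it probed, including a gfx version string. Here's a minimal Python sketch that just scans that log for gfx mentions; the exact log wording varies between Ollama versions, so treat it as a starting point rather than a guarantee:

```python
import os
import re

# Per the Ollama docs, the Windows server log lives under %LOCALAPPDATA%\Ollama.
log_path = os.path.expandvars(r"%LOCALAPPDATA%\Ollama\server.log")

# Print every log line that mentions a gfx version (e.g., "gfx1201").
with open(log_path, encoding="utf-8", errors="ignore") as f:
    for line in f:
        if re.search(r"gfx\d+", line):
            print(line.rstrip())
```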

Download the ROCm Package:
On the GitHub Releases page, scroll down to the "Assets" section. Look for the ROCm package that matches your card's gfx number (e.g., gfx1201.zip or similar). Download this file and then extract its contents. If you'd rather script this step, see the sketch below.
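Here's a minimal Python sketch of the download-and-extract step. The URL and the gfx1201.zip name are placeholders; substitute the actual asset link and file name from the project's Releases page:

```python
import urllib.request
import zipfile

# Placeholder URL: substitute the real asset link from the Releases page,
# and the zip name that matches your card's gfx number.
url = "https://github.com/<project>/releases/download/<tag>/gfx1201.zip"

urllib.request.urlretrieve(url, "gfx1201.zip")

# Extract the archive; it should contain the library folder and rocblas.dll.
with zipfile.ZipFile("gfx1201.zip") as z:
    z.extractall("gfx1201")
```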

Extracted Files:
After unzipping, you should typically find two main items:
- A folder named library
- A file named rocblas.dll

Copy the Files:
Now, you need to place these files into your Ollama installation directory. Important: replace YourUsername with your actual Windows username in the paths below.
- Copy the library folder to: C:\Users\YourUsername\AppData\Local\Programs\Ollama\lib\ollama\rocm\rocblas\library
- Copy the rocblas.dll file over the existing rocblas.dll in: C:\Users\YourUsername\AppData\Local\Programs\Ollama\lib\ollama\rocm
If you'd prefer to script these copies, see the sketch below.
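Here's a small Python sketch of the same copy steps. It assumes the zip was extracted to a gfx1201 folder (from the step above) and that Ollama lives in its default per-user location; it also backs up the original rocblas.dll first, which I'd recommend in case you ever need to roll back:

```python
import os
import shutil

# Hypothetical extraction folder from the previous step.
src = r".\gfx1201"

# Default per-user Ollama install location (same as the paths above).
rocm = os.path.expandvars(r"%LOCALAPPDATA%\Programs\Ollama\lib\ollama\rocm")

# Back up the original rocblas.dll before overwriting it.
dll = os.path.join(rocm, "rocblas.dll")
if os.path.exists(dll):
    shutil.copy2(dll, dll + ".bak")
shutil.copy2(os.path.join(src, "rocblas.dll"), dll)

# Merge the downloaded library folder into the existing one,
# overwriting files that collide (dirs_exist_ok needs Python 3.8+).
shutil.copytree(
    os.path.join(src, "library"),
    os.path.join(rocm, "rocblas", "library"),
    dirs_exist_ok=True,
)

print("Files copied - restart Ollama to pick up the new ROCm libraries.")
```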

Restart Ollama:
Finally, restart the Ollama application (you might need to quit it from the system tray and then relaunch it).
That's it! Your previously unsupported AMD graphics card should now be recognized and utilized by Ollama. To confirm, try running a large language model, and you should see your GPU utilization increase in Task Manager.
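Besides Task Manager, the ollama command-line tool can confirm the offload directly: while a model is loaded, ollama ps reports whether it's running on GPU or CPU. A tiny wrapper, assuming ollama is on your PATH:

```python
import subprocess

# `ollama ps` lists loaded models; the PROCESSOR column shows GPU vs. CPU.
result = subprocess.run(["ollama", "ps"], capture_output=True, text=True)
print(result.stdout)  # look for something like "100% GPU"
```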
Happy AI experimenting!

