Strix Halo Wiki
Commit 62f79a (2025-09-14 03:08:03) by lhl: link to a user report/writeup
AI/AI-Capabilities-Overview.md
..
@@ -158,6 +158,7 @@
- Deep dive into LLM usage on Strix Halo: https://llm-tracker.info/_TOORG/Strix-Halo
- Newbie Linux inference guide: https://github.com/renaudrenaud/local_inference
- Ready to use Docker containers: https://github.com/kyuz0/amd-strix-halo-toolboxes
+- A nice writeup on using Lemonade on Ubuntu 25.04: https://netstatz.com/strix_halo_lemonade/
## Image/Video Generation
For Windows you can give AMUSE a try. It's probably the easiest way to get started quickly: https://www.amuse-ai.com/