Strix Halo Wiki
  • AI

Page Index

A

  • AI Capabilities Overview
    • Intro
    • GPU Compute
    • Comparison to other available options
    • Setup
    • Basic System Setup
    • Docker Images
    • Memory Limits
    • ROCm
    • Performance Tips
    • LLMs
    • llama.cpp
    • Additional Resources
    • Image/Video Generation

L

  • llamacpp-performance
    • llama-bench Basics
    • Performance Testing
    • Long Context Length Testing
    • Bonus ROCm numbers
    • Bonus Tuned ROCm numbers
  • llamacpp-with-ROCm
    • Building llama.cpp with ROCm
    • ROCm
    • rocWMMA
    • 2025-10-31 rocWMMA

V

  • vLLM
    • Current Status
    • Build Steps