Strix Halo HomeLab
Page Index
  • AI
    • AI-Capabilities-Overview
    • llamacpp-performance
    • llamacpp-with-ROCm
    • vLLM
  • Guides
    • Sixunited-AXB35
      • Power-Mode-and-Fan-Control
      • Replacing-Thermal-Interfaces-On-GMKtec-EVO-X2
    • C-States
    • External-GPU
    • Hardware-Monitoring
    • Power-Modes-and-Performance
    • VM-iGPU-Passthrough
  • Hardware
    • Boards
      • Sixunited-AXB35
        • Firmware
      • Framework-Desktop-Mainboard
      • MeeGoPad-100AM
      • Sixunited-STHT1
    • PCs
      • Beelink-GTR9-Pro
      • Bosgame-M5
      • Corsair-AI-Workstation-300
      • FEVM-FA-EX9
      • Framework-Desktop
      • GMKtec-EVO-X2
      • HP-Z2-Mini-G1a
      • Minisforum-MS-S1-MAX
      • NIMO-AI-MiniPC
      • Peladn-YO1
  • Home

Page Index

A

  • AI-Capabilities-Overview
    • Intro
    • GPU Compute
    • Comparison to other available options
    • Setup
    • Basic System Setup
    • Docker Images
    • Memory Limits
    • ROCm
    • Performance Tips
    • LLMs
    • llama.cpp
    • Additional Resources
    • Image/Video Generation

L

  • llamacpp-performance
    • llama-bench Basics
    • Performance Testing
    • Long Context Length Testing
    • Bonus ROCm numbers
    • Bonus Tuned ROCm numbers
  • llamacpp-with-ROCm
    • Building llama.cpp with ROCm
    • ROCm
    • rocWMMA
    • 2025-10-31 rocWMMA

V

  • vLLM
    • Current Status
    • Build Steps