Commit c361e8

2026-03-19 22:56:30 lhl: up-to-date vLLM build scripts
AI/vLLM.md ..
@@ -2,6 +2,8 @@
[@lhl](https://github.com/lhl) implemented the [first public vLLM build recipe](https://github.com/lhl/strix-halo-testing/tree/main/vllm), and shortly afterwards Discord member @ssweens created Arch-based Dockerfiles. [@kyuz0](https://github.com/kyuz0) adapted these into [amd-strix-halo-vllm-toolboxes](https://github.com/kyuz0/amd-strix-halo-vllm-toolboxes), which is probably the easiest way currently (2025-09-13) to bring up vLLM on Strix Halo.
+ 2026-03-20: There are some recent notes/scripts for building vLLM from source here: https://github.com/paudley/ai-notes/tree/main/strix-halo
+
## Current Status
While vLLM can be built and runs basic models like Llama 2, newer models (gpt-oss, for example) or those with different kernel/code dependencies may not work.