Official sample apps for Google's on-device ML framework with GPU/NPU acceleration
Run LLMs on AMD Ryzen AI NPUs: like Ollama, but purpose-built for NPU performance.
High-performance AI compute kernels for Huawei Ascend NPU using a Pythonic DSL