💩
A terminal tool that automatically detects your system's RAM, CPU, and GPU capabilities and recommends which LLMs will run best on your hardware. Supports multi-GPU setups, quantization selection, and local runtime providers such as Ollama, llama.cpp, and MLX. Features an interactive TUI with download management and hardware simulation.
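The core of a recommendation like this is comparing a model's memory footprint at each quantization level against the available memory budget. A minimal sketch of that idea (the function names, the 20% runtime overhead, and the 75% memory-budget heuristic are illustrative assumptions, not this tool's actual logic):

```python
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough weight memory in GB, plus ~20% for KV cache and
    activations (an assumed overhead factor, not a measured one)."""
    return params_billions * bits_per_weight / 8 * 1.2

def recommend(ram_gb: float, models: list[tuple[str, float]]) -> list[tuple[str, str]]:
    """Pick the highest-precision quantization of each model that fits
    within 75% of RAM (headroom for the OS is an assumption)."""
    budget = ram_gb * 0.75
    fits = []
    for name, params_billions in models:
        # Try from highest precision down to 4-bit quantization.
        for bits in (16, 8, 5, 4):
            if model_memory_gb(params_billions, bits) <= budget:
                label = "FP16" if bits == 16 else f"Q{bits}"
                fits.append((name, label))
                break
    return fits

# Example: on a 16 GB machine, an 8B model fits at 8-bit,
# while a 70B model does not fit even at 4-bit.
print(recommend(16, [("llama-3-8b", 8), ("llama-3-70b", 70)]))
```

Real detection would additionally query GPU VRAM per device and sum it for multi-GPU setups, but the fit check itself reduces to this kind of arithmetic.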