Ollama

A popular open-source tool that makes running LLMs locally as easy as a single terminal command. Built on top of llama.cpp, Ollama handles model downloading, quantization, and serving. It’s the fastest way to go from zero to a running local chatbot, with support for macOS, Linux, and Windows. If you’re buying hardware for local AI, Ollama compatibility is a good baseline test.
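As a sketch of the "single terminal command" workflow, the commands below assume Ollama is already installed and use `llama3` as an example model name; substitute any model from the Ollama library:

```shell
# Download a model and drop into an interactive chat session
ollama run llama3

# Or manage the pieces separately:
ollama pull llama3      # download the model without starting a chat
ollama list             # show locally installed models
ollama serve            # start the local API server (default port 11434)
```

Once the server is running, other applications can talk to it over the local HTTP API, which is how most Ollama-compatible tools integrate.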
