Ollama

Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language models.

Ollama is a tool for running large language models locally on your own hardware. It provides a simple command-line interface and a REST API with OpenAI-compatible endpoints, making it straightforward to pull, run, and switch between models from a growing library. Models run entirely on your machine, with no data sent to external services.
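As a minimal sketch of the local API, the snippet below builds a request body for Ollama's generate endpoint (`POST http://localhost:11434/api/generate`); the model name and prompt are placeholders, and sending the request assumes a running Ollama server with that model already pulled.

```python
import json

# Request body for Ollama's generate endpoint.
# Assumes a local server at http://localhost:11434 and a model
# previously fetched with `ollama pull llama3.2` (placeholder name).
payload = {
    "model": "llama3.2",        # any locally pulled model
    "prompt": "Why is the sky blue?",
    "stream": False,            # one JSON response instead of a token stream
}
body = json.dumps(payload)
print(body)
```

The serialized body can then be posted with any HTTP client, e.g. `curl http://localhost:11434/api/generate -d "$BODY"`, and the response arrives as JSON on your own machine.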

The platform integrates with thousands of applications and developer tools — including coding assistants, RAG pipelines, document processing workflows, and AI chat interfaces — through its API layer. Custom models can be created and shared using a Modelfile format similar to a Dockerfile, and multi-modal models that handle text and images are supported.
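A minimal Modelfile might look like the following sketch; the base model name, parameter value, and system prompt are illustrative, not taken from this page.

```
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant that answers in one paragraph."
```

A custom model built from such a file can be registered with `ollama create mymodel -f Modelfile` and then started with `ollama run mymodel`.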

Ollama is aimed at developers, researchers, and organisations who want to run AI inference locally for privacy, cost control, or offline use, and who need a straightforward way to manage and serve multiple models without complex infrastructure setup.

License: MIT
Self hostable: Yes

Repository details
Version: v0.18.3
Created: 6/26/2023
Stars: 166,322
Forks: 15,201
Open issues: 2,739
Last commit: 3/28/2026

Updated 3/28/2026, 11:00:30 AM
