Easily Run Local AI Language Models on Your Computer Using Open WebUI

Open WebUI is a robust, self-hosted, open-source platform that lets you run AI language models on your own machine with complete control over your data. It works with local model runners such as Ollama and also supports OpenAI-compatible APIs. Installation is flexible: you can set up Open WebUI using Docker, Python, or Kubernetes. This guide walks you through installing Open WebUI on your personal computer, step by step.

Benefits of Using Open WebUI

Open WebUI offers an intuitive, flexible platform for working with AI, tailored to your specific requirements. It supports a variety of AI models, runs on all major operating systems, and features a user-friendly interface reminiscent of ChatGPT. Notable features include Markdown and LaTeX support, plugin integration, and a memory system for storing useful context.

This versatile tool lets you connect plugins, manage multiple chat threads, and save prompts for future reference. As an open-source project, Open WebUI benefits from community-driven development, which brings frequent updates and new features.

Installing Open WebUI

To get started with Open WebUI via Docker, you’ll first need to establish a project directory and navigate into it:

mkdir openwebui
cd openwebui

Next, create a new file named “docker-compose.yml” using a terminal text editor such as nano:

nano docker-compose.yml

Insert the following configuration into the newly created “docker-compose.yml” file:

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_USE_GPU=false
    volumes:
      - ollama_data:/root/.ollama
    restart: unless-stopped

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openwebui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped

volumes:
  open-webui:
  ollama_data:

This configuration defines two services: ollama and openwebui. The ollama service runs the ollama/ollama image, exposes port 11434, disables GPU acceleration via the OLLAMA_USE_GPU environment variable, and stores its data in the ollama_data volume. The openwebui service runs the ghcr.io/open-webui/open-webui image, forwards host port 3000 to the container’s port 8080, and points OLLAMA_BASE_URL at the ollama service it depends on. Both services restart automatically unless manually stopped, and the named volumes keep your data persistent across container restarts.
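If your machine has an NVIDIA GPU, you can let Ollama use it instead of running on the CPU. As a sketch, assuming the NVIDIA Container Toolkit is installed on the host, remove the OLLAMA_USE_GPU=false line and add a standard Docker Compose device reservation under the ollama service:

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]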

After saving the Docker Compose configuration, start the Docker service using the following command:

docker compose up -d

Run docker compose up -d
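
The first run downloads both images, which can take a few minutes. To confirm that both containers are up, you can check their status and, if anything looks wrong, tail the web interface’s logs:

docker compose ps
docker compose logs -f openwebui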

Accessing Open WebUI

Once the containers are up and running, open your preferred web browser and navigate to http://localhost:3000. This action will load the Open WebUI interface directly from your machine. To commence the setup process, simply click on the Get Started button.

Open Webui Get Started

Next, enter your Name, Email, and Password, then click on the Create Admin Account button to establish your Admin account.

Create Admin Account

With your account now created, you can log in to access the Open WebUI dashboard.

Open Webui Set Up

Installing an AI Model with Ollama

While Open WebUI provides a complete interface, it needs at least one local AI model installed before it can answer anything. Fortunately, Ollama makes this simple. You can choose from a variety of models, including llama3, mistral, gemma, and vicuna, based on your requirements and available system resources.

For this demonstration, we’ll install gemma:2b, known for its efficient resource usage compared to larger models. To initiate the installation, click on your profile icon and select the Admin Panel option to access the management dashboard.

Access Admin Panel

In the dashboard, locate and click the download icon in the top-right corner to begin downloading the model.

Download Model

After specifying the model name, click the download button to proceed.

Pull Model From Ollama

Once the model finishes downloading, a success message is displayed:

Model Successfully Pulled
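
If you prefer the terminal, the same result can be achieved with the Ollama CLI inside the running container; the container name ollama comes from the compose file above. Pulling the model and listing installed models looks like this:

docker exec -it ollama ollama pull gemma:2b
docker exec -it ollama ollama list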

At this point, you can select the model from the Open WebUI interface and start using it for your queries.

Select Model

Utilizing Open WebUI

Once you have chosen a model, you can begin asking questions. For instance, when I asked, “What is Docker Compose?”, Open WebUI gave the following response:

Start Using Openwebui
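
Because the compose file also publishes Ollama’s port 11434 on the host, you can query the model directly over Ollama’s HTTP API, which is handy for scripting. A minimal example, assuming the gemma:2b model pulled earlier:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma:2b",
  "prompt": "What is Docker Compose?",
  "stream": false
}'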

To start a new conversation without carrying over context from earlier discussions, simply click New Chat in the left menu. This is particularly useful when you want to switch to an entirely different topic without earlier answers influencing the model.

Start New Chat

The Search section enables you to uncover past conversations or specific keywords within your saved chats. Just enter a term or phrase, and it will filter the results, allowing you to quickly revisit previous insights or prompts.

Search Chats
Create Search Notes

The Workspace provides a structured environment for managing multiple projects without mixing them up, which is especially useful for coding, writing, or any long-term work. The Workspace includes the following tabs:

  • Models Tab – Discover and download community models or presets, import models from external sources, and manage installed models.
  • Knowledge Tab – Browse community knowledge packs or import your files (PDF, text, CSV) for the AI to utilize in responses.
  • Prompts Tab – Explore community templates, import existing prompts, and apply them across different chats.
  • Tools Tab – Find or import tools such as code executors, scrapers, or summarizers, enabling direct usage in chats for automation or specialized tasks.
Workspace Openwebui

The Chats section displays your conversation history with the AI, allowing you to reopen previous chats or delete those you no longer need:

Chat History

Chat Controls in Open WebUI

The Chat Controls panel provides options to adjust the AI’s conversational style and responses. You can set a System Prompt to influence tone or behavior, along with customizing Advanced Parameters such as streaming chat replies, chunk size, function calling, seed, stop sequence, temperature, and reasoning effort. You have the freedom to customize these parameters or keep them at their default settings for standard performance.

Chat Controls
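
Several of these parameters correspond to options in Ollama’s generate API, so you can experiment with them outside the UI as well. A sketch, reusing the gemma:2b model from earlier; temperature, seed, and stop are documented Ollama options:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma:2b",
  "prompt": "Explain Docker volumes in one paragraph.",
  "stream": false,
  "options": {
    "temperature": 0.3,
    "seed": 42,
    "stop": ["\n\n"]
  }
}'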

By clicking the profile icon, you can access the user menu, which includes options for settings, archived chats, the playground, the admin panel, documentation, release notes, keyboard shortcuts, signing out, and viewing active users.

Access User Menu

Conclusion

While setting up Open WebUI takes some initial time, the advantages far outweigh the effort. The platform gives you full control over your data, lets you choose your preferred models, and allows you to personalize the interface, all without depending on third-party servers. Once installed, you can run the model completely offline, much like using the Gemini CLI AI Agent in your terminal.
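
For ongoing maintenance, the usual Docker Compose commands apply: pull newer images and recreate the containers to update, or bring the stack down when you no longer need it. Thanks to the named volumes, your models and chats survive these operations:

docker compose pull
docker compose up -d
docker compose down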

