Getting Started with Foundry Local AI: A Step-by-Step Guide

Harnessing the power of large language models (LLMs) on your Windows PC has become simpler with Microsoft’s Foundry Local AI tool. Best of all, it’s free! This robust solution opens the door to a wide array of AI models for exploration and experimentation.

Understanding Foundry Local AI

Microsoft’s Foundry Local is a groundbreaking platform tailored predominantly for developers who wish to deploy LLMs directly on their systems without requiring an Internet connection. While it may not replicate the experiences offered by popular cloud AI services like ChatGPT or Copilot, it provides an excellent avenue for testing various AI models.

As it is still in its public preview phase, keep in mind that the tool is relatively basic, with enhancements and new features expected as development progresses.

Prerequisites for Foundry Local AI

Before diving into the installation process, ensure your Windows PC is equipped to run Foundry Local AI. Though Microsoft suggests a high-end Copilot+ setup, lower specifications can still suffice.

Your system requirements are as follows:

  • 64-bit version of Windows 10 or 11, Windows Server 2025, or macOS
  • At least 3GB of hard drive space (15GB recommended for multiple model installations)
  • A minimum of 8GB RAM (16GB is ideal for optimal performance)
  • While not mandatory, having an NVIDIA GPU (2000 series or newer), Qualcomm Snapdragon X Elite (8GB or more), AMD GPU (6000 series or newer), or Apple silicon is recommended

Note: Internet access is only necessary during installation and when adding new AI models. After installation, feel free to operate offline. Also, ensure that you have administrative rights for the installation process.
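If you want to double-check the disk-space requirement before installing, a few lines of Python (any recent Python 3) will do it. The 3 GB/15 GB figures come from the list above; the function name itself is just illustrative:

```python
import shutil

def has_free_space(path, required_gb):
    """Return True if the drive containing `path` has at least `required_gb` GB free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024**3

# Foundry Local needs roughly 3 GB, with 15 GB recommended for multiple models.
print(has_free_space(".", 15))
```

Pass the path of the drive you plan to install on (for example, `"C:\\"` on Windows) if it differs from your current working directory.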

Step-by-Step Installation of Foundry Local AI

Unlike conventional software, Foundry Local’s installation is executed through the command line using winget. Fear not! You don’t need advanced technical skills to carry out this process.

If you prefer a standard installation method, you can instead download it from GitHub.

To begin the Windows installation, open a terminal window by pressing Win + X and selecting Terminal (Admin). Next, execute this command:

winget install Microsoft.FoundryLocal

Installing Foundry Local AI via Windows terminal.

Read and agree to the installation terms and conditions, then patiently wait as the installation progresses, which may take a few moments.

For macOS users, input the command below in a terminal window:

brew tap microsoft/foundrylocal && brew install foundrylocal

If you’re looking to install an LLM on a Raspberry Pi, follow the respective installation instructions available on the official site.
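Once installation finishes, it’s worth confirming that the foundry command is actually reachable before moving on. This small Python check works the same on Windows and macOS; the helper name is just illustrative:

```python
import shutil

def cli_available(name):
    """Return True if an executable called `name` can be found on the PATH."""
    return shutil.which(name) is not None

if cli_available("foundry"):
    print("Foundry Local CLI found - you're ready to install a model.")
else:
    print("foundry not found on PATH - try reopening your terminal window.")
```

A freshly installed command often isn’t visible until you open a new terminal window, since the PATH is read when the terminal starts.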

Installing Your First AI Model

When starting, Microsoft recommends installing a lightweight model, such as the Phi-3.5-mini. This model is ideal if you have limited disk space and want an effortless introduction to AI capabilities.

To install your first model, simply open up your terminal and type:

foundry model run phi-3.5-mini

The installation duration may vary based on your model choice; in my case, Phi-3.5-mini completed in just two minutes. A significant advantage of Foundry Local AI is its ability to install the most suitable model variant based on your hardware configuration.

Installing Phi 3.5 mini model.

To see the full list of available models, enter:

foundry model list

This command will display each installed model along with the storage space required and its specific use case. Currently, all available models are designed for chat completion tasks.

Full model list on Foundry Local.

Engaging with Local AI Models

Interacting with your installed AI models occurs entirely through the command line, as no comprehensive graphical interface is provided yet. Engaging with the models is as straightforward as communicating with any traditional AI chatbot: simply enter your text at the prompt, which reads “Interactive mode, please enter your text.”

Each model has its unique limitations. For example, I tested the Phi-3.5-mini model with the question “What is Foundry Local?” It acknowledged that its knowledge only extends to early 2023, but delivered a response nonetheless. However, the accuracy of its response may leave something to be desired.

Using Foundry Local to answer questions about foundry local.

For optimal results, stick to straightforward inquiries that don’t demand exhaustive research or immediate updates.

If you’d like to switch between previously installed models, use the following command:

foundry model run modelname

Remember to replace “modelname” with your selected model’s name.

If you’re in Interactive Mode (chatting mode), note that you’ll need to close the terminal window to switch sessions, as there currently is no exit command available—definitely a feature that users are eagerly anticipating!
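Beyond the interactive prompt, Foundry Local also runs a local background service with an OpenAI-compatible REST API, which is convenient for scripting. The sketch below only builds the JSON body of a chat-completion request; the local endpoint URL and port are not shown here because they vary per machine (the service commands covered below report the actual address):

```python
import json

def build_chat_request(model, prompt):
    """Build the JSON body for an OpenAI-style chat-completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("phi-3.5-mini", "What is Foundry Local?")
print(json.dumps(body, indent=2))
# POST this body to the local service's /v1/chat/completions endpoint,
# using whatever port the service commands report for your machine.
```

Because the API follows the familiar OpenAI request shape, existing client libraries that let you override the base URL can generally be pointed at the local service.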

Essential Foundry Local AI Commands

While a comprehensive list exists, knowing a few core commands will suffice to operate Foundry Local effectively. These commands cover the main categories of model, service, and cache commands.

To view all general commands for Foundry Local, utilize the following:

foundry --help

To explore model-specific commands, input:

foundry model --help

To check service commands, just type:

foundry service --help

And for cache commands, use:

foundry cache --help

By mastering these commands, you will navigate the Foundry Local interface with ease, even as new commands emerge along with future updates. For those needing more capabilities than Foundry Local offers, consider exploring established alternatives like ChatGPT or other innovative AI tools.

Frequently Asked Questions

1. What can I do with Foundry Local AI?

Foundry Local AI enables you to run various large language models on your PC without needing an internet connection, allowing for experimentation and development with AI technologies. Currently, it focuses on chat completion tasks.

2. Is an expensive computer necessary to run Foundry Local AI?

No, while a higher-end setup is recommended, Foundry Local AI can operate on mid-range hardware. As long as you meet the specified system requirements, you should be able to enjoy the tool’s capabilities.

3. Are there any known limitations of using Foundry Local AI?

Yes, models like Phi-3.5-mini have limitations related to the knowledge cutoff date, meaning they may provide outdated or incorrect information. Additionally, the command-line interface, while functional, lacks graphical components that newer users may find more intuitive.
