DeepSeek AI on VMware Homelab – no GPU needed


Instead of throwing hardware and resources at AI models, let's try to make them efficient and run them locally, without the need for internet connectivity. Oh, and let's do it without all those very expensive GPUs either! This is exactly what DeepSeek does with its AI model: it is groundbreaking and performs very well in a self-hosted homelab. The efficient AI model is here!

Command list to install this in your homelab (a quick test of the local API follows the list):

  • curl -fsSL https://ollama.com/install.sh | sh (install Ollama)
  • systemctl is-active ollama.service (check whether the Ollama service is running)
  • sudo systemctl start ollama.service (start the Ollama service)
  • sudo systemctl enable ollama.service (start it automatically at boot)
  • sudo apt install python3 (install Python 3)
  • python3 --version (verify the Python installation)
  • sudo apt install python3-pip (install pip)
  • pip3 --version (verify the pip installation)
  • sudo apt install git (install Git)
  • git --version (verify the Git installation)
  • ollama run deepseek-r1:7b (pull and run the DeepSeek-R1 7B model)
  • sudo apt install python3-venv (install Python virtual environment support)
  • python3 -m venv ~/open-webui-venv (create a virtual environment for Open WebUI)
  • source ~/open-webui-venv/bin/activate (activate the virtual environment)
  • pip install open-webui (install Open WebUI)
  • open-webui serve (start the Open WebUI interface)
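Once open-webui serve is up, you can chat with the model from your browser, but you can also talk to it directly over Ollama's REST API. Below is a minimal sketch in Python, assuming Ollama is listening on its default port 11434 and the deepseek-r1:7b model has already been pulled; it uses only the standard library, so nothing extra needs to be installed.

# Minimal sketch: send a prompt to the local DeepSeek model via Ollama's REST API.
# Assumes Ollama listens on its default port 11434 and deepseek-r1:7b is pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "deepseek-r1:7b",
    "prompt": "Explain in one sentence why running AI models locally is useful.",
    "stream": False,  # ask for a single JSON response instead of a token stream
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())
    print(result["response"])  # the model's answer, generated entirely on CPU

If the service is running, this prints the model's answer in the terminal; Open WebUI talks to the same local Ollama endpoint behind the scenes.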


