From ab863141ca1ae74874e45fe95a7a4d7470ed5207 Mon Sep 17 00:00:00 2001
From: Matt Curfman
Date: Wed, 31 Jul 2024 21:50:48 -0700
Subject: [PATCH] Update README.md

---
 README.md | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index 7855512..1677412 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # ollama-intel-gpu
 
-Using Ollama for Intel based GPUs is not as straight forward as other natively Ollama supported platforms. As a workaround, this repo provides a quick sample showing the use of Ollama built with support for Intel ARC GPU based from the information provided by the references bellow. Run the recently released [Meta llama3](https://llama.meta.com/llama3) or [Microsoft phi3](https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential) models on your local Intel ARC GPU based PC using Linux or Windows WSL2.
+This repo illustrates the use of Ollama with Intel ARC GPU support via SYCL. Run the recently released [Meta llama3.1](https://llama.meta.com/) or [Microsoft phi3](https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential) models on your local Intel ARC GPU-based PC using Linux or Windows WSL2.
 
 ## Screenshot
 ![screenshot](doc/screenshot.png)
@@ -18,7 +18,7 @@ Linux:
 ```bash
 $ git clone https://github.com/mattcurf/ollama-intel-gpu
 $ cd ollama-intel-gpu
-$ docker-compose up
+$ docker compose up
 ```
 
 Windows WSL2:
@@ -28,11 +28,8 @@ $ cd ollama-intel-gpu
 $ docker-compose -f docker-compose-wsl2.yml up
 ```
 
-Then launch your web browser to http://localhost:3000 to launch the web ui. Create a local OpenWeb UI credential, then click the settings icon in the top right of the screen, then select 'Models', then click 'Show', then download a model like 'llama3:8b-instruct-q8_0' for Intel ARC A770 16GB VRAM
-
-# Known issues
-* Little effort has been made to prune the packages pulled into the Ollama docker image for Intel GPU
+Then open your web browser to http://localhost:3000 to access the web UI. Create a local Open WebUI credential, click the settings icon in the top right of the screen, select 'Models', click 'Show', then download a model like 'llama3.1:8b-instruct-q8_0' for an Intel ARC A770 with 16GB VRAM.
 
 # References
-* https://dgpu-docs.intel.com/driver/client/overview.html
-* https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/ollama_quickstart.html
+* https://github.com/ollama/ollama/issues/1590
+* https://github.com/ollama/ollama/pull/3278
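
Usage note: the SYCL backend only helps if the Intel GPU render node is actually reachable from the container. A minimal host-side sanity check before bringing the stack up, assuming the standard `/dev/dri` device layout used by the Intel GPU drivers (this is not part of the patch above):

```bash
# Confirm the Intel GPU render node exists on the host before starting the stack.
# /dev/dri/renderD128 is the typical first render node; the number may vary.
$ ls -l /dev/dri

# Start the stack in the background and follow the logs to confirm the GPU
# is detected by the Ollama container.
$ docker compose up -d
$ docker compose logs -f
```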
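Once the containers are running, the model can also be pulled from the command line instead of through the Open WebUI settings flow described above. A short sketch, assuming the Ollama container is named `ollama-intel-gpu` (the actual name comes from the compose file; check `docker ps`):

```bash
# Pull the model inside the running Ollama container instead of via the web UI.
# The container name "ollama-intel-gpu" is an assumption -- verify with `docker ps`.
$ docker exec -it ollama-intel-gpu ollama pull llama3.1:8b-instruct-q8_0

# Optional smoke test: run a one-shot prompt against the freshly pulled model.
$ docker exec -it ollama-intel-gpu ollama run llama3.1:8b-instruct-q8_0 "Say hello"
```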