Add WSL2 support
parent 025b1b0fc9 · commit c93791e9ba

README.md (15 lines changed) @@ -1,29 +1,36 @@

# ollama-intel-gpu

Using Ollama with Intel GPUs is not as straightforward as on the platforms Ollama supports natively. As a workaround, this repo provides a quick sample showing the use of Ollama built with Intel ARC GPU support, based on the information provided by the references below. Run the recently released [Meta llama3](https://llama.meta.com/llama3) or [Microsoft phi3](https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential) models on your local Intel ARC GPU-based PC using Linux or Windows WSL2.

## Screenshot

![screenshot](doc/screenshot.png)

# Prerequisites

* Ubuntu 23.04 or newer (for Intel ARC GPU kernel driver support; tested with Ubuntu 23.10), or Windows 11 with WSL2 (graphics driver [101.5445](https://www.intel.com/content/www/us/en/download/785597/intel-arc-iris-xe-graphics-windows.html) or newer)
* Docker and Docker Compose (on Linux) or Docker Desktop (on Windows)
* Intel ARC series GPU (tested with Intel ARC A770 16GB; a quick device check is sketched below)
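
As a quick, optional check of the GPU prerequisite before composing (these are generic device-node checks, not steps from this repo): on Linux the Intel GPU appears under /dev/dri, while WSL2 instead exposes a single paravirtualized GPU device, and these are exactly the nodes docker-compose-wsl2.yml maps into the container.

```bash
# Linux: expect entries like card0 and renderD128
$ ls -l /dev/dri
# WSL2: expect the single dxg device node
$ ls -l /dev/dxg
```
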
# Usage

The following will build Ollama with Intel ARC GPU support and compose it with the public Open WebUI docker image from https://github.com/open-webui/open-webui.

Linux:

```bash
$ git clone https://github.com/mattcurf/ollama-intel-gpu
$ cd ollama-intel-gpu
$ docker-compose up
```

Windows WSL2:

```bash
$ git clone https://github.com/mattcurf/ollama-intel-gpu
$ cd ollama-intel-gpu
$ docker-compose -f docker-compose-wsl2.yml up
```
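
The web UI port is published as `${OLLAMA_WEBUI_PORT-3000}` in docker-compose-wsl2.yml, so it defaults to 3000. If that port is already in use, one way to override it for a single run (8081 is just an example value) is:

```bash
# OLLAMA_WEBUI_PORT falls back to 3000 when unset; 8081 is an arbitrary example
$ OLLAMA_WEBUI_PORT=8081 docker-compose -f docker-compose-wsl2.yml up
```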

Then launch your web browser to http://localhost:3000 to open the web UI. Create a local Open WebUI credential, click the settings icon in the top right of the screen, select 'Models', click 'Show', and download a model such as 'llama3:8b-instruct-q8_0', which fits within the Intel ARC A770's 16GB of VRAM.
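
If you prefer the command line to the web UI's download flow, a sketch of pulling the same model directly in the running container (the container name comes from the compose file, and this assumes the `ollama` binary is on the PATH inside the image this repo builds):

```bash
# Pull the model inside the already-running Ollama container
$ docker exec -it ollama-intel-gpu ollama pull llama3:8b-instruct-q8_0
# Then list the models it has available
$ docker exec -it ollama-intel-gpu ollama list
```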

# Known issues

* No effort has been made to prune the packages pulled into the Ollama docker image for Intel GPU

# References

docker-compose-wsl2.yml (new file, 35 lines) @@ -0,0 +1,35 @@

```yaml
version: "3.9"
services:
  ollama-intel-gpu:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ollama-intel-gpu
    image: ollama-intel-gpu:latest
    restart: always
    devices:
      - /dev/dri:/dev/dri
      - /dev/dxg:/dev/dxg
    volumes:
      - /usr/lib/wsl:/usr/lib/wsl
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ollama-intel-gpu:/root/.ollama
    environment:
      - DISPLAY=${DISPLAY}
  ollama-webui:
    image: ghcr.io/open-webui/open-webui:git-c9589e2
    container_name: ollama-webui
    volumes:
      - ollama-webui:/app/backend/data
    depends_on:
      - ollama-intel-gpu
    ports:
      - ${OLLAMA_WEBUI_PORT-3000}:8080
    environment:
      - OLLAMA_BASE_URL=http://ollama-intel-gpu:11434
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
volumes:
  ollama-webui: {}
  ollama-intel-gpu: {}
```
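
Once `docker-compose -f docker-compose-wsl2.yml up` is running, one hedged way to confirm the GPU device mapping took effect (generic Docker commands, not steps documented by this repo):

```bash
# The /dev/dxg node from the compose 'devices:' list should be visible in the container
$ docker exec ollama-intel-gpu ls -l /dev/dxg
# The Ollama server log should show it listening on its default port 11434
$ docker logs ollama-intel-gpu
```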