Update README.md
parent dadaf4000c
commit ab863141ca
README.md
@@ -1,6 +1,6 @@
 # ollama-intel-gpu
 
-Using Ollama with Intel-based GPUs is not as straightforward as on the platforms Ollama supports natively. As a workaround, this repo provides a quick sample showing the use of Ollama built with support for Intel ARC GPUs, based on the information provided by the references below. Run the recently released [Meta llama3](https://llama.meta.com/llama3) or [Microsoft phi3](https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential) models on your local Intel ARC GPU-based PC using Linux or Windows WSL2.
+This repo illustrates the use of Ollama with support for Intel ARC GPUs via SYCL. Run the recently released [Meta llama3.1](https://llama.meta.com/) or [Microsoft phi3](https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential) models on your local Intel ARC GPU-based PC using Linux or Windows WSL2.
 
 ## Screenshot
 
 ![screenshot](doc/screenshot.png)
@@ -18,7 +18,7 @@ Linux:
 ```bash
 $ git clone https://github.com/mattcurf/ollama-intel-gpu
 $ cd ollama-intel-gpu
-$ docker-compose up
+$ docker compose up
 ```
 
 Windows WSL2:
@@ -28,11 +28,8 @@ $ cd ollama-intel-gpu
 $ docker-compose -f docker-compose-wsl2.yml up
 ```
 
-Then point your web browser at http://localhost:3000 to open the web UI. Create a local Open WebUI credential, click the settings icon in the top right of the screen, select 'Models', click 'Show', and download a model such as 'llama3:8b-instruct-q8_0', which fits in the Intel ARC A770's 16 GB of VRAM.
+Then point your web browser at http://localhost:3000 to open the web UI. Create a local Open WebUI credential, click the settings icon in the top right of the screen, select 'Models', click 'Show', and download a model such as 'llama3.1:8b-instruct-q8_0', which fits in the Intel ARC A770's 16 GB of VRAM.
 
-# Known issues
-
-* Little effort has been made to prune the packages pulled into the Ollama Docker image for Intel GPUs.
 
 # References
 
 * https://dgpu-docs.intel.com/driver/client/overview.html
 * https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/ollama_quickstart.html
 * https://github.com/ollama/ollama/issues/1590
 * https://github.com/ollama/ollama/pull/3278
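For the container to reach the GPU at all, the Intel render device has to be visible to Docker. Below is a minimal pre-flight sanity check; it is not specific to this repo and only relies on standard paths: `/dev/dri` is where Linux exposes GPU render nodes, and `/dev/dxg` plus `/usr/lib/wsl/lib` are how WSL2 surfaces GPU paravirtualization.

```bash
# Linux: the Intel GPU should expose a card and a render node
ls -l /dev/dri
# expect entries like card0 and renderD128

# Windows WSL2: the GPU is reached through the /dev/dxg device,
# with driver libraries mounted under /usr/lib/wsl/lib
ls -l /dev/dxg
ls /usr/lib/wsl/lib
```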
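Once `docker compose up` is running, the Ollama server can also be probed directly, independent of the web UI. A minimal sketch, assuming the compose file publishes Ollama's default port 11434 on localhost (the actual port mapping in this repo may differ):

```bash
# follow the container logs to confirm the SYCL/GPU backend initialized
docker compose logs -f

# in another shell: the version endpoint answers once the server is up
curl http://localhost:11434/api/version

# list the models currently available locally
curl http://localhost:11434/api/tags
```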
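The model download described in the README can likewise be driven from the command line through Ollama's HTTP API instead of the Open WebUI settings page. A sketch under the same port-mapping assumption:

```bash
# pull the quantized llama3.1 model named in the README
curl http://localhost:11434/api/pull \
  -d '{"name": "llama3.1:8b-instruct-q8_0"}'

# run a quick prompt against it once the pull completes
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b-instruct-q8_0", "prompt": "Say hello", "stream": false}'
```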