Can I run ollama on an RTX 3060 and Intel iGPU to increase speed?
from jeena@piefed.jeena.net to selfhosted@lemmy.world on 06 Feb 02:15
https://piefed.jeena.net/post/108509

I’m on Arch Linux btw, and I have an RTX 3060 with 12 GB of VRAM, which is nice because a 14b model fits into the VRAM. It works quite well, but I wonder if there is any way to help with the speed even more by trying to utilize the iGPU in my Intel 14600K. It always just sits there not doing anything.

But I don’t know if it even makes sense to try. From what I read in some comments on the internet, the bottleneck for the iGPU will be RAM speed, since it would use my normal system RAM, which is an order of magnitude slower than the VRAM.
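Rough back-of-envelope I did (my own guesses for the numbers, not measurements): token generation is mostly memory-bandwidth bound, so tokens per second is roughly memory bandwidth divided by the bytes read per token, which is about the model size.

```python
# Back-of-envelope estimate: token generation is mostly memory-bandwidth bound,
# so tokens/s is roughly bandwidth / bytes read per token (~ model size).
# All numbers below are assumptions, not measurements.

MODEL_SIZE_GB = 9.0        # ~14b model at 4-bit quantization
GPU_BW_GBS = 360.0         # RTX 3060 12 GB GDDR6 bandwidth (spec sheet value)
SYSTEM_RAM_BW_GBS = 80.0   # roughly dual-channel DDR5 that the iGPU would share

for name, bw in [("RTX 3060 (VRAM)", GPU_BW_GBS),
                 ("iGPU (system RAM)", SYSTEM_RAM_BW_GBS)]:
    print(f"{name}: ~{bw / MODEL_SIZE_GB:.0f} tokens/s upper bound")
```

So even before counting compute, anything running from system RAM tops out far below what the 3060 already does.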

Does anyone have any experience with that?

#selfhosted


just_another_person@lemmy.world on 06 Feb 02:43

Nope.

theunknownmuncher@lemmy.world on 06 Feb 03:07

Models are computed sequentially (the output of each layer is the input into the next layer in the sequence), so more GPUs do not offer any kind of performance benefit.
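Roughly what a single-token forward pass looks like (a simplified Python sketch, not real ollama code; the layer count and device names are made up): each layer needs the previous layer’s output, so putting some layers on a second device doesn’t shorten the chain, it just moves the hand-off point and adds a copy.

```python
# Simplified sketch of one token's forward pass (not real ollama code).
# Each layer consumes the previous layer's output, so the work is sequential:
# splitting layers across two devices doesn't shorten the chain, it just adds
# a device-to-device copy at the split point.

def run_layer(layer_id, hidden_state, device):
    # stand-in for the real matrix multiplications executed on `device`
    return f"layer{layer_id}({hidden_state})@{device}"

def forward(num_layers=32, split_at=16):
    h = "token_embedding"
    for i in range(num_layers):
        device = "gpu0" if i < split_at else "igpu"  # layers split across devices
        # at i == split_at the hidden state would be copied gpu0 -> igpu (extra latency)
        h = run_layer(i, h, device)
    return h  # still num_layers sequential steps, however they are split

print(forward()[:120], "...")
```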

jeena@piefed.jeena.net on 06 Feb 04:00

I see, that's a shame, thanks for explaining it.

Blue_Morpho@lemmy.world on 06 Feb 18:04

You can. But I don’t think it will help, because the iGPU is so slow.

medium.com/…/llm-multi-gpu-batch-inference-with-a…

Blue_Morpho@lemmy.world on 06 Feb 17:49

More GPUs do improve performance:

medium.com/…/llms-multi-gpu-inference-with-accele…

All large AI systems are built from multiple “GPUs” (AI processors like Blackwell). Really large AI models run on clusters of individual servers connected by 800 Gb/s network interfaces.

However, iGPUs are so slow that it wouldn’t offer a significant performance improvement.

theunknownmuncher@lemmy.world on 06 Feb 21:01

What I am talking about is when layers are split across GPUs. I guess this is loading the full model into each GPU to parallelize requests and do batching.
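Toy illustration of that batching case (my own made-up numbers, and it assumes the model fits entirely on each GPU): every GPU holds a full copy and serves different requests at the same time, so throughput goes up but a single request is no faster.

```python
# Toy illustration of data-parallel batching vs. a single GPU (assumed numbers).
# Assumes the whole model fits on each GPU, so every GPU holds a full copy
# and works on different requests at the same time.

TOKENS_PER_S_PER_GPU = 40   # assumed single-GPU generation speed
NUM_GPUS = 2
NUM_REQUESTS = 8
TOKENS_PER_REQUEST = 500

single_request_time = TOKENS_PER_REQUEST / TOKENS_PER_S_PER_GPU
total_time_1_gpu = NUM_REQUESTS * single_request_time
total_time_n_gpus = total_time_1_gpu / NUM_GPUS  # requests spread across copies

print(f"one request still takes ~{single_request_time:.1f}s on either setup")
print(f"{NUM_REQUESTS} requests: {total_time_1_gpu:.0f}s on 1 GPU "
      f"vs {total_time_n_gpus:.0f}s on {NUM_GPUS} GPUs")
```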

Blue_Morpho@lemmy.world on 07 Feb 16:05

No, full models are not loaded into each GPU to improve the tokens per second.

The full GPT-3 needs around 640 GB of VRAM to store the weights. There is no single GPU (AI processor like an A100) with 640 GB of VRAM. The model is split across multiple GPUs (AI processors).
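Rough weight-only arithmetic behind that (my numbers; the ~640 GB figure above presumably counts more than just the fp16 weights): even at 2 bytes per parameter, GPT-3 is several times larger than one 80 GB A100, so the weights have to be sharded.

```python
# Weight-storage arithmetic for a GPT-3-sized model (approximate).
GPT3_PARAMS = 175e9   # 175 billion parameters
A100_VRAM_GB = 80     # largest common single-GPU memory of that generation

for precision, bytes_per_param in [("fp16", 2), ("fp32", 4)]:
    weights_gb = GPT3_PARAMS * bytes_per_param / 1e9
    min_gpus = -(-weights_gb // A100_VRAM_GB)  # ceiling division
    print(f"{precision}: ~{weights_gb:.0f} GB of weights -> "
          f"at least {min_gpus:.0f} x {A100_VRAM_GB} GB GPUs just to hold them")
```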