How to run offline now that backends are downloaded at runtime? #5918
Replies: 2 comments
-
I think the AIO images download the models for you and set everything up on first start. You can pull the backends with docker and save them as OCI files to install later.
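For instance, something like this (the image reference below is a placeholder; check the LocalAI backend gallery for the exact image and tag for your version):

```bash
# On a machine with internet access: pull the backend image and export it.
# NOTE: placeholder image reference -- look up the real backend image/tag
# in the LocalAI backend gallery for your release.
docker pull quay.io/go-skynet/local-ai-backends:latest-cpu-llama-cpp
docker save quay.io/go-skynet/local-ai-backends:latest-cpu-llama-cpp -o llama-cpp-backend.tar
```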
-
The easiest way is to back up your backends directory and restore it on the offline machine. Another way is to install backends from OCI files manually, which is ideal for airgap setups: for instance, you can pull the backend images (even with docker), save them to files, and install them later.
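A sketch of that install step (syntax assumed from the LocalAI backends CLI; check `local-ai backends --help` on your version):

```bash
# On the air-gapped machine, after copying the tar across:
local-ai backends install "ocifile://./llama-cpp-backend.tar"
# Verify the backend is now available offline:
local-ai backends list
```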
-
I used LocalAI (great project!) to run LLMs offline. And I mean offline: not just on my machine, but on a computer that isn't connected to the internet. So I download the container image on an internet-connected computer, save it to a file, and move it to the other machine on a USB drive. That worked great. But with the latest images there are no backends... I thought the AIO images would include the backends, but apparently not, as it tried to download llama-cpp on the first model load.
Is there a standard way to solve this? I suppose I could create a new image starting from the default LocalAI image, run the backend download command, and save that new image. But I wanted to check here first.
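For reference, here is roughly what I have in mind (the image and backend names are assumptions; adjust for your setup and verify the CLI against the LocalAI docs):

```bash
# Bake the backend into a derived image so nothing is fetched at runtime.
# ASSUMPTION: the `local-ai backends install` subcommand and the
# localai/localai image name -- check the docs for your version.
cat > Dockerfile <<'EOF'
FROM localai/localai:latest
RUN local-ai backends install llama-cpp
EOF
docker build -t localai-offline .
docker save localai-offline -o localai-offline.tar   # move this tar via USB
```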