LLM Model Storage & Local-Host WIP notes

I've noticed that every time I install a new piece of LLM software, it
wants to download its own copy of the models.  Without a simple solution,
this can easily end up consuming both bandwidth and disk space, as multiple
copies of the same LLM models get stored on a system.  It would be good to
have a simple solution, and in turn some engagement with LLM software
creators, to see whether a straightforward recommendation could be produced
that encourages them to consider a 'common LLM store' approach.
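
As a rough illustration only (not a recommendation drawn from any tool's
documentation), the sketch below shows one way a 'common LLM store' could be
approximated today: redirecting a couple of per-tool model cache directories
into a single shared location via symlinks.  The shared-store path
(~/llm-store) and the per-tool cache paths are assumptions chosen for the
example; actual locations vary by tool and platform, and some tools also
offer environment variables for relocating their caches, which may be
preferable where supported.

#!/usr/bin/env python3
"""Illustrative sketch only: link per-tool model caches to one shared store.

Assumptions (not from the original post): the shared store lives at
~/llm-store, and the per-tool cache locations below are merely examples of
where LLM tools might keep their downloads.
"""
from pathlib import Path

SHARED_STORE = Path.home() / "llm-store"  # hypothetical common store

# Hypothetical per-tool cache locations to redirect into the shared store.
TOOL_CACHES = {
    "huggingface": Path.home() / ".cache" / "huggingface",
    "ollama": Path.home() / ".ollama" / "models",
}

def link_cache(tool: str, cache_path: Path) -> None:
    """Replace a tool's cache directory with a symlink into the shared store."""
    target = SHARED_STORE / tool
    target.mkdir(parents=True, exist_ok=True)

    if cache_path.is_symlink():
        return  # already redirected
    if cache_path.exists():
        # Move any existing downloads into the shared store first.
        for item in cache_path.iterdir():
            item.rename(target / item.name)
        cache_path.rmdir()

    cache_path.parent.mkdir(parents=True, exist_ok=True)
    cache_path.symlink_to(target, target_is_directory=True)

if __name__ == "__main__":
    for tool, cache in TOOL_CACHES.items():
        link_cache(tool, cache)
        print(f"{tool}: {cache} -> {SHARED_STORE / tool}")

A proper recommendation from LLM software creators would obviously be better
than symlink workarounds like this, which is the point of raising it.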

In other news: I'm still working on my implementation and have been keeping
notes.  I've set up a site for broader community engagement while I focus on
doing things a particular way, and I've also started writing
implementation-related documentation as I work through the integration,
configuration, and implementation.

Notes:
https://community.openlinksw.com/t/local-host-vector-db-integration/4587

FWIW: I'm planning to do only a limited amount of documentation while I'm
setting up the system; once it's working locally, I'll go through, load up,
and make available the far more significant body of underlying information,
docs, historical works, etc.

There's a Keybase 'team' set up in the meantime; the link is on the
local-host.co site.

Tim.h.

Received on Sunday, 1 September 2024 19:46:13 UTC