

As far as I understand, the training data is closed source, but the training methodology is open source, which lets independent parties recreate the model from scratch and see similar results. Not only can you download the full >400 GB model from Hugging Face or Ollama, they also offer distilled versions that are small enough to run on something like a Raspberry Pi. I'm running it locally on my machine at home with Perplexica (a perplexity.ai lookalike with search capabilities).
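If you want to try the distilled route yourself, here's a minimal sketch using the ollama Python client. It assumes the Ollama server is installed and running, and that the model in question is one of the small distills available through Ollama; the tag "deepseek-r1:1.5b" is my assumption, so swap in whichever distilled variant you actually pulled.

```python
# Minimal sketch: chat with a locally pulled distilled model via the ollama
# Python client (pip install ollama). Assumes the Ollama server is running
# and the tag below has already been pulled, e.g. `ollama pull deepseek-r1:1.5b`.
import ollama

MODEL = "deepseek-r1:1.5b"  # assumed distilled tag; replace with the distill you use

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "In one sentence, what is a distilled model?"}],
)
print(response["message"]["content"])
```

Something like Perplexica then sits on top of this same local endpoint and adds the web-search layer, so the model itself never needs to leave your machine.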
I’ve been using the fork Outertune, which seems to handle YouTube Music sync across devices better, if you care about that.