- cross-posted to:
- linux@programming.dev
I look forward to not installing it.
holy shit, no thank you
While I definitely do not want an LLM (especially not OpenAI or whatever) to have access to my terminal or other stuff on my PC, and in general have no use for that, I find it cool that something like this is available now.
Remember, it’s totally optional and nobody forces you to download that stuff. You have the choice to ignore it, and that’s the great thing about Linux!
From the title I thought the GNOME Foundation had made an AI client for a sec, until I read the article.
Idk why people don’t read the article before commenting.
Newelle supports interfacing with the Google Gemini API, the OpenAI API, Groq, and also local large language models (LLMs) or ollama instances for powering this AI assistant.
So you configure it with your preferred model, which can include a locally run one. And it seems to be its own package, not something built into GNOME itself, so you can easily uninstall it if you won't use it.
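For anyone curious what "pointing it at an ollama instance" amounts to: ollama serves a local HTTP API on port 11434 by default, and frontends like this just POST prompts to it. A minimal sketch in Python (the model name `llama3.2` is just an example; this assumes an ollama instance is actually running):

```python
import json
import urllib.request

# ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Minimal payload for ollama's /api/generate endpoint;
    # stream=False asks for a single complete JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # Sends the prompt to the local ollama server and returns its reply.
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask("llama3.2", "hello")` would return the model's reply, assuming the server is up and the model has been pulled; nothing ever leaves your machine.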
Seems fine to me. I probably won’t be using it, but it’s an interesting idea. Being able to run terminal commands seems risky though. What if the AI bricks my system? Hopefully they make you confirm every command before it runs any of them or something.
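That kind of guardrail is simple to picture: a wrapper that shows each proposed command and only executes it after an explicit yes. A hypothetical sketch, not how Newelle actually does it:

```python
import subprocess

def run_confirmed(command: str, confirm=input) -> bool:
    # Show the proposed command and require an explicit "y" before running it.
    # `confirm` defaults to input() but can be swapped out (e.g. for a GUI dialog).
    answer = confirm(f"AI wants to run: {command!r} - proceed? [y/N] ")
    if answer.strip().lower() != "y":
        print("Skipped.")
        return False
    subprocess.run(command, shell=True, check=False)
    return True
```

Anything short of "y" (including just hitting Enter) refuses the command, so the safe path is the default.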
Big nope from me dawg
Or, ORRRR…just do the stuff yourself and don’t further perpetuate this dumbshit until it doesn’t require an entire month’s worth of an efficient home’s energy to search “Hentai Alien Tentacle Porn” for you.
Buncha savages.
search “Hentai Alien Tentacle Porn” for you
This is suspiciously specific 🙂
It’s clearly what most Linux users who would use “AI” would be searching for.
I haven’t tested this, but TBH as someone who has run Linux at home for 25 years, I love the idea of an always-alert sysadmin keeping my machine maintained and configured to my specs. Keep my IDS up to date. And so on.
Two requirements:
1. Be an open source local model with no telemetry
2. Let me review proposed changes to my system and explain why they should be made
- That is not what this does
- You can certainly have unattended updates without an LLM in the mix.
Like what do you need to keep configured? lol Linux is set it and forget it. I’ve had installs be fine from day one to year 7. It’s not like windows where Microsoft is constantly changing things and changing your settings. Like it takes minimum effort to keep a Linux server/system going after initial configuration.
For some reason, these local LLMs are straight up stupid. I tried DeepSeek R1 through ollama and it got everything wrong. Anyone got the same results? I did the 7b and 14b (if I remember those numbers correctly); the 32b straight up wouldn’t install because I didn’t have enough RAM.
Did you use a heavily quantized version? Those models are much smaller than the state-of-the-art ones to begin with, and if you chop their weights from float16 down to 2 bits or something, it reduces their capabilities a lot more.
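The effect is easy to see on toy numbers: a crude uniform quantizer (a simplified illustration; real schemes like GGUF quants are block-wise and smarter) snaps each weight to one of 2^bits evenly spaced levels, and at 2 bits the rounding error dwarfs what an 8-bit version loses:

```python
def quantize(weights, bits):
    # Crude uniform quantizer: map each weight onto one of 2**bits
    # evenly spaced levels between the min and max weight.
    levels = 2 ** bits
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (levels - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

weights = [-0.31, -0.07, 0.02, 0.18, 0.44, 0.95]

for bits in (2, 4, 8):
    q = quantize(weights, bits)
    err = max(abs(w - v) for w, v in zip(weights, q))
    print(f"{bits}-bit: max rounding error {err:.4f}")
```

With only 4 levels (2-bit), some weights land nearly a fifth of the whole range away from their true values, which is the kind of damage that compounds across billions of parameters.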
The performance is relative to the user. Could it be that you’re a god damned genius? :/