stravanasu

  • 42 Posts
  • 200 Comments
Joined 3 years ago
Cake day: July 5th, 2023


  • It was exactly as you said: a difference I didn’t know about. Also confirming that Kubuntu apparently installs them system-wide, even if flatpak install ... is called without sudo, again as you inferred. I don’t know how I managed to install them per-user on one laptop, but now they mirror each other :)

    For others interested, these two commands show the difference, as explained by another user in a cross-post:

    flatpak --user list
    flatpak --system list
    

    Thank you!

  • It is actually not so difficult to see this for yourself in a much simplified setting. One can easily build a “Small Language Model” that extracts correlations between only three consecutive words. There are plenty of short scripts on the web that do this; here and here are examples. The output created by such an SLM can contain surprisingly long, grammatical sentences (see the examples in the links above); this is remarkable, since all the model learned was correlations between triplets of words.

    Now you can take a large amount of output from such an SLM and use it to train a second, identical or even better SLM, then check the output generated by this second one. You’ll see that the new output is less coherent than that of the first SLM. Feed the output of the second SLM to a third, and you’ll see even less coherent text coming out. And so on.
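For the curious, here is a minimal sketch of such a trigram SLM (my own toy version, not one of the scripts linked above): it maps each pair of consecutive words to the words observed to follow them, then generates text by sampling. Note that a model retrained on generated text can only reuse whatever word pairs survived in that text, never recover richer statistics:

```python
import random

def train(words):
    """Trigram model: map each consecutive word pair to its observed followers."""
    words = words + words[:2]        # wrap around so every pair has a follower
    model = {}
    for a, b, c in zip(words, words[1:], words[2:]):
        model.setdefault((a, b), []).append(c)
    return model

def generate(model, n_words, seed=0):
    """Sample text by repeatedly picking a follower of the last two words."""
    rng = random.Random(seed)
    out = list(rng.choice(sorted(model)))      # start from a random known pair
    while len(out) < n_words:
        out.append(rng.choice(model[tuple(out[-2:])]))
    return out

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "while the cat watched the dog chase the red ball").split()

gen1 = generate(train(corpus), 200)   # output of the first SLM
gen2 = generate(train(gen1), 200)     # a second SLM trained only on that output

# Each generation can only contain word pairs present in its training text,
# so the statistics available to the next model never grow richer.
print(" ".join(gen1[:12]))
```

With a real corpus instead of this two-sentence stand-in, the same few lines already produce the long grammatical-looking runs described above.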


  • They aren’t out of context, and you have just said the same thing. Data processing can help in removing noise, but it can’t help in creating information or extracting information that wasn’t there in the first place. In fact – again as you said – it can end up destroying part of the original information.

    LLMs extract word correlations from textual data. Already in this process they lose information, since they can’t extract correlations beyond a certain (albeit large) length, and they don’t capture all the correlations at shorter lengths either. And in creating output they insert spurious correlations that replace (destroy) some of the original ones. This output therefore contains less information than the original training data, so a new LLM trained on such output will give back even less.
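The general principle behind this is the data-processing inequality: for any Markov chain X → Y → Z, I(X;Z) ≤ I(X;Y). A small numeric check, with each processing step modelled as a binary symmetric channel (my choice of toy model, nothing specific to LLMs):

```python
from math import log2

def bsc(p):
    """Transition matrix of a binary symmetric channel with flip probability p."""
    return [[1 - p, p], [p, 1 - p]]

def push(px, channel):
    """Joint distribution p(input, output) for input marginal px."""
    return [[px[i] * channel[i][j] for j in range(2)] for i in range(2)]

def mutual_info(joint):
    """Mutual information in bits from a 2x2 joint distribution."""
    pa = [sum(row) for row in joint]
    pb = [sum(col) for col in zip(*joint)]
    return sum(p * log2(p / (pa[i] * pb[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

px = [0.5, 0.5]                 # fair-coin source X
c1, c2 = bsc(0.1), bsc(0.1)     # two noisy processing steps

i_xy = mutual_info(push(px, c1))

# Z is Y pushed through the second channel, so X -> Y -> Z is a Markov chain;
# compose the two channels to get the X -> Z transition matrix.
c_xz = [[sum(c1[i][k] * c2[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
i_xz = mutual_info(push(px, c_xz))

assert i_xz <= i_xy   # processing can remove information, never add it
print(f"I(X;Y) = {i_xy:.3f} bits, I(X;Z) = {i_xz:.3f} bits")
```

Each extra processing step only pushes the mutual information further down, which is exactly the retraining cascade described above.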

  • I’ve been having similar turd-kind encounters with bank apps even within Android. I use the excellent HeliBoard from F-Droid, and my bank app refused to start because I use an “untrusted keyboard” – funny, as it’s far more trustworthy than the Gboard or Microslop board apps. It turns out the apps of all banks in my country are like that. So now I simply access the bank via the browser instead. Fuck their apps.

    But I understand that the browser solution may not work for everyone :(

    Partly this problem comes from the incompetence of the apps’ developers, partly from shifting responsibility: it seems to me that they let the Play Store do the checks, so if any hacking happens they can blame the Play Store. And there’s also the modern motto: “if you want to make an app secure, make it unusable”. Even better, I’d then say, “don’t make it at all!” – there, security problem fully solved.

    Putting pressure on banks would be best. Possibly one could also play a “disability” card: I must use such-and-such an app or OS owing to visual impairment, say. Or collect signatures for a petition… but I imagine we’re a very small minority.

    As a protest, I myself changed banks a couple of times.

    But thank you for the USB-ADB tip! I’ll use it when I switch to GrapheneOS.

  • Nobody in the middle. No server storing anything. No company analyzing anything

    […]

    In deferred mode, it works just like regular email. Meaning your contact doesn’t need to be online when you send the message. Your contact will get it automatically once they come online.

    So I can’t send a message while my contact is offline, then go offline myself, and expect my contact to receive it when they come online? That’s quite limiting.

    How is PeerBox different from Delta Chat?