• 0 Posts
  • 177 Comments
Joined 10 months ago
Cake day: June 7th, 2025


  • End-to-end encryption is the final boss of false-sense-of-security.

    Like, it’s great and all, but it’s not the universal, perfect privacy a lot of people seem to hold it up as. You have to understand what it’s actually defending against, who is blocked by that, and more importantly, who isn’t. Because the list of potential adversaries it is actually useful against is becoming narrower and increasingly out-of-date.

    Encryption alone prevents the messages from being read in transit between you and Signal, and that’s fundamental, basic security at this point. End-to-end encryption prevents your messages from being spied on by Signal itself, but ironically they’re probably one of the most trustworthy actors in this whole chain, so the fact that it’s protected from them, while commendable, is not particularly valuable security. They were probably never the ones who were going to spy on you in the first place. They have made themselves incapable of doing so, and that’s good, but if that’s all you’re worried about and you now think your privacy problems are solved, you’re completely missing the point: instead of Signal itself, you need to be worried about the guy currently standing over your shoulder filming your screen with his camera.

    Treat your phone and your Windows computer like they are permanently compromised with a rootkit taking continuous screenshots of everything you do and feeding that to their big tech overlords, because they might as well be.

    For that matter, even Linux PCs still have a black-box “Intel Management Engine” or similar coprocessor running constantly, potentially able to watch everything you do. I don’t believe it actually does that in any ordinary case, but we need to understand that such components have both the capability and the motivation to be, at least in some cases, compromised by adversaries which may include (but are not limited to) tech companies and governments. You can’t even trust your “dumb monitor” unless you’ve audited every chip inside it; you’ll never know whether it’s scanning everything on your screen and feeding it back through HDMI/DP back-channels or even through powerline networking. Nor do you know whether the same kinds of things are happening on the other end you’re sending to and receiving from. Sure, the network trip is protected, but that’s hardly the only place you’re vulnerable to interception.

    That probably all sounds paranoid, extreme and improbable, and it is, but the point is that end-to-end encryption does nothing to help you against any of it, so don’t make the mistake of assuming you’re 100% safe just because something is end-to-end encrypted. The “end” is not where you think it is, and it’s not paranoia to at least understand that and accept the risk knowingly.

    I realize I am probably preaching to the choir here, and most of you probably understand this as well as I do. But I’m also pretty sure a lot of people truly believe it’s more secure against eavesdropping than it actually is and that needs to change. The surveillance state is adapting and expanding rapidly and I fear they’ve started getting ahead of many of us. Beware, and plan carefully in the months and years ahead.



  • Absolutely. Just like addiction to fast food causes obesity, our addiction to fast information has developed into a profound societal ignorance. Studying issues seriously takes time and effort, and if you think “ain’t nobody got time for that,” I’ll tell you right now that you’re going to have to start making time for it. Because if you don’t, you’ll end up knowing nothing and being wrong about everything. That may be acceptable to anyone following all the other lemmings in the same direction (the double irony of “lemming behavior” itself being a fabricated piece of historical misinformation, posted on Lemmy no less, is not lost on me), but I’m also going to suggest that there will be serious personal consequences for being wrong all the time, and those consequences are going to catch up with you sooner or later.


  • I dabble in local AI and this always blows my mind. How do people just casually throw 135b-parameter models around? Are people renting datacenter hardware or GPU time, or building personal AI servers with 6 5090s in them, or quantizing them down to 0.025 bits, or what? What’s the secret? How does this work? Am I missing something? The Q4 of Qwen3.5 122B is between 60-80GB just for the model alone. That’s 3x 5090s minimum, unless I’m doing the math wrong, and then you need to fit the huge context windows these things have in there too. I don’t get it.

    Meanwhile I’m over here nearly burning my house down trying to get my poor consumer cards to run glm-4.7-flash.
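    For what it’s worth, the back-of-the-envelope math checks out. A rough sketch (the bits-per-weight figures are typical ballpark assumptions for Q4-style quantization, not exact numbers for any specific model):

    ```python
    # Rough VRAM estimate for a quantized model's weights alone.
    # This ignores the context/KV cache, which adds a lot on top.

    def model_weight_gb(params_billions: float, bits_per_weight: float) -> float:
        """Approximate weight storage in GB (10^9 bytes)."""
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    # Q4-style quantization typically lands around 4.5-5 bits/weight
    # once quantization scales and metadata are counted.
    low = model_weight_gb(122, 4.5)   # ~68.6 GB
    high = model_weight_gb(122, 5.0)  # ~76.2 GB

    print(f"122B at Q4: roughly {low:.0f}-{high:.0f} GB of weights")
    # At 32 GB per 5090, that's already three cards before any KV cache.
    ```

    So yes, three consumer flagship cards minimum for the weights alone, which is why the casual talk about 100B+ models is so baffling.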


  • I hate to break it to you, but we’re never going to be able to trust anything ever again. At least, not the way we used to. In the future, without any doubt, we are going to need to develop a different model of learning, using, and processing information that considers the provenance of where the information came from and how it got there from essentially first principles. We will have to build a web of investigation and trust to determine and mark what information is trustworthy and what is not, especially new information. None of this exists in any meaningful way yet, and the systems we used to have for it, like academic research and journalism for example, would have been catastrophically inadequate to handle this onslaught even at their peak, and they are nowhere near their peak anymore, having been deliberately eroded into a shadow of their former effectiveness so some assholes could get rich and powerful.

    So hopefully we’ll be able to rely on solid ground like Wikipedia and… books as a starting point, and nobody gets around to burning the Library of Alexandria down in their rage against “woke stuff”, because otherwise we’re going to be rebuilding our information spaces pretty much from scratch in the near future, probably at the same time we’re rebuilding civilized society in general.

    If this sounds incredibly uncertain, tedious and painful: yes, it will be, especially at first. But we will get better at it, eventually. We will develop new systems for it, we will become fluent in information again and the friction will fade.

    I wish we could get to that stage right away, but unfortunately it will have to wait. We can’t do anything to improve the swimming pool while we are currently drowning in it. This is the reality that rampant and unchecked use of AI technologies by soulless corporations and corrupt governments has wrought. Logic and reason never stood a chance, and we are entering the digital dark ages. The enlightenment is probably coming someday, but don’t hold your breath for it.

    Support your local library, that’s the most helpful thing I can think of for individuals to do. Librarians know their shit.


  • A business is something owned and run by a real human, who may be an evil person but is still at least a person that can potentially be reasoned with and can suffer consequences for their actions. Sociopathic business owners absolutely do exist and are a real concern, but they are a manageable one, at least theoretically, at least when the entire system isn’t stacked in favor of them.

    As you say, corporations are different (and they are a significant part of the reason the economy is stacked in favor of sociopaths instead of against them). They are only nominally run by a human, and typically only in a time-limited or otherwise constrained capacity. A corporation is owned by its shareholders: an anonymous, nameless, faceless mob of pitchforks and torches, constantly shifting, amorphous and fluid, impossible to solidify into anything that can be pinned down, mostly represented by bankers, fund managers and balance sheets that want to look good to their eventual customers so they can sell financial products to them. They are inherently amoral, and like any mob they can quickly turn from vicious to apathetic and back again at the prompting of a single act or actor, without any logical reason. The sociopaths, on the other hand, can easily take advantage of this, becoming that single actor or creating that single act to incite the mob to riot or soothe it into complacency almost at will. As a result, they control the corporations, and thus the economy.



  • The simple, maybe unhelpful answer is that fail2ban needs to have two things at once: the logs, and a way to block the network traffic.

    Where exactly you want those things to coincide is really up to you. There might be only one point that has access to both, or several, depending on how your systems, services and network are configured. In a bad situation you might find there’s no single point where both are simultaneously available, in which case you’ll need to reconfigure something until at least one such point exists.

    As far as best practices go, I can’t really say for sure, but one of the more convenient ways to run it is on the same system as the service. I usually run it outside of docker, on the host, which can pretty easily get access to the containers’ logs if necessary, and let fail2ban block traffic for the whole system. For me, any system running publicly accessible network services that allow password login gets a fail2ban instance.

    A whole-network approach where you block the traffic at the firewall is fine too, if that’s what you prefer and want to work towards, but it’s probably going to be significantly more complex to set up, because now you need to either give fail2ban a way to control your firewall, or give your firewall a way to get the logs it needs.
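    As a sketch of the “same system” approach: a minimal jail that watches sshd and bans on the host firewall. The specific values and paths are illustrative; adjust them for your distro and services:

    ```ini
    # /etc/fail2ban/jail.local (illustrative values)
    [DEFAULT]
    bantime  = 1h
    findtime = 10m
    maxretry = 5
    # banaction decides WHERE traffic gets blocked; nftables/iptables
    # actions block on the host itself, i.e. the setup described above.
    banaction = nftables-multiport

    [sshd]
    enabled = true
    # For a containerized service, point logpath at wherever the
    # container's logs land on the host (e.g. a bind-mounted log file),
    # so fail2ban has both the logs and the firewall in one place.
    ```

    This is exactly the “both things at one point” requirement: the host sees the logs and owns the firewall, so fail2ban can do its job without any cross-system plumbing.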



  • I agree. Flatpaks have similar challenges. I understand the dilemma, and I understand what they’re trying to do and solve, but shifting responsibility for these sorts of things from “here” to “there” doesn’t actually solve the problem; it just moves it, often to somewhere where someone who has no business dealing with it will ultimately end up handling it in a way that’s even worse than what you started with.

    Personally I try to be pragmatic rather than ideological about software packaging. Distro-provided deb packages are my strict first place to check whenever they’re available, and I try to convince myself to use them even if they’re somewhat older versions or kind of stupidly packaged, falling back to the project’s own deb repos, if they have them, for more up-to-date versions. If that fails I might consign myself to building from source or banging my head against docker, unless I really, absolutely need some other packaging option for some specific reason. I’ve even used a flatpak occasionally (often for something I’d like somewhat sandboxed). But snap is pretty much garbage with few redeeming features, and I’ve never felt the slightest interest in using it for any reason.

    It’s a mess, but it’s a manageable mess, mostly because I’m forced to manage it whether I like it or not. This is the unfortunate reality of software packaging on Linux. With great choice comes too many choices. It’s a tradeoff I’m willing to make, because I like having choice.


  • Yes. They are technically reflected sunlight, so what you’re seeing really is the sun, just reflected from a very small surface. It makes sense that you can see them even when your sky is dark, because the satellite itself is still in sunlight. You will typically only see them on the side of the sky opposite the sun, but the exact angle depends on the location and orientation of the satellite and of the surface actually doing the reflecting.

    Generally speaking, they are dots that fade in somewhat gradually, move at a consistent pace (typically slower than a shooting star but faster than an airplane at cruising altitude) in a straight line for a while at full brightness, then fade away.


  • It’s literally the core foundation of my entire self-hosting configuration. I could not live without Forgejo. I can’t imagine being shackled to Github or some other hosted provider anymore for something as important as my git repositories.

    Gitea’s okay too in every practical respect, but Forgejo is the more community-led fork, and in my opinion less likely to be corporatized and enshittified in the future, so I’ve hitched my wagon there and couldn’t be happier. The forks are slowly diverging, so it seems direct migration is no longer possible. That said, git repositories are git repositories, and most of the important history already lives inside them, so unless you’re super attached to things like issues you can still migrate; you’ll just lose some stuff.



  • Because they incentivize it. Some banks are better at incentivizing it than others. My bank, for example, allows the highest daily limit (by a factor of 5x) if you use the app. Online banking has a lower limit, and cards lower still. I don’t appreciate them holding my own money hostage, but the sad reality we live in leaves me without enough mental bandwidth and effort to commit to fighting such an empty and unwinnable battle. Money is a scam anyway.




  • You don’t have any great options but you do have some options. You’ll need dynamic DNS, which you can get for free by various providers. This will manage a “dynamic” DNS entry for your occasionally changing, non-static IP at home. The dynamic DNS entry won’t be on your own domain name, it will be on the provider’s domain name. But wait! That’s just step one.

    You can still get your own, fully-functional domain name, and you can have all the domains and subdomains you want, and set them up however you want, with one important restriction: You can’t use IP addresses (because yours is dynamic, and changes all the time and you would have to be constantly updating your domain every time it does, and there would be delays and downtime while everything gets updated).

    Instead, your personal domains have to use CNAME records. A CNAME makes a name on your domain an alias for another name, so lookups of your domain follow the chain to your dynamic DNS name, which handles the dynamic part of the problem for you and always resolves to your current real IP. Nobody sees the dynamic DNS name; it’s there, but it works behind the scenes, and everyone still sees your fancy personalized domain names.
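    Concretely, the records might look something like this (all the domain names here are made-up placeholders, and the exact syntax depends on your DNS provider’s interface):

    ```
    ; zone for your-domain.example (illustrative)
    ; the dynamic DNS provider keeps myhome.dyn-provider.example
    ; pointed at your current home IP for you
    www     IN  CNAME  myhome.dyn-provider.example.
    cloud   IN  CNAME  myhome.dyn-provider.example.
    git     IN  CNAME  myhome.dyn-provider.example.
    ```

    One real-world caveat: a CNAME generally can’t live at the bare apex of a zone (your-domain.example itself), so for the root name you may need your provider’s ALIAS/ANAME/flattening feature, where available.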

    It’s still not going to be perfect. It won’t work well, or at all, for certain services like email hosting (self-hosting that is not for the faint of heart anyway) that are very strict about how their DNS and IP addresses need to be set up, but it will likely be good enough for 99% of the stuff you want to self-host.