

How long does it reasonably take? Just a weekend?


This also makes sense. Dell is massive in datacenters. As a consultant I’ve worked with Dell hardware far more than anything else. I will say, just about every customer I’ve worked with is interested in AI, but they want to run their own models, not some half-baked thing from Dell.


Honestly, that’s pretty quick to learn that lesson. Huge corporations usually take way longer to figure that sort of thing out, usually not until it’s too late.


No kidding. I know it doesn’t change their point; you could tell by the way I said it doesn’t change their point. The original commenter might not have known it’s gotten to the point of models consuming their own synthetic data. They may have learned something.


I have some other keys for different things like “service accounts.” But I mainly have one personal key I use for things that need ssh keys.


The training sets aren’t all human created. They have models that feed other models training data. That doesn’t change your point, but you should know it’s worse than you think.


My key is used in so many places that I’m reluctant to update it. I probably should, if only to move to newer crypto standards.
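Rotating to a newer algorithm is mostly painless. A minimal sketch, assuming OpenSSH; the key filename and comment here are just examples:

```shell
# Make sure ~/.ssh exists with sane permissions
install -d -m 700 "$HOME/.ssh"

# Ed25519 is the current recommended key type: small, fast, modern crypto.
# -a 100 hardens the passphrase KDF; -N "" skips the passphrase for brevity,
# but a real personal key should have one.
ssh-keygen -t ed25519 -a 100 -N "" \
  -f "$HOME/.ssh/id_ed25519_new" \
  -C "personal key, rotated $(date +%Y)"

# Then push the new public key to each host that trusts the old one, e.g.:
#   ssh-copy-id -i ~/.ssh/id_ed25519_new.pub user@each-host
# and remove the old key from authorized_keys once logins work.
```

The nice part is the old and new keys can coexist in `authorized_keys` during the transition, so nothing has to be cut over all at once.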


Yeah, I only put specific things there. Management and monitoring things. The services are still local.


I’ve been using a VPS for a while now. I still maintain it, so it’s very much like self hosting.


Self hosting can be stretched to mean you’re hosting your own services on a cloud provider.


Defense in depth, for one. It looks like this project is made for protecting your data on cloud storage. I’ve noticed there seem to be a lot of projects right now built around relatively cheap S3 storage solutions.


Just self host the whole thing with Forgejo. I run a few GitHub Actions on runners, all on my own stuff.


I haven’t used it, but this project looks interesting: https://github.com/dkorecko/PatchPanda
It doesn’t just update your containers, it checks the release notes too.


In a critical environment the UPS only has to last as long as it takes to switch over to a backup generator.


Yeah, people who brag about uptimes are just bragging about the fragility of their infrastructure. If designed correctly, you should be able to patch and reboot infrastructure while application availability stays up.


I’ve been using Linux for decades and I’ve never tried Gentoo. Kudos and welcome.


It’s really not. They handle authentication but then everything is sent to your server.


A lot of companies. Don’t forget that IBM was ahead of the game with Watson and Watsonx. Also, don’t forget that Red Hat is owned by IBM, and OpenShift is getting big in the AI space, allowing GPUs to be pooled and workloads to be scheduled dynamically.


Sure, but they’re in the business of consulting on how to build out that AI platform and the business of providing an AI platform.


I would be doing it to learn as well. I’m pretty good with Linux (I work for Red Hat), but I don’t get down in the weeds at all. I do mostly Ansible and OpenShift.