So after months of dealing with problems trying to get the stuff I want to host working on my Raspberry Pi and Synology, I’ve given up and decided I need a real server with an x86_64 processor and a standard Linux distro. Before I spend a bunch more money, I want to think seriously about what I actually need hardware-wise, so I don’t just run into a new set of problems. What considerations should I keep in mind?

Initially, the main things I want to host are Nextcloud, Immich (or similar), and my own Node bot @DailyGameBot@lemmy.zip, which uses Puppeteer to take screenshots (the big thing that prevents it from running on a Pi or Synology). I’ll definitely want to expand to more things eventually, though I don’t know what yet. Probably all or most of it in Docker.
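
For reference, here’s roughly the kind of compose file I’m imagining. It’s an untested sketch: the image tags, ports, and paths are placeholders, and Immich really needs several containers (server, database, Redis), so I’ve left it out here.

```yaml
# Rough sketch only, not a tested config.
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"
    volumes:
      - nextcloud_data:/var/www/html
    restart: unless-stopped

  dailygamebot:
    build: ./dailygamebot   # my own Node + Puppeteer bot
    restart: unless-stopped

volumes:
  nextcloud_data:
```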

For now I’m likely to keep using Synology’s reverse proxy and built-in Let’s Encrypt certificate support, unless there are good reasons to avoid that. And as much as possible, I want the actual files (used by Nextcloud, Immich, etc.) to be stored on the Synology, to take advantage of its large capacity and RAID 5 redundancy.
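
As I understand it, Docker’s local volume driver can mount an NFS share directly, so something like the sketch below should let containers keep their data on the NAS (the IP and share path are made up, and NFS would need to be enabled on the Synology first):

```yaml
# Untested sketch: mount a Synology NFS share as a Docker volume.
# 192.168.1.10 and /volume1/nextcloud are placeholders.
volumes:
  nextcloud_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,nfsvers=4,rw
      device: ":/volume1/nextcloud"
```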

Is a second-hand Intel-based mini PC likely to be suitable? I’ve read that they can have serious thermal throttling issues because they don’t have great airflow. Is that a problem that matters for a home server, or is it more of an issue for desktops where people try to run games? Is there a particular reason to look at Intel vs AMD? And are there things I should consider about RAM, CPU power, internal storage, etc. that might not be immediately obvious?
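
If I do end up with a used mini PC, I figure I can at least check for throttling myself with a quick stress test, something like this (assuming the lm-sensors and stress-ng packages are installed):

```sh
# Load all cores for ten minutes while watching temperatures and clocks.
stress-ng --cpu 0 --timeout 10m &   # --cpu 0 = one worker per core
watch -n 2 "sensors; grep MHz /proc/cpuinfo"
```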

Bonus question: what’s a good distro to use? My experience so far has mostly been with desktop distros, primarily Kubuntu/Ubuntu, or with niche distros like Raspbian. But all Debian-based. Any reason to consider something else?

  • Zagorath@aussie.zoneOP · 9 days ago

    Sorry for the late reply. I’m just disorganised and have way too many unread notifications.

    LXC containers sound really interesting, especially on a machine that’s hosting a lot of services. But how broad is the ecosystem around them? One advantage of Docker is its ubiquity, with a lot of useful tools already packaged as Docker images. Does LXC have a similarly broad supply of images? And if not, is it easy to create one yourself?
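
    For context, the only LXC workflow I’ve seen is via the LXD/Incus tooling, something like this (treat it as a sketch from memory, not something I’ve tested):

    ```sh
    # Browse the public image server, then launch a prebuilt Debian image.
    lxc image list images: debian
    lxc launch images:debian/12 test-container
    lxc exec test-container -- bash
    ```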

    Re VM vs LXC, have I got this right: you generally use VMs only for things that are spun up intermittently, rather than for services that run all the time, with a couple of exceptions like Home Assistant? What’s the reason those are an exception?

    Possibly related: in your examples, the VMs always get access to the discrete GPU while the containers use the integrated GPU. Is there a particular reason for that split?
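
    (For what it’s worth, the only way I’ve seen containers get at the iGPU is by passing /dev/dri through, e.g. in a compose file. Untested sketch, and the service name is just an example:)

    ```yaml
    # Untested sketch: give a container access to the integrated GPU
    # by passing the DRM devices through. Service name is a placeholder.
    services:
      immich-machine-learning:
        devices:
          - /dev/dri:/dev/dri
    ```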

    I’m really curious about the cluster thing too. How simple is it? Is it something where you could start out with just an old spare laptop, then later add a dedicated server and have it transparently expand the power of your setup? Or is the advantage mainly high availability (HA)? Or something else?
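
    (I’m assuming here you mean something like a Proxmox cluster, where joining nodes looks roughly like this, going by the docs rather than anything I’ve run myself:)

    ```sh
    # On the first node: create the cluster.
    pvecm create my-cluster
    # On each additional node (e.g. the old laptop); IP is a placeholder.
    pvecm add 192.168.1.20
    ```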