• 0 Posts
  • 12 Comments
Joined 1 month ago
Cake day: November 30th, 2025




  • A bad setup isn’t evidence that something is a bad idea. Whatever your opinion of cars, arguing about how bad they would be if everyone drove drunk doesn’t really prove your point.

    In any security system, and this should also apply to home automation, one of the things you have to account for is failure. If you don’t have a graceful failure mode, you will have a bad time. And context matters. If my security system at home fails, defaulting to unlocked doors makes sense, especially if the failure is due to an emergency like a fire. If the security system in a virology lab fails, you probably don’t want all the doors unlocked - you may decide that none of them should unlock, because the consequences of open doors are worse than the consequences of locked ones. Likewise, though far less serious: if your home automation fails, you should still have some way of controlling the lights. If you don’t, it hasn’t failed gracefully.
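    As a rough illustration of that “fail gracefully, based on context” idea, here’s a minimal Python sketch. The device names and the choice of defaults are made up for the example - it’s not any real hub’s API, just the shape of the decision.

    ```python
    # Minimal sketch (hypothetical device names, not a real hub API):
    # pick a failure default per device class, so a dead controller never
    # leaves you without lights or stuck with the wrong locks.

    from enum import Enum

    class FailMode(Enum):
        FAIL_OPEN = "unlock / allow manual control"   # e.g. home door locks, wall switches
        FAIL_CLOSED = "stay locked / shut down"       # e.g. a lab containment door

    def on_controller_failure(device_type: str) -> FailMode:
        """Choose a graceful-failure default for a device class."""
        # At home: doors unlock and wall switches keep working if the hub dies.
        if device_type in ("front_door_lock", "light_switch"):
            return FailMode.FAIL_OPEN
        # High-consequence contexts invert the default: failure keeps things shut.
        if device_type == "containment_door":
            return FailMode.FAIL_CLOSED
        return FailMode.FAIL_OPEN

    if __name__ == "__main__":
        for dev in ("front_door_lock", "light_switch", "containment_door"):
            print(dev, "->", on_controller_failure(dev).value)
    ```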


  • You’re still not getting it. A proper smart home knows when you want certain things. When you go into the bathroom to get ready for work, the lights come on at full intensity; in the middle of your sleep period, they go to a pre-programmed dim mode. Most rooms get used in certain ways, as defined by you: if you’re in the living room and turn the TV on, the lights dim, because that’s what you told it to do. If you have an EV to charge, the house knows how long the car needs and what electricity costs you at different times, so you plug it in and it charges during the cheap window and is ready when it’s time to go to work (a rough sketch of this kind of rule is at the end of this comment).

    This is where smart homes start to shine - they handle the usual, default things you would do yourself anyway, and you just live your life and deal with the exceptions as needed. If you use a room three different ways, you set up those three ways and make the typical one your default; now you’re back to exceptions. The more consistent your routines are, the better it works for you, and most people do have a preferred way they want things, modified by how much effort it takes to get there and by other circumstances. With the right sensors, timers, and so on, most of that can be accounted for.

    So maybe you start with lights turning on when you enter the room, but if you do it right you get to the point where you barely think about lights at all - they’re just how you want them to be. Why would you not want that? However little effort lights take to manage, why do you want them to take any effort at all? And there are many more things than lights, some of which just make life easier, or more comfortable, or cheaper, all of which are good reasons to want this.
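    Here’s that rough sketch of what rules like these look like in code. Everything in it - device names, tariff hours, brightness levels - is an assumed example, not a real smart-home API; the point is just that the defaults are yours and the system applies them for you.

    ```python
    # Minimal sketch of "defaults plus exceptions" (all names and numbers are
    # hypothetical, not a real smart-home API): rules encode the usual
    # behaviour, and you only intervene for the exceptions.

    from datetime import time

    CHEAP_TARIFF_HOURS = range(1, 6)   # assumed off-peak window, 01:00-05:59

    def bathroom_light_level(now: time) -> int:
        """Full brightness for the morning routine, dim during the sleep period."""
        if time(6, 0) <= now <= time(8, 0):
            return 100          # getting ready for work
        if now >= time(22, 0) or now <= time(5, 0):
            return 10           # middle of the sleep period
        return 60               # everyday default

    def living_room_light_level(tv_on: bool) -> int:
        """Dim the lights when the TV is on, because that's what you told it to do."""
        return 20 if tv_on else 80

    def should_charge_ev(hour: int, hours_needed: int, departure_hour: int) -> bool:
        """Charge in the cheap window, as long as the car is ready by departure."""
        hours_left = (departure_hour - hour) % 24
        return hour in CHEAP_TARIFF_HOURS and hours_left >= hours_needed

    if __name__ == "__main__":
        print(bathroom_light_level(time(6, 30)))       # 100: morning routine
        print(living_room_light_level(tv_on=True))     # 20: movie night
        print(should_charge_ev(hour=2, hours_needed=3, departure_hour=7))  # True
    ```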



  • What you’re saying is mostly right, and in a practical sense it holds up, but not as well in a technical sense. This is the specific part that’s problematic:

    “RISC CPUs like the ARM in the Raspberry Pi are really good at not doing anything, or doing a really small subset of things (it’s in the name!), but x86 is great at doing some stuff and being able to do a wide variety of stuff with its big instruction set. If you raise an eyebrow at my claim, consider that before GPUs were the main way to do math in a data center it was x86. If the people who literally count every fraction of a watt of power consumption as billable time think it’s most efficient, it probably is!”

    This is generally correct, per cycle. Overall, it really depends. The problem is that the x86 architecture only does well as long as it’s kept busy and the work is predictable (for the purposes of look-ahead and parallelization). That’s why it’s great for the mathematical calculations you referred to, and also why GPUs took over - they’re massively better at tasks that can be parallelized, such as math calculations and graphics rendering. Beyond that, ARM has been tuned for low-power environments, which means it does poorly in workloads that need a lot of computation, because, in general, more computation requires more power (or the same power with more efficient hardware, and then we’re talking about generational differences in chip design). Couple that with the massive amount of money spent to make x86 what it is, versus the comparatively little that ARM and other RISC designs received, and the gap gets wider.

    Now, as I started with, even a basic x86 computer running mostly at idle is going to have pretty low power consumption, dollar-wise. Compare that to the power draw of a new router, or even a newer low-power mini PC, and the ROI isn’t going to justify the purchase if you already have the old hardware sitting around idle. And it will still outperform a Raspberry Pi configured to act as a router once your bandwidth is above roughly 250 Mbps, if I remember correctly (and something like 120 Mbps for the Pi 4 and earlier generations).
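    To put rough numbers on that - all of these wattages, the electricity price, and the hardware cost are assumed round figures for illustration, not measurements - the back-of-the-envelope math looks something like this:

    ```python
    # Back-of-envelope ROI sketch; every number here is an assumed round figure
    # for illustration, not a measurement or a quoted price.

    IDLE_WATTS_OLD_X86 = 25      # assumed idle draw of a spare desktop used as a router
    IDLE_WATTS_MINI_PC = 8       # assumed idle draw of a newer low-power mini PC
    PRICE_PER_KWH = 0.15         # assumed electricity price, USD per kWh
    NEW_HARDWARE_COST = 180.0    # assumed price of the replacement box, USD

    def yearly_cost(watts: float) -> float:
        """Electricity cost of running a device 24/7 for a year."""
        return watts / 1000 * 24 * 365 * PRICE_PER_KWH

    savings = yearly_cost(IDLE_WATTS_OLD_X86) - yearly_cost(IDLE_WATTS_MINI_PC)
    print(f"Yearly savings: ${savings:.2f}")                            # roughly $22
    print(f"Payback period: {NEW_HARDWARE_COST / savings:.1f} years")   # roughly 8 years
    ```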