

This was sold by Foveon, which had some interesting differences. The sensors were layered which, among other things, meant that the optical effect of moire patterns didn’t occur on them.


This doesn’t appear to be made by either the Raspberry Pi Foundation or Raspberry Pi Holdings.


Bad setup isn’t a reason why something is a bad idea. Whatever your opinions of cars are, talking about how bad they would be if everyone drove drunk doesn’t really prove your point.
In any security system, and this should also apply to home automation, one of the things you have to account for is failure. If you don’t have a graceful failure mode, you will have a bad time. And context matters. If my security system fails at home, defaulting to unlocked doors makes sense, especially if the failure is due to an emergency like a fire. If the security system in a virology lab fails, you probably don’t want all the doors unlocked; you may decide none of the doors should unlock, because the consequences of unlocked doors are worse than the consequences of locked ones. Likewise, though much less seriously: if your home automation fails, you should still have some way of controlling the lights. If you don’t, then again, it hasn’t failed gracefully.
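To make that failure-mode point concrete, here’s a toy sketch. The names and logic are invented for illustration, not any real product’s behavior:

```python
from enum import Enum

class FailureMode(Enum):
    FAIL_SAFE = "unlock on failure"         # egress matters most
    FAIL_SECURE = "stay locked on failure"  # containment matters most

def door_failure_mode(containment_risk: bool) -> FailureMode:
    """Pick a door's failure mode from context.

    A home prioritizes letting people out (fail-safe); a virology
    lab prioritizes keeping things in (fail-secure).
    """
    return FailureMode.FAIL_SECURE if containment_risk else FailureMode.FAIL_SAFE

# Power fails at home: doors unlock so nobody is trapped.
assert door_failure_mode(containment_risk=False) is FailureMode.FAIL_SAFE
# Power fails at the lab: doors stay locked.
assert door_failure_mode(containment_risk=True) is FailureMode.FAIL_SECURE
```

The point isn’t the one-liner; it’s that “what happens on failure” is an explicit design decision, made per context, before anything fails.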


You’re still not getting it. A proper smart home will know when you want certain things. You go into the bathroom to get ready for work, and the lights come on at full intensity. In the middle of your sleep period, they go to the pre-programmed dim mode. Most rooms will be used in certain ways, as defined by you: if you’re in the living room and turn the TV on, the lights dim, because that’s what you told it to do. You have an EV to charge? It knows how much time your EV needs to charge and what electricity costs you during each period, so you plug the car in and it charges when you want it to, and the car is ready when it’s time to go to work.

This is where smart homes start to shine - they handle all the default things you would normally do, and you just live your life and deal with the exceptions as needed. If you use a room 3 different ways, you set up those 3 different ways and make the typical one your default. Now you’re back to exceptions. And the more regular your routines are, the better it works for you. Most people have a preferred way they want things, modified by how much effort it takes to get there and other circumstances. With the right sensors, timers, etc., most of those can be accounted for.

So maybe you start with lights turning on when you enter the room, but if you do it right you get to the point where you barely think about lights at all - they’re just how you want them to be. Why would you not want that? However little effort lights take to manage, why should they take any effort at all? And there are many more things than lights, some of which make life easier, or more comfortable, or cheaper - all good reasons to want this.
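The EV-charging case above is easy to sketch. A toy version, assuming you know the hourly prices in advance (everything here is invented for illustration - real chargers and tariffs are more complicated):

```python
def pick_charging_hours(prices, hours_needed):
    """Choose the cheapest hours to charge in before departure.

    prices: list of (hour, price_per_kwh) tuples covering now..departure.
    hours_needed: how many hours of charging the car requires.
    Returns the set of hours to charge during.
    """
    cheapest = sorted(prices, key=lambda hp: hp[1])[:hours_needed]
    return {hour for hour, _ in cheapest}

# Overnight tariff; the car needs 3 hours of charge before a 7am departure.
overnight = [(22, 0.30), (23, 0.25), (0, 0.12), (1, 0.10),
             (2, 0.10), (3, 0.11), (4, 0.18), (5, 0.22), (6, 0.28)]
print(pick_charging_hours(overnight, 3))  # the three cheapest hours: {1, 2, 3}
```

That’s the whole trick: you state the rule once, and from then on “plug the car in” is the only thing you ever do by hand.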


If they ramp up production and the bottom falls out of AI, they could be left with large product reserves, and people may still be reluctant to buy. One way to increase demand is to lower prices. Now, if only one company is in this position, things may not change much. But if more than one is, the others can supply the market at a price that’s acceptable to both them and consumers.
Or those companies can collude and just completely fuck over customers. But that would never happen, right?


What you’re saying is mostly right, and in a practical sense it is right, but less so in a technical sense. This is the specific part that’s problematic:
RISC CPUs like the ARM in the Raspberry Pi are really good at not doing anything, or doing a really small subset of things (it’s in the name!), but x86 is great at doing some stuff and being able to do a wide variety of stuff with its big instruction set. If you raise an eyebrow at my claim, consider that before GPUs were the main way to do math in a data center it was x86. If the people who literally count every fraction of a watt of power consumption as billable time think it’s most efficient, it probably is!
This is generally correct, per cycle. Overall, it really depends. The problem is, the x86 architecture only does well as long as it’s kept busy and the work to be performed is predictable (for the purposes of look-ahead and parallelization). That’s why it was great for those mathematical calculations you referred to, and why GPUs took over - they’re massively better performers on tasks that can be parallelized, such as math calculations and graphics rendering. Beyond that, ARM has been tuned for low-power environments, which means it does poorly in environments that need a lot of computation, because, in general, more computation requires more power (or the same power with more efficient hardware, and now we’re talking about generational chip-design differences). Couple that with the massive amount of money spent to make x86 what it is, versus the relatively smaller amounts RISC and ARM received, and the gap gets wider.
Now, as I started with, even a basic x86 computer running mostly idle is going to have pretty low power consumption, dollar-wise. Compare that to the power draw of a new router, or even a newer low-power mini PC, and your ROI is not going to justify the purchase if you have the hardware just sitting around idle anyway. And it will still perform better than a Raspberry Pi configured to act as a router if your bandwidth is above about 250 Mbps, if I remember correctly (and something like 120 Mbps for the Pi 4 and earlier generations).
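To put rough numbers on that ROI argument, here’s a back-of-the-envelope sketch. The wattages, hardware price, and electricity rate are all assumptions for illustration, not measurements:

```python
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts, price_per_kwh=0.15):
    """Yearly electricity cost of a device drawing `watts` continuously."""
    return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

old_x86_idle = annual_cost(30)         # assume 30 W idle draw for the old x86 box
mini_pc = annual_cost(8)               # assume 8 W for a low-power mini PC
savings_per_year = old_x86_idle - mini_pc
payback_years = 200 / savings_per_year  # assume the mini PC costs $200

print(f"saves ${savings_per_year:.2f}/yr, pays for itself in {payback_years:.1f} years")
```

Under those assumed numbers the replacement takes roughly seven years to pay for itself, which is the “hardware just sitting around idle” math above. Different wattages or rates obviously move the answer.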


There was a story about a researcher using evolutionary algorithms to build more efficient systems on FPGAs. One of the weird shortcuts: some part of the design normally needed a clock circuit, but none was available, so evolution produced a dead-end circuit that would give an electric pulse when used, creating a makeshift clock. The big problem was that the efficiency gains often relied on quirks of the specific board, and his next step was to start testing candidates on multiple FPGAs and using the overall fitness to get past those quirks/shortcuts.
Pretty sure this was before 2010. Found a possible link from 2001.
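The fix in that last step - averaging fitness across several boards so evolution can’t lean on one board’s quirks - looks roughly like this toy sketch. The scoring function is entirely made up; real FPGA evaluation means actually programming and measuring hardware:

```python
def fitness_on_board(genome, quirk):
    """Stand-in for programming one FPGA and measuring the result.

    base: how well the design works on any board (here, count of 1-bits).
    bonus: extra score from exploiting this particular board's quirks.
    """
    base = sum(genome)
    bonus = sum(g & q for g, q in zip(genome, quirk))
    return base + bonus

def robust_fitness(genome, boards):
    """Average fitness over all boards, so a quirk-specific trick only
    pays off in proportion to how many boards share that quirk."""
    return sum(fitness_on_board(genome, b) for b in boards) / len(boards)

boards = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]  # each board: a different quirk
print(robust_fitness([1, 1, 1, 1], boards))  # 5.0
```

With a single board, evolution happily maximizes the quirk bonus; averaged over boards with different quirks, only the board-independent part of the score reliably improves.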


Summarize that sentence into a thumbs up or thumbs down emoji.


Pretty much everything you said is incorrect, except for the article’s age. Valetudo is literally software that does this locally on multiple models, including mapping. The response of the manufacturers whose models were capable of this was to release new versions where this wasn’t an option. As for servers and local control, there are a number of solutions for those with the knowledge and hardware to set them up, and the only thing stopping robovac companies from supporting this is (less) money.


We could still live in caves, but most of us have chosen not to. I’m personally of the opinion that every advancement that gives you more time for the things that are important to you is worth it. That doesn’t mean inviting in every piece of spyware some company tries to thrust upon me is acceptable, either.


The Pebble Time 2 has a heart rate monitor. I can’t say if the rest of your statement is correct or not.
Having previously used tools like Inventor (which isn’t great for floor plans, but is great for parametric modeling): yes, Sweet Home 3D’s UX leaves something to be desired, and that’s doubtless why you didn’t find out how to adjust walls, etc. parametrically. I wouldn’t classify it as terrible, but it isn’t great, for sure.