  • So the real turning point in his riches to more riches story was Zip2. Most people have never heard of it because it was never anything even vaguely important, but it was a website in the midst of the dot-com era, and Compaq, desperate to be “in”, threw a bunch of money at it. Elon basically won a lottery.

    His next stop was to roll his winnings into trying to get X (not the current one, an online payment platform) going. By all measures, it didn’t get anywhere, pretty well stomped by PayPal.

    In the midst of that competition, X folded into PayPal. Against all reason, they made Elon the head of the now-joined PayPal/X, despite him being on what was obviously the losing side of the business. It was a disaster, and they ultimately sidelined him to save the company because he was so bad.

    Ok, so now he’s on the sideline but a large shareholder in PayPal… And then eBay came along with $1.5 billion to acquire PayPal, and that got him to about a quarter billion, just for being there.

    Then the next significant stop was to jump on Tesla, rewrite their history to declare himself founder and largely let them do what they will while he collected the money. Sounds like in recent years he’s started to believe his own mega-genius hype, and has been imposing his direction more, and not to Tesla’s betterment.

    Like every step of the way, he either fell into lucky circumstances or managed to get everyone to feed his ego. I suppose his “skill” was taking credit for Tesla despite only being a source of funding way early on.





  • Lower storage-density chips would still be tiny, geometry-wise.

    A wafer of chips will have defects; the larger the chip, the bigger the portion of the wafer spoiled per defect. Big chips are way more expensive than small chips.
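
    To make that concrete, here’s a rough back-of-the-envelope sketch using a simple Poisson yield model; the defect density, wafer cost, and die sizes are made-up illustrative numbers, not real process data:

    ```python
    # Rough sketch of why bigger dies cost disproportionately more: Poisson yield model.
    # Defect density, wafer cost, and die areas are assumed illustrative values.
    import math

    wafer_area_cm2 = math.pi * (30.0 / 2) ** 2   # 300 mm wafer, ~706 cm^2
    defect_density = 0.1                          # defects per cm^2 (assumed)
    wafer_cost = 10000.0                          # assumed flat cost per wafer

    for die_area_cm2 in (0.5, 1.0, 2.0, 4.0):
        yield_fraction = math.exp(-defect_density * die_area_cm2)   # Poisson yield
        dies_per_wafer = wafer_area_cm2 / die_area_cm2              # ignoring edge loss
        good_dies = dies_per_wafer * yield_fraction
        print(f"{die_area_cm2:4.1f} cm^2 die: yield {yield_fraction:6.1%}, "
              f"cost per good die ${wafer_cost / good_dies:6.2f}")
    ```

    The 4 cm^2 die costs far more than 8x the 0.5 cm^2 die per good part, because each defect wastes more area and the yield drops as the die grows.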

    No matter what the capacity of the chips, they are still going to be tiny and placed onto circuit boards. The circuit boards can be bigger, but area density is what matters rather than volumetric density. 3.5" is somewhat useful for platters due to width and depth, but particularly height for multiple platters, which isn’t interesting for a single SSD assembly. 3.5 inch would most likely waste all that height. Yes you could stack multiple boards in an assembly, but it would be better to have those boards as separately packaged assemblies anyway (better performance and thermals with no cost increase).

    So one can point out that a 3.5 inch footprint is a decently big board, and maybe make that height-efficient by specifying a new 3.5 inch form factor that’s like 6mm thick. Well, you are mostly there with the E3.L form factor, but no one even wants those (designed around 2U form factor expectations). E1.L basically ties that 3.5 inch in board geometry, but no one seems to want those either. E1.S seems to just be what everyone will be getting.




  • There’s a cost associated with making that determination and managing the storage tiering. When NVMe is only about 3x more expensive per unit of data than HDD at scale, and at the cheapest end “enough” storage for an OS volume gets you either a good enough HDD or a good enough SSD at the same price, then it just makes sense for the OS volume to be SSD.

    In terms of “but 3x is a pretty big gap”, that’s true and does drive bulk storage subsystems, but as the saying has long gone: disks are cheap, storage is expensive. So managing an HDD/SSD split is generally more expensive than the disk cost difference anyway.
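
    As a toy illustration of that trade-off (every price and overhead figure here is a made-up placeholder, not a real quote):

    ```python
    # Toy break-even: "just buy NVMe" vs. "tier HDD + SSD and manage the tiering".
    # Every number below is an assumed placeholder to show the shape of the argument.
    capacity_tb = 100

    hdd_per_tb = 15.0           # assumed $/TB for nearline HDD
    nvme_per_tb = 45.0          # assumed ~3x premium for NVMe NAND
    tiering_overhead = 2500.0   # assumed yearly cost of tooling/managing a tiered setup

    all_nvme = capacity_tb * nvme_per_tb
    tiered = (capacity_tb * 0.8 * hdd_per_tb       # 80% of data on HDD
              + capacity_tb * 0.2 * nvme_per_tb    # hot 20% on NVMe
              + tiering_overhead)

    print(f"all-NVMe: ${all_nvme:,.0f}")
    print(f"tiered:   ${tiered:,.0f} (the disk savings get eaten by the management cost)")
    ```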

    BTW, NVMe vs. non-NVMe isn’t the thing, it’s NAND vs. platter. You could have NVMe-interfaced platters and they would be about the same as SAS-interfaced or even SATA-interfaced platters. NVMe carried a price premium for a while mainly because of marketing rather than technical costs. Nowadays NVMe isn’t too expensive. One could argue that the number of PCIe lanes from the system seems expensive, but PCIe switches aren’t really more expensive than SAS controllers, and CPUs have so many innate PCIe lanes now anyway.
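
    For a sense of scale on the lane point (the 128-lane figure is a ballpark for recent server CPUs and varies by product line):

    ```python
    # Quick lane-count arithmetic; figures are assumed ballpark values.
    cpu_lanes = 128        # assumed usable PCIe lanes on one modern server socket
    lanes_per_nvme = 4     # typical x4 link per NVMe drive

    direct_attach = cpu_lanes // lanes_per_nvme
    print(f"NVMe drives direct-attached with no switch or HBA: {direct_attach}")
    # Beyond that count, a PCIe switch plays the role a SAS expander would, and
    # drives just share upstream bandwidth rather than needing another controller.
    ```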




  • The lowest density chips are still going to be way smaller than even an E1.S board. The only way it might come out cheaper is that you’d maybe need fewer SSD controllers, but a 3.5" would have to be, at best, a stack of SSD boards, probably 3, plugged into some interposer board. Allowing for the interposer, maybe you could come up with 120 square centimeter boards, and E1.L drives are about 120 square centimeters anyway. So if you are obsessed with the most NAND chips per unit volume, then the E1.L form factor is already, in theory, as capable as a hypothetical 3.5" SSD. If you don’t like the overly long E1.L, then in theory E3.L would be more reasonably short with 85% of the board surface area. Of course, all that said, I’ve almost never seen anyone go for anything except E1.S, which is more like M.2 sized.
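
    Rough numbers behind that, using nominal drive dimensions (the usable board area per layer and the board count are assumptions):

    ```python
    # Back-of-the-envelope board-area comparison; dimensions are nominal/approximate.
    # 3.5" HDD envelope: ~147 x 101.6 x 26.1 mm; E1.L board: ~318.75 x 38.4 mm.
    board_area_cm2 = 120      # assumed usable board area per layer after the interposer
    stacked_boards = 3        # assumed layers that fit in the ~26 mm height

    hypothetical_35_ssd_cm2 = stacked_boards * board_area_cm2
    e1l_board_cm2 = 31.875 * 3.84    # ~122 cm^2 per E1.L drive

    print(f'hypothetical 3.5" SSD: ~{hypothetical_35_ssd_cm2} cm^2 across 3 stacked boards')
    print(f"one E1.L drive:        ~{e1l_board_cm2:.0f} cm^2")
    print(f"so ~{hypothetical_35_ssd_cm2 / e1l_board_cm2:.1f} E1.L drives match it, "
          f"each with its own interface and controller")
    ```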

    So 3.5" would be more expensive, slower (unless you did a new design), and thermally challenged.


  • Hate to break it to you, but the 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated by the form factor; it’s driven primarily by the cost of the NAND chips, and you’d just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. Also, there’d be a thermal problem, since 3.5" bays are not designed for the thermal load of that much SSD.

    Add to that that 3.5" drives currently get maybe 24Gb SAS connectors at best, which means such a hypothetical product would be severely crippled by the interconnect. Throughput-wise, we’re talking over 30-fold slower in theory than an equivalent volume of E1.S drives. Which is bad enough, but SAS also has a single, relatively shallow queue, while an NVMe target has thousands of deep queues befitting NAND random-access behavior. So the platform would have to be redesigned to handle that sort of product at all, and if you’re doing that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.
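
    A rough sense of where that 30-fold figure comes from (the link rate, per-drive throughput, and drive dimensions below are nominal assumptions):

    ```python
    # Ballpark throughput comparison; all figures are nominal assumptions.
    sas_gbs = 3.0     # assume ~3 GB/s usable through a single 24Gb SAS connector

    # 3.5" drive envelope ~147 x 101.6 x 26.1 mm vs E1.S (9.5 mm) ~111.5 x 31.5 x 9.5 mm
    hdd_volume_cm3 = 14.7 * 10.16 * 2.61     # ~390 cm^3
    e1s_volume_cm3 = 11.15 * 3.15 * 0.95     # ~33 cm^3
    e1s_per_35 = int(hdd_volume_cm3 // e1s_volume_cm3)   # ~11 drives in the same volume

    e1s_gbs = 14.0    # assume PCIe Gen5 x4 per drive, ~14 GB/s usable
    aggregate = e1s_per_35 * e1s_gbs

    print(f"~{e1s_per_35} E1.S drives in the same volume: ~{aggregate:.0f} GB/s aggregate")
    print(f"vs ~{sas_gbs:.0f} GB/s through one SAS connector: ~{aggregate / sas_gbs:.0f}x difference")
    ```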

    EDSFF defined 4 general form factors: E1.S, which is roughly M.2 sized; E1.L, which is over a foot long and would give the absolute most data per unit volume; and E3.S and E3.L, which aim to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.


  • Not enough of a market

    The industry answer is: if you want that much volume of storage, get like 6 EDSFF or M.2 drives.

    3.5 inch is a useful format for platters, but not particularly needed to hold NAND chips. Meanwhile, instead of having to gate all those chips behind a singular connector, you can have 6 connectors to drive performance. Again, that’s less important for a platter-based strategy, which is unlikely to saturate even a single 12Gb link in most realistic access patterns, but SSDs can keep up with 128Gb links even with utterly random I/O.

    Tiny drives mean more flexibility. That storage product can go into NAS, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling solutions. A product designed to host that many SSD boards behind a single connector is not going to be trivial to modify for any other use case, would bottleneck performance behind a single interface, and is pretty much guaranteed to cost more to manufacture than selling the components as 6 drives.


  • I’ve got mixed feelings on the CHIPS act.

    It was basically born out of a panic over a short-term shortage. Like, many industry observers accurately stated that the shortages would subside long before any of the CHIPS spending could even possibly make a difference, and that the tech companies would then point to this as a reason not to spend the money they were given.

    That largely came to pass, with the potential exception of GPUs in the wake of the LLM craze.

    Of course, if you wanted to give the economy any hope for viable electronics while also massively screwing over imports, this would have been your shot. So it seems strategically at odds with the whole “make domestic manufacturing happen” rhetoric.



  • While they have to be careful, there can be reasonable telemetry that helps decide what they should do or stop doing.

    For example, “x% of telemetry-enabled users enable the bookmark bar” is not particularly useful for harmful purposes, but if it were 0.00%, then they know efforts accommodating the bookmark bar would be pointless. Not many users would go out of their way to say “I don’t use some feature I’m ignoring”, but telemetry can convey that data, so the developer isn’t guessing based on their own preference.

    That being said, the telemetry is so opaque that it’s hard to make an informed decision as to whether the telemetry in question is risky or not. It might be good to have some sort of accumulated telemetry data that you can click to review and submit, with that data actually human-readable and limited to the salient points.
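
    Something like this hypothetical sketch, where the field names and the review/submit flow are invented for illustration and are not any browser’s actual telemetry format:

    ```python
    # Hypothetical sketch of a reviewable, human-readable telemetry payload.
    # Field names and the review/submit flow are invented for illustration.
    import json

    accumulated = {
        "app_version": "128.0",
        "bookmark_bar_enabled": True,    # coarse feature flags, no content
        "open_tabs_bucket": "10-50",     # bucketed counts instead of exact values
        "sync_enabled": False,
    }

    def review_and_submit(payload: dict) -> None:
        print("The following would be sent:")
        print(json.dumps(payload, indent=2))
        if input("Send this? [y/N] ").strip().lower() == "y":
            print("submitted")           # a real client would POST this to the vendor
        else:
            print("discarded")

    review_and_submit(accumulated)
    ```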


  • This is understandable, and I can also see why FOSS would struggle, since a big part of the value is keeping the operators of the machines from doing the things they want or need to do. That is anathema to general FOSS thinking: keeping the user from doing things they would normally be empowered to do.

    Which I can see as being great for the admins, but it is often maddening to be a user under that regime. For example, “officially” I must use the corporate load for my work, and it’s super locked down. Problem is, the lockdown makes my job effectively impossible (unable to run arbitrarily new binaries, unable to connect to services without a proper certificate, unable to add my own certificates, and all binaries and service certificates must come from IT, who take 2-3 weeks to turn around a signature). So you have a few departments resorting to that naughtiest of naughty words, “Shadow IT”, always looking for end-runs around the corporate policy that explicitly blocks software development work because IT wouldn’t be able to discern it from malware.

    Ours also shot us in the head by forcing automatic updates off (because they know better than Microsoft how to deploy patches, I guess), and then there was a ransomware attack that crippled things because they didn’t realize they had failed to apply security updates for two years on most systems. Fortunately, enough people had been manually updating to keep things going.



  • So for one, business lines almost always have public IPv4. Even then, there are a myriad of providers that provide a solution even behind NAT (also, they probably have public IPv6 space). Any technology provider that could provide AI chat over telephony could also take care of the data connectivity path on their behalf. Anyone that would want to self-host such a solution would certainly have inbound data connectivity also solved. I just don’t see a scenario where a business can have AI telephony but somehow can’t have inbound data access.

    So you have a camera on a logbook to get the human input, but then that logbook can’t be the source of truth, because the computer won’t write in it and the computer can take bookings. I don’t think humans really want to keep a handwritten logbook anyway; a computer or tablet UI is going to be much faster.