• 0 Posts
  • 124 Comments
Joined 2 years ago
Cake day: June 30th, 2023



  • Most problems those give tend to be: you turn it on and it says, “hey, I’m not so sure you put a game in me; I’d love to play one for you, though!” Or “I think this might be a game, or it might just be a weird blue pattern, because the connection is there but only partially”.

    At least the NES was like that. You had to know the ritual for seating the cartridge correctly, and that ritual changed over time. I don’t ever remember having a ritual with my N64.



  • Things get more violent. Wind tries to find the path of least resistance, but as a fluid it takes all paths at once, in inverse proportion to each path’s resistance (just like electricity). If you increase the absolute resistance in one area, you reduce the relative resistance everywhere else, so you end up with increased airflow everywhere else and reduced airflow where you added resistance. That means more wind outside of the turbine’s path (because that pressure differential is going to equalize one way or another). More flow through the same volume means higher speeds and forces (think of turning up the pressure on a tap). The parallel-path arithmetic is sketched at the end of this comment.

    But wind turbines don’t have a constant effect on wind resistance; it depends on how fast the blades are spinning and how fast the wind is moving. When the wind slows, the resistance goes down, and when the resistance goes down, the wind speeds back up. So you end up with an oscillating effect: the wind strengthens, loses more energy to the turbines, and weakens, which means the turbines take less energy and the wind strengthens again. Though you’d need to be taking a significant fraction of that energy to see an extreme effect like this.

    Apparently taking more than 53.9% of the total wind energy in an area is enough to slow the wind to a stop (again, a violent, turbulent, oscillating stop, not a gentle end of wind).
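
    A minimal sketch of that parallel-path arithmetic, assuming made-up resistance numbers and using the electrical analogy rather than a real fluid simulation:

    ```python
    def branch_flows(total_flow, resistances):
        """Split a fixed total flow across parallel paths, inversely by resistance."""
        conductances = [1.0 / r for r in resistances]
        total = sum(conductances)
        return [total_flow * g / total for g in conductances]

    # Three equally easy paths, then a turbine tripling one path's resistance:
    print(branch_flows(100.0, [1.0, 1.0, 1.0]))  # [33.3, 33.3, 33.3]
    print(branch_flows(100.0, [3.0, 1.0, 1.0]))  # [14.3, 42.9, 42.9]
    # Less flow through the turbine's path, more (and faster) flow everywhere else.
    ```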






  • TPM is more about securing data from PC owners than for them. Since it’s there anyway, it gets used to support BitLocker, but the reason they’re pushing it so hard is that it might (depending on whether it actually is secure) let content providers allow users to view their content without giving them the ability to copy or edit it.

    And there isn’t any guarantee that the uses that do benefit the user’s security don’t have some backdoor for approved crackers to get in. Doesn’t the MS account store a copy of the BitLocker recovery key? That’s nice for when the user needs it, but it also comes in handy if MS wants to grant access to anyone else.





  • It’s because they are horrible at problem solving and creativity. They are based on word association, from training purely on text. The technical singularity will need to innovate on its own so that it can improve the hardware it runs on and its software.

    Even though GitHub Copilot has impressed me by implementing a three-file Python script from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we added the pieces one by one. And even then, it would get parts I failed to specify completely wrong, and it initially implemented things in a very inefficient way.

    There are fundamental things the technical singularity needs that today’s LLMs lack entirely. I think the changes required to get there would also turn them from LLMs into something else. The training is part of it, but fundamentally, LLMs are massive word-association engines. Words (or vectors translated to and from words) are their entire world, and they can only describe things with those words because they were trained on other people doing that.
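
    A toy version of that word-association idea, assuming a tiny made-up corpus (a bigram model; real LLMs use billions of learned weights and much longer contexts, but the generation loop has the same shape):

    ```python
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the couch".split()

    # Count which word follows which; these counts are the entire "world model".
    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def generate(word, length=8):
        output = [word]
        for _ in range(length):
            candidates = follows.get(word)
            if not candidates:  # no known continuation
                break
            words, counts = zip(*candidates.items())
            word = random.choices(words, weights=counts)[0]  # weighted random pick
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat slept on the mat and the cat"
    ```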



  • Buddahriffic@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 1 month ago · +2/−1

    I don’t hate AI or LLMs. As much as it might mess up civilization as we know it, I’d like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay it than to realize it.

    I just think a lot of people are fooled by their conversational ability into thinking these models are more than they are. And because the models are massive, with billions or trillions of weights encoding the data, and no one understands how they work well enough to definitively say “this is why it suggested glue as a pizza topping”, the question of whether they approach AGI lands in a grey zone.

    I’ll agree, though, that it was maybe too much to say they don’t have knowledge. “Having knowledge” is itself pretty abstract and hard to define, though I’m also not sure it directly translates to having intelligence (which is also poorly defined, tbf). One could argue that encyclopedias have knowledge, but they don’t have intelligence. And I’d argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).


  • Buddahriffic@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 1 month ago · +15/−1

    Calling the errors “hallucinations” is kind of misleading, because it implies there’s regular, real knowledge with false stuff mixed in. That’s not how LLMs work.

    LLMs are purely about associations between words. They’re just massive enough that they can attach a lot of context to those associations and seem conversational about almost any topic, but there’s no depth to any of it. Where it seems like there is, it’s only because the contexts in its training data got very specific, which is bound to happen when a model is trained on every online conversation its owners (or rather, people hired by people hired by its owners) could get their hands on.

    All it does is ask: given the tokens provided and those already predicted, plus a bit of randomness, what is the most likely token to come next? Then it repeats until it predicts an “end” token. That loop is sketched at the end of this comment.

    Early on when using LLMs, I’d ask them how they did things or why they would fail at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn’t do. Its capabilities don’t actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn’t even have to reflect how it really works.
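
    A sketch of that loop in Python, where model.next_token_probs is a hypothetical stand-in for the real model call, not an actual API:

    ```python
    import random

    END_TOKEN = "<end>"

    def generate(model, prompt_tokens, temperature=0.8, max_tokens=200):
        """Score candidates, sample one, append it, repeat until the end token."""
        tokens = list(prompt_tokens)
        for _ in range(max_tokens):
            # Hypothetical call: returns {candidate_token: probability},
            # conditioned on everything provided and predicted so far.
            probs = model.next_token_probs(tokens)
            candidates = list(probs)
            # Temperature is the "bit of randomness": higher values flatten
            # the distribution, lower values favor the single likeliest token.
            weights = [p ** (1.0 / temperature) for p in probs.values()]
            next_token = random.choices(candidates, weights=weights)[0]
            if next_token == END_TOKEN:
                break
            tokens.append(next_token)
        return tokens
    ```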


  • Not sure where 1440p would land, but after using one for a while, I was going to upgrade my monitor to 4K, then realized I’m not disappointed with my current resolution at all, so I opted for a 1440p ultrawide instead and haven’t regretted it at all. (Some quick pixel math at the end of this comment puts the jumps in perspective.)

    My TV is 4K, but I have no intention of even seriously looking at anything 8K.

    Screen specs seem like a mostly solved problem. It would be great if the focus could shift to efficiency improvements instead of adding more unnecessary power. Actually, boot time could be way better, too (i.e. get rid of the smart shit running on a weak processor, emphasis on the first part).
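
    Some quick pixel math for perspective (the 3440×1440 ultrawide size is an assumption):

    ```python
    # Raw pixel counts for common resolutions; each step up multiplies the
    # pixels the GPU and panel have to drive.
    resolutions = {
        "1080p": (1920, 1080),
        "1440p": (2560, 1440),
        "1440p ultrawide": (3440, 1440),  # assumed size
        "4K": (3840, 2160),
        "8K": (7680, 4320),
    }
    for name, (w, h) in resolutions.items():
        print(f"{name}: {w * h / 1e6:.1f} megapixels")
    # 1080p: 2.1, 1440p: 3.7, ultrawide: 5.0, 4K: 8.3, 8K: 33.2
    ```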