• UnderpantsWeevil@lemmy.world
    3 hours ago

    99% won’t do when the consequences of that last 1% are sever.

    There’s more than one book on the subject, but all the cool kids were waving around their copies of The Black Swan at the end of 2008.

    Seems like all the lessons we were supposed to learn about stacking risk behind financial abstractions and allowing business to self-regulate in the name of efficiency have been washed away, like tears in the rain.

    • snooggums@piefed.world
      3 hours ago

      99% won’t do when the consequences of that last 1% are sever.

      As an example: your whole post is great, but I can’t help noticing the one tiny typo, which is like 1% of the letters. Heck, a lot of people probably didn’t even notice, just like they don’t notice when AI returns the wrong results.

      A multi-billion-dollar technical system should be far better than someone posting to the fediverse in their spare time, but it is far worse. Worse still, those tiny errors will be fed back into future AI training, and LLM design is not and never will be self-correcting: it works with the data it has, and it needs so much of it that it will always include scraped content.