• ZILtoid1991@lemmy.world · 6 days ago

    AI can do the heavy lifting, but it must not be treated as an infallible machine that can do no wrong unless it outright malfunctions; otherwise we get yet another YouTube, Twitch, etc.

        • futatorius@lemm.ee · 5 days ago

          The AI embodies the bias of whatever it was trained on. Clearly they used the decisions of existing fascist mods. May Reddit burn in hell.

  • arotrios@lemmy.world · 6 days ago

    Well, Reddit’s approach towards AI and auto-mod has already killed most of the interesting discussion on that site. It’s one of the reasons I moved to the Fediverse.

    At the same time, I was around in the Fediverse during the CSAM attacks, and I’ve run online discussion sites and forums, so I’m well aware of the challenges of moderation, especially given the wave of AI chat-bots and spam constantly attempting to infiltrate open discussion sites.

    And I’ve worked with AI a great deal (go check out Jan - open source, runs on a local machine - if you’re interested), and there’s no chance in hell it’s anywhere near ready to take on the role of moderator.

    See, Reddit’s biggest strength is also its biggest weakness: the army of unpaid mods who have committed untold hours to improving the site’s content. What Reddit found out during the API debacle was that because the mods weren’t paid, it had no recourse to control them aside from “firing” them. The net result was a massive loss of editorial talent, and the site’s content quality plunged as a result.

    Although the role of a mod is different in that they can’t (or shouldn’t) edit user content, mods are still gatekeepers, the way junior editors are in a print publishing organization.

    But here’s the thing - there’s a reason you pay editors: they ensure the organization’s content is of high caliber, which is why advertisers want to pay you to run their ads.

    Reddit thinks it can skip this step. Instead of doing the obvious thing - paying the mods to be professionals - they think they can solve the problem with AI much more cheaply. But AI won’t do anything to encourage people to post.

    What encourages people to post is that other people will see and comment, that real humans will engage with their content. All it takes is the automod telling you a few times that your comment was removed for some inexplicable reason and you stop wanting to post. After all, why waste your time creating unpaid content only for a machine to reject it?

    If Reddit goes the way of AI moderation, they’ll need to start paying their content creators. If they want to use unpaid content from an open discussion forum, they need to start paying their moderators.

    But here’s the kicker: Reddit CAN’T pay. They’ve been surfing off of VC investment for two decades and have NEVER turned a profit, because despite their dominance of the space, they kept trying to monetize it without paying the people contributing to it… and honestly, they’ve done a piss-poor job at every point in their development since “New Reddit” came online.

    This is why they sold your data to Google for AI. And it’s why their content has gone to crap, and why you’re all reading this on the Fediverse.

    • Ledericas@lemm.ee · 6 days ago

      The mods are totally complicit though, at least on some of the subs, and the AI had a hand in the massive ban wave that’s currently going on. It went looking for accounts that may or may not have violated any terms and banned them regardless. They’ve actually increased their automod filtering for their subs.

  • Jakeroxs@sh.itjust.works · 6 days ago

    I think using LLMs to HELP with moderation makes sense. The problem with all these companies is that they appear to think it’ll be perfect and they can lay off all the humans.
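
    Something like the rough sketch below is what I mean - the model only scores and routes, and a human makes every final call. (`llm_score` here is a made-up stand-in for whatever model or API you’d actually use, not a real library call.)

    ```python
    # Sketch of LLM-assisted (not LLM-only) moderation: the model only scores
    # and routes content; nothing is removed without a human decision.
    from dataclasses import dataclass


    @dataclass
    class Post:
        id: str
        text: str


    def llm_score(post: Post) -> float:
        """Hypothetical stand-in: return a 0..1 'likely rule-breaking' score."""
        return 0.9 if "buy followers" in post.text.lower() else 0.1


    def triage(post: Post, flag_threshold: float = 0.7) -> str:
        """Route suspicious posts to a human review queue instead of auto-removing."""
        if llm_score(post) >= flag_threshold:
            return "human_review"  # a person decides; the bot never bans or deletes
        return "published"


    if __name__ == "__main__":
        print(triage(Post("1", "Buy followers cheap!!!")))   # -> human_review
        print(triage(Post("2", "Here's my homelab setup")))  # -> published
    ```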

    • Obelix@feddit.org · 6 days ago

      Yeah, LLMs could really help. Other tools without AI are also helpful. The problem with all those companies is that they don’t want to moderate for the public good at all. Reddit could kill a lot of fake news on its platform, prevent reposts of revenge porn, or kick idiots just by implementing a few rules. They simply don’t want to.
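
      Even a non-AI rules pass would catch a lot of that. A rough sketch of the idea (the blocked domain and the hash below are made up, just to show domain rules plus re-upload hash matching):

      ```python
      # Sketch of "a few rules, no AI": keyword/domain rules plus a hash check
      # against images that were already removed once (e.g. revenge porn reposts).
      import hashlib
      import re

      BLOCKED_DOMAINS = {"totally-real-news.example"}           # hypothetical fake-news domain
      KNOWN_BAD_HASHES = {"9b74c9897bac770ffc029102a200c5de"}   # hash of a previously removed image

      def breaks_rules(text: str, image_bytes: bytes = b"") -> bool:
          # Rule 1: links to known fake-news/spam domains
          for domain in BLOCKED_DOMAINS:
              if re.search(rf"https?://(\w+\.)*{re.escape(domain)}", text, re.I):
                  return True
          # Rule 2: exact re-uploads of images that were already taken down
          if image_bytes and hashlib.md5(image_bytes).hexdigest() in KNOWN_BAD_HASHES:
              return True
          return False

      if __name__ == "__main__":
          print(breaks_rules("look! https://totally-real-news.example/article"))  # True
          print(breaks_rules("normal post, nothing to see here"))                 # False
      ```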

      • Pyr_Pressure@lemmy.ca · 5 days ago

        I mean, what people refer to as AI today isn’t really synonymous with actual AI.

        The term has been cheapened.

        • Opinionhaver@feddit.uk · 5 days ago

          I don’t think it’s that. LLMs very much are actual AI. Most people just take the term to mean something more than it actually does. A simple chess engine is an AI as well.

    • Ledericas@lemm.ee · 7 days ago

      Their aggressive autoban is catching everyone, regardless of whether you actually ban-evaded or not, though not in large numbers.

  • billwashere@lemmy.world · 7 days ago

    Why would anybody even slightly technical ever say this? Has he ever used what passes for AI? I mean, it’s a useful tool with some giant caveats, as long as someone is fact-checking it and holding its hand. I use it daily for certain things. But it gets stuff wrong all the time. And not just a little wrong. I mean batshit crazy wrong.

    Any company that is trying to use this technology to replace actually intelligent people is going to have a really bad time eventually.

    • alcoholic_chipmunk@lemmy.world · 7 days ago

      “Hey as a social media platform one of your biggest expenses is moderation. Us guys at Business Insider want to give you an opportunity to tell your investors how you plan on lowering that cost.” -Business Insider

      “Oh great thanks. Well AI would make the labor cost basically 0 and it’s super trendy ATM so that.” -Reddit cofounder

      Let’s be real here: the goal was never good results, it was to get the cost down so low that you no longer care. It probably eliminates some liability too, since it’s a machine.

  • Ledericas@lemm.ee · 7 days ago

    Isn’t it already happening on Reddit? I mean, the massive number of accounts that were banned in the last few months were all AI.

    • futatorius@lemm.ee · 5 days ago

      I mean, the massive number of accounts that were banned in the last few months were all AI

      No they weren’t. Mine got banned for no reason.

  • CaptainBasculin@lemmy.ml · 7 days ago

    In my opinion, AI should cover only the worst content: the kind that harms people just by looking at it. Anything up for debate is a big no; however, there is plenty of content that is disturbing to anyone who sees it.

    • lka1988@lemmy.dbzer0.com · 7 days ago

      Yeah, but who decides what content is disturbing? I mean there is CSAM, but the fact that it even exists shows that not everyone is disturbed by it.

        • lka1988@lemmy.dbzer0.com · 7 days ago

          I mean I’m not defending CSAM, just to be clear. I just disagree with any usage of AI that could turn somebody’s life upside down based on a false positive. Plus you also get idiots who report things they just don’t like.

      • Zexks@lemmy.world · 7 days ago

        You’ll never be able to get a definition that covers your question. The world isn’t black and white. It’s gray, and because of that a line has to be drawn, and yes, it will always be considered arbitrary by some. But a line must be drawn nonetheless.

        • lka1988@lemmy.dbzer0.com · 7 days ago

          Agreed 100%, a line absolutely should be drawn.

          That said, as a parent of 5 kids, I’m more concerned about false positives. I’ve heard enough horror stories about parents getting arrested over completely innocent pics of their kids as toddlers or infants that happen to have genitalia showing - like them at 6 months old doing something silly in the tub, or what have you. I don’t trust a computer program that doesn’t understand context to accurately handle those kinds of photos. Frankly, parents shouldn’t be posting those pics on social media to begin with, but I digress. It sets a bad precedent.

          • Womble@lemmy.world · 7 days ago

            There’s a vast gulf between automated moderation systems deleting posts and calling the cops on someone.

            • lka1988@lemmy.dbzer0.com · 5 days ago

              You would be shocked.

              My ex has called the cops on me more than a few times in our past, because she just didn’t like how I was doing things. After I remarried and moved in with my now wife, my ex called the cops and CPS on us for abusing our kids (we don’t - we just have reasonable rules). That was a fun one and ended up with me getting a lawyer and dragging her to court. The judge was not happy with her. Also, my neighbor called the cops on us a few months ago because one of my kids was having a temper tantrum, and then again because my two older kids (and some neighborhood friends) purposefully excluded her kids from whatever game they were playing. We had a talk with our kids about excluding others and why you shouldn’t do that, but between you and me, those kids are brats and bully a lot of other neighborhood kids.

              That’s just a taste of crazy. It’s not out of the realm of possibility for someone to be batshit enough to call the cops over an innocent baby-in-the-bathtub picture (though like I said, parents shouldn’t be sharing that on social media anyway, but here we are).

  • Viri4thus@feddit.org · 7 days ago

    To think we lost Aaron Swartz, and this shitstain and Huffman are still with us. I don’t believe in the supernatural, but this kind of shit makes a good case for the existence of a devil.