• artyom@piefed.social · +124/-2 · 16 days ago

    Hell yeah, let’s hold them accountable for disinformation. They’ll be gone completely in a matter of months.

    Want to get rid of that responsibility? Direct the user to the source. Oh wait, that’s just a search engine.

    • iopq@lemmy.world · +11 · 15 days ago

      It’s a bit different, because a search engine can give you zero results. An AI is trained to maximize correct answers, so it always guesses; guessing is the best way to score on an evaluation.
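
      A back-of-the-envelope sketch of that scoring incentive (my own illustration, assuming a benchmark that rewards accuracy only and gives zero for “I don’t know”):

      ```python
      # If a benchmark awards 1 point for a correct answer and 0 for both a wrong
      # answer and an abstention, guessing always has an expected score at least
      # as high as abstaining, so a model tuned on the benchmark learns to guess.
      def expected_score(p_correct: float, abstain: bool) -> float:
          return 0.0 if abstain else p_correct  # abstaining always scores 0

      for p in (0.9, 0.5, 0.1):
          print(f"p={p}: guess={expected_score(p, False)}, abstain={expected_score(p, True)}")
      # Even a 10% guess beats abstaining, unless wrong answers are penalized.
      ```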

  • supersquirrel@sopuli.xyz · +100/-2 · 16 days ago

    I think a better solution is to ban techbros from giving serious economic or cultural advice and take computers away from business majors.

    • MinnesotaGoddam@lemmy.world · +42/-2 · 16 days ago

      Please don’t take them entirely away. Maybe just internet access? 30ish years ago, I had to do accounting by hand, in those green ledgers. It took approximately twelve times longer to do it by hand than with a computer. And it made me shrimp over the desk like 5 times worse. I needed an architect’s table with an angled top in order to work properly, but I could neither get the employer to supply one nor afford to buy one for the office myself.

      Not all technology is bad

    • jaybone@lemmy.zip · +6 · 15 days ago

      I don’t get how some of these tech company CEOs who came up as engineers can be pushing this bullshit. I get that once a company gets big, it starts hiring business bros. But some big companies still have CEOs who were once engineers. You’d think they would know better.

      • NannerBanner@literature.cafe · +11/-1 · 15 days ago

        What kind of engineer? The physical world, with all of its mechanical and civil and aerospace engineers, has its shit figured out: professional standards, very clearly defined responsibilities and duties. But the world of social engineers, tire engineers, procurement engineers, supply chain engineers, sandwich engineers, project engineers, lead engineers, and yes, software engineers, is definitely a little too loose with any definition for me to care that these CEOs were once ‘engineers.’

        • Rooster326@programming.dev · +2 · 15 days ago

          You can take any of these professional engineers, give them a billion dollars, and they’re going to turn into total pieces of shit.

          Power corrupts. Absolute power corrupts absolutely.

  • mrmaplebar@fedia.io · +33/-1 · 16 days ago

    This reads as a way to protect white collar industries from the effects of AI without addressing the root problem: that AI does not actually think, and that it is little more than a meat grinder full of scraped data.

      • CeeBee_Eh@lemmy.world · +6/-1 · 15 days ago

        Why is it CALLED intelligent?

        Because it is “intelligent” by definition. You’re conflating the word with “highly intelligent” or just “smart”.

        Dogs are “intelligent” but they can’t write code, yet we sometimes refer to dogs as “smart”.

        A flatworm has intelligence but no one would call it smart.

      • atopi@piefed.blahaj.zone · +4 · 15 days ago

        it had that name for a really long time

        a couple decades ago, a program that could learn was really impressive

        • SeeMarkFly@lemmy.ml · +2 · edited · 15 days ago

          I remember when LISP was available for my Atari 800.

          Yes, I had the FULL 64K of memory installed.

    • entropicdrift@lemmy.sdf.org · +1 · 15 days ago

      It does think, just not very logically.

      To put it another way, it’s like we figured out how to give machines an intuition via machine learning. So you’ve got a machine with an intuition trained on all written text that is not literal gibberish, but by default all it knows how to do is shoot from the hip with that intuition, and the only feedback it gets on whether it said the right thing is whether the human it’s chatting with approves of what it says.

      It’s a bullshitter to the extreme because that was how we built the incentive structure. And now they use the bullshitters to train better bullshitters.

      Is it any surprise that business executives think that these are the ultimate in intelligence? All they do is bullshit.

  • tinkermeister@lemmy.world · +32/-2 · 16 days ago

    I may have become too cynical but, as is often the case when you dig deeper, this sounds like the result of lobbyists trying to protect licensing rather than people.

    We can be dumb, but we’ve been doing web searches for legal and medical advice for ages because it is too damned expensive and time consuming to go to professionals for every little thing. Not to mention, doctors have so little time for you that it is hard to get them to listen to the whole story to make connections between symptoms.

    The LLMs already tell you that they aren’t licensed professionals and, for many, provide citations for their sources (miles better than your typical health website).

    As a personal anecdote, my son was having stomach pain but was planning to tough it out. He checked with ChatGPT and it recommended he go to the ER. He did, and if he hadn’t, he would likely be dead now. He spent 3 days in the hospital having his bowels unobstructed through a tube in his nose.

    There is value in people having that kind of information at their fingertips.

    Regulation is absolutely needed, but I would rather they focus on protecting us from AI being used for military purposes, mass surveillance, etc. rather than protecting citizens from ourselves.

    • tempest@lemmy.ca · +17/-1 · 16 days ago

      Are you in the US? My takeaway here is that American healthcare is bad, but we’re treating the symptom, not the disease.

      • tinkermeister@lemmy.world · +2 · 15 days ago

        Yeah, I’m in the US and I agree. Though it is going to take some serious change to treat the problem. In the meantime, this is at least a stopgap solution for people who don’t have a lot of options.

    • MinnesotaGoddam@lemmy.world · +5/-1 · 16 days ago

      Wait, he thought he could ride that pain out at home? Your son is tough as nails. Give him a hug for me and everyone else who’s had that four-day NG-tube delight.

      • tinkermeister@lemmy.world · +3 · 15 days ago

        Yeah, he is pretty tough. I wish I could hug him; he is about a 10-hour drive from me. That tube was nightmarish from what he’s told me.

        • MinnesotaGoddam@lemmy.world · +3 · 15 days ago

          if i were his parent, i would be giving him gentle reminders to drink more water. after teasing him for eating way too much corn or broccoli or whatever bastard fiber caused his obstruction (assuming he’s in a mental place he can handle the teasing)

  • deathbird@mander.xyz · +23 · 15 days ago

    If implemented, that would just ban chatbots that use large language models. It’s not a terrible idea.

    What would actually happen is that so-called AI chatbot systems would try to detect whether someone is in New York, try to exclude them from receiving medical or legal advice, fail, get sued, and pay a small fine, over and over again, forever.

    • architect@thelemmy.club · +6/-8 · 15 days ago

      This is a really bad idea.

      First, because healthcare is clearly being gatekept from people.

      Second, because even if you go to a healthcare professional nowadays, there is no guarantee that that person is not a fucking idiot who doesn’t believe in vaccines. I can’t believe I have to actually ask people whether they believe in vaccines before they touch me, and then tell them not to come back into my room if they answer that they don’t believe in science. But that has happened, it has happened to the people I’ve taken care of, and because of this, healthcare can’t be trusted.

      The LLM is not any worse than that. In fact, I would say it’s already too cautious. No way is the model ever going to tell me vaccines are bad. It’s not going to tell me to take a poison to clear Covid. It’s not going to tell me to drink bleach like the president did. It’s literally not any worse than the bullshit we are dealing with all day, every fucking day.

      And I’m getting to the point that if you’re a full-grown human fucking being and you’re going to believe something when it tells you to drink fucking bleach or swallow a fucking lightbulb, then that’s nature saying something about you.

      • Doomsider@lemmy.world · +10/-1 · 15 days ago

        Naw, completely disagree. If you had a calculator you knew was defective, you would ban doctors and lawyers from using it.

        You also seem to think an LLM is going to be inherently more accurate than an expert human. We can see with GrokAI how easy it is to manipulate an AI into saying racist white-nationalist garbage. So we are not just trusting the technology but also a layer of unpredictable corporate meddling.

        Why does the LLM recommend this drug but not the other one? We can quickly see how a corporation could favor a certain medication due to behind-the-scenes deals, or even push a medication.

        You can’t trust a black box you are not allowed to look into. Trust in an LLM at this point is pure folly.

        • Lfrith@lemmy.ca · +9/-1 · 15 days ago

          Funny thing is, LLMs are bad as calculators too; I’ve seen them get simple multiplication wrong.

          An LLM is capable of generating content, but it is unable to verify or know whether that content is correct. A lot of people don’t realize that, because the less they know about a subject, the smarter the model seems to them; they forget it’s, well… a language model. As in, it can output complete gibberish.
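
          To make that concrete, here is a small sketch (the “model answer” below is made up to look plausibly close, the way confident-but-wrong output often does): the only way to trust an LLM’s arithmetic is to redo it with real arithmetic.

          ```python
          # Hypothetical example: checking a claimed product with real arithmetic.
          a, b = 48329, 72641
          claimed = 3_510_667_889   # made-up "LLM answer" with plausible digits
          actual = a * b            # 3,510,666,889
          print(actual == claimed)  # False: close, confident, and wrong
          ```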

          • raldone01@lemmy.world · +1/-2 · 14 days ago

            Some of the SOTA models, like Gemini 3 Pro, are getting quite good at ballpark estimations. I have fed them multiple complex formulas from my studies, plus some values, and the end result is often quite close, similar in accuracy to an estimation I would do myself. (It is usually more accurate than my own.)

            Now, I don’t argue there is any consciousness or magic going on. But I think the generalization happening here is quite something! I have trained AI models for various robot control and computer vision tasks. Compared to older machine learning approaches, transformers are very impressive, computationally accessible, and easy to use. (In my limited experience.)

            • Lfrith@lemmy.ca · +2 · 14 days ago

              I find it okay for writing programs, since you can run the output and verify it’s correct.

              But for actual analysis, not so much: when you verify what comes out, it’s not completely reliable even for things it should be reliable on, like numbers. The numbers might be close, but still off.

              Abstract stuff might be fine. But it’s still not something to trust entirely for analysis, because of the errors. There’s a lot of double-checking that needs to go on.

  • iegod@lemmy.zip · +27/-6 · 15 days ago

    I don’t see how you police or enforce this. The technology is out of the bag; people will find ways to access it. Do we need age/location verification for this now too? What if I’m running a local agent? I don’t agree with this.

    • cmnybo@discuss.tchncs.de · +29/-2 · 15 days ago

      The law would allow you to sue whoever is running the chatbot. If you run your own LLM locally and take bad advice from it, then it’s your own fault.

      • how_we_burned@lemmy.zip · +3/-3 · 15 days ago

        So who gets sued? The guy who put the chatbot on the server and runs it, or the developer of the chatbot software?

        Or both?

      • iegod@lemmy.zip · +4/-11 · 15 days ago

        Walk me through how a company based and operating outside New York would be subject to any action under this law.

        • altkey (he\him)@lemmy.dbzer0.com · +9 · 15 days ago

          I do agree it’s limited to the small scope of smaller New York-based LLMs, but if you read the news you know exactly why this bill occurred: just now, Mamdani gave up on a useless chatbot built with city money by his predecessor Adams: https://www.thecity.nyc/2026/01/30/mamdani-unusable-ai-chatbot-budget/ It was indeed giving inaccurate legal recommendations on the city’s website. I think the best outcome for this bill is that it becomes a trend across cities and states, as I suspect the New York administration wasn’t the only one falling for this scam.

  • TheObviousSolution@lemmy.ca · +13/-2 · 15 days ago

    Just have them add a disclaimer, or make the hosts liable for what their chatbots say. Stop adding bureaucracy that’s just asking to get selectively prosecuted and abused.

    • deathbird@mander.xyz · +2/-1 · 15 days ago

      Section 230 of the Communications Decency Act is designed to allow platforms to exist because people can say whatever the fuck they want. But nobody should make a machine that says things they can’t control, and if you do, you need to be disciplined for such irresponsibility.

  • willington@lemmy.dbzer0.com · +12/-2 · 14 days ago

    1. Make laws against chatbots.
    2. Demand proof you are not a chatbot.
    3. Surveillance capitalism.

    The real target here is population control.

    The lawmakers, who take billionaire money by the ton and who HAVE NEVER given a shit, suddenly, NOW, want to protect the vulnerable. Abso-fucking-lutely laughable on its face.

  • moroninahurry@piefed.social · +16/-7 · 15 days ago

    Laws like this are great for these companies. This is how they will justify removing access to useful information and putting it behind paywalls. But oh, now you need a prescription, so the insurance companies are involved (spoiler: they already are), and you don’t even get the chance to pay out the nose for medical information.

    Then when Google search has been completely replaced with AI, you won’t even be able to search for medical information.

    Healthcare companies aren’t about to provide anything for free.

    • Routhinator@startrek.website · +14/-5 · 15 days ago

      Most of the medical information coming up these days is garbage and you should be going to a known, reputable site and searching their database. LLMs have been trained on absolute garbage. There is nothing of value being kept from anyone here.

      • presoak@lazysoci.al · +2/-1 · 14 days ago

        LLMs have been trained on absolute garbage

        It depends on the LLM, actually.

        Specialized medical LLMs are very accurate.

        • badgermurphy@lemmy.world · +2/-1 · edited · 14 days ago

          I’m sure the quality of the LLM output does vary a lot based on the size of the scope it covers and the training data set.

          However, I believe that if it were possible to get an LLM to be “quite accurate” in any context, that would make it easy to find a path to profitability for that tool, but I don’t think we have seen that materialize anywhere.

          I believe that the best they can get is “more accurate” than the mean, but still not accurate enough to reliably make anyone money*.

          *Nvidia notwithstanding

          • Routhinator@startrek.website · +2 · 13 days ago

            Moreover, until you can get the same output from the same input from an LLM consistently, the entire tech is unreliable garbage.
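
            Part of that inconsistency is by design: decoding usually samples from a temperature-scaled distribution over next tokens instead of always taking the most likely one. A toy sketch (my own illustration, not any particular model’s decoder):

            ```python
            import numpy as np

            rng = np.random.default_rng()
            tokens = ["aspirin", "ibuprofen", "bleach"]  # toy vocabulary
            logits = np.array([2.0, 1.5, 0.3])           # toy next-token scores

            def sample(temperature: float) -> str:
                if temperature == 0:                     # greedy decoding: deterministic
                    return tokens[int(np.argmax(logits))]
                p = np.exp(logits / temperature)         # softmax with temperature
                p /= p.sum()
                return str(rng.choice(tokens, p=p))

            print([sample(1.0) for _ in range(5)])       # varies run to run
            print([sample(0.0) for _ in range(5)])       # always "aspirin"
            ```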

    • Soup@lemmy.world · +9/-2 · 15 days ago

      LLMs and chatbots should not be giving medical advice. What you’re afraid of is the private healthcare system, not the lack of access to the jankiest band-aid fix for its failures.

      • douglasg14b@lemmy.world · +2/-4 · edited · 14 days ago

        The line between medical advice and personal research is pretty freaking gray. So if you ban medical advice, does that also ban talking to LLMs about anything that is medical-adjacent?

        Does medical-adjacent mean personal disabilities? Drug-related questions? Pet health? Stretches? Pain support?

        Anything that falls under “Health, Wellness, and Fitness”?

        …etc.

        It’s a slippery slope, and we don’t need to be sliding down it.

        • moroninahurry@piefed.social · +4/-5 · 14 days ago

          People are so vicious over this tech they would rather have disabled poor people with cancer suffer and die under inadequate care than do anything about the inadequate care. Ban the tech, but let this all go on.

          If you are perfectly able and well, you can ignore all advice that isn’t perfect.

          The perspective they seem to lack is frightening. The empathy they refuse to engage is massive. This is ableism.

          Tech companies are bad, but use of tech will cure and ease cancer, HIV, and chronic disease. Bring on the downvotes.

          • Soup@lemmy.world · +6/-1 · 14 days ago

            “Would rather have disabled people with cancer suffer and die…”

            My guy, that’s not a lack of LLM access, it’s a completely fucked US healthcare system that forces people onto the internet because they can’t get what they need from the state, you goofy-ass weirdo.

            • douglasg14b@lemmy.world · +2/-2 · 14 days ago

              Well, yes, of course, but restricting access to information machines doesn’t exactly help much either.

              • Soup@lemmy.world · +3 · 14 days ago

                Do hallucinating LLMs, which have done such things as convince a child to commit suicide, really count as “information machines”? The Mayo Clinic website might take a single whole other braincell to read through, but at least it’ll be written properly.

                I mean, the fact that you consider these programs to have enough credibility to be called “information machines” is exactly why they’re so potentially dangerous.

              • SLVRDRGN@lemmy.world · +1 · 14 days ago

                I hate to break it to you but… they’re not really “information machines”. Google search is a better information machine.

          • badgermurphy@lemmy.world · +3 · 14 days ago

            I think you may be falling into a false dichotomy. Not only is the choice being presented a bad one, it ignores real solutions to the root problem, leaving us to argue over the crappy “band-aid” solution to it.

            I believe that people needing health care should have no reason to ask a chat bot about their symptoms because they can ask a helpful doctor instead. The fact that they can’t do that is the problem, not their access or lack of it to the chat bot.

      • moroninahurry@piefed.social · +4/-7 · 14 days ago

        Neither should Wikipedia or Google. So I guess by your logic nobody should search or learn about medical conditions on a computer.

        • Soup@lemmy.world · +10/-1 · 14 days ago

          You know damn well there’s an important difference: the confidence of the bot, which has been a key problem since this whole thing started.

        • SaveTheTuaHawk@lemmy.ca · +1 · 14 days ago

          I guess by your logic nobody should search or learn about medical conditions on a computer.

          How else would we know the TRUTH about 5G vaccines and ivermectin? Or the cures of apple cider vinegar?

  • TrackinDaKraken@lemmy.world · +10/-1 · 16 days ago

    Sounds like a start. More is needed though.

    The bill targets AI chatbots that impersonate licensed professionals — such as doctors and lawyers — and bars them from providing “substantive response, information, or advice” that would violate professional licensing laws or constitute the unauthorized practice of law.

    It also mandates that chatbot owners provide “clear, conspicuous, and explicit” notice to users that they are interacting with an AI system, with the notice displayed in the same language as the chatbot and in a readable font size. However, the bill clarifies that this notice for users, which indicates that they are interacting with a non-human system, does not absolve the chatbot owners of liability.

  • Zink@programming.dev · +8 · 14 days ago

    I’m a human being, and I’m pretty sure I am already not allowed to give legal or medical advice to anybody in New York or any other state.

  • phx@lemmy.world · +7 · 16 days ago

    AI in the legal field could be useful for assisting an actual legal professional in compiling precedent and checking it against on-the-books laws, so long as it cites its sources and they verify them.

    In the medical field, it could be useful for spotting anomalies between multiple images such as X-rays or cross-referencing medical documents WHEN USED BY A PROFESSIONAL.

    But the thing is, it should be a tool - carefully used - to enhance the existing profession, not replace actual professionals.

    • MinnesotaGoddam@lemmy.world · +6/-3 · 15 days ago

      But the thing is, it should be a tool - carefully used - to enhance the existing profession, not replace actual professionals.

      except in practice, the “professionals” just take the LLM’s word as unassailable and disengage their brains. funny that, the gap between theory and reality

      • phx@lemmy.world · +3 · 15 days ago

        Yup, but those are the cases that make the news. There’s always gonna be some stupid/lazy ones

        • MinnesotaGoddam@lemmy.world · +2/-2 · 15 days ago

          tell me you haven’t worked with anyone in the medical industry without telling me you haven’t worked with anyone in the medical industry

          source: 20 years as a medical accountant

  • henfredemars@infosec.pub · +14/-8 · 16 days ago

    Mixed feelings about this. Let me play devil’s advocate and say that many Americans don’t have access to these resources at all. Is having potentially inaccurate resources better than nothing, or is it worse?

    • wewbull@feddit.uk · +10/-1 · 16 days ago

      There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It’s not.

    • JoshuaFalken@lemmy.world · +9 · 16 days ago

      ‘Should I use one teaspoon of salt in this recipe, or two?’

      Two is ideal.

      ‘Do dogs like chicken wings?’

      Wild dogs regularly hunt small animals like hare or chicken for food.

      One of these answers results in a bad cake; the other results in a hurt dog. Potentially inaccurate answers aren’t much of a problem when the stakes are low, but even a simple question about what to feed a pet could end with a negative outcome.

      • henfredemars@infosec.pub · +5 · 16 days ago

        Hm, good point. Perhaps the overconfidence AI provides is even worse than knowing that you don’t know.

    • Passerby6497@lemmy.world · +7 · 16 days ago

      Having potentially inaccurate resources might be better than nothing, or is that worse?

      You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it’s safe, do you eat it?

    • Catoblepas@piefed.blahaj.zone · +6/-1 · 16 days ago

      If you’re going to be your own lawyer or perform a bit of self-surgery, there is no way the AI is helping that situation. Especially when the inherent nature of AI is to validate everything you say.

    • thisbenzingring@lemmy.today · +6/-1 · 16 days ago

    the AI tools will just have preambles and disclaimers and word things in ways that refer the user to human professionals

    • smh@slrpnk.net · +2 · 14 days ago

      We had a medical scare just yesterday. I was in the ER for 8 hours with my partner over a non-life-threatening but still urgent problem.

      An ultrasound, a CT scan, and much poking and prodding later, we still don’t know what is up. The AI was at least able to lay out the next steps (if A, then discharge and follow up with the PCP; if B, then surgery this week; if C, then emergency surgery), something the ER was too busy to do for several hours. It was reassuring. The AI also gave me working links to more thorough resources on the topic.

    • Lfrith@lemmy.ca · +2 · 15 days ago

      The problem is that people treat it as reliable when the AI itself can’t verify or know whether what it generates is correct.

      It would be better if it provided direct links for people to go read: a list of citations rather than the proclamations it makes now. It’s too “opinionated,” giving advice when it would ideally be neutral, just providing links so people can read further from sources that hopefully aren’t AI.

      AI has even gotten sports trivia I know wrong. I don’t think people realize AI is just generation, and hallucinations are part of that. It isn’t a reliable or trustworthy authority just because it strings sentences together.

      Its use is better suited to making stories or the like, where people aren’t expecting accuracy, than to medical advice, which it lacks real knowledge of despite the sources it pulls from, because it has no logic or thought of its own.