“We think we’re on the cusp of the next evolution, where AI happens not just in that chatbot and gets naturally integrated into the hundreds of millions of experiences that people use every day,” says Yusuf Mehdi, executive vice president and consumer chief marketing officer at Microsoft, in a briefing with The Verge. “The vision that we have is: let’s rewrite the entire operating system around AI, and build essentially what becomes truly the AI PC.”

…yikes

  • sugar_in_your_tea@sh.itjust.works · 18 hours ago

    Agreed. A lot of communication is non-verbal. Me saying something loudly could be due to other sounds in the environment, frustration/anger, or urgency. Distinguishing between those relies on facial expressions, hand/arm gestures, and any number of other non-verbal cues. Many autistic people have difficulty picking up on those cues, and machines are at best comparable to the most extreme end of autism, so they tend to fall back on rules like “elevated volume means frustration/anger” when that may very well not be the case.

    Verbal communication is designed for human interaction, whether long-form (conversations) or short-form (issuing commands), and it relies heavily on shared human experience. Human-to-computer interaction should play to the computer’s strengths instead of trying to imitate human interaction, because the imitation will always fail at some point. If I get driving instructions from my phone, I want them terse (“turn right on Hudson Boulevard”), whereas if my SO is giving me directions, I’m happy with something more long-form (“at that light, turn right”), because my SO knows how to communicate unambiguously with me and my phone does not.

    So yeah, I’ll probably always hate voice activation, because it’s just not how I prefer to communicate w/ a computer.