• Avid Amoeba@lemmy.ca · 1 day ago

    Yeah, I got a superbly functional and super fast search / research / assistant tool from Qwen 3.6 35B with Open WebUI + SearXNG, all running locally. It passed the WAF benchmark with flying colors.
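
    A minimal sketch of how a stack like that is typically wired together, assuming Ollama serves the model behind Open WebUI and the SearXNG instance has its JSON API enabled; the model tag, ports, and query below are illustrative assumptions, not details from the comment:

    ```python
    import requests

    SEARXNG = "http://localhost:8080/search"               # assumed default SearXNG port
    OLLAMA = "http://localhost:11434/v1/chat/completions"  # Ollama's OpenAI-compatible endpoint
    MODEL = "qwen3.6:35b"                                  # hypothetical tag for the model above

    def search(query: str, n: int = 5) -> str:
        """Fetch top results from local SearXNG (format=json must be enabled in its settings)."""
        r = requests.get(SEARXNG, params={"q": query, "format": "json"}, timeout=30)
        r.raise_for_status()
        hits = r.json().get("results", [])[:n]
        return "\n".join(f"- {h['title']}: {h.get('content', '')} ({h['url']})" for h in hits)

    def ask(question: str) -> str:
        """Stuff fresh search results into the prompt, then ask the local model."""
        r = requests.post(OLLAMA, json={
            "model": MODEL,
            "messages": [
                {"role": "system", "content": "Answer using these web results:\n" + search(question)},
                {"role": "user", "content": question},
            ],
        }, timeout=300)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    print(ask("What changed in the latest Qwen release?"))
    ```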

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 1 day ago

      It’s honestly incredible how good the local stack is nowadays. It’s literally better than any frontier model you could’ve rented like a year ago.

  • neon_nova@lemmy.dbzer0.com · 2 days ago

    I have a 16 GB MacBook Air M4.

    I like the idea of having a model I can run locally in the event of a long-term internet outage.

    Can you recommend a model that would be suitable for my computer?

      • neon_nova@lemmy.dbzer0.com · 2 days ago

        Thanks! I figured it’s low on RAM, but with the way things are going in the world, I’m thinking it’s better than nothing.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 2 days ago

          It’s entirely possible we’ll see fairly capable models that can run in 16 gigs of RAM in the near future. Qwen 3.5 came out in February, and you needed a server with hundreds of gigs of memory to run its 397B-parameter model. Fast forward to a couple of weeks ago: 3.6 comes out with a 27B-parameter version that beats the old 397B-parameter one in every way. Just stop and think about how phenomenal that is: https://qwen.ai/blog?id=qwen3.6-27b

          So people may well find ways to optimize this stuff even further this year or next, and we’ll get an even smaller model that’s more capable.
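
          Some napkin math shows why quantization matters for a 16 GB machine; the bytes-per-weight figures below are the standard approximations for fp16 and 4-bit quantization, not numbers from the post, and real runtimes add KV-cache and other overhead on top:

          ```python
          # Rough weight-memory estimate: parameters × bytes per parameter.
          def weight_gb(params_bln: float, bits: int) -> float:
              return params_bln * 1e9 * bits / 8 / 1e9  # bytes -> GB

          for params in (397, 27):
              for bits in (16, 4):
                  print(f"{params}B @ {bits}-bit ≈ {weight_gb(params, bits):.0f} GB")
          # 397B: ~794 GB at 16-bit, ~199 GB at 4-bit -> "hundreds of gigs of memory"
          # 27B:   ~54 GB at 16-bit,  ~14 GB at 4-bit -> within reach of a 16 GB machine
          ```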

    • bountygiver [any]@lemmy.ml · 2 days ago

      A long-term internet outage is not that likely, but getting priced out of the online models is quickly becoming reality.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 2 days ago

        Mainly data sovereignty. Running a local model means all your data stays on your machine; any time you use a service, you’re sending whatever the model is working on to the company. Another advantage is price: with services you have to pay a subscription, while local models run for the price of electricity.
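
        As a back-of-the-envelope comparison, a sketch with made-up illustrative numbers (GPU draw, daily usage, electricity rate, and subscription price are all assumptions):

        ```python
        # Hypothetical numbers for illustration only; hardware cost is not included.
        gpu_watts = 350       # assumed GPU draw under inference load
        hours_per_day = 2     # assumed daily usage
        usd_per_kwh = 0.15    # assumed electricity rate
        subscription = 20.00  # assumed monthly service fee in USD

        local = gpu_watts / 1000 * hours_per_day * 30 * usd_per_kwh
        print(f"local: ${local:.2f}/mo vs service: ${subscription:.2f}/mo")
        # local: $3.15/mo vs service: $20.00/mo
        ```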

  • OldQWERTYbastard@lemmy.dbzer0.com · 2 days ago

    The land of the CCP is the last place I’d expect to see FOSS AI agents. Good for them! Beats the hell out of our greedy bastards in the United States.