• DonutsRMeh@lemmy.world
    4 months ago

    For some reason, these local LLMs are straight up stupid. I tried DeepSeek R1 through Ollama and it got basically everything wrong. Anyone else getting the same results? I tried the 7B and 14B (if I remember the numbers correctly); the 32B straight up wouldn’t even load because I didn’t have enough RAM.
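    For anyone who wants to reproduce it, the equivalent of what I was doing via the official ollama Python client looks roughly like this (the model tag is just Ollama's default pull):

    ```python
    # Rough sketch: query a locally served model through Ollama's Python client.
    # Requires a running Ollama server and a pulled model (ollama pull deepseek-r1:7b).
    import ollama

    reply = ollama.chat(
        model="deepseek-r1:7b",  # also tried 14b; 32b wouldn't fit in my RAM
        messages=[{"role": "user", "content": "What is 17 * 24?"}],
    )
    print(reply["message"]["content"])
    ```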

    • felsiq@piefed.zip
      4 months ago

      Did you use a heavily quantized version? Those models are much smaller than the state-of-the-art ones to begin with, and chopping their weights from float16 down to something like 4-bit or 2-bit reduces their capabilities a lot further.
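      Toy illustration of the idea (this isn't the actual GGUF quantization scheme Ollama uses, just naive symmetric rounding to show how fast the error grows as you drop bits):

      ```python
      # Round-trip a fake weight tensor through an n-bit integer grid and
      # measure how much of the signal survives. Real quantizers (e.g. GGUF's
      # per-block formats) are smarter, but the trend is the same.
      import numpy as np

      rng = np.random.default_rng(0)
      w = rng.normal(0, 0.02, size=100_000).astype(np.float32)  # stand-in weights

      def quantize_roundtrip(w, bits):
          qmax = 2 ** (bits - 1) - 1        # e.g. 7 levels each side for 4-bit
          scale = np.abs(w).max() / qmax    # map the weight range onto the grid
          q = np.clip(np.round(w / scale), -qmax, qmax)
          return q * scale                  # dequantize

      for bits in (8, 4, 2):
          err = np.abs(w - quantize_roundtrip(w, bits)).mean() / np.abs(w).mean()
          print(f"{bits}-bit: mean relative weight error ~ {err:.0%}")
      ```

      (At 2 bits every weight collapses to one of just three values, which is part of why heavily quantized small models can fall apart.)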

    • teawrecks@sopuli.xyz
      4 months ago

      The performance is relative to the user. Could it be that you’re a god damned genius? :/