cm0002@ttrpg.network to Technology@lemmy.zip · English · 8 days ago
Microsoft just open-sourced bitnet.cpp, a 1-bit LLM inference framework. It lets you run 100B-parameter models on your local CPU without GPUs. 6.17x faster inference and 82.2% less energy on CPUs.
github.com
cross-posted to: technology@lemmy.ml
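For context on what "1-bit" means here: BitNet-style models constrain each weight to the ternary set {-1, 0, +1} (about 1.58 bits), so matrix products reduce to additions and subtractions, which is what makes CPU-only inference cheap. The sketch below is a toy illustration of that idea, not code from bitnet.cpp; the function names and the per-matrix scaling scheme are my own simplifications.

```python
import numpy as np

def ternarize(w):
    """Quantize a float weight matrix to {-1, 0, +1} plus a scale.

    Toy scheme (not bitnet.cpp's actual quantizer): scale by the mean
    absolute weight, then round and clip to the ternary set.
    """
    scale = np.mean(np.abs(w))
    wq = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return wq, scale

def ternary_matvec(wq, scale, x):
    """Matrix-vector product using only adds/subtracts on the weights.

    Equivalent to (wq @ x) * scale, but written to show that no
    multiplications involving the weights are needed.
    """
    out = np.zeros(wq.shape[0], dtype=x.dtype)
    for i in range(wq.shape[0]):
        out[i] = x[wq[i] == 1].sum() - x[wq[i] == -1].sum()
    return out * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))   # pretend full-precision layer
x = rng.normal(size=8)        # input activations
wq, s = ternarize(w)
approx = ternary_matvec(wq, s, x)
exact = w @ x                 # approx is a lossy estimate of this
```

The energy and speed claims in the headline come from exactly this trade: lossy ternary weights in exchange for multiply-free, cache-friendly kernels.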
Arghblarg@lemmy.ca · 8 days ago
Is it still probabilistic slop, or does the model understand what it's doing and verify primary sources? If not, yay for burning the planet more slowly, I guess, but still no thanks.
scholar@lemmy.world · 8 days ago
No, that would require AI (Actual Intelligence)
Leon@pawb.social · 8 days ago
We already have that. It's like, you put the AI in your brain. You are the AI.