• mr_account@lemmy.world
    10 days ago

    All these upvotes and comments and not one joke about how it sounds like TurboCunt?

    • spy@lemmy.dbzer0.com
      11 days ago

Well, that depends on a lot of factors, one of them being the distance from dick to floor for everyone they would jerk. Call that D2F.

      Hopefully they thought of it.

      • themachinestops@lemmy.dbzer0.comOP
        10 days ago

Someone already did a paper on it:

        https://ia800308.us.archive.org/32/items/pdfy-tG1MuMpwvrML6QD0/228831637-Optimal-Tip-to-Tip-Efficiency.pdf

Abstract: A probabilistic model is introduced for the problem of stimulating a large male audience. Double jerking is considered, in which two shafts may be stimulated with a single hand. Both tip-to-tip and shaft-to-shaft configurations of audience members are analyzed. We demonstrate that pre-sorting members of the audience according to both shaft girth and leg length allows for more efficient stimulation. Simulations establish steady rates of stimulation even as the variance of certain parameters is allowed to grow, whereas naive unsorted schemes have increasingly flaccid performance.

    • cecilkorik@lemmy.ca
      11 days ago

      No, that is what it would be if we were using traditional, deterministic compression and using a reversible and verifiable mapping of data. But this is the new era of memetic compression, “Pied Piper” is what everyone remembers from the show, so we compress it to “Pied Piper” to minimize the amount of memetic overhead and allow the smallest possible compression artifact. Like with “AI”, it doesn’t need to be correct, just close enough for people to think it is! /s

    • uuj8za@piefed.social
      11 days ago

      Yes, it should be Nucleus. Them calling it PiedPiper is a propaganda campaign to try to earn good will from people. Fuck Google locking down Android.

  • Brewchin@lemmy.world
    11 days ago

    This should come in handy for the recently projected need for 300 GB RAM* in upcoming self-driving cars.

    *Not a typo. 😳

  • AbouBenAdhem@lemmy.world
    11 days ago

    TurboQuant, meanwhile, could lead to efficiency gains and systems that require less memory during inference. But it wouldn’t necessarily solve the wider RAM shortages driven by AI, given that it only targets inference memory, not training — the latter of which continues to require massive amounts of RAM.

I didn’t realize the RAM shortage was mostly due to training; I would have thought inference was at least as big a factor.
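
The quoted article’s point about quantization shrinking inference memory can be sketched in a few lines. This is a generic illustration, not TurboQuant’s actual algorithm: simple symmetric int8 quantization, where each float32 weight is stored as a 1-byte integer plus a shared scale factor, cutting the memory footprint of the weights by 4x at the cost of some precision.

```python
import numpy as np

# Hypothetical example weights (TurboQuant's real method is not shown here).
rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024)).astype(np.float32)

# Symmetric quantization: map the float range onto int8 [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)  # 1 byte/weight instead of 4

# Inference dequantizes on the fly; reconstruction is approximate.
dequant = q.astype(np.float32) * scale

print(weights.nbytes // q.nbytes)  # -> 4 (4x less memory for the weights)
```

Training can’t easily use the same trick, because gradient updates need the fine-grained precision that quantization throws away, which is part of why it keeps demanding so much RAM.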

    • Dran@lemmy.world
      11 days ago

Inference is dirt cheap in comparison: hundreds to thousands of concurrent users can be served by hardware costing in the high thousands to low tens of thousands of dollars.

Training those same foundational models takes weeks to months of time on tens to hundreds of millions of dollars’ worth of hardware.

      • AbouBenAdhem@lemmy.world
        11 days ago

        Yeah—but in theory you only need to train once, while inference costs are ongoing and scale up with usage.

        I guess it’s ultimately a business decision by AI companies to weigh how often retraining is worth the cost.

        • douglasg14b@lemmy.world
          10 days ago

          Training is constant. None of these models by any of these providers are static. You’ll notice that they are releasing new models and new model versions regularly.

          This means that training is happening constantly. It never stops. There’s always new shit being trained.

  • DraconicSalad@piefed.social
    11 days ago

Funny thing that came with this: apparently Micron’s stock fell off a cliff, and apparently so did RAM prices? Can’t confirm that latter one.