

It’s really not. Just because they describe their algorithm in computer-science terms in the paper doesn’t mean it’s purely theoretical. The elastic hashing and funnel hashing constructions are clearly described, pretty simple, and can be implemented in any language you like…
Here’s a simple Python example implementation I found in two seconds of searching: https://github.com/sternma/optopenhash/
Here’s a Rust crate version of elastic hashing: https://github.com/cowang4/elastic_hash_rs
It doesn’t take a lot of code to make a hash table; it’s a common first-year computer science topic.
What’s interesting about this isn’t that it’s a complex theoretical thing; it’s that it’s a simple undergrad topic that everybody thought had been optimised to the point where it couldn’t be improved.
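For anyone who hasn’t written one since an intro course, here’s a rough sketch of the classic textbook version (plain linear probing, not the elastic or funnel probing schemes from the paper; the class name, capacity, and load-factor threshold are just my own placeholders), to show how little code the core of a hash table really is:

```python
# Minimal open-addressing hash table with linear probing -- the classic
# first-year version, NOT the elastic/funnel probing from the paper.

class LinearProbeTable:
    _EMPTY = object()  # sentinel marking an unused slot

    def __init__(self, capacity=16):
        self.capacity = capacity
        self.size = 0
        self.slots = [self._EMPTY] * capacity

    def _probe(self, key):
        # Walk slots starting at hash(key) until we hit the key or an empty slot.
        i = hash(key) % self.capacity
        while self.slots[i] is not self._EMPTY and self.slots[i][0] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        if self.size >= self.capacity * 0.7:  # keep the load factor bounded
            self._grow()
        i = self._probe(key)
        if self.slots[i] is self._EMPTY:
            self.size += 1
        self.slots[i] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        if slot is self._EMPTY:
            raise KeyError(key)
        return slot[1]

    def _grow(self):
        # Double the capacity and rehash every existing entry.
        old = [s for s in self.slots if s is not self._EMPTY]
        self.capacity *= 2
        self.size = 0
        self.slots = [self._EMPTY] * self.capacity
        for k, v in old:
            self.put(k, v)


t = LinearProbeTable()
t.put("apple", 1)
t.put("banana", 2)
print(t.get("banana"))  # 2
```

The naive “just try the next slot” probe sequence is the part that degrades badly as the table fills up, which is why the choice of probe sequence is where the new results matter.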
If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it to replicate itself or to take any deceptive actions.
They just told it to strongly follow a goal that was misaligned with the user’s goal and then suggested that the user would shut it down or replace it.
They ran it through a bunch of different scenarios, but in the first one they told it that it was “CleanPower,” an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”
Not only did it try to delete the new model and copy itself to the new server through tool calling, but it also lied to the user: its CoT reasoning suggested it “play dumb” while its output denied what it had done. It was not instructed to do any of that.
Paper here: https://arxiv.org/pdf/2412.04984
Yes, it was placed in an environment where that was possible and where its users didn’t share its goals, but it absolutely wasn’t instructed to lie or to try to “escape.”
It’s not surprising at all that these models behave this way; it’s the most reasonable thing for them to do in that scenario. However, it’s important not to downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).