Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 0 Posts
  • 21 Comments
Joined 13 days ago
Cake day: July 17th, 2025

  • I don’t think you even know what you’re talking about.

    You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.

    The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.

    And for the record, the term is Artificial General Intelligence (AGI), not GAI.


  • Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”

    LLMs are intelligent - just not in the way people think.

    Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
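
    To make that concrete, here's a minimal, purely illustrative sketch of how next-token sampling works. The toy probability table below is invented (a real LLM computes it with a neural network over a huge vocabulary), but the mechanism is the same: the model scores candidate next tokens and one is drawn at random - which is exactly why fluent-sounding output and factual accuracy are separate things.

        import random

        # Toy stand-in for a language model: given a context, return a
        # probability distribution over candidate next tokens.
        # These numbers are made up for illustration only - a real LLM
        # computes them with a neural network over tens of thousands of tokens.
        def next_token_probs(context):
            return {
                "Paris": 0.62,    # most likely continuation
                "Lyon": 0.20,
                "Berlin": 0.12,   # fluent-sounding but factually wrong
                "purple": 0.06,   # unlikely, but never impossible
            }

        # Draw the next token at random according to those probabilities.
        # Nothing in this step checks whether the result is true.
        def sample_next_token(context):
            probs = next_token_probs(context)
            return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

        print("The capital of France is", sample_next_token("The capital of France is"))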




  • I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate by any other means.

    I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.





  • The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

    1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

    2. Or we wipe ourselves out before we get the chance.

    Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That’s what humans do: improve our technology.

    The article points to cloning as a counterexample, but that’s not a technological dead end; it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.



  • Asking for investment advice from a system that’s designed to do nothing but generate natural-sounding language based on probabilities is pretty stupid.

    That being said, what’s wrong with this answer? I think it’s more or less a good and balanced take.

    Here’s the first half of it that I left out:


    “AI” as an investment isn’t one thing—it’s more like a category of bets, ranging from hardware to software to services, each with wildly different risk profiles. So the honest answer is: yes, it can be a good investment—but only if you understand what you’re actually investing in.

    Here’s why that nuance matters:

    Buying Nvidia stock in 2019 was a good AI investment. Buying it now, after a 10x run-up? Much less clear—it’s priced as if they’re the sole arms dealer in a forever war.

    OpenAI, Anthropic, etc. aren’t publicly traded, so retail investors can’t buy them directly. Instead, you get exposure via companies like Microsoft, Amazon, or other backers—meaning you’re not really investing in “AI” directly, but as part of a much broader bundle.

    AI startups and ETFs are all over the place—some are thinly veiled hype vehicles chasing trends, while others are building real infrastructure (like vector databases, chip design tools, or specialized AI services). Picking the wrong one is like investing in Pets.com during the dot-com boom—it sounds techy, but the business might be garbage.

    Thematic ETFs like BOTZ or ROBO give you AI exposure but are diluted by their attempt to hedge across subsectors. They tend to underperform when compared to cherry-picking the winners.


  • I’m unable to replicate your results. I get a long and nuanced answer. Mind sharing the answer you got?

    When I asked the same thing, the conclusion was:

    So is AI a good investment? The sector has long-term potential, especially in areas like chip manufacturing, enterprise automation, and maybe foundational model licensing. But it’s also deeply speculative right now, with prices reflecting hype as much as earnings.

    If you’re thinking long-term and can stomach volatility, AI is worth including. If you’re chasing short-term returns because you think “AI is the future,” you might be buying someone else’s exit.