I agree with almost all of your comment. The only part I disagree on is:
> How can we attempt to recreate the human brain into AGI when we are not close to mapping out how our brains work in a way to translate that into code, let alone other more simple brains in the animal kingdom.
An implementation of AGI does not need to be inspired by the human brain, or by any existing organic brain. Nothing tells us organic brains are the optimal way to develop intelligence. In fact, I’d argue they’re not.
That being said, it doesn’t change the conclusion: We are nowhere near AGI, and LLMs being marketed as such is absolutely a scam.