I'm definitely a fan of the technology, and I believe its full potential hasn't been reached yet, not by far, but I'm not sure AGI is ever going to be it, let alone soon, within the current theoretical and technological framework.
But I guess it may be a definition problem - what is your definition of artificial general intelligence?
Even assuming ChatGPT has learned (to reliably imitate) the logic of natural-language statements and human common sense, there's just a lot more to human intelligence by any reasonable definition.
Also, from the examples I've seen, I would say that what ChatGPT has learned is how to write a synopsis, how to translate one type or style of speech into another, and how to merge disparate ideas into a valid statement. Which is great, and pretty impressive, but still fairly basic.
Now, when I say fairly basic, I realize that it may be beyond the ability of many human beings, but then we start getting into how language or math proficiency isn't all there is to intelligence, and that intelligence isn't all there is to being human.
For example, I'm an experienced debater and a writer, and I have seen advanced AIs debate (Project Debater) and write. These are disciplines where comprehension of language logic and the ability to put facts together to answer questions will only get you so far, especially if you're limited to common sense.
Project Debater lost the public debate to humans because while it could answer with facts, it had no comprehension of debate strategy or counterstrategy. Also, the statistically most likely answer in a debate is precisely the argument that's easiest for an opponent to prepare against.
I guess the debating AI could be improved in this regard, but debate is not like most games that are perfectly solvable with game theory, because its rules are in large part informal or open-ended - your opponent can always counter with a new move (a new argument or stratagem), the legality of which is, well, open to debate. Which makes debating pretty close to reality, as games go.
In writing, the abilities to describe factual realities or to make up plausible-sounding fictional characters or accounts are important, but those still only make you a hack writer. The whole point of good writing is to come up with something that isn't exactly or approximately like anything anyone has written before. That's where common sense limits you: you need to take educated risks, or go with your gut.
It doesn't matter that ChatGPT isn't directly copy-pasting anything, if the result is still strictly derived and therefore derivative (in the pejorative art critic sense). The whole concept of predicting which word likely comes next cannot escape this, as far as I can tell: it is based on not straying too far from what's, well, predictable.
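To make the "predictable by construction" point concrete, here's a deliberately tiny sketch - a bigram word counter, nothing like ChatGPT's actual architecture - that always picks the statistically most frequent continuation. Whatever the training text contains, greedy prediction can only ever reproduce its most common patterns:

```python
from collections import Counter, defaultdict

# Toy "predict the next word" model: count which word follows which
# in a tiny training text, then always pick the most frequent one.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the single most frequent continuation of `word`."""
    return next_counts[word].most_common(1)[0][0]

# "the" is followed by "cat" twice, "mat" and "fish" once each,
# so the model can only ever answer "cat" - the predictable choice.
print(most_likely_next("the"))
```

Real systems sample with some randomness instead of always taking the top choice, but the objective itself still rewards staying close to the distribution of what's already been written - which is the limitation argued above.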
Of course, even within this paradigm, there's still room for improvement. I guess depending on how the algorithm is trained, it could learn some aspects of overarching or more intricate story structures, much like you could teach the debating AI something about strategy. As it is now, the longer the story fragment ChatGPT generates, the more it becomes a non-sequitur in terms of things like character arcs or plot developments.
And these are just immediate practical examples of a couple of niche applications where I think the whole approach to how the AI is trained fails to (have the potential to) reach anywhere near full human potential, which admittedly many humans also fail to reach.
Beyond that, I have no idea how one would go about creating an emotionally stable AI with character integrity (unless it's a puppet), or one that could match or beat humans at adaptive strategic rule-breaking, let alone a machine that actually has any form of real (self-)awareness or intention. Or, in other words, things that a true AGI should have to count as one, in my opinion.