I'm just going by the title. If the title was, "Don't believe the hype, LLMs will not achieve AGI" then I might agree. If it was "Don't believe the hype, AGI is 100s of years away" I'd consider the arguments. But, given brains exist, it does seem inevitable that we will eventually create something that replicates one, even if we have to simulate every atom to do it. And once we do, it certainly seems inevitable that we'll have AGI because, unlike a brain, we can make our copy bigger, faster, and/or duplicate it. We can give it access to more information, faster, and more inputs.
The assumption that the brain is anything remotely resembling a modern computer is entirely unproven. And even more unproven is that we would inevitably be able to understand it and improve upon it. And yet more unproven still is that this "simulated brain" would be co-operative; if it's actually a 1:1 copy of a human brain then it would necessarily think like a person and be subject to its own whims and desires.
We don’t have to assume it’s like a modern computer; it may well not be in important ways, but modern computers aren’t the only possible computers. If it’s a physical information processing phenomenon, there’s no theoretical obstacle to replicating it.
That's only a problem if the relevant functional activity is a quantum effect. We have no problem mass producing complex macroscopic functional objects, and in the ways that are relevant, human brains are all examples of the same basic system. Quantum theory doesn't seem to have been an obstacle to mass producing those.