Ever since I studied AI at college, ages ago, there has been this understanding that biology-inspired artificial neural networks are really simplifications and cannot be compared directly with their biological counterparts. Back then, one missing aspect was the temporal dimension, which we tried to simulate using, for example, recurrent neural network architectures. But that was only in terms of memory of a state.
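
To make "memory of a state" concrete, here is a minimal sketch of a plain recurrent cell (NumPy only; the sizes and weights are arbitrary illustrations, not any particular published architecture):

```python
import numpy as np

# A minimal recurrent cell: the hidden state h carries information
# from one time step to the next -- memory of a state, not true
# temporal dynamics. Sizes and weights are arbitrary illustrations.
rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

def rnn_step(h, x):
    # The new state depends on the previous state and the current input.
    return np.tanh(W_h @ h + W_x @ x + b)

h = np.zeros(hidden_size)
for t in range(10):                  # "time" is just a loop index
    x_t = rng.normal(size=input_size)
    h = rnn_step(h, x_t)             # state passed forward; nothing oscillates
```

Note that "time" here is just an index over a sequence; nothing in the cell has an intrinsic frequency or rhythm of its own.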

After reading Andrew Gallimore's Alien Information Theory and experimenting a lot with neurofeedback (while studying the theory behind it), I think what is missing is some sort of internal resonance. When you measure electrical brain activity, you get temporal patterns of neural activity, which manifest on EEG as brainwaves at different frequencies. There is a time scale to what is happening, and that is what allows resonance to emerge.
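
For a concrete picture of those frequencies, here is a rough sketch of estimating EEG band power from a signal. I use a synthetic signal and the conventional band boundaries, so treat it as an illustration rather than a real neurofeedback pipeline:

```python
import numpy as np

fs = 256                               # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": a 10 Hz alpha rhythm plus noise, a stand-in for a real recording.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Power spectrum via FFT.
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# Conventional EEG bands (boundaries vary slightly across the literature).
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name:5s} power: {power[mask].sum():.1f}")
```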

We don't have that yet in artificial neural networks.

That is one of the reasons why comparing "number of neurons" with "number of trainable parameters" does not make direct sense. Order-of-magnitude comparisons may still be worthwhile, though, because emergent behavior appears at some level of complexity, and that complexity requires space.
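
As a back-of-envelope illustration of why only magnitudes are worth comparing (the brain figures are commonly cited estimates; the model size is just one example for scale):

```python
import math

# Rough orders of magnitude -- all figures are approximate estimates.
brain_neurons  = 8.6e10   # commonly cited estimate for the human brain
brain_synapses = 1.0e14   # commonly cited estimate of synaptic connections
model_params   = 1.75e11  # trainable parameters of a GPT-3-sized model

print(f"neurons:  10^{math.log10(brain_neurons):.1f}")
print(f"synapses: 10^{math.log10(brain_synapses):.1f}")
print(f"params:   10^{math.log10(model_params):.1f}")
```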