I'm familiar with this line of reasoning, but I always struggle to pin down the exact thing that humans can do that computers can't. Usually the differentiator is that humans supposedly have, e.g.:
"true understanding"
I assume you mean the process of looking at an image and not just deriving patterns, but seeing that you are looking at a cat, that a cat is an "animal" which has "four legs and a tail", and that cats can be friendly or aggressive towards you, depending on your behavior and theirs.
Neural nets are certainly capable of the first two: classification and building taxonomies. The last one, I admit, is tricky, as it requires the neural net to be an entity within the observed world.
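To make the classification half concrete, here is a minimal sketch: a single perceptron (the simplest possible neural net) learning to separate two toy classes. The data points and labels are invented for illustration; no framework or real image data is involved.

```python
# A single perceptron learning a binary classification from toy 2D points.
# Pure Python; the "cat-sized" vs "dog-sized" (x1, x2) data is made up.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                 # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1          # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

samples = [(1, 1), (1, 2), (2, 1), (5, 5), (6, 5), (5, 6)]
labels  = [0, 0, 0, 1, 1, 1]

w, b = train_perceptron(samples, labels)
print([classify(w, b, x1, x2) for x1, x2 in samples])  # -> [0, 0, 0, 1, 1, 1]
```

It "understands" nothing, of course - it just finds a separating line - which is precisely the distinction being debated here.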
"intellectual process"
The intellectual process is arguably exactly the input -> categorize and analyze -> compile -> produce output loop that we've modelled AI on.
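That loop can be sketched as a trivial pipeline. The function names here are purely illustrative (not any real API), and the "analysis" is just word counting - the point is only the shape of the loop:

```python
# The input -> categorize/analyze -> compile -> output loop as a toy pipeline.
from collections import Counter

def perceive(raw: str) -> list[str]:            # input
    return raw.lower().split()

def categorize(tokens: list[str]) -> Counter:   # categorize and analyze
    return Counter(tokens)

def compile_response(counts: Counter) -> str:   # compile
    word, n = counts.most_common(1)[0]
    return f"most frequent: {word} ({n}x)"

def think(raw: str) -> str:                     # produce output
    return compile_response(categorize(perceive(raw)))

print(think("the cat sat on the mat"))  # -> most frequent: the (2x)
```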
"creativity"
is the ability to create something truly new. This one seems obvious, as neural nets can only derive patterns (plus maybe some random input) - but I would pose the question whether any human has ever created something truly new in the "apple pie from scratch" sense, or whether we've only ever created higher-level works derived from existing things.
"consciousness"
This one is hard to grasp. I would argue that consciousness is the realization that one exists (in the Cartesian sense), coupled with the desire to continue to do so. It is a quality that wouldn't make much sense for an output-focused neural net like the one behind Stable Diffusion - but it might be a desirable trait in a decision-making-focused deep learning setup, similar to a self-healing cloud deployment.
"love/emotion"
This builds on the previous consciousness example. Not to sound like Rick Sanchez or some other cynic - but aren't these, at their core, adjustment mechanisms that help us further evolutionary goals like survival and the continuation of our lineage? Wouldn't a decision-making-focused deep learning setup be more stable and have higher uptime if it facilitated its goal of "staying on" through a strong drive for survival and expansion?
The last two examples are where my point falls apart a bit. But I still stand by my general thesis: we are far too certain that our particular human way of processing information and "thinking" has some divine quality to it that isn't replicable in neural networks. Against that, I would argue that neural networks are largely the same mechanism we employ in our thinking, just a couple of millennia of evolution behind - and they are catching up at a multiple of the speed it took us to get to where we are now intellectually.