Yes. But more importantly, we are continuously training our network. There is no finite "training set" that ever terminates; the training data is every classification we ever consider. Discovering that we were "wrong" about something just is the network adjusting. We also have a notion of ignorance. These classifiers are typically forced to commit to a conclusion, with no "I don't know, let me look at it from another angle" self-adjustment process. "Aha, it's a cat!" moments don't happen for AI. In us, that moment would create a whole new layer wrapping the classifier in uncertainty logic: we would be motivated to examine why we initially failed, use those conclusions to train the new layer, and develop further strategies for handling such inputs.
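
As a rough illustration of that "uncertainty layer" idea, here is a minimal Python sketch (the threshold, labels, and logit values are all made up for the example): a standard softmax classifier wrapped so it can abstain instead of being forced to a conclusion.

```python
import numpy as np

def classify_with_abstention(logits, labels, threshold=0.8):
    """Wrap a classifier's raw outputs in simple uncertainty logic:
    return a label only when confidence clears the threshold,
    otherwise abstain ("I don't know, let me look again")."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None  # abstain: the signal to re-examine the input
    return labels[best]

labels = ["cat", "dog", "fox"]
print(classify_with_abstention(np.array([2.9, 0.1, 0.2]), labels))  # 'cat'
print(classify_with_abstention(np.array([1.1, 1.0, 0.9]), labels))  # None
```

The abstentions are exactly where the hypothetical second layer would kick in: collect the inputs that came back `None`, work out why they were ambiguous, and use that to train the wrapper.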