I’m guessing you are correct. Computers are not my field. Apparently neither is grammar.
But one of the things I was taught in an introductory computer class 20 years ago was that a self-aware, learning machine is the holy grail of computer science. Maybe it will prove to be the philosopher's stone. But apparently it is on the minds of those who do know what they are doing.
I have heard of a book called The Emperor's New Mind (by Roger Penrose), which is said to make a fairly strong argument against the possibility of any artificial intelligence rivaling the human mind (and by mind I mean the brain but also the soul, the “self” or “I”), because the human mind is completely unique in structure, operation, etc. To be frank, we don't even know how the human brain operates yet, and we are still very far from getting a computer to rival such operations (IF that could ever happen). One important thing to consider is that computers must run on code and logic, and the more detailed the code and logic get, the harder it is for there not to be errors in the code. The human mind is capable of holding numerous positions, thoughts, emotions, etc., which simply are not captured by mere logical, arithmetic codes.
However, the essential question is: what makes a human being human? If we believe our Christian faith, and also some good supporting philosophy, we know that human beings have a soul. The soul is spiritual and simply cannot be replicated by material means. Therefore the whole position of being “self-aware” and “free” really means nothing when discussing artificial intelligence. There is good reason to think that machines, which are firstly limited by programming, could be neither free nor self-aware (these two things seem to go hand in hand) if they do not have a soul. I think, perhaps, some may be able to create fairly convincing machines which seem human in some ways, MAYBE, but nevertheless these machines will always be constrained by programming. Even now, a machine can be programmed to say hello, but for it to be aware, to have a single unitive self which not only has data but KNOWS data, requires a leap into the spiritual realm, in my opinion. The deeper question is: what does it mean to know? Surely to know, one has to have a “self” which knows and which knows that it knows. Delving into that question would be of more benefit to all of us, I think.
The whole drive into the artificial intelligence issue (or era, perhaps) makes me question the motives. Why are some so compelled to create machines which mimic human beings? Utilitarian purposes aside, it seems like a rather pride-filled drive for man to create in his own image and likeness, to rival the true Creator. Indeed, all of these technologies, even those which tamper with genetics and the like, seem to draw from a prideful position. I daresay it may even have a demonic component, as such efforts often depart from what is morally acceptable.