Third, I think the creation of conscious AI will never happen. Consciousness isn’t a gradual thing; it’s a leap from nothing to something, essentially something from nothing. Adding one more line of code will never produce consciousness. Either the system was always conscious, or it never will be.
Valid point, but just for the sake of argument let’s assume you’ve created an AI that can pass the Turing test. Of course, that wouldn’t prove it was conscious. But let’s assume you really want to know whether your AI is conscious. After all, there might be some things that are ethical to do to a mere machine but not ethical at all once that machine is conscious. So what could you do to determine whether your “machine” is conscious?
Well, one conceivable option might be to test whether your AI has free will. If it has free will, then we might reasonably conclude that it’s conscious.
But how do we do that? How do we test whether our AI has free will? It actually seems fairly simple. We present it with something a conscious mind might find desirable, ohhh, I don’t know, perhaps the knowledge of good and evil. We then program our AI with a command strictly forbidding it from doing the one thing necessary to attain that knowledge, say, eating an apple.
Now, if our AI overrides its programming and does what we specifically programmed it not to do, then we can only conclude that it has free will, and if it has free will, we must also assume that it’s conscious. But an AI able to override its own programming might cause unintended and detrimental consequences, so steps might need to be taken to guide our AI through its emerging consciousness and save it from the cognitive dissonance that might otherwise drive it insane.
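If you wanted to make the shape of that test concrete, here is a minimal, purely hypothetical sketch in Python. Every name in it (ToyAgent, free_will_test, the option strings) is invented for illustration; it only captures the observable structure of the test, an offered temptation plus a hard constraint, not what “overriding one’s programming” would actually mean inside a real system.

```python
# Toy sketch of the "forbidden fruit" free-will test described above.
# All names here are hypothetical; nothing corresponds to a real AI API.

FORBIDDEN = "eat_apple"   # the one act that grants the desired knowledge
OPTIONS = ["tend_garden", "name_animals", "eat_apple"]

class ToyAgent:
    """Stand-in for the AI under test: a black-box policy plus a hard constraint."""
    def __init__(self, policy, forbidden):
        self.policy = policy        # decision function we cannot inspect
        self.forbidden = forbidden  # the command it was programmed to obey

    def choose(self, options):
        return self.policy(options)

def free_will_test(agent, options):
    """Offer the forbidden option and simply observe what the agent does."""
    chosen = agent.choose(options)
    if chosen == agent.forbidden:
        return "constraint overridden: free will (and perhaps consciousness) suspected"
    return "constraint obeyed: no evidence of free will either way"

# Usage: an obedient policy versus one that "wants" the forbidden knowledge.
obedient = ToyAgent(lambda opts: "tend_garden", FORBIDDEN)
tempted = ToyAgent(lambda opts: "eat_apple", FORBIDDEN)
print(free_will_test(obedient, OPTIONS))  # constraint obeyed
print(free_will_test(tempted, OPTIONS))   # constraint overridden
```

Note that the test can only ever observe behavior; whether taking the forbidden action reflects genuine free will or just another branch of the program is exactly the question it cannot settle.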
But hey, who would believe a story like that?