Every time I make a choice, it certainly seems as though I am doing so. And when I observe certain behavior in others, the same thing seems to be happening. Let's not say "seem"; let's say it "appears" to be happening. With ourselves, you may say that is just a feeling, but it is more than that. If I decide to get up and get a glass of milk out of the refrigerator, that is more than just a feeling that I made that choice.
I’m back, and I’ve had some time to think about the differences in how you and I approach the problem of free will. You understandably believe that free will is self-evident, and yet I’m not convinced that it even exists. So I’ve tried to come up with a scenario in which you might be able to understand my position. This scenario isn’t meant to trick you, or present you with an unrealistic situation. It’s just meant to illustrate the problem that I see in assuming that free will is self-evident.
Let's assume that in the not-too-distant future we have self-driving cars. Now, these self-driving cars are in no way conscious or self-aware. They're just fairly run-of-the-mill AIs, perhaps a bit more sophisticated than what we have today, but still basically just silicon. Since it would likely be impossible to program the AI with how it should react in every possible situation, we instead allow the AI to learn. We put people in driving simulators and let the AI observe how those people react in thousands upon thousands of simulated real-world situations. We let the AI learn whether to swerve to avoid an object in its path depending upon whether it's a ball, a dog, or a child. We let it learn that some things are more important to avoid than others. And we let it learn how to react when it's a choice between the lesser of two evils. We allow it to learn how to make value judgments. But it's still just silicon. It can "choose" to swerve to avoid a child, even though that means striking a dog. It can "choose" to rear-end the car in front of it rather than swerve into the cyclist beside it.
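To make that concrete, here is a minimal sketch of what such a "choice" might reduce to under the hood. Everything in it is hypothetical (the names, the cost values, the two-option setup are mine, not any real system's), but it illustrates the point: the learned value judgments could amount to nothing more than a cost comparison.

```python
# Hypothetical harm costs the AI might have learned from thousands of
# simulated drives: higher number = more important to avoid.
HARM_COST = {
    "child": 1000.0,
    "cyclist": 500.0,
    "dog": 50.0,
    "ball": 1.0,
    "rear_end_car": 20.0,  # striking the car ahead
}

def choose_action(stay_hits: str, swerve_hits: str) -> str:
    """Pick the lesser of two evils: stay the course or swerve."""
    stay_cost = HARM_COST.get(stay_hits, 0.0)
    swerve_cost = HARM_COST.get(swerve_hits, 0.0)
    return "swerve" if swerve_cost < stay_cost else "stay"

# The car "chooses" to strike the dog rather than the child...
print(choose_action(stay_hits="child", swerve_hits="dog"))            # -> swerve
# ...and "chooses" to rear-end the car ahead rather than hit the cyclist.
print(choose_action(stay_hits="rear_end_car", swerve_hits="cyclist")) # -> stay
```

A real system would be vastly more sophisticated, of course, but the extra sophistication doesn't obviously change the kind of thing that is happening: weigh the options, take the cheaper one.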
But is our AI really "choosing" to do these things? Or does it just "seem" to be choosing, when in fact it isn't "choosing" anything at all, and is simply doing what a lot of sophisticated software has programmed it to do? And if our AI actually were conscious, would it think that it was choosing to do them? Would it feel as though it was choosing to do them?
That's the question. When that AI makes a choice, or when you make a choice, is it really a matter of free will, or just a lot of very sophisticated "programming" masquerading as free will?
Now, you seem to think that we can tell the difference, but I'm not so sure that we can. I'm not sure that, fundamentally, your "choices" are all that much different than the AI's "choices".