Pallas_Athene
Guest
I wonder what the phrase “freedom of will” means to you. Libertarian free will simply means that there is a goal to achieve, there are two or more ways to achieve it, and the locus of decision is with the agent - not some external factor. It says nothing about the internal structure of the agent.

Since the machine is a slave to its structure and programming, and thus has no will of its own, no designer can give it freedom of will.
Do you really believe that IBM’s Watson is nothing more than a huge IF-THEN-ELSE table? It was a clear winner over the human Jeopardy champions. The program had to interpret intentionally hazy and misleading clues and come up with the proper solution. These self-modifying computers have some basic programming built into them, but those instructions can and do change during operation. The final result will not even remotely resemble the original programming.

As someone who is majoring in a tech area, I can tell you that self-driving cars will never have free will. Everything the car does will have been programmed into it. (Simple example: the programmer can stipulate that IF there is something recognized by a certain sensor AND by another sensor AND [insert other conditions here], THEN the car should do X maneuver.) The car doesn’t literally decide what to do. It merely checks conditions that the programmer has written.
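(For what it’s worth, the kind of conditional rule described just above - plus a parameter that shifts while the system runs, in the spirit of the “instructions can change during operation” point - might look roughly like this Python sketch. Every name here, such as brake_threshold_m or camera_sees_obstacle, is hypothetical, not any real vehicle’s API:)

```python
# Hypothetical sketch of a condition-checking driving rule whose
# parameters can adjust at runtime. Not a real vehicle API.

class SimpleController:
    def __init__(self):
        # Threshold written by the programmer, but adjusted below.
        self.brake_threshold_m = 30.0

    def decide(self, radar_distance_m, camera_sees_obstacle):
        # IF sensor A AND sensor B (AND other conditions), THEN maneuver X.
        if radar_distance_m < self.brake_threshold_m and camera_sees_obstacle:
            return "brake"
        return "cruise"

    def adapt(self, road_is_wet):
        # The behavior can shift during operation - though here only a
        # parameter changes, not the code itself.
        self.brake_threshold_m = 45.0 if road_is_wet else 30.0


controller = SimpleController()
controller.adapt(road_is_wet=True)
print(controller.decide(radar_distance_m=40.0, camera_sees_obstacle=True))  # "brake"
```

Even in this toy version, the “decision” reduces to checking conditions, while the adapt step shows the minimal sense in which behavior can change while the program runs.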
Yes, this is exactly what I think.

Driving is a very constrained activity; there is no reason why such a vehicle should even theoretically have “unlimited freedom”, even if such were computationally possible (which it’s probably not).
Yes. The “aim of the game” is safer, faster, more efficient transportation. Clearly, the designer would incorporate these factors into the base algorithm of the AI governing the car’s behavior.

On the whole I think that, with good programming and the latest technology (better than today’s cutting edge), a driverless car could be safer - certainly in the not too distant future.
As such, the freedom the designer will give to the car will be limited. And it had better be.
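(As a rough illustration of what “limited freedom” could mean in practice - a purely hypothetical sketch, with made-up weights, scores, and maneuver names - the car might rank the maneuvers it is allowed to consider by the designer’s objectives, while a hard safety constraint removes options from the menu entirely:)

```python
# Hypothetical sketch: the designer's objectives (safety, speed,
# efficiency) score the options, and a hard constraint limits which
# options the car may even consider. All numbers are invented.

WEIGHTS = {"safety": 0.6, "speed": 0.25, "efficiency": 0.15}

# Candidate maneuvers with made-up per-objective scores in [0, 1].
CANDIDATES = {
    "overtake":  {"safety": 0.4, "speed": 0.9, "efficiency": 0.7},
    "follow":    {"safety": 0.9, "speed": 0.5, "efficiency": 0.8},
    "pull_over": {"safety": 1.0, "speed": 0.0, "efficiency": 0.2},
}

SAFETY_FLOOR = 0.5  # hard limit: the car's "freedom" stops here

def choose_maneuver(candidates):
    # Hard constraint first: options below the safety floor are not
    # traded off against speed or efficiency; they are simply removed.
    allowed = {name: s for name, s in candidates.items()
               if s["safety"] >= SAFETY_FLOOR}
    # Among what remains, pick the best weighted score.
    return max(allowed, key=lambda name: sum(
        WEIGHTS[k] * allowed[name][k] for k in WEIGHTS))

print(choose_maneuver(CANDIDATES))  # "follow" under these made-up numbers
```

The point is the shape, not the numbers: the car “chooses”, but only inside a menu the designer has already fenced off.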


If Google cars have free will, does that mean that they send themselves to the compactor?

Very pertinent question. Depending upon the complexity, the AI can detect certain internal malfunctions and decide to go to the repair shop, where other, specialized AIs will perform the necessary repairs. If the malfunction is too large, the car will simply be rendered motionless. Isn’t that the same with humans? We feel sick, we go to the doctor for “repair”?
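(A self-diagnosis flow like the one just described could be sketched roughly as follows - the fault names and severity numbers are invented for illustration, nothing vendor-specific:)

```python
# Hypothetical sketch of the self-diagnosis flow described above:
# minor faults send the car to the shop, severe ones immobilize it.

SEVERITY = {"sensor_drift": 2, "brake_wear": 4, "controller_fault": 9}
IMMOBILIZE_AT = 8  # above this, the car refuses to drive at all

def on_fault(fault):
    severity = SEVERITY.get(fault, 0)
    if severity >= IMMOBILIZE_AT:
        # Too large to drive with: render the car motionless.
        return "immobilized: await tow"
    if severity > 0:
        # Detected but drivable: "decide" to visit the repair shop,
        # where other, specialized systems do the actual repair.
        return "route to repair shop"
    return "continue normal operation"

print(on_fault("brake_wear"))        # route to repair shop
print(on_fault("controller_fault"))  # immobilized: await tow
```

Routing itself to the shop is the machine analogue of feeling sick and heading to the doctor.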

Let me ask again: please drop the “legal” or other aspects of this problem. Those do not belong here. The topic is the amount of “freedom” that the designer should grant to these autonomous “gadgets”.