A philosophical question pertaining to self-driving cars

  • Thread starter: Pallas_Athene
Since the machine is a slave to its structure and programming, and thus has no will of its own, no designer can give it freedom of will.
I wonder what the phrase “freedom of will” means to you. Libertarian free will simply means that there is a goal to achieve, there are two or more ways to achieve it, and the locus of decision is with the agent - not with some external factor. It says nothing about the internal structure of the agent.
As someone who is majoring in a tech area, I can tell you that self-driving cars will never have free will. Everything the car does will have been programmed into it. (Simple example: the programmer can stipulate that IF there is something recognized by a certain sensor AND by another sensor AND [insert other conditions here], THEN the car should do X maneuver. The car doesn’t literally decide what to do. It merely checks conditions that the programmer has written.)
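To illustrate, here is a minimal sketch of that kind of rule-following - the sensor names, thresholds, and maneuver labels are all hypothetical:

```python
# A minimal sketch of the rule-based logic described above.
# Sensor names, thresholds, and maneuver labels are all invented.

def choose_maneuver(lidar_sees_obstacle: bool,
                    camera_sees_obstacle: bool,
                    closing_speed_mps: float) -> str:
    """Return a maneuver purely by checking programmer-written conditions."""
    if lidar_sees_obstacle and camera_sees_obstacle:
        # Both sensors agree: react according to how fast we are closing in.
        if closing_speed_mps > 10.0:
            return "emergency_brake"
        return "slow_and_steer_around"
    if lidar_sees_obstacle or camera_sees_obstacle:
        # Sensors disagree: the programmer decided to be cautious here.
        return "reduce_speed"
    return "continue"

print(choose_maneuver(True, True, 12.5))  # -> emergency_brake
```

Every branch is a condition someone wrote in advance; the car only evaluates them.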
Do you really believe that IBM’s Watson is nothing more than a huge IF-THEN-ELSE table? It was a clear winner over the human Jeopardy champions. The program had to interpret intentionally hazy and misleading clues and come up with the proper solution. These self-modifying computers have some basic programming built into them, but those instructions can and do change during operation. The final result will not even remotely resemble the original programming.
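As a toy illustration of the general principle (not how Watson actually works): the update rule below is fixed, but the weights that drive the output are reshaped by every example the program sees.

```python
# The program's fixed "instructions" are just the update rule below,
# yet the behavior-determining weights drift with experience.
# Contrived data, purely for illustration.

weights = [0.0, 0.0]

def predict(x):
    return weights[0] * x[0] + weights[1] * x[1]

def learn(x, target, rate=0.1):
    error = target - predict(x)
    for i in range(len(weights)):
        weights[i] += rate * error * x[i]  # behavior drifts from its initial state

for _ in range(50):
    learn([1.0, 2.0], 5.0)

print(predict([1.0, 2.0]))  # close to 5.0, though "5.0" was never hard-coded
```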
Driving is a very constrained activity; there is no reason why such a vehicle should even theoretically have “unlimited freedom” even if such were computationally possible (which it’s probably not).
Yes, this is exactly what I think.
On the whole I think that, with good programming and the latest technology (better than today’s cutting edge), a driverless car could be safer - certainly in the not-too-distant future.
Yes. The “aim of the game” is safer, faster, more efficient transportation. Clearly, the designer would incorporate these factors into the base algorithm of the AI governing the car’s behavior.

As such, the freedom the designer will give to the car will be limited. And it had better be. 🙂 No one would want a car that has the “freedom” to choose to go over the cliff. There might be exceptions, of course. (Do you recall the joke that asks for the definition of “mixed feelings”? The answer: “when you watch your brand new Mercedes, with your mother-in-law at the wheel, drive over a cliff”. 🙂 In that case we would be talking about a feature, and not a bug.)
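To make the “limited freedom” idea concrete, here is a minimal sketch: the car is free to pick among candidate plans by weighing time and energy, but plans violating a hard safety constraint never make it onto the menu. All field names, weights, and candidates are invented for illustration.

```python
# "Limited freedom": the car chooses, but only among designer-approved options.

CANDIDATES = [
    {"name": "keep_lane", "time_s": 60, "energy": 1.0, "leaves_road": False},
    {"name": "overtake",  "time_s": 48, "energy": 1.4, "leaves_road": False},
    {"name": "shortcut",  "time_s": 30, "energy": 0.8, "leaves_road": True},  # off the cliff
]

def cost(plan, w_time=1.0, w_energy=10.0):
    # The designer's priorities (speed, efficiency) live in these weights.
    return w_time * plan["time_s"] + w_energy * plan["energy"]

def pick_plan(candidates):
    safe = [p for p in candidates if not p["leaves_road"]]  # hard constraint, not a weight
    return min(safe, key=cost)

print(pick_plan(CANDIDATES)["name"])  # -> overtake; the cliff was never an option
```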
If Google cars have free will, does that mean that they send themselves to the compactor?
Very pertinent question. Depending on its complexity, the AI can detect certain internal malfunctions and decide to go to the repair shop, where OTHER, specialized AIs will perform the necessary repairs. If the malfunction is too severe, the car will simply be rendered motionless. Isn’t that the same with humans? We feel sick, so we go to the doctor for “repair”. 🙂 If our “malfunction” is too severe, we might faint or become immobilized. The actual methodology is different, but the functionality is the same.
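A minimal sketch of that self-diagnosis logic, with hypothetical fault codes and severity thresholds:

```python
# The same diagnostic input leads either to "go see the doctor"
# or to the machine's version of fainting. All values are invented.

def on_diagnostic(fault_code: str, severity: int) -> str:
    """severity: 0-10, where 10 means the car cannot drive safely at all."""
    if severity == 0:
        return "continue_normal_operation"
    if severity <= 6:
        # Still drivable: route to the shop, like a human going to the doctor.
        return "navigate_to_service_station"
    # Too broken to move safely: pull over and immobilize.
    return "pull_over_and_shut_down"

print(on_diagnostic("BRAKE_WEAR", severity=4))     # -> navigate_to_service_station
print(on_diagnostic("STEERING_FAIL", severity=9))  # -> pull_over_and_shut_down
```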

Let me ask again: please drop the “legal” and other aspects of this problem. Those do not belong here. The topic is the amount of “freedom” that the designer should grant to these autonomous “gadgets”.
 
As someone who is majoring in a tech area, I can tell you that self-driving cars will never have free will. Everything the car does will have been programmed into it. (Simple example: the programmer can stipulate that IF there is something recognized by a certain sensor AND by another sensor AND [insert other conditions here], THEN the car should do X maneuver. The car doesn’t literally decide what to do. It merely checks conditions that the programmer has written.)
Exactly. I am also in the tech field. I remember when I took a course in robotics; it was the most challenging course I had. We had to build an autonomous robotic scale vehicle that would drive through a maze, find candles, and put out their flames. It was one thing to write the software and another to actually get it to work in the real world. Things didn’t work like you expected them to. I was able to get it working eventually, but it took more trial and error than I expected. And it still never worked 100% of the time, even though my robot won the competition.
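For flavor, a toy version of that sense-act loop. Real sensors are far noisier and messier than this, which was exactly the hard part; all readings here are simulated.

```python
# A toy sense-act loop in the spirit of that competition robot:
# read sensors, pick an action, repeat. All values are invented.

import random

def read_flame_sensor():
    return random.random() > 0.7  # noisy: sometimes misses, sometimes false alarms

def read_wall_distance_cm():
    return random.uniform(5, 40)

for step in range(10):
    if read_flame_sensor():
        action = "activate_fan"      # put out the candle
    elif read_wall_distance_cm() < 10:
        action = "turn_left"         # avoid the maze wall
    else:
        action = "drive_forward"
    print(step, action)
```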
 
The topic is the amount of “freedom” that the designer should grant to these autonomous “gadgets”.
Assuming the designer has any power whatsoever to grant any “freedom” to those “autonomous” (whatever that means) gadgets.

A great deal is presumed.

Yes, I know, spare me the soliloquy.

Yes, I know, I was being facetious.
 
You won’t see cars driving around that are self-aware. They will just be machines that run algorithms. Besides, even if you could have cars with free will, you would not want that. You would not, for instance, want a car to turn evil and decide to drive you off a cliff. What you do want is a car that is predictable and follows orders - first from the human driver, and secondarily from any autopilot programming.
 
Besides, even if you could have cars with free will, you would not want that. You would not, for instance, want a car to turn evil and decide to drive you off a cliff. What you do want is a car that is predictable and follows orders - first from the human driver, and secondarily from any autopilot programming.
I too work in information technology, and I am not sure about the free will question. Consider that apparently the next generation of drones will be capable of autonomously taking lethal action - for instance, firing on an aggressor in self-defense.

Now suppose I programmed the drones. They implement my free-will decisions about the circumstances under which to open fire; they will fire when I freely decided they should.

So if a drone fires on people and kills them by correctly following my programming, I don’t think I could morally claim it wasn’t my free-will decision to kill those people, since the drone was following my orders - and, in the nature of a machine, following them to the letter.
 
The main reason you will not see computers having free will is that they do not have minds. An immaterial human mind is needed to program the computers in the first place. And computers simply execute their programming. They are not self-aware. They do not have souls. Decision trees are simply programming algorithms that make ‘decisions’ based on conditional statements: if this condition is true, then execute this; if yes, then do something; if no, then do something else. These imperative statements come from the mind of a human. The computer can execute logical Boolean statements - AND, OR, and NOT functions - at the hardware level. It is not a simple thing to program A.I. When I was going to school, my professors basically said that they had given up on A.I. in robots and now only make specialized robots to do specific tasks. A.I. is too difficult.

Driving a car autonomously is a difficult enough task. But true A.I. is way harder.
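Since decision trees came up: this is literally what such a tree looks like in code - nested conditionals whose every branch the programmer wrote in advance. The questions and outcomes below are made up for illustration.

```python
# A hand-written decision tree: each question and each leaf was
# decided by a human in advance. All scenarios are invented.

def should_change_lane(obstacle_ahead: bool, gap_in_next_lane: bool) -> bool:
    if obstacle_ahead:            # root question
        if gap_in_next_lane:      # second-level question
            return True           # leaf: change lane
        return False              # leaf: no gap, brake instead
    return False                  # leaf: nothing ahead, stay put

print(should_change_lane(obstacle_ahead=True, gap_in_next_lane=True))   # -> True
print(should_change_lane(obstacle_ahead=True, gap_in_next_lane=False))  # -> False
```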
 
The main reason you will not see computers having free will is that they do not have minds. An immaterial human mind is needed to program the computers in the first place. And computers simply execute their programming. They are not self-aware. They do not have souls. Decision trees are simply programming algorithms that make ‘decisions’ based on conditional statements: if this condition is true, then execute this; if yes, then do something; if no, then do something else. These imperative statements come from the mind of a human. The computer can execute logical Boolean statements - AND, OR, and NOT functions - at the hardware level. It is not a simple thing to program A.I. When I was going to school, my professors basically said that they had given up on A.I. in robots and now only make specialized robots to do specific tasks. A.I. is too difficult.

Driving a car autonomously is a difficult enough task. But true A.I. is way harder.
I’m pretty sure countless R&D labs throughout the world are currently striving for some form of A.I.; they certainly will not just give up. I think they will eventually be able to mimic a human brain and design a machine/robot around that. Plus, in the quantum world things are MUCH different - it’s not as simple as yes, no, and actions based on those.

Regarding self-driving cars, though, A.I. would not work well in that capacity.
 
I’m pretty sure countless R&D labs throughout the world are currently striving for some form of A.I.; they certainly will not just give up. I think they will eventually be able to mimic a human brain and design a machine/robot around that. Plus, in the quantum world things are MUCH different - it’s not as simple as yes, no, and actions based on those.

Regarding self-driving cars, though, A.I. would not work well in that capacity.
The word “mimic” is fraught with all kinds of problems. If you mean something like “passable to an outside observer,” that entails nothing like free will - just the capability to fool someone who lacks full access to the causal sequence leading to the act. Certainly, that is not anything like “free will” in any meaningful sense.
 
I think I heard someone say that the human brain has as many neural connections as there are water molecules in Lake Michigan, or some such. If so, cars are not worth that type of effort. It would be simpler and cheaper to hire remote drivers in some other country to drive you home from their computer or phone. Pay by PayPal or credit card. There’s a nice startup for someone.
 
I think I heard someone say that the human brain has as many neural connections as there are water molecules in Lake Michigan, or some such. If so, cars are not worth that type of effort. It would be simpler and cheaper to hire remote drivers in some other country to drive you home from their computer or phone. Pay by PayPal or credit card. There’s a nice startup for someone.
You’d just better make sure you treated them really really well. I mean tips and everything. Maybe even Christmas cards. I don’t know. I mean they might otherwise be sort of tempted. Sort of tempted to think they were playing a racing game. Or something.

Peace.

-Trident
 
I think I heard someone say that the human brain has as many neural connections as there are water molecules in Lake Michigan, or some such. If so, cars are not worth that type of effort. It would be simpler and cheaper to hire remote drivers in some other country to drive you home from their computer or phone. Pay by PayPal or credit card. There’s a nice startup for someone.
And a source of virtually endless employment opportunities!
 
I’m pretty sure countless R&D labs throughout the world are currently striving for some form of A.I.; they certainly will not just give up. I think they will eventually be able to mimic a human brain and design a machine/robot around that. Plus, in the quantum world things are MUCH different - it’s not as simple as yes, no, and actions based on those.

Regarding self-driving cars, though, A.I. would not work well in that capacity.
Until we can actually nail down what makes us conscious, we will never be able to design a machine that can mimic it.

Besides, if a machine is going to do the driving, I want it to be a better driver than I am.
Mimicking human brain activity is merely going to yield the same flawed reactions.

Also consider… I work in the IT field and watch computers do things every day that the programmer did not intend.
So we are going to put them in the driver’s seat and let them run… :rolleyes:
Given the direction the rest of the world is headed, I expected this level of intelligent decision-making… and I am tightening my seatbelt.
 
Oh, they are working on a human brain replica device now. According to the unclassified literature, the US and China are in a race. Why hire a human when a replica brain with learning capabilities and far more processing speed can be built? Some computers with the ability to solve structural design problems exist.

Such machines should be illegal to own. Only the military and classified projects should have access to this.

Ed
 
Oh, they are working on a human brain replica device now. According to the unclassified literature, the US and China are in a race. Why hire a human when a replica brain with learning capabilities and far more processing speed can be built? Some computers with the ability to solve structural design problems exist.

Such machines should be illegal to own. Only the military and classified projects should have access to this.

Ed
If it renders every military and secret service on the planet completely harmless :stretcher:, I would be all for it.
 
If it renders every military and secret service on the planet completely harmless :stretcher:, I would be all for it.
Having read many scenarios about future military operations and technologies, I can say this is a disaster in the making. It would enable humanoid robots to take over large territories and disable communications technologies on a massive scale. The only things left in the lands once populated by “undesirables” would be their resources - and no one to blame. There’s more, but I’ll leave it at that.

Ed
 
The word “mimic” is fraught with all kinds of problems. If you mean something like “passable to an outside observer,” that entails nothing like free will - just the capability to fool someone who lacks full access to the causal sequence leading to the act. Certainly, that is not anything like “free will” in any meaningful sense.
Yes, and they are no more alive than a rock or your laptop. If the materialists were right and your mind were only your physical brain, then not even we would have free will. Free will is predicated on the fact that our minds are immaterial, not just chemical reactions in the brain.
 
Until we can actually nail down what makes us conscious, we will never be able to design a machine that can mimic it.

Besides, if a machine is going to do the driving, I want it to be a better driver than I am.
Mimicking human brain activity is merely going to yield the same flawed reactions.

Also consider… I work in the IT field and watch computers do things every day that the programmer did not intend.
So we are going to put them in the driver’s seat and let them run… :rolleyes:
Given the direction the rest of the world is headed, I expected this level of intelligent decision-making… and I am tightening my seatbelt.
Well, driving would not be that difficult for a computer, but for it to be successful there could be NO human drivers, as they would be too unpredictable. If all vehicles were connected to this system - which, of course, implies all roads are connected - it largely comes down to whether a vehicle should stop or go, and at what speed. If ALL vehicles were connected to the same system, and they could communicate with and recognize each other, I think it would be successful.
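A toy sketch of that “everything on one system” idea: each car shares its state, and every car applies the same rule (earliest arrival goes first, ties broken deterministically) to decide stop versus go. The message fields and the tie-break rule are invented for illustration.

```python
# Connected vehicles approaching one intersection. Because every car
# computes the same rule from the same shared data, they all agree
# on who has right of way. All data invented.

vehicles = [
    {"id": "car_A", "eta_s": 4.2},
    {"id": "car_B", "eta_s": 3.1},
    {"id": "car_C", "eta_s": 3.1},
]

def right_of_way(fleet):
    # Earliest arrival wins; ties broken by id so the answer is unambiguous.
    return min(fleet, key=lambda v: (v["eta_s"], v["id"]))

winner = right_of_way(vehicles)
for v in vehicles:
    print(v["id"], "go" if v is winner else "stop")
# car_A stop / car_B go / car_C stop
```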

I don’t think the technology is the problem; all the industries that would suddenly become obsolete are what’s going to throw a wrench into this. There would be no more need for auto insurance (comprehensive, anyway); traffic accidents would be a thing of the past; no more need for auto body repair, new parts, towing, etc. Not to mention, there would suddenly be no more DUIs or traffic offenses, which would be disastrous for the revenue of many cities and counties.
 
You speak as though humans are the only wild cards on the road.
What about animals?
What about broken infrastructure?
What about the car itself being broken?

It is easy to make an automaton do things in a perfect world. But the world is not perfect.
 
Well, driving would not be that difficult for a computer, but for it to be successful there could be NO human drivers, as they would be too unpredictable. If all vehicles were connected to this system - which, of course, implies all roads are connected - it largely comes down to whether a vehicle should stop or go, and at what speed. If ALL vehicles were connected to the same system, and they could communicate with and recognize each other, I think it would be successful.
I agree with you that it would be successful. A computer doesn’t need freedom to work - it has to make decisions based on the directions it is given.

In video games such as GTA (Grand Theft Auto - I had a thug life :p), every pedestrian and car is controlled by the game: each NPC (non-playable character) is given information telling it where to go. The only variable is the player and whatever mayhem the player causes.

I used to drive quite well, just pretending I was going to work or something. Whenever I crossed in front of a computer-controlled car, the car would simply stop, recognizing that there was another car moving in front of it. Then it would keep on driving to its destination (often honking for me to move already, as I often missed the green light).

Sure, I could simply speed over everything, and the video-game AI wouldn’t be fast enough to predict my randomness - which only shows that the problem was in the human component (that is: me), although the computer component would also need improvement (as would any driver, if a crazy 14-year-old girl came speeding at your car with every intention of hitting it). As long as I was following the rules, even a human-controlled car was able to move around without causing accidents in a computer-controlled environment.

And to do that, all that is required is some limited programming. We already have self-parking cars, Automatic Train Operation, and GPS - mix all these technologies and you might get something resembling the “hover chair traffic” from the movie WALL-E (youtube link) - no sort of freedom involved! (if you ignore directive A113, that is :rolleyes:)
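For what it’s worth, here is a toy version of the game-style traffic described above: each NPC car just follows a fixed list of waypoints and halts when the next one is blocked. No freedom involved, exactly as argued. All data invented.

```python
# Game-style NPC driving: follow waypoints, stop if blocked, honk.
# The route and the blocked cell are made up for illustration.

route = [(0, 0), (0, 1), (0, 2), (1, 2)]
blocked = {(0, 2)}  # e.g., the player's car is parked here

position = route[0]
for waypoint in route[1:]:
    if waypoint in blocked:
        print(f"at {position}: waiting, {waypoint} is blocked (honk!)")
        break
    position = waypoint
    print(f"drove to {position}")
```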
 
:newidea:

Personally, I think self-driving cars should have unbridled free will. They should be programmed to be allowed to do absolutely anything and cause any amount of mayhem and chaos with no restraint other than the physical limitations of their designs. After all, if there were any limitations on the choices available to the AI, then the car and passenger wouldn’t be able to have a meaningful relationship, and the relationship is more important than the health or safety of the passenger, pedestrians, other drivers, or anyone in the path of the car of course.

…joking… 😉 😛
 