A philosophical question pertaining to self-driving cars

Thread starter: Pallas_Athene
Google and several other companies are working on developing cars that can evaluate their surroundings, decide which route to take, avoid congestion, and get to their destination safely and quickly. There are already actual prototypes on the roads in several states. This is not science fiction; this is reality. The question is this:

How much freedom should the designer give to these autonomous gadgets? Think about it, and about the philosophical ramifications of “free will.”
 
There are a lot of legal questions that will come up. For example, in the case of an accident, should you sue Google or the person who owns the car?
 
This is going to cause a great many practical and legal problems. Freedom has no bearing on this. They are using autonomous vehicle technology that has been tested for years by the military. The problem is that streets, highways, and pedestrians are atypical environments for most of these vehicles. They will not be operating over uneven terrain while being fed tactical information about where the “enemy” is. The plan is impractical and idiotic.
  1. The vehicle malfunctions and changes course or goes out of control at 70 mph.
  2. The vehicle strikes a deer or pedestrian, or tries to avoid one, and faces many difficult scenarios: A) it avoids hitting a living being but hits a tree or wall; B) it cannot avoid hitting the deer because there are vehicles on either side of it. The same with a pedestrian.
  3. In fog, human-operated vehicles can cause a massive chain-reaction crash in which the unmanned vehicle is unable to stop or swerve around the vehicle in front of it, hemmed in by a car next to it, a wall, or a barrier; if it chooses the barrier, it may be forced over into a ravine. The vehicle may explode and catch fire. Communication and tracking devices in the vehicle may fail at this point.
  4. The vehicle’s tracking and/or ‘route following’ equipment fails, and instead of making the next curve, it continues on a straight path, striking anything in the way.
  5. The vehicle hits a patch of ice and rolls over or changes direction. If this happens at over 55 mph, there may be no time for other vehicles/drivers to react.
On the legal front: Who will be held responsible for any injuries, deaths and property damage? If enough lawsuits are filed, the cars will be ruled unsafe and will be taken off the roads.

Who pays for gas or electricity?

Ed
 
How much freedom should the designer give to these autonomous gadgets? Think about it, and about the philosophical ramifications of “free will.”
What has this got to do with free will?
I suppose it is a human who tells the car where he/she wants to go.
 
I guess some more clarification is in order. I am not interested in the legal or physical questions of such a project. Those are very interesting, but they could be explored in a different thread (feel free to open one). My question is only related to one aspect of this problem:

We, the humans, are the creators of these cars. We can build in strict or loose controls which will govern the behavior of these cars. Just one example (and let’s not get hung up on it): the speed of the car on the road will depend on the road conditions, the traffic, the weather… and a bunch of other variables. Obviously, it would be impossible and impractical to “program” for all the possible combinations of these variables.

So instead of making a “table” for all the different combinations, a learning program needs to be developed, one with a decision-tree-like algorithm. The algorithm will “enjoy” a considerable amount of “freedom” to select the optimal speed (and other parameters). In a very real sense, we are “imperfect” gods vis-à-vis these contraptions. Not really “gods,” but rather close to it.
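To make the idea concrete, here is a minimal sketch in Python of such a decision-tree-like speed selector. Every condition name, threshold, and factor below is my own illustrative assumption, not anything from an actual system:

```python
# Hypothetical decision-tree-like speed selector. All condition names,
# thresholds, and factors are illustrative assumptions.

def select_speed(limit_mph, weather, traffic_density, visibility_m):
    """Walk a small decision tree and return a target speed in mph."""
    speed = limit_mph

    # Branch on weather first: the most safety-critical variable.
    if weather == "ice":
        speed = min(speed, 25)
    elif weather in ("rain", "snow"):
        speed *= 0.75                  # slow by a quarter on slick roads

    # Then branch on visibility (e.g., fog).
    if visibility_m < 50:
        speed = min(speed, 30)
    elif visibility_m < 150:
        speed = min(speed, 45)

    # Finally scale by traffic density (0.0 = empty road, 1.0 = jammed).
    speed *= 1.0 - 0.5 * traffic_density
    return round(speed, 1)

print(select_speed(70, "rain", 0.4, 200))   # -> 42.0
```

A learning system would tune those thresholds from data instead of hard-coding them; the tree structure is ours, but the numbers become the car’s. That is exactly the “freedom” in question.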

So the philosophical question is this: How much limitation should we build into these cars? Is (relatively) unlimited freedom a good idea?

As Hans W said: “I suppose it is a human who tells the car where he/she wants to go.” Obviously. But the car “decides” which route to take. It can monitor the traffic much better than we do. Just imagine what kind of environment we shall have when all cars (and not just a handful of prototypes) are on the road.
 
Having limited experience with P.L.C.s, I can say the cars will never have unlimited freedom. The programming limits will most likely be decided based on safety. Not having any experience with self-driving cars, I would expect any configuration or driving parameters the driver may change to be overridden by the P.L.C.’s safety protocols. But philosophically speaking, they look pretty cool. It’s fun to be God.
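To make that override idea concrete, here is a minimal sketch (in Python, with made-up parameter names and limits) of how a safety layer might clamp whatever the driver configures, the way a P.L.C.’s safety protocols take precedence over user settings:

```python
# Hypothetical safety layer that clamps user-configured driving
# parameters. The parameter names and limits are illustrative assumptions.

SAFETY_LIMITS = {
    "max_speed_mph":    (0, 80),
    "following_time_s": (1.5, 10.0),   # never closer than 1.5 s
    "max_lateral_g":    (0.0, 0.3),
}

def apply_safety_protocol(user_settings):
    """Return the user's settings clamped into the safety envelope."""
    safe = {}
    for key, value in user_settings.items():
        lo, hi = SAFETY_LIMITS[key]
        safe[key] = max(lo, min(hi, value))   # clamp into [lo, hi]
    return safe

# An aggressive driver asks for more than the protocol allows:
print(apply_safety_protocol(
    {"max_speed_mph": 120, "following_time_s": 0.5, "max_lateral_g": 0.6}
))
# -> {'max_speed_mph': 80, 'following_time_s': 1.5, 'max_lateral_g': 0.3}
```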
 
There are a lot of legal questions that will come up. For example, in the case of an accident, should you sue Google or the person who owns the car?
Just like parking tickets… the driver is not responsible; the owner of the car will be.

Eventually, with all the liability, it will become so expensive that only rich people with tons of insurance can afford to drive.
 
Just like parking tickets… the driver is not responsible; the owner of the car will be.

Eventually, with all the liability, it will become so expensive that only rich people with tons of insurance can afford to drive.
Insurance companies and public outcry will be the biggest barriers. There is no practical purpose for building such vehicles except military ones.

Ed
 
So the philosophical question is this: How much limitation should we build into these cars? Is (relatively) unlimited freedom a good idea? Just imagine what kind of environment we shall have when all cars (and not just a handful of prototypes) are on the road.
I think it will be safer. The programmer will build in a lot of safety margin in terms of following distance and road conditions (wet, icy, etc.). Of course, if another car (driverless or with a driver) makes a mistake, say comes at you head-on, then a human driver is possibly in a better position to make an emergency decision, like driving off the road, even if that is dangerous as well.
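For instance, the following-distance margin could be computed along these lines; this is a rough sketch, and the base time gap and condition multipliers are my own assumptions:

```python
# Hypothetical following-distance calculator. The base time gap and
# condition multipliers are illustrative assumptions.

BASE_GAP_S = 2.0                        # the classic two-second rule
CONDITION_FACTOR = {"dry": 1.0, "wet": 1.5, "icy": 3.0}

def following_distance_m(speed_mph, condition):
    """Distance in meters to keep to the car ahead."""
    speed_ms = speed_mph * 0.447        # mph -> m/s
    return speed_ms * BASE_GAP_S * CONDITION_FACTOR[condition]

print(round(following_distance_m(60, "dry")))   # -> 54
print(round(following_distance_m(60, "icy")))   # -> 161
```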

Concerning taking the best route, the shortest route, or avoiding traffic congestion, my GPS already makes good decisions. Of course, it is still up to me to follow the advice or take a chance because my gut feeling tells me the congestion will clear before I get there. But these are minor things.

On the whole I think that, with good programming and the latest technology (better than today’s cutting edge), a driverless car could be safer, certainly in the not-too-distant future.
 
They are using autonomous vehicle technology that has been tested for years by the military.
Google is actually pioneering here, using different technology than the military uses (although the military DID devise GPS). Also, the test cars on the roads have only been involved in minor fender-benders, every one of them caused by human drivers. Either way, most of your objections regarding the tech can be addressed.
  1. The vehicle malfunctions and changes course or goes out of control at 70 mph.
Why would it, though? We’re not talking about your stove here; we’re talking about a car. Any system which supports life is generally designed with what is called a “fail-safe” software design (unless you’re dealing with hazardous chemical releases, such as halon, which require a fail-closed design). A vehicle malfunction would most likely result in the car shutting down immediately and coming to a stop at the side of the road.
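As a sketch of what “fail-safe” means in code (the vehicle interface below is a hypothetical name, not a real API), the point is that the failure mode is chosen in advance, and it is chosen to be the stopped car, not the runaway car:

```python
# Hypothetical fail-safe control loop: any unhandled fault degrades to
# the safe default (pull over and stop), never "keep going".

def drive_loop(vehicle):
    try:
        while vehicle.is_active():
            vehicle.sense()
            vehicle.plan()
            vehicle.act()
    except Exception as fault:           # any malfunction at all
        vehicle.hazard_lights_on()
        vehicle.pull_over_and_stop()     # the designed failure mode
        vehicle.report_fault(fault)
```

A real system would typically also have an independent hardware watchdog, so that even a frozen computer still triggers the safe stop.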
  2. The vehicle strikes a deer or pedestrian, or tries to avoid one, and faces many difficult scenarios: A) it avoids hitting a living being but hits a tree or wall; B) it cannot avoid hitting the deer because there are vehicles on either side of it. The same with a pedestrian.
None of these is a situation that would be any worse than a human faces now. The difference is that a computer is never distracted when driving, can see through fog, and can communicate wirelessly. Let’s take your example of heading toward an obstacle with a self-driven car on either side. My car senses the object and, in less than a thousandth of a second, calculates that it can’t dodge. It sends a wireless signal to the car next to it. That car reacts within a quarter of a second and hits the brakes while my car accelerates and changes lanes. All three cars avoid the obstacle in the road.

For a human driver, the ability to detect, calculate, make a decision at that speed, and communicate that decision to other vehicles on the road is completely impossible.
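In code, the handshake described above might look something like this; the message fields, timing, and vehicle interface are all illustrative assumptions:

```python
# Hypothetical V2V avoidance handshake between two self-driven cars.

def on_obstacle(my_car, neighbor_lane):
    if my_car.can_dodge():               # geometry check, ~1 ms
        my_car.swerve()
        return
    # Can't dodge alone: ask the neighbor to brake and take its lane.
    my_car.broadcast({
        "type": "LANE_REQUEST",
        "from": my_car.id,
        "lane": neighbor_lane,
        "deadline_ms": 250,              # neighbor must yield in 250 ms
    })
    my_car.accelerate_and_change_lane(neighbor_lane)

def on_message(car, msg):
    # The neighbor's side of the handshake: open a gap for the requester.
    if msg["type"] == "LANE_REQUEST" and msg["lane"] == car.lane:
        car.brake(hard=True)
```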
  3. In fog, human-operated vehicles can cause a massive chain-reaction crash in which the unmanned vehicle is unable to stop or swerve around the vehicle in front of it, hemmed in by a car next to it, a wall, or a barrier; if it chooses the barrier, it may be forced over into a ravine.
Again, you assume that a self-driven car is limited to human senses. It is not. It can “see” through fog using radar. Cars stopped in fog would immediately begin broadcasting the accident. Other cars would slow down and exit the road, immediately rerouting around the obstruction. No car would swerve into a barrier either. Self-driven cars are FAR more precise than human drivers, able to drive within an inch of a barrier and not hit it.
  4. The vehicle’s tracking and/or ‘route following’ equipment fails, and instead of making the next curve, it continues on a straight path, striking anything in the way.
These vehicles don’t function on GPS and nav systems alone. They also have light sensors which can detect lanes and radar which can detect objects. Even if GPS connectivity were lost, the vehicle would have no reason to hit anything; it would probably maintain a safe speed in the lane it’s in and notify the driver to take over navigating.
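A sketch of that degraded mode, with assumed sensor and control interfaces:

```python
# Hypothetical fallback when GPS drops out. The sensor and control
# methods are assumed names, not a real API.

def navigation_tick(car):
    if car.gps.has_fix():
        car.follow_route(car.gps.position())
    else:
        # Local sensors still work: stay in the lane at a safe speed.
        lane = car.camera.detect_lane()          # the "light sensors"
        obstacles = car.radar.detect_objects()
        car.keep_lane(lane, obstacles, max_speed_mph=45)
        car.alert_driver("GPS lost - please take over navigation")
```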
  5. The vehicle hits a patch of ice and rolls over or changes direction. If this happens at over 55 mph, there may be no time for other vehicles/drivers to react.
Only NASCAR and Formula 1 drivers have shown themselves to be better at manipulating brakes than a computer (although independent four-wheel ABS is better than they are). A computer would be able to calculate the right amount of steering and braking in less than a thousandth of a second; a human can’t do the same. Additionally, the car can INSTANTLY begin communicating with all the other self-driven cars around it, allowing them to react INSTANTLY to get out of the way and avoid collisions. The car could even digitally flag an icy spot as it passes over it, allowing future cars coming through to avoid that spot.
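The icy-spot flag is essentially a geotagged broadcast. A sketch, with an assumed message schema and vehicle interface:

```python
import time

# Hypothetical hazard flag: a car that detects wheel slip geotags the
# spot and broadcasts it so that following cars slow down there.

def flag_ice(car):
    if car.traction.slip_detected():
        car.broadcast({
            "type": "HAZARD",
            "kind": "ice",
            "lat": car.gps.lat,
            "lon": car.gps.lon,
            "time": time.time(),
            "ttl_s": 3600,               # assume flags expire after an hour
        })

def on_hazard(car, msg):
    # A following car checks whether the flagged spot is on its route.
    if msg["kind"] == "ice" and car.route.passes_near(msg["lat"], msg["lon"]):
        car.schedule_slowdown(msg["lat"], msg["lon"], target_mph=25)
```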
On the legal front: Who will be held responsible for any injuries, deaths and property damage?
Those are actually good questions. I’m going to assume that if the car is properly maintained, insurance and the manufacturer would be liable. If it is improperly maintained, or if someone has taken control back from the car, the driver/owner would be liable.
If enough lawsuits are filed, the cars will be ruled unsafe and will be taken off the roads.
That would be a shame, because current projections suggest that self-driven cars could eliminate something like 90% of traffic fatalities…
 
On the legal front: Who will be held responsible for any injuries, deaths and property damage? If enough lawsuits are filed, the cars will be ruled unsafe and will be taken off the roads.
Simple: the legal landscape will change. Federal and state programs will work with the automotive companies to handle this, one way or another: the benefits of auto-pilot vehicles are too huge to be halted, and all of the scenarios you mention are dwarfed by the number of fatalities produced by human clumsiness or irresponsibility.

People will be able to work (or fiddle) during commute time the way they can on public transportation, but in a private and undisturbed atmosphere. That is massive. I just spent 3 hours on the road today that could have been 3 hours of math work.
 
Driving is a very constrained activity; there is no reason why such a vehicle should even theoretically have “unlimited freedom” even if such were computationally possible (which it’s probably not).

IMNAAHO.

ICXC NIKA
 
How much freedom should the designer give to these autonomous gadgets? Think about it, and about the philosophical ramifications of “free will.”
If Google cars have free will, does that mean that they send themselves to the compactor?
 
How much freedom should the designer give to these autonomous gadgets? Think about it, and about the philosophical ramifications of “free will.”
As someone who is majoring in a tech area, I can tell you that self-driving cars will never have free will. Everything the car does will have been programmed into it. (Simple example: the programmer can stipulate that IF something is recognized by a certain sensor AND by another sensor AND [insert other conditions here], THEN the car should do maneuver X.) The car doesn’t literally decide what to do. It merely checks conditions that the programmer has written.
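That IF/AND/THEN structure is literal. Here is a sketch in Python, with made-up sensor names and thresholds, of exactly the kind of rule being described:

```python
# The car "decides" nothing: it evaluates conditions a programmer wrote
# in advance. The sensor names and thresholds here are made up.

def control_step(lidar, radar, speed_mph):
    obstacle_ahead = lidar.object_within(meters=30)   # one sensor
    confirmed = radar.object_within(meters=30)        # another agrees
    moving_fast = speed_mph > 25                      # other conditions

    # IF ... AND ... AND ... THEN do maneuver X.
    if obstacle_ahead and confirmed and moving_fast:
        return "EMERGENCY_BRAKE"
    return "CONTINUE"
```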
 
Google and several other companies are working on developing cars which can evaluate the surroundings, make decisions about the route to take, avoiding congestions, and getting to their destination safely and quickly. There are already actual prototypes on the roads in several states. This is not science fiction, this is reality. The question is this:

How much freedom should the designer give to these autonomous gadgets? Think about it, and the philosophical ramifications of “free will”?
Since the machine is a slave to its structure and programming, and thus has no will of its own, no designer can give it freedom of will.
 
I think this will be a great thing for the average person. Concerns about potential accidents, liability, etc. will eventually be moot; this is why they are doing long-term testing in certain areas, to work all the bugs out, so that when this goes nationwide it will be a successful transition.

It will be bad news for auto insurance companies, though, and bad news for auto body repair companies too, as accidents will become a thing of the past. Eventually all the bugs will be worked out and there will be little need for auto insurance. Of course, I imagine there will still be a need for some level of insurance, to cover trees falling on cars and things like that, but in general there will be no wrecks caused by human error.

This will also be bad news for counties relying on revenue from DUIs, as those will be a thing of the past as well. But this is what progress looks like: eventually some industries are no longer needed.

I look forward to when this is nationwide; it should bring the cost of driving down.
 
A vehicle malfunction would most likely result in the car shutting down immediately and coming to a stop at the side of the road.
Looking at the number of factory recalls for cars intended for drivers, there is no guarantee the car will be assembled correctly.

Ed
 