Is AI an existential threat?

Thread starter: Samwise21 (Guest)

You’ve probably heard the words of Bill Gates, Stephen Hawking, Elon Musk and the like, who say that the governments of the world should heavily regulate and observe the development of artificial intelligence to prevent it from evolving into an uncontrollable superintelligence, basically a determination that Skynet or AM could be in our future and that we’re past the point of no return. The other version is that AI will cause an economic downfall that would put millions of both blue-collar and white-collar workers out of jobs, or even reach a point where some would reject their humanity and attempt to merge their bodies with machines to become transhuman. What I ask is this: is this a sort of fear porn pressed upon us, or could AI be seen as creating our own destruction?
 
I’m sure DARPA has seen the Terminator movies. Only limited AI will be allowed. It will be isolated from similar systems. It will not need a body. Multiple scenarios are involved and new ones are being developed as reports come in. Transhuman is a silly, worthless concept. The ‘Luke Arm’ was recently unveiled and given to two veterans. The company that designed it was not mentioned, just DARPA.

AIs will serve two primary functions: military and civilian. The military will use them for problem-solving in the design of new weapon systems and platforms, among other things. In a civilian role, they will be very limited. AIs must serve human needs, and if human beings have no jobs, even the AIs will lose theirs. There is no way around that.

AIs have no needs, desires or goals aside from what humans give them. They can be virtually immortal if maintained.

Finally, there will be multiple “off” switches in place, along with the option of simply blowing it up.
 
Unfortunately, yes. The best we can do is push for protections against the threat.
 
I tend to think that AI is not in and of itself a threat. What is a threat is how the opulent class will use it to their advantage and to the detriment of the rest of us. This has been the case with every new technology. The cotton gin drove the slave trade, and television and radio helped the likes of Walter Lippmann instruct those in power on how to manipulate the minds of the middle class. AI may end up being the ultimate means by which the powerful subjugate the masses, unless we start thinking seriously about some form of UBI. Otherwise, the human intellect may turn out to have been a terminal mutation.
 
I am more concerned about radical secularism than I am about AI.
 
I only hope and pray that the scientists who create AI don’t get it into their heads that AI is some sort of path to godhood. The chances of it resulting in Frankenstein are there and everyone knows it.
 
No. It’s ridiculous to even think that this could happen at our current level of computer technology. Musk and the like are talking nonsense. Our computers still only do what they are programmed to do. There’s no danger of them “evolving”. And we always have the ability to simply turn them off.
 
I don’t think we’re anywhere near creating true, self-aware AI that can make decisions outside programming parameters.

I think it’s likely that existential risk (or at least catastrophic risk) is increasing as the use and complexity of automated systems increases but AI…? There’s no ghost in the machines we create…
 
Only limited AI will be allowed.
Pretty likely, IMO, that wouldn’t happen. ASI (Artificial Superintelligence) would be a powerful tool for the first country to get it, and, given the rate at which intelligence can increase, for only the first country to get it.
It will be isolated from similar systems.
We’ve had animals escape the limited regions we allowed them to operate in. Humans can escape our jails. An advanced intellect could surely find a way past our restraints.
AIs have no needs, desires or goals aside from what humans give them. They can be virtually immortal if maintained.
Even a simple goal can backfire. A classic example: the quickest way to kill all the fleas on a dog is to burn the dog (see the toy sketch at the end of this post).
Finally, there will be multiple “off” switches in place
As noted above, an ASI could find a way to disable those.
along with the option of simply blowing it up.
In the age of the internet, that requires a lot of blowing things up.
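To make that flea-and-dog point concrete, here is a minimal toy sketch in Python (all actions, numbers, and names are invented for illustration, not taken from any real system): an optimizer handed the literal goal “minimize fleas” prefers a catastrophic action, because nothing in the objective says the dog matters.

```python
# Toy illustration of goal misspecification (hypothetical values throughout).
ACTIONS = {
    "flea shampoo": {"fleas_left": 5,  "dog_ok": True},
    "flea collar":  {"fleas_left": 12, "dog_ok": True},
    "burn the dog": {"fleas_left": 0,  "dog_ok": False},
}

def naive_objective(outcome):
    # Counts only fleas; the dog's welfare is not part of the goal.
    return outcome["fleas_left"]

print(min(ACTIONS, key=lambda a: naive_objective(ACTIONS[a])))  # burn the dog

def safer_objective(outcome):
    # Adding the constraint we actually meant changes the answer.
    return outcome["fleas_left"] + (0 if outcome["dog_ok"] else 10**6)

print(min(ACTIONS, key=lambda a: safer_objective(ACTIONS[a])))  # flea shampoo
```

The difference between the two objectives is exactly the part the humans forgot to say out loud.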
 
There are no self-aware AIs but AIs exist that can learn and correlate information.
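To make “learn and correlate” concrete in that narrow, non-self-aware sense, here is a minimal self-contained sketch (pure Python; the data points are invented for illustration): fitting a line to observed pairs and using it to predict a new value.

```python
# Ordinary least squares by hand: "learn" a linear pattern from data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = cov(x, y) / var(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"learned: y = {slope:.2f}x + {intercept:.2f}")      # y = 1.99x + 0.09
print(f"prediction for x=6: {slope * 6 + intercept:.2f}")  # 12.03
```

Nothing here wants anything; it just compresses a pattern, which is all today’s systems do at any scale.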
 
It appears you don’t read the technical literature. None of the AIs will have access to the internet used by the peasants. The military has its own version, which is isolated from the peasant version. If need be, portions of the peasant internet could be shut down; that version of the internet is nothing to them.
 
Science is evolving quickly. It’s only a matter of time until humans and computers combine in some fashion. It’s already happening (as I type from my iPhone). As bionic body parts/joints/organs replace decaying parts, the body will live vastly longer.

Imagine being able to control something remotely, like a Nest thermostat from work. Then imagine the ability to open or close a garage door, start the oven, or mow the grass from another location.

Now imagine that you could control a robot from your desk, or from a sailboat. If the robot had a human form, others might not even know it was artificial. You could have more than one robot doing tasks on your behalf and at your direction simultaneously, receiving feedback and stimulus responses remotely, almost allowing you to be in more than one place at a time.

The robots are already at the mercy of programmers, who have to use moral values to decide, for example, whether an autonomous car with an imminent accident ahead should veer toward a school bus full of children or toward an elderly pedestrian. “Choices” will be required. Maybe not in the ‘robot-becomes-self-aware’ vein, but in the need to make choices that have downstream consequences. The key will be how to assign an objective function (see the sketch just below). This will be driven, in part, by the ethos of the programmer.
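A hypothetical sketch of what “assigning an objective function” means in practice (every name, option, and weight here is invented for illustration): the programmer’s weights, not the machine’s judgment, determine the choice.

```python
# Toy harm-minimizing chooser; the ethics live entirely in the weights.
def expected_harm(outcome, weights):
    # Weighted count of people put at risk by a maneuver.
    return sum(weights[kind] * count for kind, count in outcome.items())

options = {
    "veer_toward_bus":        {"child": 20, "elderly": 0},
    "veer_toward_pedestrian": {"child": 0,  "elderly": 1},
}

weights = {"child": 1.0, "elderly": 1.0}  # set by a programmer, in advance

choice = min(options, key=lambda name: expected_harm(options[name], weights))
print(choice)  # veer_toward_pedestrian, under equal per-person weights
```

Change the weights and the choice changes, which is the point: the “ethos” is fixed by someone at a desk long before the car ever faces the situation.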
 
could AI be seen as creating our own destruction?
It could be, that’s for sure. If you have a lot of concern about AI taking over, I can recommend listening to Ground Zero, a nationwide program originating in Portland, Oregon, at night. The host, Clyde Lewis, is very up on this possible problem.
 
Sounds like you’ve been reading Dan Brown’s Origin 🙂 Seriously though, there is no way of knowing. We do need to be more present with each other, without staring at our phones (the irony, as I’m writing this on my phone, but I’m by myself so I’m not neglecting anyone). Even Pope Francis has mentioned smartphones as being an issue.
 
A few bionic parts are here. Self-driving cars are nowhere near ready. I wouldn’t want to watch my muscles atrophy from not doing things; we have muscles that need to be used or they become weaker. I would not pay money for a humanoid robot. The robots have no identity; they are not at the mercy of anyone, but we are. If a robot accidentally does something wrong that results in property damage or personal injury, who is to blame? The owner. And bionic parts don’t mean you won’t die if shot by a robber.
 
Self-driving cars are nowhere near ready.
Every time I turn around, I see self-driving cars on the road. They are really devoting a lot of effort to the project; I can’t think it’s that far down the line.
 
It is a false problem. It’s like saying anyone could get their hands on a nuclear weapon. These systems will be highly protected. All of the scenarios have been run by experts. Whoever Clyde Lewis is, he has obviously not considered the fact that such advanced systems will be highly classified. The bits and pieces given to the public represent only the tip of the iceberg.
 
One death has been reported, but I suppose that would be considered an acceptable loss, like a military test pilot dying in a crash.
 