A new "win" for Artificial Intelligence

  • Thread starter: Solmyr

Status: Not open for further replies.
So what is the difference between a religion that is shallow and meaningless and superstition?
That reminds me of the question:

Q: which people belong to a religious sect?
A: those who attend the church next to yours. 🙂

And then those who attend that other church are just superstitious, since they worship some “false” gods.

So easy and obvious. 😉
 
The proposal was that superstitions as far as AI is concerned won’t last long.
Superstitions perhaps.
But religion is deeper.
Your religion?
Nope, wrong.
So just the deep and meaningful ones?
And wrong again.
So we are left with other people’s religions (not just yours) and ones that are not just deep and meaningful, so we can include those that are shallow and meaningless.

Again I will ask the question, based on the original proposal: What do you consider to be the difference between superstition and religion where the religion is not just yours and would include ones that are not deep and meaningful (and therefore by inference, shallow and meaningless)?
And then those who attend that other church are just superstitious, since they worship some “false” gods.
I think that vz has inadvertently painted himself into a corner. He wants to hold ‘religion’ up against superstition (not surprisingly) but graciously doesn’t want to simply nominate his own, but…is left holding a candle for those that aren’t ‘deep and meaningful’.
 
The proposal was that superstitions as far as AI is concerned won’t last long.
Yes.
That was the proposal.
And that precisely is what I responded to.

Your attempts to place words in my mouth missed the mark each time.
Superstitions and religions are not the same thing.
And as such will be treated differently in this theoretical AI.

If the optimists here wish to believe man will create an AI that can think and reason philosophically, then they need to address the inevitable: that the machine may well create its own religion.
 
The smartest computer at the moment is about as smart as a four-year-old: gizmodo.com/one-of-the-worlds-best-a-i-computers-is-as-smart-as-a-791902984.

Human IQ rises at about 3 points per decade. So in 75 years you’ll be a lot smarter.

But the rise in computer power will be exponential. What put men on the moon only 45 years ago is a lot dumber than this iPad on which I’m typing. In my business I have gone from producing drawings by scratching lines on some paper, just the same as the Ancient Egyptians would have done on a piece of papyrus, to virtual reality.

You’d never heard of the term Google a few years back, and now you have more information literally at your fingertips than anyone who has ever lived. For a few dollars a week.

You won’t believe what is going to happen in the next few years. You need to be prepared for it. We all do. Pretty soon it will be too fast to logically control.
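As a rough sketch of what “exponential” means here, assuming (purely for illustration) a Moore’s-law-style doubling of computing capacity every two years; the doubling period is my assumption, not a claim from the thread:

```python
# Back-of-the-envelope sketch of exponential growth in computing capacity.
# The two-year doubling period is an illustrative assumption, not a measured fact.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    # 45 years of doubling gives 2**22.5, roughly a 5.9-million-fold increase.
    for years in (10, 45, 75):
        print(f"{years} years -> about {growth_factor(years):,.0f}x")
```

Compare that with the linear Flynn-effect figure quoted above (about 3 IQ points per decade): steady doubling overwhelms any linear trend, which is the poster’s point.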
I like your idealism.
But increasing speed isn’t going to fool the people who build those faster machines into thinking that the machine is ‘smarter’ than its creator.

Name me one earth-shattering, awesome, technological invention from the past 5,000 years which we don’t now view as mundane…nay…archaic!

What you’re arguing is that 25th Century kids won’t (still) be doing the equivalent of the way my daughter reacts when I try explaining what a ROM pack meant to Atari gamers or what a dial-up modem ‘handshake’ sounded like. Or the way kids in the 80’s looked at black & white television wondering what it must have been like in the old days of the 1970’s. Or the way people in the 1920’s viewed the motor car and wondered why it had taken so long to invent a machine that could outrun a horse.

I think you underestimate the ability of the human mind to evolve ahead of the pace of technology - technology which is wholly invented/designed and built by humans.

Answer the challenge - Name me one earth-shattering, awesome, technological invention from the past 5,000 years which we don’t now view as mundane with the benefit of hindsight.

Saying that a computer is smarter than a human is like saying that a bulldozer is stronger than a human.
 
Answer the challenge - Name me one earth-shattering, awesome, technological invention from the past 5,000 years which we don’t now view as mundane with the benefit of hindsight.
I agree with you. There isn’t one.

Time was I could strip a car and reassemble one. And I could teach someone from the 15th century how to do it. It was basic mechanical engineering. Same with a plane. Flight is pretty basic engineering as well. When computers came out I could programme the things from first principles. I knew how they worked.

But now…

You don’t list your whereabouts, but even if you are on the other side of the planet, we could, within a few minutes and with just a few keystrokes, hold a video call with each other. And we’re reaching the point where I have a rough idea how it works but it’s getting away from me. I cannot intuitively grasp how a picture can make its way from your iPad or PC through different waypoints, up into orbit (into orbit!), back down and make its way to my house and then somehow jump from the box in the corner to this screen.

And this type of technology is gaining speed. This isn’t something that many people understand at all. More and more people know less and less about how the world works.

I think you are looking at it the wrong way. We weren’t blasé about cars at first because we had to keep the things working ourselves; most people could work on them. But have you ever looked under the hood of a modern car? Now we are blasé about them. Turn the key and expect it to start. Hit the keys on your iPad and expect to talk to family in another country in a few seconds. Sooner than I’d care to guess, you’ll just walk into your house and make a verbal request to be connected to a hologram of the same family.

Depending on your age, either your children or your grandchildren will accept things that we can’t envisage as just part of normal life. And they will have no idea whatsoever how they work.
 
I agree with you. There isn’t one.

[…]

Depending on your age, either your children or your grandchildren will accept things that we can’t envisage as just part of normal life. And they will have no idea whatsoever how they work.
It all goes to show that the power of the mind is greater than anything else in the universe when it comes to creative hindsight, insight and foresight, a fact noted by Pascal (who was no fool) over three hundred and fifty years ago. If he hadn’t died at the age of 39 he would have added to his list of discoveries…
 
I like your idealism.

[…]

Saying that a computer is smarter than a human is like saying that a bulldozer is stronger than a human.
Precisely! Brute force doesn’t equal a child’s intelligence.
 
[…]

The optimists here wish to believe man will create an AI that can think and reason philosophically, then they need to address the inevitable: that the machine may well create its own religion.
A machine cannot understand itself let alone explain the nature and purpose of existence!
 
[…]

Again I will ask the question, based on the original proposal: What do you consider to be the difference between superstition and religion where the religion is not just yours and would include ones that are not deep and meaningful (and therefore by inference, shallow and meaningless)?
You are assuming religions have nothing in common, even though they all share the same fundamental moral, social, personal and spiritual values and principles, about which scientists tell us nothing whatsoever…
 
I’m going to in all seriousness ask what may sound like a silly question. But in this context, what is evil?
Evil is an absence of good.

The good thing about AI is that it uses reason. The bad thing about AI is that it can reason itself into doing evil.

All you have to do is see how people rationalize and justify all kinds of evil.

AI is unable to distinguish truth from falsity. It assumes everything is true until a contradiction happens. Then where does it go?

Then the programmer must step in and program his or her moral code into the system. Oops. Then it is not really intelligent, because it did not figure it out.

Microsoft’s bot got racist. OK, so time to insert code that says racism is wrong. Then wait for the next contradiction. But in this case there was no contradiction yet, according to the program; the contradiction was between the software requirements and its functionality.
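The “assume everything is true until a contradiction happens” behavior described above can be sketched as a toy belief base. The class and method names here are hypothetical, for illustration only; no real AI system works this simply:

```python
# Toy sketch of "accept every statement until a contradiction appears".
# Hypothetical illustration only; no real AI system is being modeled.

class NaiveBeliefBase:
    def __init__(self):
        self.beliefs = set()

    def tell(self, statement):
        """Accept `statement` unless it contradicts an existing belief.

        "not X" is treated as the negation of "X". Nothing is checked
        against the real world, which is exactly the problem described.
        """
        if statement.startswith("not "):
            negation = statement[4:]
        else:
            negation = "not " + statement
        if negation in self.beliefs:
            return False  # contradiction noticed only now
        self.beliefs.add(statement)
        return True       # accepted blindly, true or not

kb = NaiveBeliefBase()
kb.tell("this joke is harmless")      # accepted without judgment
kb.tell("not this joke is harmless")  # only here is a conflict flagged
```

When `tell` returns `False`, the system has no principled next step of its own; someone has to step in and program a moral code into it, which is the post’s point.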
 

There was Deep Blue, then there was Watson, and now it is AlphaGo. Hopefully very soon artificial intelligence will leave human achievements in the “dust”…
This, I think, is a “dog bites man” story.

When AlphaGo can go against its programming code and “sin,” so to speak, then we have a “man bites dog” story.
 
evil is an absense of good.
Okay, using that definition how does one determine whether there is an absence of good in something? What about something that has a mixture of constructive and destructive outcomes?
The good thing about AI is that it uses reason.
I’m going to ask what will sound like another silly question: what do you mean when you say that “it uses reason”? Also, since you are using the word “good” to refer to AI, it sounds like you are saying that there’s not an absence of good in AI?
Then the programmer must step in and program his or her moral code into the system. Oops. Then it is not really intelligent because it did not figure it out.
This gives me the impression you are viewing most AIs as attempts at strong AI. I don’t think strong AI is their goal. To quote an explanation provided in another thread:
ThinkingSapien said:
"Artificial Intelligence: A Modern Approach" (Second Edition)
Stuart Russell and Peter Norvig

[…]

"[…] the assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulated thinking) is called the strong AI hypothesis.

[…] Most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis – as long as their program works, they don’t care whether you call it simulation of intelligence or real intelligence […] Arguments for and against strong AI are inconclusive. Few mainstream AI researchers believe that anything significant hinges on the outcome of the debate."
Some have started using a different vocabulary to close the gap between the researchers’/developers’ intentions and how the general public interprets words (e.g., “machine learning”).
Microsoft bot got racist.

Microsoft’s Tay was a demonstration of some other services that Microsoft offers (I’ll be using some of them today), among which was a component for discovering a grammar. A couple of weeks ago they showed an application they had made for Domino’s Pizza that had initially been taught a grammar for acting on submitted orders but could adapt to new phrases (the example given was someone ordering pizza for their “crib”, which resulted in the AI’s vocabulary being expanded to treat that the same way as “home”). If you were able to get enough people to start using some other offensive phrase in a pizza order, then the pizza-ordering bot would adopt that phrase into its vocabulary too. I wouldn’t treat the bot as being evil any more than I would treat a parrot or a voice recorder as racist for repeating a phrase said to it. Rather, I would view this as a reflection of the person or people being repeated. The repeating agents themselves (the voice recorder, parrot, or AI agent) I see as being amoral.
Searching through Tay’s tweets (more than 96,000 of them!) we can see that many of the bot’s nastiest utterances have simply been the result of copying users. If you tell Tay to “repeat after me,” it will — allowing anybody to put words in the chatbot’s mouth.
The Verge
I agree that Microsoft will need to develop protections against their chat bots being influenced to utter something offensive, though I think that’s easier said than done. As I understand how the word “racism” is used in modern times, it seems inclusive of anything said with the intention of being offensive or harmful that was motivated by or includes references to someone’s racial or ethnic classification. Specific words could be marked as prohibited, but the prohibition of these words doesn’t prevent someone from using more subtle references. Used in certain ways, the word “boy” could be said to be invoking racist sentiments. But it could also be used to refer to a young male. There are times when it’s not clear what someone’s intentions were. To add to the complexity, phrases that are considered racially offensive change over time and may still be okay within certain contexts.
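The grammar-adaptation behavior described above (“crib” becoming a synonym for “home”) can be sketched as a toy synonym learner. The class and method names here are hypothetical, not Microsoft’s actual API:

```python
# Toy sketch of a bot that widens its vocabulary from user phrases.
# Hypothetical names for illustration; this is not Microsoft's real service.

class OrderBot:
    def __init__(self):
        # Seed grammar: words the bot already understands as a delivery target.
        self.synonyms = {"home": "home", "house": "home"}

    def learn(self, new_word, known_word):
        """Start treating `new_word` the same way as an already-known word."""
        self.synonyms[new_word] = self.synonyms[known_word]

    def parse_destination(self, word):
        """Return the canonical destination for `word`, or None if unknown."""
        return self.synonyms.get(word)

bot = OrderBot()
bot.parse_destination("crib")   # unknown at first, returns None
bot.learn("crib", "home")       # adapt, as in the Domino's demo
bot.parse_destination("crib")   # now mapped the same way as "home"
```

The same mechanism that maps “crib” to “home” would just as happily adopt an offensive phrase, because the learner copies usage without judging it; that is why the agent itself reads as amoral rather than racist.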
 
A machine cannot understand itself let alone explain the nature and purpose of existence!
I guess that makes it very clear that you are not one of the optimists I spoke of.

And I agree with you.
I believe creation of a self-aware intelligence is something only God will be able to grant as only God has the understanding of this intelligence.
 
This, I think, is a “dog bites man” story.

When AIphaGo can go against its programming code and “sin,” so to speak, then we have a “man bites dog” story.
There is an IBM transaction processing system called “CICS” (Customer Information Control System) which is well over half a million lines of assembler code. It was developed over decades, and it is still in use. There is no single human who could understand all its code. So how can anyone establish whether the behavior of a hugely complex system goes against its original code? Answer: no way. These systems are not developed by a small group of people in a short period of time.

To show you the difficulties, here is a ONE line “C” program. Try to figure out what it does.
Code:
#include <stdio.h> 
main(t,_,a) 
char *a; 
{ 
return!0<t?t<3?main(-79,-13,a+main(-87,1-_,main(-86,0,a+1)+a)): 
1,t<_?main(t+1,_,a):3,main(-94,-27+t,a)&&t==2?_<13? 
main(2,_+1,"%s %d %d\n"):9:16:t<0?t<-72?main(_,t, 
"@n'+,#'/*{}w+/w#cdnr/+,{}r/*de}+,/*{*+,/w{%+,/w#q#n+,/#{l+,/n{n+,/+#n+,/#\ 
;#q#n+,/+k#;*+,/'r :'d*'3,}{w+K w'K:'+}e#';dq#'l \ 
q#'+d'K#!/+k#;q#'r}eKK#}w'r}eKK{nl]'/#;#q#n'){)#}w'){){nl]'/+#n';d}rw' i;#\ 
){nl]!/n{n#'; r{#w'r nc{nl]'/#{l,+'K {rw' iK{;{nl]'/w#q#n'wk nw' \ 
iwk{KK{nl]!/w{%'l##w#' i; :{nl]'/*{q#'ld;r'}{nlwb!/*de}'c \ 
;;{nl'-{}rw]'/+,}##'*}#nc,',#nw]'/+kd'+e}+;#'rdq#w! nr'/ ') }+}{rl#'{n' ')# \ 
}'+}##(!!/") 
:t<-50?_==*a?putchar(31[a]):main(-65,_,a+1):main((*a=='/')+t,_,a+1) 
:0<t?main(2,2,"%s"):*a=='/'||main(0,main(-61,*a, 
"!ek;dc i@bK'(q)-[w]*%n+r3#l,{}:\nuwloca-O;m .vpbks,fxntdCeghiry"),a+1); 
}
I repeat. ONE line, in a high level language. Not over half a million lines of assembler code. And the systems under development are capable of self-modification. There is absolutely no way to decipher how they work.
 
[…] To show you the difficulties, here is a ONE line “C” program. Try to figure out what it does. […] There is absolutely no way to decipher how they work.
Well, let’s see here… first clear the parentheses, then google some lines of the code, and presto: “Reverse Engineering the Twelve Days of Christmas” by Thomas Ball at Microsoft Research pops up.

The point, I think, is that whatever men invent can be understood by other men, given inclination and time. But when men attempt to understand the Creator’s beings, not so much.
 
A machine cannot understand itself let alone explain the nature and purpose of existence!
As well as being optimists we are realists if the principle of adequacy is anything to go by! The flaw in the objection that it leads to an infinite regress is the assumption that God is in the same category as other causes instead of being “He Who Is”…
 