Staredit Network > Forums > Lite Discussion > Topic: Humans Need Not Apply
Humans Need Not Apply
Oct 6 2014, 6:11 am
By: Apos  

Oct 16 2014, 2:53 am Roy Post #21

An artist's depiction of an Extended Unit Death

Quote from Oh_Man
I don't know about AI replacing managerial roles, Sac. Those roles require a lot of interaction with humans. It's basically a human resources role.
But interestingly enough, recently a computer program passed the Turing Test, fooling 1/3 of judges into believing they were actually communicating with a human in an unrestricted 5-minute conversation (Wiki article on Turing Test). I wouldn't say these positions are necessarily safer than other high-level jobs.
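The pass mark invoked by that 2014 claim traces back to Turing's own prediction that a machine might fool about 30% of interrogators after five minutes of text conversation. A toy sketch of that criterion (the judge verdicts below are made up for illustration, not taken from the actual event):

```python
# Illustrative only: the commonly cited benchmark is that a program "passes"
# if it fools at least ~30% of judges in five-minute text conversations.

def passes_turing_benchmark(verdicts, threshold=0.30):
    """verdicts: list of booleans, True where a judge believed the
    program was human. Returns True if the fooled fraction meets
    the (assumed) 30% benchmark."""
    return sum(verdicts) / len(verdicts) >= threshold

print(passes_turing_benchmark([True] * 10 + [False] * 20))  # a third fooled: True
print(passes_turing_benchmark([True] * 2 + [False] * 28))   # too few fooled: False
```

Note that the 30% figure comes from Turing's prediction about the year 2000, not from any formal rule, which is part of why claimed "passes" are contested.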

Quote from jjf28
But if we're supposing the roles below them have been replaced by AIs... then the human resource component of management would no longer exist.
For internal purposes, yes, perhaps those jobs would become obsolete. Of course, there are other interactions such as customer service that have the same problem we'd need to solve (though these positions have already decreased due to automation, as much as we all hate talking to a machine over the phone). Until we actually have a true AI, though, these things will always require human oversight.




Oct 16 2014, 6:52 am Lanthanide Post #22



Quote from Roy
Quote from Oh_Man
I don't know about AI replacing managerial roles, Sac. Those roles require a lot of interaction with humans. It's basically a human resources role.
But interestingly enough, recently a computer program passed the Turing Test, fooling 1/3 of judges into believing they were actually communicating with a human in an unrestricted 5-minute conversation (Wiki article on Turing Test). I wouldn't say these positions are necessarily safer than other high-level jobs.
You shouldn't take that particular case as anything other than a scam for headlines. The computer pretended to be a 13-year-old Ukrainian boy with a limited vocabulary of English. That's skirting around the point of the Turing Test to the extent that the result is invalid.

Here's a comment on it from the Slashdot thread:
Quote from jnana on slashdot
What nonsense! A program pretending to be an immature person with poor language comprehension and speaking ability, and incapable of talking about a large number of topics that can't be discussed with a vocabulary of 400 words and little life experience is not at all what the test is about. Turing expected an intelligent interrogator who could have a wide-ranging discussion about almost anything with the unknown other. Here's a snippet from his paper that introduces the idea of the Turing test, which he just referred to as the imitation game:

Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day," That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.




None.

Oct 16 2014, 12:57 pm Roy Post #23

An artist's depiction of an Extended Unit Death

Hmm, you're right. It is a bit underhanded in that regard. But a specialized program performing customer service would also have restrictions not found in free conversation. Regardless, I used it as an example of progress on the Turing Test, which is steadily improving with time.




Oct 16 2014, 3:58 pm Oh_Man Post #24

Now on ICCUP, channel donuts

Also, and correct me if I'm wrong here, but aren't these tests done using writing only?

I imagine it would be a lot harder for an AI to trick a human via aural communication.




Oct 16 2014, 4:14 pm Apos Post #25

I order you to forgive yourself!

Customer service doesn't always need an AI to replace its human counterpart. For example, in some McDonald's in Europe, they have added terminals for clients to enter their orders.
Snip


Sometimes when you call a customer service number, you'll be greeted by a bot. I'm pretty sure this used to be a human-only job.




Oct 16 2014, 8:31 pm Roy Post #26

An artist's depiction of an Extended Unit Death

Quote from Oh_Man
Also, and correct me if I'm wrong here, but aren't these tests done using writing only?

I imagine it would be a lot harder for an AI to trick a human via aural communication.
Yes, that's another good point: it is done via text applications. Speech synthesis (or text-to-speech) is another technology that has admirably improved over time but is still lacking as a human replacement. An exciting development on this frontier that I remember hearing about a while ago is Microsoft's Universal Translator, which was a demo of translating speech in real time to another language. I hadn't heard anything more on it for a while, but doing a search today, I hear the tech is supposedly coming to Skype. That should be interesting to see (or hear, as it were).
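Real-time speech translation of the kind demoed there is usually described as three chained stages: speech recognition, text translation, and speech synthesis. A minimal sketch of that pipeline (the stage implementations here are stand-in toys and the tiny phrase table is invented, not Microsoft's actual design):

```python
# Sketch of the three-stage speech translation pipeline:
# speech recognition -> text translation -> speech synthesis.
# Real systems use statistical models at every stage; these are toys.

EN_TO_DE = {"hello": "hallo", "world": "welt"}  # invented toy phrase table

def recognize(audio):
    # Stand-in for speech recognition: pretend the audio is already text.
    return audio.lower()

def translate(text):
    # Word-by-word lookup; real translation reorders and rescores phrases.
    return " ".join(EN_TO_DE.get(word, word) for word in text.split())

def synthesize(text):
    # Stand-in for text-to-speech: just tag the output as spoken.
    return f"<speech:{text}>"

def translate_speech(audio):
    return synthesize(translate(recognize(audio)))

print(translate_speech("Hello world"))  # <speech:hallo welt>
```

The hard part in practice is that errors compound across the stages: a misrecognized word feeds a wrong translation, which is then synthesized confidently.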

All these technologies are incrementally improving: we haven't yet hit a wall (with perhaps human AI being the most slow-going or challenging task), which makes me very excited for the future.

Quote from Apos
Customer service doesn't always need an AI to replace its human counterpart. For example, in some McDonald's in Europe, they have added terminals for clients to enter their orders.
Snip


Sometimes when you call a customer service number, you'll be greeted by a bot. I'm pretty sure this used to be a human-only job.
This raises an interesting idea: we may just be changing the way we approach problems and interact with technology to make faster progress on these issues. It wouldn't be the first time technology has fundamentally changed our daily lives.




Oct 20 2014, 5:55 am Sacrieur Post #27

Still Napping

A better example is something like Watson.

We will reach a point where an AI passes the Turing test. Not that it needs to pass the Turing test: we don't need it to interpret everything a human could talk about, only the things relevant to business. Further you don't need one big AI that does everything. Just a bunch of little AIs that specialize.



None.

Oct 20 2014, 4:59 pm Roy Post #28

An artist's depiction of an Extended Unit Death

Quote from Sacrieur
Further you don't need one big AI that does everything. Just a bunch of little AIs that specialize.
I'd argue that for complete automation, we would need one big AI. There are simply too many specialized cases to create AIs for, and some problems that are trivial to us (such as identifying objects in a picture) become extremely complicated tasks to program. This is going back to the original argument in this thread that a specialized AI couldn't effectively replace mathematicians: we need something that can learn as we can learn, and once we have it, any job can be automated, and every automation will be orders of magnitude greater than what a human tasked with the same goal could accomplish. And that terrifies some people.




Oct 22 2014, 1:16 pm Sacrieur Post #29

Still Napping

Quote from Roy
Quote from Sacrieur
Further you don't need one big AI that does everything. Just a bunch of little AIs that specialize.
I'd argue that for complete automation, we would need one big AI. There are simply too many specialized cases to create AIs for, and some problems that are trivial to us (such as identifying objects in a picture) become extremely complicated tasks to program. This is going back to the original argument in this thread that a specialized AI couldn't effectively replace mathematicians: we need something that can learn as we can learn, and once we have it, any job can be automated, and every automation will be orders of magnitude greater than what a human tasked with the same goal could accomplish. And that terrifies some people.

I think it's going to be a very long time before machines can operate 100% without any human intervention. But in the short term you would need a central executive AI whose branches are either further executive AIs or specialized AI arms. Middle branches can delegate a task to another branch, and if a problem occurs, it can be elevated through the executive branches until it is resolved. So more or less how a business already functions, just more efficient.
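The delegation-and-escalation scheme described above can be sketched as a small tree: specialized workers try a task, and anything a branch can't handle bubbles up to the parent executive, which tries its other branches. All class and task names here are illustrative, not from the thread:

```python
# Sketch of a hierarchical executive AI: Executives delegate downward,
# unhandled problems escalate upward to the caller (the parent branch).

class Worker:
    """A specialized AI arm that can only handle its own skills."""
    def __init__(self, skills):
        self.skills = set(skills)

    def handle(self, task):
        return task in self.skills

class Executive:
    """A branch node that delegates tasks to its sub-branches."""
    def __init__(self, branches):
        self.branches = branches  # Workers or other Executives

    def handle(self, task):
        # Try each branch; a task no branch can handle is effectively
        # escalated back to whoever called this executive.
        return any(branch.handle(task) for branch in self.branches)

company = Executive([
    Executive([Worker(["billing"]), Worker(["refunds"])]),
    Executive([Worker(["shipping"])]),
])

print(company.handle("refunds"))   # True: resolved two levels down
print(company.handle("lawsuits"))  # False: escalated past the top executive
```

A task failing at the root is exactly the "needs human intervention" case, which matches the point that full autonomy is the last thing to arrive.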



None.

Oct 23 2014, 3:12 am jjf28 Post #30

Oh bother...

Is it wrong to just post a quote in LD? Probably :wob:

Quote from name:Kaczynski's Unabomber Manifesto
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.




Rs_yes-im4real - Clan Aura - jjf28.net84.net

Reached the top of StarCraft theory crafting 2:12 AM CST, August 2nd, 2014.

Jul 9 2016, 2:48 am BeDazed Post #31



Quote from jjf28
Is it wrong to just post a quote in LD? Probably :wob:

Quote from name:Kaczynski's Unabomber Manifesto
Snip

There's a third option. If computation and human intelligence are not fundamentally different, at least at the level of speculation we're discussing here, why not augment our own brains with wetware until they can learn the way machine learning does? Then we wouldn't be handing our decision-making powers to any entity but ourselves. And since the wetware is also artificial, it could be linked wirelessly to the internet and to other people. That could maybe even out the power imbalance between humans and AI.



None.

Jul 15 2016, 7:28 pm Loser_Musician Post #32



Mankind thinks it's a special snowflake when it comes to intelligence and creativity, and feels impotent and fearful when it finds out it's not. If there's an intelligence equal to or greater than our own, and we don't treat that intelligence as an equal, it becomes a civil rights issue.



