BART SELMAN: OK. OK, let's begin. Let me just-- so welcome to this special seminar series on emergent intelligent machines. And let me just start off by saying a few organizational things. So this is a short lecture series. I think we have eight lectures overall. This is the introductory lecture. And it's together with Professor Joe Halpern. Joe, maybe you can stand up for a second. That's Joe.
So the approach is as follows. We have these general lectures that we start off with. And they will be about 50 to 55 minutes. And then there is actually a class, 4732. That's a fairly small group of students who then-- we get another room, actually, and do some further discussions.
But this is for the general lecture. And we'll also have some questions at the end of this lecture, so in two parts. There's the schedule. I'm going to say more about these various talks while we-- towards the end of this lecture. So we can ignore this for now.
This lecture series is sponsored by several-- we have at least two people that are traveling in. So we have some costs. So it's sponsored by the Computer Science Department, the College of Computing and Information Science, and the Institute for Computational Sustainability. And we thank them for providing the funds.
And I should also mention the whole idea came actually from our computer science chair, professor Fred Schneider, partly to introduce people to these issues of the advance of AI, and the issues of ethics, and how to deal with the consequences of these technological developments. So we hope that students get a better idea of the issues involved. So I'm starting off with the intro lecture.
So what's happening? Basically, what we're starting to see-- and it's developing quite rapidly-- is the emergence of semi-intelligent, autonomous systems in our society. So self-driving cars and trucks are really at the point of being introduced. They're actually working. In the next few years, that will happen. Autonomous drones, of course, have already been used quite a bit.
Virtual assistants-- things like Siri, or Alexa, the Amazon one. Every company is developing that technology. In the financial markets-- fully automated trading systems. In robotics, we're moving towards robots that can help-- we call it assistive robotics-- help people in the household, help elderly people live independently longer.
But the common theme is that these systems use AI technologies. And they operate autonomously. And they're going to make decisions for us, hopefully, to help us. But there's a range of issues that arises with these systems.
So I've been in artificial intelligence research for a long time. It's actually the first time now that we see this research shifting from academia to real world. And I'll say a little more in a moment about where this change comes from.
There have been some revolutionary ideas that have started [INAUDIBLE] the last five years. They center around deep learning and big data. Deep learning is a particular kind of machine learning technique that can learn from big data. So I'll say a little bit about that. But this change has made it possible to move these systems that were always sort of academic topics to practice.
So now-- I think the top is cut off, but anyway. So it's a series of events. But if you had to give one main event, what's really the difference from five years ago? Well, the difference is that in the last five years, what's called machine perception has started to work. And I say "finally," because for AI researchers-- and I started 30 years ago-- it was always not working, OK?
So finally machines are starting to hear and see us. Most people are not quite aware that they couldn't before. But really, they couldn't. In fact, Bill Gates was asked-- he was here, I think, two years ago, opening the Gates Center-- when would he want to be in computer science? Is he happy that he started when he did?
Of course, it made him a lot of money. But he said he would be even better off, he thought, if he were starting in computer science now, because he sees that a lot of things we always wanted to do with computers are starting to become possible. And the key thing is hearing and seeing. You need the machines to be able to see people, to hear people, to perceive the world. That wasn't working for a long time. OK.
Now, you can take notes, but I'm actually not expecting you to. This is a general talk. So I'm not expecting people to know the terms here. But basically, what's going to happen is we're going to link this hearing and seeing of computers with a lot of other AI techniques that have been developed over several decades-- reasoning, search, reinforcement learning, planning, decision-theoretic methods, a whole range of techniques. But now, these techniques can start using what the computer sees and hears, OK?
Just to give you another example of how things have changed: in 2005, there was the first DARPA competition challenging people to build self-driving cars. At that time, Stanford-- actually Sebastian Thrun from Stanford and his group-- won that competition. But one thing was surprising: when he gave talks about that first self-driving car, even though it was successful, he would stress that the car drove around completely blind.
There was no computer vision on the car. The only thing it had was lidar, a laser range-finding system, to look around it. He had gone to the computer vision people at Stanford and said, shouldn't I use your computer vision? That's how we see the world.
And they said, don't bother, OK? It's not going to work. And 10 years ago, it would not have worked, OK? But now, it's starting to work. And we're going to use this technique in a broad range of other existing AI techniques.
So another way we put it a little bit more technically, the systems are becoming grounded in our world. They're going to be part of our world. They perceive our world. They're going to hear us. They're going to be able to synthesize speech. They're going to be able to talk to us. It's a whole different mode of interaction.
And in fact, we're also seeing the emergence of super-human capabilities. So Facebook-- oops. Facebook has what's called super-human face recognition. They are better at recognizing faces in pictures than we are. It has actually been tested: people sitting down looking at pictures, even of their own friends-- Facebook is better at recognizing them.
What I find surprising also is traffic sign recognition. When the first self-driving car ideas came out, people said, how is it going to see the traffic signs, the stoplight, the cop who is trying to direct traffic? Well, the systems are better at reading traffic signs than we are by now. So we get these super-human capabilities as part of these systems.
OK, so just to give a few more examples, I wanted to just show what changed. So 2005-- about 10 years ago-- looking at computer vision. And here's just a little example. I almost have to apologize, because I think it's actually an example from one of our own esteemed faculty members, showing how it didn't work.
So let's see. So here's a left image, a real image, black and white. Here's what's called a ground-truth, hand-labeled image, where somebody has labeled: this is a lamp. And all its pixels are given the same white color here. That's the rest of the lamp. So all objects are labeled, and what belongs to what object, OK?
So how good were we in 2005, OK? Well, this was the best, OK? So you see the best that image processing could do. That's why Sebastian Thrun would not put any computer vision on his car, OK? The lamp, if you try to see it, is all broken up. And this looks connected. So it was this kind of labeling.
And it's the first step you have to do in the vision. First [INAUDIBLE] is to say what belongs to what object. If you can't get that right, you cannot do the rest of computer vision. You cannot determine what's in that picture, OK?
So that was 2005. Let's skip ahead-- this is what current vision systems can do. And these are taken from the self-driving car. I looked at Nvidia, and a company called Mobileye, an Israeli company that does the computer vision system for the Tesla.
And here's what you see. All these boxes here are cars, or other objects on the road, or near the road. Here's the labeled image that it sees. And if you look carefully, you see that the sidewalk, the road, the cars-- everything is labeled almost perfectly, OK?
Now, go to this bottom image. You can barely see with the naked eye what the objects are and where the cars are. This system has no problem putting little bounding boxes around the various cars up to 50 or 100 meters away, OK?
Here are the traffic signs it can read. Look at the traffic signs. Look at the shadows, et cetera, on top of them. We would have trouble, actually, reading some of these signs. The machine doesn't, OK? So how is that done? Well, note this labeling, OK? It's a little hard to see, with the boxes around the pedestrian, the bicycle, and this car, OK? The key thing is, here, all the shadows. Shadows used to be a huge problem for vision systems-- not anymore. OK.
So how is it done? It's sort of interesting. It's a machine learning approach. They trained a neural network by giving it a million images to look at-- a million images labeled by humans. You need to actually get some ground truth. You need to learn. And this system learned by being given labeled examples.
So somebody went around, and with crowdsourcing-- Mechanical Turk-- these million images were hand-labeled. Then the network was trained. And then the network identifies these things in new images. So it wasn't trained on these images. It's given new images.
It requires special compute power. It ran huge mathematical models with about 500,000 parameters to tune. But anyway, when you throw it all together, it can do it. Big surprise to the researchers.
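The recipe described here-- take hand-labeled examples, tune a model's parameters by gradient descent, then apply the model to inputs it never saw-- can be sketched at toy scale. This is a minimal, hypothetical illustration with a single sigmoid unit and a made-up four-point dataset, nothing like the real million-image, 500,000-parameter systems:

```python
import math

# Tiny supervised learner: one sigmoid "neuron" with 2 weights + a bias,
# tuned by gradient descent on hand-labeled examples, then applied to
# unseen inputs -- the same train-then-generalize recipe, at toy scale.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical labeled data: points with large coordinates are class 1.
train = [((0.0, 0.0), 0), ((0.2, 0.3), 0), ((1.0, 1.0), 1), ((0.8, 0.9), 1)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(5000):                      # gradient descent loop
    for (x0, x1), y in train:
        p = sigmoid(w[0]*x0 + w[1]*x1 + b)
        g = p - y                          # gradient of the log-loss wrt the score
        w[0] -= lr * g * x0
        w[1] -= lr * g * x1
        b -= lr * g

# "New image": a point the model never saw during training.
predict = lambda x0, x1: sigmoid(w[0]*x0 + w[1]*x1 + b) > 0.5
```

The real systems differ only in scale: millions of parameters instead of three, and convolutional layers instead of one unit, but the training loop is the same idea.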
So this is what we foresee the self-driving car will be like: real-time tracking of its environment, 360 degrees, 50 meters out, with decision making. So this car is doing that.
We know that that car is going to be much safer than any human driver. Most accidents-- 90% of all accidents-- are human error, driver error. This car will drive, 24 hours a day, constantly looking around itself. So we expect the reduction in traffic accidents will be over 90%.
So here are some further details on this accelerated progress. It's these deep learning neural nets. And again, because these are general lectures, I'm not saying too much about it. It's modeled after the brain. The ideas come from the mid-1980s, combined with several orders of magnitude of increase in compute power.
It was actually based on a hypothesis by a computer scientist named Moravec, who said, if computers get powerful enough, these techniques will start to work. Now of course, it's not obvious. Sometimes when you look back, you say, oh, of course it's going to work. Look, it works. Nobody knew it would work, because by 2000, almost all AI machine learning research had moved away from neural nets. In fact, there were four people in the world working on neural nets in 2000. They're all multimillionaires now.
It changed after 2011, 2012. Suddenly it was discovered: hey, this works. But those are the best discoveries, the ones that you don't expect.
Algorithmic advances are still important. Sometimes students ask, maybe we should just wait for these faster computers. But they use new algorithms. They took the basic concept from the 1980s and added a lot of advances-- hundreds of papers' worth. So it didn't just work out of the box. There was actually some work. And now it's being pushed very hard to work even better.
Final thing-- big data, which we didn't have. We needed to train on a million images, hand-labeled. Well, that's only possible nowadays. In fact, Mobileye has a whole group of people-- I think about 1,000 people-- continuously labeling road data coming in from Tesla cameras-- at least, they used to work with Tesla input-- and label more and more data. The system isn't quite perfect yet.
So what was the Moravec hypothesis? Well, it was a simple hypothesis. He had this sort of fancy plot-- it's a little old. This was done, I think, in the mid-1990s-- where he plotted memory-- just how much memory you have-- against processing power. So how much computation could you do per second?
The plot has two axes. And he put various devices up. Little computers-- so the 1995 home computer is here. The 1996 home computer. Here is the Deep Blue chess machine. And then he interwove it with what we knew about brains. Here's the spider brain. I always find it intriguing that the chess machine and the mouse have about the same compute power. So a mouse could be quite good at chess if it only took it up. Then the teraflop machine-- this was a supercomputer in 1996.
And we're up here-- human, elephants, et cetera. So what we expect is that around 2030, so that's within most people's lives here-- yeah?
AUDIENCE: How could we find processing speed for human processing?
BART SELMAN: We look at the neuron signaling rate and the number of neurons. So these numbers are in some sense all rough estimates. And you shouldn't read them too literally. Basically, neurons are much slower than chips. But we have 10 to the 11th of them. So we have many more of them. So those are the estimates. Yeah.
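The back-of-the-envelope arithmetic behind such estimates looks like this. The synapse count and firing rate below are illustrative assumptions of mine, not Moravec's actual figures-- only the 10^11 neuron count comes from the lecture:

```python
# Back-of-the-envelope brain "compute" estimate, illustrative numbers only.
neurons = 1e11              # ~10^11 neurons, as mentioned above
synapses_per_neuron = 1e3   # assumed rough figure
firing_rate_hz = 1e2        # neurons signal ~100 times/second (slow vs. chips)

synaptic_events_per_sec = neurons * synapses_per_neuron * firing_rate_hz
print(synaptic_events_per_sec)  # 1e+16
```

Change any factor by an order of magnitude and the answer moves with it, which is exactly why these numbers shouldn't be read too literally.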
And so based on these estimates, around 2030-- this is interesting-- your cell phone will have the capacity of a human brain in this type of compute power. Now, as people have pointed out, it will also be connected to the compute cloud. So it's not just your phone. It will be 1,000 times or a million times more powerful than a single human brain.
Again, I want to stress, it's not obvious. If we had known, everybody would have been working on neural nets all the time. It was still a surprise we got it to work. Our scientists got it to work.
So, yeah, historical aside-- I have two of them in this talk. The first neural nets were actually developed at Cornell. Rosenblatt, 1958. He was the first to come up with what's called the perceptron. And he started the field of machine learning using neural nets. Unfortunately, the patent is long gone. But you see the big C here. That's Cornell. So pretty good.
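Rosenblatt's perceptron learning rule is simple enough to sketch in a few lines. This is a minimal illustration; the toy AND dataset and the learning rate are my own choices:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Rosenblatt's rule: nudge the weights toward misclassified examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Toy linearly separable task: logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
```

The perceptron convergence theorem guarantees this loop finds a separating line whenever one exists-- and its famous failure on non-separable tasks like XOR is part of why neural nets fell out of favor for decades.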
Progress continued-- so what else did we do? There was this confluence of circumstances. Crowdsourcing-- machines need to understand the concepts of our world. We needed these images-- 100,000-- actually, more like a million-- labeled images of road data. So crowdsourcing-- Mechanical Turk, for example, where you put little tasks up and people can do them online-- helped get this data.
Engineering teams-- often overlooked in academia. But if you look at something like Watson, which is the IBM program that beat the world-champion Jeopardy players, that was actually largely an engineering effort. About 100 engineers-- IBM put them together in a big team and just set them on the challenge.
They worked on this for about five years and got to a system that was powerful enough to beat the best human Jeopardy players. It could not have been done in academia. But strong commercial interests are pushing these teams now. And we'll see another example of that in a moment.
So finally, investments in AI systems are now being scaled up by orders of magnitude, to billions. Investments used to be in the millions, hundreds of millions-- not small, but not particularly big. Now we're up to billions. So Google, Facebook, Baidu, IBM, Microsoft, Tesla-- they all have their AI labs now. Investments-- $2 billion is actually a low estimate at this point. The military-- this is somewhat concerning-- is investing $19 billion in AI development. I'll get to that a bit later.
So it's an AI arms race. The idea is that the company that wins is going to have a tremendous competitive advantage, because it will have the best speech understanding, the best intelligence to work with. And they will have the data. The data is sometimes the hardest things to get-- the data you need to train the system. So it's an interesting time.
Here. Let me just give you a list of the milestones. So AI for many, many years was-- I wouldn't say a failure. But it wasn't making the progress we thought it would. AI started early on, in the late 1950s. People were quite optimistic that it would be solved in the '60s and '70s. Nothing much happened.
The first real advance-- the first sign that we were maybe making progress-- was in 1997, when IBM's Deep Blue defeated the world chess champion. And there was a sort of a sense that maybe things would change. And things have changed.
So 2005-- still a bit of a time gap. Stanley-- that was the car I just mentioned, the self-driving car-- completed a route fully autonomously and showed that you could do self-driving cars purely by GPS and lidar, no computer vision. But it was a reasonable achievement.
IBM's Watson, in 2011. We're still jumping quite a few years. So these advances come, but not as quickly. What I like about them is that they're at world level, at least these two. They're beating the best humans. That's sort of the standard we want to set. IBM's Watson wins Jeopardy.
Then the deep learning revolution came, in 2012. Geoff Hinton, from the University of Toronto and at that time Microsoft, was a believer in this from the start-- he worked on the algorithms in the early 1980s. He came from psychology. He's interested in how the brain works. And he was always convinced that this should work at some point. He just had to wait 40 years.
So computer vision is starting to work. Microsoft gave a demo of real-time translation, speech to speech, English to Chinese. And I saw that little demo. You can probably Google it; it's on the web. It's very impressive. It's not fully there yet.
It synthesized the speech-- I think it was Microsoft's CEO at that time, or the head of research-- the speech was generated in his own accent, his own voice. But it was real-time speech to speech-- so hearing, translation, and speaking all together. There's room for improvement. But it shows the beginning.
Here's another success-- AlphaGo. So after Deep Blue happened, people said, well, that's chess. There's something not that hard about it. Let's take a much harder game-- Go. Go was known to be very much harder than chess by orders of magnitude. And the sense was it might be 30, 40 years before we tackle Go. It took 50 years before we could do Deep Blue.
Well, about 15 years later, Go was suddenly defeated-- roughly 15 years before we thought it would be doable. And Google DeepMind did it, partly with learning, but also with some more traditional techniques.
Google's WaveNet is something you should Google, [CHUCKLES] I guess. Yeah. So WaveNet gives human-level speech synthesis that is indistinguishable from humans. A very worrisome development. It means speech can now be synthesized-- I just heard about this. There are now scam callers that simulate your mother, or someone like that, and will ask you to do something. And it's becoming hard to know that it's not your mother. OK? So it's a little worrisome. But that's starting to become possible-- or is possible now.
Watson-- so now we're going to 2017. We're talking January. We're talking last month. So IBM announced that Watson automated the jobs of 30 mid-level insurance claim workers. Now, this might not sound like a lot. But they took an office of insurance claim workers that had about 35 people in it. And 30 of them are being replaced by this Watson software. They found it has the same quality as the workers-- mid-level workers.
The system cost about $2 million to develop. But now they can replicate it all over the insurance industry. So it's going to change the world of work.
Stanford just announced, at the end of January, an automated dermatologist that looks at pictures of potential skin cancers. This was published in Nature. They found it can reach human-level accuracy-- as good as the best dermatologists. OK? So again, human-level, top-level performance.
And just two weeks ago, poker-- heads-up, no-limit Texas hold 'em. I'm actually not a poker player. But apparently, this is tough. And it's tough in part because there is no limit to the bets you can raise, the money you can bet. This was considered to be out of reach for another 10 years. But again, a CMU program managed to beat the top human players.
AUDIENCE: It may be worth pointing out that last year they lost badly.
BART SELMAN: Yeah.
AUDIENCE: This is the second competition. They had a competition last year. Last year, the program was creamed. And this year, the program creamed the players. So that's what happens in a year.
BART SELMAN: In a year. Exactly. And actually, if you read the transcript by the players, it's very interesting. They felt the program was putting tremendous pressure on them while playing. The program was betting so high that they thought, it can't be true. And they crumbled. So yeah.
What the program does-- or part of what it does-- is actually model what the people do, because that's what poker is. Your opponent has a hidden hand. You learn from what the opponent does what kind of hand he or she may have. And you have to model that. So that was done. OK.
So just look at the sequence. There used to be five years between milestones. Now we're reaching human-level performance on, like, three tasks just in January and February.
I had one bottom-line lesson, though. Microsoft PowerPoint auto-numbering does not use AI technology. It kept insisting that this should be 17-- 17, 18, 19. I almost had to throw out the machine out of frustration. So Microsoft has some work to do.
Ah. It's my second historical aside. And we have to see whether the thing works. Watch these two cars. The one on the right is MIT, and the one on the left is Cornell. This is very early on-- 2007.
So unfortunately, this disqualified both teams, although clearly, MIT was the guilty party here. They may have cost us the overall win in the competition. So don't trust those guys. That was the first collision between fully self-driving cars.
So the next phase-- this just keeps going. And it's this integration, actually. AlphaGo is sold as a deep learning achievement. It's good to-- whenever you do something, you have to say it's deep learning. But actually, it also uses some very clever reasoning techniques that were invented around 2006-- so about 10 years ago. So it combines these techniques.
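One of those reasoning techniques from around 2006 is Monte Carlo tree search, which scores moves by playing out random games from each candidate position. The simplest flavor of the idea-- flat Monte Carlo move selection, without the tree-- can be sketched on a toy game. The stone-pile game below is my own example, not anything AlphaGo used:

```python
import random

# Toy game: a pile of stones; each turn take 1-3; taking the last stone wins.
# Flat Monte Carlo move selection: score each legal move by random rollouts.
# (Full Monte Carlo tree search builds a tree of such statistics; this is
# the simplest flavor of the idea.)
_rng = random.Random(0)  # seeded for reproducibility

def rollout(stones):
    """Play out the game with uniformly random moves.
    Returns 1 if the player to move from this position ends up winning."""
    player = 0
    while True:
        stones -= _rng.randint(1, min(3, stones))
        if stones == 0:
            return 1 if player == 0 else 0
        player ^= 1

def best_move(n, rollouts=3000):
    """Choose the take (1-3) whose simulated win rate is highest."""
    best, best_rate = None, -1.0
    for move in range(1, min(3, n) + 1):
        if n - move == 0:
            return move  # taking the last stone wins outright
        # After our move the opponent moves; we win whenever they lose.
        wins = sum(1 - rollout(n - move) for _ in range(rollouts))
        rate = wins / rollouts
        if rate > best_rate:
            best, best_rate = move, rate
    return best

best_move(5)  # optimal play takes 1, leaving the opponent a multiple of 4
```

With enough rollouts the statistics pick out the optimal move even though no game knowledge is coded in-- that knowledge-free quality is what made Monte Carlo methods so attractive for Go.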
So that's what we're seeing now: we're going to combine this perception deep learning with inference and planning and decision making. And that will keep pushing these technologies further. And with the tremendous investment, it's going fast.
Now, I don't want to give the impression that we're all done. Sometimes there's a risk of giving that impression. There are certain areas that we're still very poor at. And we actually don't know whether we're going to solve them in the next 10 to 15 years, or whether it may be the next 30 years. So progress, in a certain sense, is hard to predict.
I think in many areas we will continue to make solid progress, because we understand them. But there are some general areas-- for example, really understanding natural language at a deeper level. Truly understanding language the way we understand natural language is still a real problem. It's surprising that something like machine translation, which is actually fairly decent now-- the latest Google machine translation is quite impressive-- is done totally in a statistical way, using deep learning techniques.
It's good to remember that Google has no clue what you're talking about. So it will translate it for you in any language you want. But it has no clue what you're talking about, which is good. So in some sense, it does translation without understanding. It's surprising you can do it. So there are all these surprises.
We can do certain things. Jeopardy-- you can win Jeopardy-- Watson was able to do that without really understanding the questions the way we do them. Apparently, it's not needed for the task.
And I think in the insurance office that they automated, the system doesn't really know what insurance is or why you should have it. But it can still give you the right claims, et cetera, and your adjustments. So we can do certain things without having the full, deep understanding that humans have.
What is often missing is what's called common sense knowledge and reasoning. And I want to give a little example. This is from Oren Etzioni, at the Paul Allen AI Institute. They're working very hard to fix this problem. Take this very simple sentence: "The large ball crashed right through the table because it was made of Styrofoam." So it's a strange sentence.
So now you can ask: what is made of Styrofoam? What is the "it"-- the ball or the table? Well, the ball crashed through the table. Styrofoam is very light. It's probably the table. OK? So when we hear a sentence like that, we say, OK, that's what this pronoun refers to. And we have no trouble understanding sentences like this. Machines can't yet.
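You can see what makes this hard: nothing in the sentence's surface form says which noun "it" refers to-- you need a fact about Styrofoam. Here is a toy sketch of the kind of knowledge-based rule a resolver would need. The mini knowledge base and the rule are my own hypothetical, not any real system:

```python
# Hypothetical mini knowledge base: is a material structurally flimsy?
FLIMSY = {"styrofoam": True, "steel": False}

def resolve_it(crasher, barrier, material):
    """Resolve 'it' in: 'The <crasher> crashed through the <barrier>
    because it was made of <material>.'
    Common-sense rule: a flimsy material explains the thing that gave way;
    otherwise it explains the thing that did the crashing."""
    return barrier if FLIMSY.get(material, False) else crasher

resolve_it("ball", "table", "styrofoam")  # -> "table"
resolve_it("ball", "table", "steel")      # -> "ball"
```

The hard part, of course, is not the rule but acquiring millions of such facts and knowing which one applies-- which is exactly the common sense problem.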
So common sense is actually needed to deal with unforeseen cases-- cases not in the training data. Common sense is sort of the way we understand the world. Now, why can it be important? We can do a lot of things without common sense. But I wanted to just mention it.
I was going to include the video. But I didn't want to make it too depressing. So here's a Tesla actually driving. And it's going quite fast, self-driving mode. Unfortunately-- so this is in China. I think it's in Beijing-- unfortunately, for some reason the street cleaning crew is driving the cleaning truck on the left-hand side of the road here.
And the car's vision system does not pick that up. It partly doesn't pick it up because the visibility is very bad. It also doesn't expect anybody to be driving towards you on that side of the road when you're on the highway. So it's something that goes so against the priors, as we call them, of the machine learning of the [INAUDIBLE] vision system that it literally didn't see it. The car crashed at full speed and killed the driver.
What I find interesting about the video-- and you should Google the video-- is: why do humans not drive into cleaning trucks in China all the time? And that's very interesting. You see the car driving. And what you start observing is that the humans start veering to the right. So what a human driver would do is say, wait a minute. I don't know what's going on. But everybody seems to be moving to this lane. I better do that too. OK?
That's common sense. But the Tesla has none of it. It said, OK, great. Everybody's moving to the right. I can go faster. And that's what happened. So some common sense may be needed to avoid these kinds of accidents.
Now, I think what will probably happen is that we will not solve the common sense problem. We will accept these kinds of accidents. They will be sufficiently rare that we still have much safer driving than with human drivers. So we'll probably just proceed without it. But hopefully, at some point, we will get better common sense into our systems.
So now the emergence of these systems is going to have major impact on society. And actually various teams or various groups of people are actually preparing for it. So for example, the White House-- there's really dozens of studies at all kinds of levels. But one example is the White House Report-- Executive Office, in October, 2016, "Preparing for the Future of Artificial Intelligence." So groups are starting to realize, this is going to change our society. We should think about what could happen.
And in this lecture series, we'll focus on four issues. Actually, we're going to mainly focus on the first two. But we'll say a little bit about the other two. So what are the societal issues? Economics and employment-- or really unemployment-- the risks to human employment from intelligent systems. I'll say a bit about that.
Also, is it increasing wealth inequality? It probably is. The companies that have the technology are going to be the winners. So there's a huge impact on economy and employment.
There is the issue of AI safety and ethics. These systems are going to make decisions. The big difference with traditional computer systems is that we were sort of in control. Now we're giving control to the car to decide whether to pass the car in front or not, to stay in the lane, to make all kinds of decisions for us. So AI safety-- and are these decisions ethical? Are these decisions made the way we want them to be made, in ethical ways? So that will come up, and we'll come to that.
Military impact-- smart autonomous weapon systems. It's a big issue. And I'll say a few words about it. And finally, this is what hits the press the most. What if these systems become super intelligent? The systems are way more intelligent than we are. What will it mean for us? Can we live with those machines? Or is it actually going to be a problem?
So let me say a few details about each. First of all, it's not that nothing is happening-- people realize that this is going to change society. So people like Elon Musk have come out and actually paid out of their own pockets. He is funding a big program on AI safety and ethics at the Future of Life Institute at MIT, run by Max Tegmark. And in the meantime, several other billionaires, I guess-- they tend to be billionaires-- have stepped up and also made investments to find ways for society to deal with this progress.
So first thing-- and so I'm going to talk about these topics. And I'll point out which lecture is going to touch on them. So economic impact-- technological unemployment. So I'm going to first give this example, the self-driving vehicles. It's five to 10 years off. It's already happening, but in five years, it will be legal, except in certain states, in certain areas-- 10 years for sure.
Significant reduction in accidents-- 90+%. And that will partly be what drives this development. People always ask, well, it will not be legal. People will not allow it. People want to drive themselves. But do you really want to drive yourself if driving could be made 90% safer, provided nobody else drove either?
So in the end, accident reduction will take care of a lot of things, including even the risk of accidents and who pays for an accident. Volvo has come out to say they'll pay for any accident that occurs with their self-driving Volvo car. And Tesla has the same plan. They will pay for it. The accidents will be sufficiently rare that it will just be written off as a cost of doing business. So this stuff will happen.
Now here's something to think about. Transportation covers about 1 in 10 jobs in the US-- mostly male-- 10%. It's not so easy to replace that with other jobs. So that's a big concern, especially when they're not highly trained jobs. It's very hard to replace 1 in 10 jobs with other jobs.
Here's another surprise. Who will be fighting this? Hospitals. The hospital business is good business-- in terms of car accidents, about 30,000 fatalities a year, about 100,000 nonfatal accidents. So hospitals will not be big fans of self-driving cars. But again, if they come, there will be a reduced need for hospital emergency rooms.
So that leads to these questions. Retrain people? But for what? Knowledge worker-- I'll come to that. STEM fields-- we like them. But they're way too small. You're not going to take 1 in 10 jobs and turn them into coders or something like that.
Even if it were feasible, there wouldn't be enough jobs for that. If you actually look at the charts of these jobs, the IT industry is actually very small compared to other industries. And of course, we know why. Google doesn't need a lot of people to make a lot of money. So most of these companies have very few workers compared to hotels or other types of businesses.
Then the second example-- IBM Watson style-- I mentioned it-- automation of 30 insurance admin jobs. Now, the systems are expensive to create. It took, I think, about three years for them to build the system, and it cost about 2 million dollars-- it might actually internally have cost more, but a few million. But once you have that system, you can deploy it in all kinds of insurance companies.
And it places at risk mid-level knowledge-based jobs, mid-level management also. So that threatens the knowledge worker. Most jobs with a significant routine component-- this is how Moshe Vardi, who is one of our speakers, puts it-- most jobs with a significant routine component will be affected. There's significant incentive for companies to pursue this automation.
It's not easy to stop. It's cheaper to do it in an automated way. So you can't just make laws that say, you shouldn't be doing it. People are going to save money doing it. And it's estimated about 40% of the jobs will be put at risk. So that's a significant number.
So those are the risks and issues. How will society prepare itself? Will we have something like universal basic income? Without work, how do we get people to feel useful? There will be an amplification of wealth inequality. All kinds of questions-- luckily, we have all the answers. No.
But we have two specialists, Moshe Vardi from Rice University, and our very own Karen Levy from IS and Law. They will talk about these issues and how they will transform us. These are mostly policy issues. There are not really technological issues here. This is, how will we deal with it as a society? And I believe we will, of course. But we have to start thinking about the how.
Second issue-- AI safety and ethics. What do I mean? I have identified two areas. The first issue is with machine learning-- data-driven approaches-- very popular, very powerful. And it's starting to provide decision support at all levels of society. I just gave some examples: financial loan approvals, hiring interview decisions, Google search rank order, college application selection, medical diagnosis, what's in your news feed, your year-end raise.
Machine learning techniques are getting good at that. They look at past data. They build a statistical model. And they say, this is the best thing you can do based on past data and what they can figure out. So this is going to be very effective.
And again, if you're a financial company, yeah, you'd rather have a system make loan decisions that is provably as good as they get, than a human. Interviewing decisions, hiring decisions-- you'd rather have a system do it if the system is better at it than any human, or at least as good at it as any human, et cetera. So they'll be hard to stop.
However, what do we know about the hidden biases in these decisions? Are data-driven decisions fair? So we're waking up to that question now, because it's becoming important. And we're finding, of course, that machine learning approaches have hidden biases that often arise from the data.
So for hiring decisions, you're going to be modeling on past hiring decisions. So if you subtly discriminated against female applicants in the past, then based on past data, you will continue doing that. OK? Because that's what you did before. If before, you used to give certain groups of people bigger raises than other groups of people, you will continue doing that, because you did it before. So the data often has all kinds of biases in it.
Similarly, the algorithms are good at finding regularities, statistical patterns. But certain biases, they can eliminate. And others, they can't. So we have to think about that.
It's actually interesting. It's not the US, but the European Union that is at the forefront of these concerns. So they're working on laws-- actually, they already have some example laws, though I'm not sure they are fully approved yet-- that require explainable machine learning results. So if you get rejected for your loan application, or you get rejected for college or something like that, or for your job, the company that does that needs to give you some explanation, not just say, my system decided it.
And they want guarantees that these methods adhere to non-discrimination laws. Now, this is very interesting. Again, this is where computer scientists will need to come in to help make these laws work.
The nondiscrimination law-- I just heard from Rich Kirwan, who used to be a professor here and was very much into this field-- he said there's a very interesting thing. One thing they try to do now in the European Union, he said, is say you should not collect any data that pertains to race, age, or any kind of feature that you feel could be discriminatory, and you should train your machine learning model without those features.
It's exactly the wrong thing to do. What will happen is your machine learning algorithm will find other features that correlate with those features you didn't collect. It will be able to figure out your race. It will be able to figure out your age. It will be able to figure out everything else. And it will hide it in your machine learning model and then use it as if you had given those features.
What you have to do is train your model with all those features exposed, build a model based on that, and then use it without giving values for those features. That will work. So it's one of those places where computer scientists will be very important in giving input on how to get things done fairly. But what the best way is, is often counter-intuitive. So it's not so easy to get it done.
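The idea can be made concrete with a small sketch. Everything here is invented for illustration-- synthetic data, a biased "historical" labeling rule, and a tiny hand-rolled logistic regression. The point is just the mechanics: train with the sensitive attribute included, so the model attributes the historical bias to that feature, then score everyone with the same neutral value for it at decision time.

```python
import math
import random

random.seed(0)

# Hypothetical synthetic data: x1 = qualification score, s = sensitive
# attribute (0/1). Historical labels were biased: group s == 1 was held
# to a higher bar, independent of actual qualification.
def make_example():
    s = random.randint(0, 1)
    x1 = random.gauss(1.0, 1.0)  # qualification, independent of s
    label = 1 if x1 - 0.8 * s + random.gauss(0, 0.3) > 0.5 else 0
    return x1, s, label

data = [make_example() for _ in range(2000)]

# Tiny logistic regression trained WITH the sensitive feature exposed,
# so the historical bias is absorbed by the coefficient on s.
w1, ws, b = 0.0, 0.0, 0.0
lr = 0.1
for _ in range(200):
    g1 = gs = gb = 0.0
    for x1, s, y in data:
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + ws * s + b)))
        err = p - y
        g1 += err * x1
        gs += err * s
        gb += err
    n = len(data)
    w1 -= lr * g1 / n
    ws -= lr * gs / n
    b -= lr * gb / n

print(f"coef on qualification: {w1:.2f}, coef on sensitive attr: {ws:.2f}")

# At decision time, score everyone with the SAME value for s (here 0),
# so the learned bias coefficient no longer affects outcomes.
def fair_score(x1):
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + ws * 0 + b)))
```

Had the sensitive attribute been omitted during training instead, the model would have pushed the same bias into correlated proxy features, which is exactly the failure mode described above.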
I like to say, I mean, this used to bother me a lot. Google used to say for a long time, results are fair because they are decided by an algorithm and data. And algorithms and data are always fair. This line used to work great, for a while.
But people are starting to wake up. Wait a minute. Data has biases in it. Algorithms have biases in them. You'd better expose and explain what you're doing. And you have to stay within our laws. Again, we have two specialists here, our very own Jon Kleinberg, and Kilian Weinberger, who will talk about algorithmic fairness and data fairness in their lectures.
A second area-- autonomous goal-driven systems that plan and reason. So this is another development which is interesting. Traditional programming is a painful process. You have to specify exactly what to do.
Traditional robots used in industry are very carefully programmed, every step of what they make. In fact, it's so expensive that Toyota at some point reduced the number of robots it was using in manufacturing, because it was too expensive to reprogram them all the time for new models, for new variations of cars.
That is changing now. Why is it changing? Because we are developing systems where you only give high-level goals or instructions, and the system will synthesize the actions to perform. These are called planning systems. It's sort of an advanced form of scheduling, but it's really a planning kind of system that will decide what actions to take. You just specify high-level goals-- make that car, do this-- at a high level, and the system will synthesize the plan.
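A minimal sketch of this idea: you state a goal, and the system searches for a sequence of actions that achieves it. The domain here-- a robot restacking labeled blocks-- and the breadth-first search are invented for the example; real planners use far more sophisticated search and action representations.

```python
from collections import deque

def successors(state):
    # state: tuple of stacks, e.g. (("A", "B"), ("C",)); last element is the top.
    # An action moves the top block of one stack onto another stack.
    for i, src in enumerate(state):
        if not src:
            continue
        for j in range(len(state)):
            if i == j:
                continue
            new = [list(s) for s in state]
            block = new[i].pop()
            new[j].append(block)
            yield f"move {block} to stack {j}", tuple(tuple(s) for s in new)

def plan(start, goal):
    # Breadth-first search: returns a shortest action sequence to the goal.
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# High-level goal only: "get the blocks restacked in reverse order."
start = (("C", "B", "A"), ())
goal = ((), ("A", "B", "C"))
print(plan(start, goal))
```

The caller never spells out the individual moves; the planner synthesizes them, which is the shift away from step-by-step robot programming described above.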
Now the problem is, how can you ensure that these decision making systems do what we want them to do in a responsible manner benefiting humans? How can we make sure that the way the problem is going to be solved by a robot is the way you want it solved? And there's all kinds of examples where the computer might find ways, or the system might find ways of doing things that would actually violate our ethical principles. But it's a good way to get to the goal.
Stuart Russell has a whole institute at Berkeley-- we're actually partly involved with that-- working on what's called the value alignment problem. We want to make the values of these systems, the underlying principles they follow, aligned with human values.
And it's actually Dan Weld who is our speaker on computational ethics. Together with Oren Etzioni from the Paul Allen Institute, in a paper first published in 1994, he revisited Asimov's laws of robotics, including the first one, do no harm to humans, and showed that there are incredible difficulties in implementing such laws.
I always liked this little example they give in the beginning of the paper. So they imagined a household robot taking care of your business at home. You go to work. You still have a job. No. You go to work. I'm kidding. Don't worry. You go to work. And you tell your robot, why don't you go and take the car to the car wash and have it nicely cleaned for when I get back home.
So you come home. And the car has not moved. So you go to your robot and say, wait a minute. What happened to the car wash? And the robot says, well, you know, I've been programmed to do the greatest good to the greatest number of people. So I've donated your money to famine relief in Africa. It's much better spent than on your car wash.
And now, try to argue with that. It's true. So those are some problems you can run into. So we have to get these robots to behave correctly.
Now, the ethical issues are often framed as extreme issues. So you get this little example: should your self-driving car risk the life of a pedestrian or save its passengers? You have two people in the car. There are five people on the sidewalk. What would you do? So they are often framed in extreme terms, which I think are not so useful-- because you can argue, well, how often do you have to make that decision? Not so often.
However, you should think of much more practical situations. You are in your self-driving car. And you say, this is going a little slow. I'm going to miss my meeting. So you tell your car, can you just pass the guy in front of us and speed up a little bit? And now the car has to decide whether to do that or not.
It's a very reasonable request. The problem is it slightly increases your own safety risk-- but if it's an important meeting, you might want to get there. And it also increases the risk for people in other cars. So now the question is-- let's say you stay within the rules, speed limits, et cetera. So it's not illegal. It's just increasing the risk for other people a little bit, in these other self-driving cars.
So the car has to make an ethical decision. Should it go? Should it listen to the person who's late to the meeting, who should just have gotten up a little earlier? Or should it just do it? And who's responsible for a possible accident and what happens after that?
So these are much smaller ethical decisions. But they have to be made thousands or tens of thousands of times a day on the road-- there are all these cars going around. So the way I look at it is, ethics is back. It's been a very academic discipline. But when you actually build the cars, you actually have to think about it.
In fact, Google has a patent out on the slight increase in risk the car takes if it passes another car, and whether that's acceptable or not. So they have actually already done calculations on that, and they have a patent where they claim that's the way to do it.
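The flavor of such a small, everyday trade-off can be sketched as a toy decision rule. Every number, threshold, and the rule itself are invented for illustration-- real systems, including the patented approach just mentioned, use far more detailed risk models.

```python
# Toy policy parameters (all values hypothetical).
BASELINE_RISK = 1e-6    # assumed per-maneuver accident probability
MAX_ADDED_RISK = 5e-7   # cap on extra risk the car may impose on others

def should_overtake(time_saved_s, added_risk_self, added_risk_others):
    """Allow the maneuver only if the extra risk to third parties stays
    under a fixed cap and the passenger gains meaningful time."""
    if added_risk_others > MAX_ADDED_RISK:
        return False  # never impose more than the capped risk on others
    # Require a worthwhile time gain and a bounded increase in own risk.
    return time_saved_s >= 30 and added_risk_self < 10 * BASELINE_RISK

# A big time gain with modest added risk to others is allowed...
print(should_overtake(time_saved_s=120, added_risk_self=2e-6, added_risk_others=3e-7))
# ...but the same gain is refused when the risk to others exceeds the cap.
print(should_overtake(time_saved_s=120, added_risk_self=2e-6, added_risk_others=2e-6))
```

The interesting design question is not the code but who sets `MAX_ADDED_RISK`-- the manufacturer, the regulator, or the passenger-- which is exactly the ethical question the lecture raises.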
So these are ethical decisions. And again, we have two amazing experts, Ross Knepper and Joe Halpern, who will talk about whether we can build systems that have these ethical rules in them.
War and peace-- we actually don't have a speaker on this, but I just wanted to mention it, because it's become a significant issue among AI scientists and others. The concern is about smart, AI-based autonomous weapon systems. Unfortunately, it's a development where we don't have a solution at this point.
There's a lot of pressure in the military to take the human out of the loop. The answer has always been to keep the human in the loop when you make a drone decision or these kinds of things. But there's enormous pressure to take them out of the loop, because of the need to take ever faster time-critical decisions. The systems have to react so fast-- within milliseconds or seconds-- that there's no time for humans to be involved in these decisions.
Another area there is the cyber security and cyber defense discussion, where countries work on AI-based autonomous software. So these are issues that are far from resolved. There are discussions at all levels, both national and international. The United Nations is working on it. Various governments are working on it. They're working on possible nonproliferation arms treaties, modeled on those for nuclear weapons, hoping to constrain these developments.
So we actually had a panel of AI scientists advising the White House in 2016, trying to get these issues to the government and see what they can do with it. We were allowed to take a picture, but only with the scientists, because these are secret people here. They are not visible.
So, final point-- super-human intelligence. The future-- super-human intelligence. That's a question. And Stephen Hawking actually said an interesting phrase: AI will be either the best or the worst thing for humanity. So it could go really well or really wrong.
And I just want to say about this that at some point, these machines will reach our level of intelligence. In 20, 30 years, they could probably get most of our capabilities. What happens after that? They can self-learn. They can improve further. They could become much, much smarter than humans.
But here, I want to put out a little bit of a different view than what's generally advocated. With the push for AI safety research funded by Musk and others, we'll most likely ensure a tight coupling between human and machine interests. So there's good hope that we will always be able to work with the machines, and design machines that have our interests and our values in mind, and are value-aligned with us.
So even if a machine outperforms us in a range of intellectual tasks, it doesn't necessarily mean we won't be able to understand it. In fact, if you think about it, humans can understand complex solutions even if we don't discover them ourselves. That's what you do when you're at university: you listen to other people's ideas, and you absorb them and understand them fairly easily.
Coming up with them might have been very hard, and that can be left to smart machines. But as humans, we may still be quite able to work with that. So that's the positive side. I'm less worried about super intelligence being such a threat. We're optimistic that it actually can be managed quite well.
Whoops. So we're on an exciting intellectual journey in the history of humanity. And here is our program. I hope you attend at least some of these lectures. Thank you.
So if anybody has questions, we could do some. Or let's see. Yeah.
AUDIENCE: When you're talking about needing common sense, even for figuring things out like that, the whole problem of categorization in cognitive psychology-- there have been significant advances. But people still have no idea, really, about how to approach categorization. Are we going to get that, which I think, has to be a basis for [INAUDIBLE] common sense.
BART SELMAN: Yeah. Yeah. So now, that's a good example. So people are trying to do deep learning things for categorization. I don't think they're that successful. So I do think that when it comes to common sense reasoning, we still need fundamental ideas coming out of other areas like cognitive science to grapple with some of those issues.
The surprising thing is some of the advances-- I mentioned this-- how much we can do without having figured that out. So that's somewhat surprising. Common sense is important for being human, yet we are able to do a lot of things without having it. But I always give common sense and things like categorization as examples of what could be a stumbling block-- we really don't know whether it's going to take 10 years, 20 years, or 50 years. Yeah.
AUDIENCE: So clearly as humans, when we learn something, we don't look at a million examples to learn. So are there algorithms that do just as well on smaller data sets? If so, how do they usually do it?
BART SELMAN: No. That's an excellent question. It's actually one of the big open questions in machine learning right now. How can you learn from a few examples? We actually learn by being told or reading a book. So that kind of learning, our systems don't do yet. It's actually part of common sense. I could have put it as one of the other areas.
Now, it's an example where often we can do a lot, because we can collect data anyway, or we can simulate environments. So for, let's say, our physical world, you can write a simulator. You can generate a million examples with the simulator and learn from that. So the way machine learning systems get around it is by generating extra data.
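This idea can be sketched in a few lines: when labeled data is scarce, write a simulator for the domain and generate as many training examples as you like. The projectile "world", the task, and the trivial nearest-neighbor learner here are all invented for the example.

```python
import math
import random

random.seed(1)

def simulate(speed, angle_deg):
    # Ideal projectile range on flat ground: v^2 * sin(2*theta) / g.
    return speed ** 2 * math.sin(2 * math.radians(angle_deg)) / 9.81

# Generate synthetic labeled data from the simulator:
# does the shot travel at least 10 meters?
data = [(s, a, simulate(s, a) >= 10.0)
        for s, a in ((random.uniform(1, 20), random.uniform(5, 85))
                     for _ in range(5000))]

# A deliberately simple learner: 1-nearest-neighbor over the simulated data.
def predict(speed, angle_deg):
    _, _, label = min(data,
                      key=lambda d: (d[0] - speed) ** 2 + (d[1] - angle_deg) ** 2)
    return label

print(predict(15, 45))   # strong, well-angled shot
print(predict(2, 10))    # weak, flat shot
```

The learner itself never sees a physics equation-- it only sees examples-- but because the simulator can produce unlimited data, the scarcity problem disappears for domains we know how to simulate.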
But ideally, we would like to go to systems that could learn from a few examples or even from just being told. But that's another open area. So yeah, I don't want to give the impression that AI has been solved. I think some things we've made tremendous progress. And other things we know are still quite open.
AUDIENCE: Professor, two more points there. First of all, when we learn stuff, we are already building on a large basis of knowledge. And for most of these systems, when we talk about learning, they're learning from scratch. Right? So presumably-- we haven't solved the problem of boosting yet. Once you've got a lot of stuff already figured out, how much does it take to get to the next step?
Also if you think about little kids, little kids get a lot of data. Right? You spend the first three or four years of your life-- mommy's pointing out red truck, this, this, this. I mean, it's maybe not a million. But maybe psychologists in the audience know. But you get a lot of stuff. You really get millions of unlabeled items.
BART SELMAN: Unlabeled. Yeah.
AUDIENCE: Little kids are looking around all the time.
BART SELMAN: You get unlabeled data. And I guess transfer learning is also relevant-- you train, let's say, on these images, and then you do a different task and reuse your neural net. So there's some work where people say there's some knowledge already in the network. But yeah. Other questions or comments? Yeah.
AUDIENCE: Yeah, I wonder if anything's been discussed at all or thought of at all about socially acceptable ways to slow down the birth rate, possibly. The problem of putting huge numbers of people out of work because they're basically made obsolete and being cast into this future where basically the only sort of viable roles, at least that we're imagining, are these technologies roles-- It seems like part of a solution might be less--
BART SELMAN: Less people. Yeah.
AUDIENCE: --[INAUDIBLE] humanity.
BART SELMAN: That's true. I mean, we'll have two speakers on this issue. We shouldn't get the idea-- the idea that work is the most meaningful way to spend your life is actually a fairly recent one in human history. So people have argued that although it looks like a tremendous problem right now-- what do you do if not everybody has a job, or potentially has a job-- it's not totally clear. And other people have argued that, no, we may all be way happier when we don't have boring jobs. Interesting jobs are something else, but most people have mundane jobs.
So these insurance claims people that were put out of work-- it's tough in society right now, because there's a stigma around not having work. But that will change when 90% of the people don't work. So there is this question. We don't know quite which way this is going to go. But I don't want to make it too bleak about this. It could actually be a good thing. And society may actually deal with it.
And I assume that the people-- actually, I know the people talking about this will address some of these other alternatives that we haven't thought of. So we're looking at an unknown future. But we have to be creative in what could happen. And it may sound bleak to us now. It could actually be a win for everybody. OK? Yeah.
AUDIENCE: I read quite a few articles about the AI work revolution and how that ties into universal basic income. And I'm wondering if that's touched upon in that talk, even if people [? will ?] work.
BART SELMAN: Yeah. So the two talks, Working With and Against AI and the Vardi talk, will touch on universal basic income. So I think it's one of these things. I think economically, it will be fairly easy to solve. The machines will do the work. But the work gets done. And there has to be some willingness to distribute some income. But it doesn't have to be that much income.
But the other aspect of not having the work itself and finding meaningful things to do-- traveling, creativity, or something like that-- that may be a harder problem. But yeah, Vardi will talk about that. Yeah.
AUDIENCE: There is some history here. A lot of these ideas came up with machines a hundred years ago. More recently, Cornell alum Kurt Vonnegut wrote his first novel, Player Piano, on exactly this topic of work. And there, people either tended to the care and feeding of machines, or they did busy work on the roads. So these are not new issues.
BART SELMAN: These are not new issues. I think one thing-- there's this sort of sense, and some economists will argue, that, oh, we always invented new work. I have the feeling this is a little different. Mechanical work was replaced by knowledge work, by knowledge-intensive work. Now you start having machines that have the intellectual capabilities and the physical strength to replicate a human. It's not immediately obvious to me that there is still this huge amount of work out there waiting to be invented.
It's not impossible. But I'm more skeptical. And this is sort of what makes this a unique point in human history: that we finally have, or will have, these entities with us-- entities we have designed, actually.
AUDIENCE: Maybe the next stage is emotional work.
BART SELMAN: Yeah. Caregiving. There's still room for work. I'm not-- but I actually think that we should think more broadly. Does everybody have to work? Is this actually necessary to be useful for society? So I hope we sort of revisit that bigger question. OK. Yeah.
AUDIENCE: I guess one point is that even where human performance is less than machines' right now, we consume so little energy compared to today's learning systems. I don't think there is any way to scale down the energy-- the more powerful machines get, the energy always scales up. At some point, we may be better because--
BART SELMAN: We may be more energy efficient. Yeah-- not moving is also very efficient. Maybe. I mean, I wouldn't hold my breath, because honestly, they're making pretty energy-efficient machines nowadays too. But yes, that could be. And we will be unique in the end.
So yeah, human creativity or empathy-- there are many things humans are very good at. Now, the flip side is machines are also getting good at those. One thing-- reading emotions, for example-- with deep learning now, there are systems with high-resolution cameras that can read human emotions better than humans. So you get this situation: if I'm in an interview, and I have this camera analyzing my face-- do I really want that? So that's another issue-- privacy. It is surprising what we can train the machines to do.
AUDIENCE: How do you measure that?
BART SELMAN: What?
AUDIENCE: The level of emotion.
BART SELMAN: I guess you ask the subjects. So they asked people. This is, I think, more about lying and things like that-- are you lying or not? And they know the answers and the questions. And they are getting very good at it, looking at the pupils actually-- high-resolution cameras looking at pupils while people are speaking. So it's worrisome. This is done-- it's done. So yes, they have a way to figure it out. Yeah.
AUDIENCE: How do you think advanced AI would affect the finance industry?
BART SELMAN: Yes. So there, it already has a significant impact on high-speed trading systems. We actually just had a meeting-- I'm trying to think where it was-- I think in the Origins project at the University of Arizona. And we had one session on trading systems. And there, the people were actually fairly optimistic.
So the systems are getting fairly effective at automated trading, but they are able to set up rules to manage the risks. So there, we actually have the feeling that it will change the industry. In fact, the IBM team that did Watson was hired away by BlackRock-- one of these big hedge funds-- and the founder of that fund wants to replace all the people in it with Watson. So it will change that industry.
But it will be-- it doesn't necessarily mean increased risk for the general population. They can control it. It was one of the sessions where people were quite certain that AI safety would be fine, even though it's going to change the way things are done. Yeah.
OK, so we'll stop. Thanks for coming. Did people sign up on the email sheet? Because we want to send people a reminder. If you haven't done it-- is it going around there?-- make sure you put your name and email there, so that we can send you a reminder for the next lecture. But thanks, all, for coming.
Computer science professor Bart Selman discussed the many possible impacts of the deployment of artificial intelligence Feb. 27, 2017 in Olin Hall. The talk was the kickoff lecture in a series on “The Emergence of Intelligent Machines: Challenges and Opportunities,” co-created by Selman and fellow professor Joseph Halpern.