SPEAKER 1: Hi, everybody. It falls to me to call this to order. I'm Tom Bruce. I'm actually the director of the Legal Information Institute here at the Law School. And the event you're seeing today with the [INAUDIBLE], "The Law of Robots," is a sort of co-production of the Intellectual Property and Technology Student Association of the Law School, the Information Science Colloquium Series, and us at the LII.
It's a real privilege for me to have Ed Walters with us here today. This is not his first time at Cornell with us. He has been a good friend to the LII for many years, and to open access to law in general. His day job is running the most innovative company in legal information. And when I say 'most innovative,' I am talking about everything from his business model to the technology that they use to the processes that they use [INAUDIBLE]
Ed and his company, Fastcase, have contributed enormously by opening up new information to the public, and through the provision of affordable legal research services, both to the bench and the bar. But Ed's not here to talk about that. He's actually here to talk about robots. I don't know what qualifies him to do that, exactly, but we might infer from his history that he's qualified to talk about a lot of things.
Before starting Fastcase, Ed worked at Covington & Burling, in Washington and in Brussels, right? Where he advised Microsoft, Merck, SmithKline, the Business Software Alliance, the National Football League, and the National Hockey League. From '91 to '93, he worked in the White House, first in the Office of Media Affairs, and later in the Office of Presidential Speechwriting. Which ones can we attribute to you?
He's written for The Washington Post, The New York Times, the University of Chicago Law Review, The Green Bag, and Legal Times. His JD is from Chicago, where he served as an editor of the Chicago Law Review, and he clerked on the Fifth Circuit. He's a member of the Virginia State Bar and the DC Bar, and has been admitted to practice before the US Supreme Court and the Courts of Appeals for the Fourth and Fifth Circuits. So without further elaborate qualification-- Ed.
EDWARD WALTERS: I'm Ed Walters. As Tom said, I'm the CEO of Fastcase in my day job. This Fall, I'm teaching a class at Georgetown University Law Center, called (EERILY) "The Law of Robots." I have a six-year-old son, so I have to say it that way. The bravado in the voice-- "The Law of Robots."
This is a really bad idea, because there is no law of robots. I'm teaching a class on vaporware, which is not really my nature. The class considers how we might think about regulating and policing robotic systems-- not from our science fiction future, but from our chaotic, messy, haven't quite figured it out, but dealing with it on a day-to-day basis present.
So I'm going to start today by talking about three revolutions. The first is the Industrial Revolution, which I think most people would credit as starting [INAUDIBLE] somewhere around 1776. It ran through about 1856, and it was a change that happened around the world. For the first time, we had mechanized tools, manufacturing, and distribution in a way we never had before. We created a brand new method of commerce. People were wondering whether machines would replace people.
For the first time, people moved en masse from rural areas to cities. Markets opened around the world. No longer were you the blacksmith for your town. You could be the blacksmith for [? Europe. ?]
So although this was a time of great expansion, it was also a time of great peril. Europe colonized vast parts of the world. And it was a time of labor chaos, as well. So people were worried at the time that there wouldn't be enough work for people-- all the work would be done by machines during the Industrial Revolution.
In fact, the opposite happened-- there was so much work, there wasn't enough labor, there weren't enough people to do work. And it led to 100 years or more of really terrible working conditions. The law, by the way, was like 100 years behind. The laws that would regulate the Industrial Revolution were to follow 100 years later-- things like child labor law, minimum wage laws, and the abolition of slavery, for Pete's sake. We're hundreds of years behind the effects of the Industrial Revolution.
I don't want to say it was all bad. It also preceded an American century of manufacturing and industry. But it lasted about 220 years, and the law really only came in the last 50 to 75. So the second revolution today is the internet revolution. I don't know when to peg its start. Its roots go back to the '70s and '80s, right? But let's call it 1993, maybe, when we all started getting discs in the mail from AOL.
This revolution was an information revolution spawned by inexpensive processing power, cheap storage space, and the internet. This was a software revolution, and it led to a world that was closer together, a world that is more globalized. It led to the democratization of education, and also some colonization. There was a fair amount of digital colonization during the age of the internet, as well.
It brought us chess-playing computers, Wikipedia, and Watson, the machine built by IBM to take on hard tasks. But it also brought an unprecedented myriad of cat videos-- cats dressed as sharks, riding Roombas, chasing ducks-- Jersey Shore, and new ways of sharing pictures of your meals.
As with the Industrial Revolution, our law trails this information revolution every bit as much. The cycles are shorter, and the change happens faster, but our law, in many ways, is still racing to catch up with the implications of this new internet revolution. We face questions about what it means to be in a networked society, and what privacy means. And epistemological questions, like whether Taco Bell should be considered a person under the Constitution.
So it's worth noting what powers the second revolution. What powers this information revolution is Moore's law. I feel a little bit funny talking about Moore's law in a room like this. I have a feeling that many of you have talked about this and know about it. Moore's law is named after Intel co-founder Gordon Moore, who, in 1965, said that the processing power of chips would double, and their price and size halve, every two years.
When he said this in 1965, by the way, computers were as big as this front row of benches, right? It was a truly preposterous notion. But time really sort of bore that out. Every five years or so, someone says, Moore's law is now coming to an end, and then someone invents some crazy new way of implanting chips on silicon and it continues unabated.
This, by the way, is the path of Moore's law. If you look, the red line is the theorized path of Moore's law, and the yellow line is the actual path of microprocessor growth over time.
By the way, I'm going to come back to this. Don't be fooled by this chart. The line looks straight because the y-axis, you'll note, is logarithmic, right? Law people, nod your heads. The [INAUDIBLE] people-- OK. Just trust me-- if the axis were linear, like 10, 20, 30, 40, the line would look like this, all right?
So Moore's law points to a time when machines will be much, much smarter. We can see that a lot today. I think it's clear to say, though, Moore's law tells us, this revolution won't take 100 years to happen.
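To make the chart point concrete, here's a small sketch. The numbers are made-up round figures, not real chip data, but they show why exponential growth plots as a straight line on a logarithmic axis:

```python
import math

# Illustrative Moore's-law series: a count that doubles every 2 years.
# These are round hypothetical numbers, not real transistor counts.
years = list(range(0, 21, 2))             # a 20-year span
counts = [1000 * 2 ** (y // 2) for y in years]

# On a linear y-axis this curve explodes upward. On a log scale it
# becomes a straight line, because the log of the count grows by a
# constant amount per doubling period.
log_counts = [math.log2(c) for c in counts]
steps = [b - a for a, b in zip(log_counts, log_counts[1:])]
print(steps)  # each step is ~1.0: a straight line on a log axis
```

That constant step per period is all the "linear-looking" chart is showing; the underlying growth is still exponential.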
So I want to stick a pin in the calendar here for Watson. Watson was a real moment in the history of this second revolution. It was a natural consequence of Moore's law, of machines getting smarter. And I'll come back and talk about Watson in a little bit. But it was a revolution in itself. It was the first time that people saw that machines could actually be smarter, in a fundamental way, than people. So let me just stick in that pin, and I'll come back to it later.
OK, so we talked about the Industrial Revolution, which really was hardware revolution. And then the internet revolution, which is a software revolution. The third revolution I want to talk about today is a robotic revolution. And this is where those first two revolutions really come together, where the means of manufacturing and production meet this software revolution.
And they are truly embodied together-- machines that have intelligent software that power them that do things we can only begin to imagine today. They perform harder tasks, in some cases, better than we do. And the robotics revolution promises to be every bit as big as the Industrial Revolution and the information revolution, and every bit as transformative.
So that's what we're going to talk about today. This is not a tech talk. I'm not going to build robots in front of you. I'm not going to code anything while I'm up here. I really do want to talk about how we should think about regulating robots, both in law and in code. What does that look like? So three revolutions-- Industrial, information, robotic.
Now I know what you're thinking-- this sounds like a class taught for dilettantes. Robot law is kind of one of these kooky things everyone wants to teach every year. It'll be like a nice gimmick for a law school class.
Robots are science fiction. They're off in the future-- this is Hal, or R2-D2, or something like that. This is something that we don't really have to worry about for 100 years.
It sounds like fiction, but it's not. It's not our future. It's our present. We're surrounded by robots every day.
If you drive a car, the car that you drive, most likely, was built almost entirely by machines. The actual labor that goes into building a car is something like 90% robotic. Some of the most precise surgeries in the world are being performed either by machines themselves, or by machines assisting doctors in the operating room. If you know someone who's had prostate surgery, chances are pretty good that the da Vinci robot was a very important part of that surgery.
And we fight our wars and patrol our borders with militarized drones. These aren't like 'eyes in the sky.' These are the new version of our boots on the ground. And Google's self-driving cars are pretty well-known. How many of you have seen a self-driving car, by the way, in person?
So if you drive in San Francisco, if you take the 101, it's lousy with them. You literally can't drive from San Jose to San Francisco without encountering one of Google's self-driving cars on the road. I was over there about a year ago, and I saw one of these self-driving cars, and I immediately became my 15-year-old self again.
I said, I wonder how good these things are? So I pulled up next to it. I looked over into the car. There's a Google executive explaining to a bunch of venture capitalists how cool it was.
I started easing over into the car's lane. The car responded beautifully. It actually kind of slowed down and moved out of my way. Not so much the Google guy in the driver's seat-- he was horrified.
But self-driving cars are going to be on the road in 2020, and they won't just be tested in California. They'll be tested in upstate New York, Washington, DC-- everywhere, pretty soon. States are beginning to pass statutes that effectively say, please, Google, come test your cars here.
And we're starting to see robotic assists, as well. Robotic exoskeletons are becoming more mainstream. The kick-off for the World Cup involved a kick by a paraplegic man, who controlled a robotic exoskeleton with his mind.
And when you fly in an airplane, much of your flight is controlled by auto-pilot. It used to be that pilots were really only engaged during take-off and landing, but now you'll be happy to know that there is software for that, too. So there's [INAUDIBLE] pilots. You don't need to have that intercom call, 'does anybody know how to fly a plane?' The plane will actually land itself if it has to.
So I could go on and on, but quietly, these autonomous systems are increasing all around us. They're becoming faster and smarter, and they need us less and less. I think it's one testament to how prevalent these things are that we just don't see them. We don't even notice the robots all around us that are doing things like changing traffic patterns. And they're getting smarter faster.
Remember what I said about Moore's law? This is the regular scale. So a linear increase-- you can see, the way most things grow is a linear increase, like that red line. Moore's law says that computers will get better, more efficient, and smarter along the green line.
So brain scientists believe that a computer, to do the work of a human mind, would need an exaflop processor-- a processor that can do a billion billion floating point operations per second. That's 10 to the 18th power. The biggest computers today are on the order of 10 to the 16th power-- about 33 petaflops.
If you look at this graph, you can see where Watson is. The biggest, smartest computers in the world aren't yet quite as smart as a rat, which I find incredibly reassuring. Between where computers are today and where a human brain would be, you would need about a 100,000-fold increase. The thing would need to get 100,000 times more powerful before it rivals a human mind-- also very reassuring.
But-- and this is an important but-- the growth is exponential. So these things aren't increasing along the red line I showed you before. They're increasing along the green line.
So these same brain scientists and computer scientists now think that a computer will have that exaflop processor somewhere around 2018-- not in our distant future. In our very near present, a computer will have more processing power than a human brain. And before too long, that computer will be the size of a phone in your pocket. Those same scientists think, by the way, that by 2026, a single computer will have more processing power than every person who has ever lived, combined. So this is not something for our remote future.
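For what it's worth, the arithmetic behind projections like this is easy to sketch. Using the talk's round numbers-- roughly 33 petaflops today, an exaflop (10 to the 18th) as the brain-scale target, and Moore's-law doubling every two years-- a few lines of Python show how few doublings separate the two. These are the talk's figures, not precise benchmarks, and the exact year depends entirely on the assumptions:

```python
import math

# The talk's round numbers, not precise benchmarks:
today = 33e15    # ~33 petaflops, today's biggest machines
target = 1e18    # 1 exaflop, the brain-scale estimate cited

doublings = math.log2(target / today)   # doublings still needed
years = doublings * 2                   # at one doubling per 2 years

# Roughly 5 doublings-- about a decade at Moore's-law pace.
print(round(doublings, 1), round(years, 1))
```

The exponential framing is the whole point: a 30-fold gap closes in a handful of doublings, which is years, not generations.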
And I want to underscore the exponential nature of this growth, and the pace of it, with a story. This is a story in part about Watson, but more a story about Watson's grandfather, a computer called Deep Thought. So in 1988, there were two computer scientists named Murray Campbell and Feng-hsiung Hsu. And they had a crazy idea-- they wanted to build a chess-playing computer.
Now, there were a lot of people who had built chess-playing computers before, but these guys wanted to build the ultimate chess-playing computer. And they wanted to challenge the greatest grandmaster in the world to a game of chess. Now, this was silly, because at the time, the best chess-playing computers never really could beat a grandmaster at chess. The grandmasters were much, much better.
So they wanted to beat a grandmaster. And they didn't just pick any grandmaster. They picked the Michael Jordan of chess-- Garry Kasparov. So they challenged Garry Kasparov to a game of chess.
Let me underscore-- Garry Kasparov was the man, all right? He was undefeated in match play for like 10 years. From 1986 to 2005, no chess player ever beat him in match play. So they played in 1988.
And Garry Kasparov did what everyone thought Garry Kasparov would do. He toyed with the machine. He destroyed it. And after the match, he said, one day, someone will invent a computer that it will take all of my faculties to beat. But that is many, many years from now.
It was a huge embarrassment for Campbell and Hsu. But one important thing happened after that-- IBM took notice and hired the guys. They were grad students at Carnegie Mellon. IBM brought them in-house and said, this is an interesting technology. We'd like to invest in it-- go.
So they began, in 1988, to build Deep Blue. And I think IBM sort of understood that exponential curve, and saw where things were pointing. So in 1996, eight years later, they challenged Kasparov to a rematch. And Kasparov, being very Kasparov, said, of course. I'll play your machine any time you want.
And actually, this time, Deep Blue won one of the six games, but Kasparov won the others very handily. It really wasn't even close. So it was curious, then, when Campbell and Hsu came back one year later, in 1997, and said, again.
Kasparov says, guys, you had eight years. How much better do you think this machine could have gotten in the last one year? Well, the answer was, much better.
In 1997, Garry Kasparov won game one of a six-game series. And that was important, because it was the last time that a grandmaster was ever able to beat the best chess-playing computer. The best chess-playing computers have been better ever since that 1997 match.
Deep Blue won game two, played games three through five to draws, and then, in the sixth game, destroyed Kasparov. It was embarrassing. It wasn't even close.
And humans have not been able to beat the best chess-playing computers ever since. So what happened? Kasparov accepted so confidently in 1997 because he was thinking along the red line. He was thinking about linear increases-- how much better could it have gotten in the last year? But the answer is that Deep Blue was learning along the green line. And when you get to that inflection point, the answer is, it could learn a lot in that year.
I tell this story because it's kind of a cautionary tale. When we think about robots, we think about stupid toys, right? We think about remote-controlled helicopters. But these remote-controlled helicopters aren't going to be helicopters in a couple of years. They're going to be much more functional robots.
And we all know what happened from there. Deep Blue spawned Watson, another IBM project, which destroyed the two best Jeopardy players in the world, Ken Jennings and Brad Rutter, in 2011. People thought Jeopardy was different. It wasn't like chess, with mathematical rules. It requires understanding puns and word puzzles. There's no way computers are going to understand that-- turns out they can.
So today, doctors are using Watson for telemedicine. People are able to lease cycles of Watson. By the way, one of the biggest groups that is leasing cycles of Watson right now-- law firms.
[INAUDIBLE] is doing a lot of purchasing of cycles of Watson to make the next generation of legal systems. Let me introduce you to your lawyer-- Watson LLC. Actually, we're not quite here today. We're more here. Our processors really only have the power of a rat's brain.
But it's important to remember that the growth is exponential, not linear. And so these systems will get smarter much, much faster, and in ways that we can't anticipate. And like the Deep Thought computer of 1988, we shouldn't dismiss it. We shouldn't shrug it off. We should look into our very near future and see what's coming.
I think it's worth saying a word here about artificial intelligence, which is a term that I actually hate. Artificial intelligence is the intelligence we attribute to computers and robotic systems. But I'm not sure what's artificial about it.
So computers aren't very good at figuring things out yet. The processing they do is pretty rudimentary. It's sub-rat. But that doesn't mean it's artificial, right? It just means that it's clumsy.
So it turns out we humans are not very good at multiplication. We're slow at it. We get it wrong all the time.
Computers are much better at multiplication than we are. Does that make what we do artificial math? No, it's just math. We're just not very good at it. The difference is, we're not going to get much better at it.
Intelligence is something machines are going to get much better at and very fast. So I don't like to call it artificial intelligence. I think of it more like emergent intelligence.
By the way, can you imagine a computer that you turned on in the morning and it took 30 minutes to boot? And then it worked slowly all morning long, trying to figure out where its drivers were, how to get started, what programs it had available. And then, suddenly, around noon, it started to get a little bit better, and would work pretty well from about one o'clock until about 2:30-- creatively, in ways you hadn't expected, organizing your inbox, figuring out all kinds of problems you didn't even know you had.
But then, at 2:30, it began slowing down again, and from 2:30 to 4:30 it got incrementally slower, until the machine eventually crashed at 5:00 in the afternoon. We have this computer. It's called man.
One of the problems we have with our intelligence is that we have to start from scratch with every single one of us. Every generation has a group of people who start as babies. They don't know anything.
We spend most of their lives trying to get them up to speed. By the time they're capable of worthwhile contributions, real original thought, you have a very finite window. And then, over time, we begin to lose that capacity. Machines don't have that problem. You can actually carry that intelligence forward forever.
I think it requires a fundamental re-imagination of what we mean by intelligence. And I do think that calling it artificial is maybe a little bit unfair to the machines. I like to call it emergent intelligence.
OK, so artificial or not, our machines have vast capacity for intelligence, not in our distant future, but right now. Not an issue for our grandchildren-- an issue for us. Unlike Garry Kasparov, we are armed with the knowledge that computers grow smarter exponentially fast. And we can see where the lines cross somewhere in 2018.
So armed with this information, we have some choices to make. Our future with robots can be really great, or it can be scary as hell. And that choice is really left to us, which depends, in part, on how we build robots and how we regulate them.
So how do we regulate robots? Our third revolution is a challenge for today, but we regulate it with the laws of the past. We have issues for 2015 that we're regulating with laws from 1815. We can use statutory law or common law, we can reason by analogy, but that's really all we have today. And I think it's useful when thinking about robotic law to draw lessons from the last revolution, from cyber law's revolution, to see what successes we had, what worked, and apply that for robot law, going into the future.
So in the beginning of cyber law, there was this great conference at the University of Chicago in 1996. Browsers were just sort of getting off the ground. Netscape was very hot. AOL was a really big deal. And at this conference, The Law of Cyberspace, Judge Frank Easterbrook sort of led it off. And Judge Easterbrook began this conference by throwing a gigantic pail of ice water on the entire thing.
He said, what in the world is cyber law? This is the dumbest thing I've ever heard in my life. I'm very proud there's no such thing as the law of the horse. We don't have to make laws every time there's something new.
It's called common law, and what you do is, you apply the law as it exists to new facts. We do this every day. By the way, for those of you who want to do cyber law, don't. Because every time we try to regulate something new, we screw it up. We're really bad at it. So cyber law doesn't exist, and don't try to make it exist. Enjoy the rest of the conference.
Also speaking at the conference was Professor Larry Lessig. And Professor Lessig said, OK, of course we use common law to regulate things that are new. We apply the law to new facts. It's thinking like a lawyer to reason by analogy. This is what we do.
However, cyber law is different. There are some assumptions of the real world that don't carry forward into the cyber world. For example, people don't self-authenticate. You can't tell whether someone who comes to your store is a kid. Things don't self-authenticate on the internet the way they do in the real world-- just one example.
But there are many reasons that cyber law is different from telecom law or the law of property, and we should treat it differently. He wrote a long article about this, called "The Law of the Horse: What Cyberlaw Might Teach." And I think you could boil the question down to, is cyber law exceptional?
Is it so different that we really do need a different set of laws to regulate it? Or is it just another example of something we see all the time? Can we just apply existing law to cyber law?
I don't think I need to connect many dots to tell you that this is the question we ask in my class-- are robots exceptional? This is a question posed by Professor Ryan Calo at the University of Washington. But the question is very simple-- if we apply common law to this new species of machines, these thinking machines all around us, with capacity to harm in a way that a PC doesn't, with capacity to act autonomously and make decisions in a way that your PC doesn't-- if we apply common law and existing statutory and regulatory law to these new machines, what does it look like? What's the outcome?
It's not a very fancy inquiry, I have to tell you. 'Are robots exceptional?' boils down to, are we happy with the outcomes when we employ existing law to robots? So that's the question that we ask in my class.
In some cases, we're going to reason by analogy. We're going to take existing law and apply it to robotics where it will fit, and then we just see if we like the outcome. Does it serve our values as society, the values like trying to create the least amount of damage, trying to compensate victims, trying to protect people, trying to punish people when they do wrong, trying to create efficient modes for innovation and transacting commerce?
If we apply the old law to the new facts, do we get the kind of world that we want to live in, or is it a world we don't want to live in? It's hard, by the way. I mean, our students are going to write papers about this. We'll see how well they do. It also involves extrapolation. So you have to say, here's how it applies to the robotic systems of today, and here's how it applies to those same systems 15 years from now, when they're 20 times smarter than we are, when they have a much greater capacity for independent thought.
So, a couple of examples of how we do this analysis. For tort law-- the whole base of tort law, and especially with automobiles, is kind of founded on the idea that when someone acts negligently, when they aren't careful enough, and they create damages, they're responsible for those damages, or maybe their insurance company is responsible for those damages. It's based on negligence. It's a question that you can ask a jury-- did this person make a mistake?
We're not there today, but in the future, the self-driving car conversation will be much different. We'll live in a world where the computers all make completely optimal decisions, perfectly, every time. There'll be mistakes, too, but the vast majority of accidents will be accidents between two parties that were not at fault at all, acting completely rationally in every single way.
When you take that negligence case to a jury-- a jury of its peers, by the way-- I think it's going to be pretty hard to put one together, right? Empanel a jury of 12 people and ask them to figure out which AI's assumptions were more rational, and I think they're going to have a really hard time. And even if they really can get to the root of the matter, really can understand how the algorithms work and what decisions they made, they may often find that nobody was negligent. Both machines acted completely rationally and non-negligently.
Which makes a lot of people think that, in the future, we may move from a world of negligence in tort to products liability. In a products liability regime, you might say that even though the mistake wasn't made by the machine itself, the machine was created, or programmed, or manufactured in a way that inherently causes harm. And you might have compensation for injuries under products liability that is much lower than it would be in tort, but also with a different standard of proof.
Incidentally, I think this also changes how we prove things. For the non-lawyers in the room, it's relatively easy to put on a negligence case. You ask whether the person acted reasonably or not, and a jury decides whether they think it was reasonable or not.
Products liability is much harder. It's much more expensive to put that case on. You have to have a deep understanding of the manufacturing process to prove that a product was built in a faulty way, or designed in a faulty way. It's a much harder and much more expensive case to make, and it may lead to fewer people being able to seek a remedy when they're injured in self-driving car accidents.
By the way, I should hasten to add, this should also completely change the way we insure cars. If you are manually piloting your car, that's going to seem like a very risky decision somewhere down the road. You may even pay higher insurance rates because you chose, manually, to drive the car.
I think on the deep end of this pool, the software is going to have some very difficult decisions to make. A good example-- if your self-driving car is about to be in an unavoidable accident with a school bus, does your self-driving car take action to save your life, or does it take action to save the most lives? It's going to be a software decision. We don't get to make that choice. Or if we do, we might have to regulate it.
As a society, we might say that we want to save the most lives. We want to create the smallest amount of damage in the field. When you're buying a car, you might want to know that it's going to act to save your life. These are kind of our decisions, they're policy decisions, and they're policy decisions that may get decided in law.
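A purely hypothetical sketch of the point being made here: whichever way society decides, the choice has to be written down somewhere as an explicit, regulable policy in code. All the names below are invented for illustration and reflect no real vendor's system.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable crash (hypothetical)."""
    occupant_deaths: int
    total_deaths: int

def choose(outcomes, policy):
    """Pick a crash outcome under an explicit policy string."""
    if policy == "save_occupant":
        return min(outcomes, key=lambda o: o.occupant_deaths)
    if policy == "save_most_lives":
        return min(outcomes, key=lambda o: o.total_deaths)
    raise ValueError(f"unknown policy: {policy}")

# The school-bus scenario, in toy numbers:
swerve = Outcome(occupant_deaths=1, total_deaths=1)   # car sacrifices occupant
stay   = Outcome(occupant_deaths=0, total_deaths=20)  # car protects occupant

# The same situation, two different policy regimes:
print(choose([swerve, stay], "save_occupant"))    # stays: occupant lives
print(choose([swerve, stay], "save_most_lives"))  # swerves: fewest total deaths
```

The interesting part isn't the code, which is trivial-- it's that the `policy` argument is exactly the kind of thing a legislature or regulator could end up specifying.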
OK, so we've begun regulating self-driving cars. There are statutes in five states that regulate self-driving cars. But what I think is interesting about our first take at this is that the statutes typically regulate the driver. They require certain licenses to be able to operate a self-driving car. They decide who the operator of a self-driving car is when there's no one in the car. Who has liability?
There are special requirements in DC that say you can have a self-driving car all you want, but you need to be sitting in the driver's seat the entire time, ready to take the wheel if you need to-- which kind of takes all the fun out of a self-driving car, I think. Incidentally, Google has a self-driving car that has no steering wheel at all. You can't drive it in DC, because DC says you've got to be ready to take the wheel.
So what I think is interesting about these is that they are all regulations on people, and they're not regulations on cars. So for example, if a self-driving car is driving on the road and a police car pulls up behind it, there's no requirement that the self-driving car pulls over. How do you code that in to recognize police cars and actually pull over when you're supposed to? The regulation's on the driver.
There's also no special license plate requirement. I think I would want to know if there's a self-driving car operating next to me, right? But there's no regulation of that either.
I think most importantly, none of these states has said that self-driving cars have to be any good before getting on the road. You'll have Google self-driving cars. I'm sure you'll have Apple and Audi and Mercedes-Benz. But at some point, we're also going to have, like, a Yugo self-driving car, powered by a Palm Pilot, or something. Do you guys know what a Palm Pilot is?
Different lecture-- there are no requirements about how good the self-driving mechanism needs to be. It can be terrible. And as of right now, it's treated exactly the same way as the best self-driving system on the road. One of the big benefits of self-driving cars is that cars will be able to recognize other cars, detect traffic patterns, and avoid accidents.
But as of right now, they can't talk to each other. There's no standard. No one has said this is the platform on which all these cars are going to talk to each other.
So how do we do that? Is it going to be Apple, or Google, or somebody else that does it-- Microsoft? We're going to have a Windows 10 for traffic patterns? No one's really sure, right? And it will end up being a question of innovation, but also a question of regulation.
When robots commit crimes, when a self-driving car speeds, or murders somebody, how do we decide who is responsible? So today, if you apply existing law, we have pretty good mechanisms for the use of tools. If you break into someone's house with a screwdriver, the screwdriver manufacturer isn't liable, right? It's an instrumentality.
And if you use a robot to commit a crime but the robot wasn't really acting in a robotic way, it wasn't making any autonomous decisions, I think everyone would say the person who is robbing a house with it is the one who is responsible. But what if the robot starts making decisions along the way? When you're breaking into the house, the robot wasn't instructed to kill somebody, but it actually does-- who's liable then? Is the burglar liable as an accomplice for murder? Is the robot itself liable?
Take a different example-- you ask the robot to mow your lawn, and the robot goes and murders your neighbor. Now this is a completely different example. How much liability do I have for instructing my robot to go mow the lawn without really knowing that it was going to murder my neighbor? There are two actors in this story, by the way. What liability does the person have-- the owner and operator, maybe the manufacturer or the software developer?
But also, what liability does the robot have? We're going to have to figure out a way to punish robots. I mean, what do you do? Do you throw a robot in jail? What does five years in jail mean to a computer that will live forever-- the blink of an eye, right? Does a robot even have a concept of jail, of time, of anything at all?
Do you wipe the computer's memory? Will that even matter when people have backups of their computers, so when you wipe the robot's memory, it just restores to the last backup? Look, I mean, these are ridiculous questions, right? But there's going to have to be a way of regulating the autonomous criminal behavior of robots.
One of the most important things that we talk about in this course is that we have a very good way of regulating people. We have a really good way of regulating property. We don't have a whole lot in between.
Metaphorically, you could compare robots to children, to pets. At some point, will we end up metaphorically treating them like us, like people? Somewhere in our extrapolated future, our robots are every bit as human as we are, and you can't really tell the difference between a robot and a person. Will our robots have individual criminal responsibility?
And by the way, who's going to enforce this criminal law? There's an APB out for a robot. Who's going to go get it?
Robots can be really dangerous things. They could be armed with all kinds of things. Are we going to send a uniformed police officer to go get a robot that could be armed with God knows what? It seems like a very dangerous proposition.
Copyright law-- I think there's not a whole lot to be said about this. Robots and computers right now are making some really awful music. But the question is, can you take that music and use it yourself, or is that music copyrighted? And if it's copyrighted, who owns it?
Related-- when a monkey takes your camera and takes a picture of himself, you don't own the copyright. When a computer takes your algorithm and makes a hit song, do you own the copyright, or does the computer own the copyright? Should we have the same copyright law for machines when they make music, or when they paint pictures?
Do they need to be incentivized to do these things the way people are? Who knows? Some people have said they should have a copyright, but over a shorter duration because you don't need to create the same incentives.
This last Fourth of July, somebody took a drone and flew it up into the fireworks. Anybody see this? It captured some really beautiful images of the fireworks at fireworks level, which is extremely cool.
The problem is, my uncle Jerry is going to have one of these in a couple of years. Uncle Jerry is a little bit crazy. Everyone's going to have these things.
Next Fourth of July, it won't be one drone. It'll be 300 drones. And the Fourth of July after that, there'll be 3,000 drones. Is it just me, or does this seem like a huge mess? I mean, drones will be falling out of the sky, whacking people throughout the entire celebration.
We have some regulation of drones. We have property law and trespass law that says where you can and can't fly these things. But that only covers up to about [? 80 ?] feet, and below a certain FAA threshold as well. In the middle there, it's the Wild West. Anybody can do anything they want.
You can imagine the parade of horribles that come from these drones. For example, why do paparazzi need to stake out anymore? Hover a drone outside the window.
Can you imagine the cloud of drones outside Angelina Jolie and Brad Pitt's house? And what recourse do they have if the drones aren't trespassing and they're not in regulated airspace? There really isn't any recourse. There's really nothing they could do about these drones.
OK, so these aren't issues for 2030. They're not for our future. They're issues for 2015. Our law's going to have to change, but how? And it's time to start asking these questions and working on the answers, and fast.
So these are hard questions. They don't have easy answers. Are robots exceptional? In some ways, they certainly are, but I don't think the answer to that question is going to be binary. It won't be they are exceptional or they aren't.
The challenge of law and of intellectual property and information science is going to be to figure out where robots are exceptional-- where applying our existing law leads to outcomes that make us very upset-- and where they're not exceptional, where we can simply apply Frank Easterbrook's common law and not create a law of the horse. These are the questions we're asking in "The Law of Robots" this fall.
So here's my call-- robotics is our third revolution. It combines the Industrial Revolution and the Information Revolution in thinking machines. This robotics revolution has the potential to be every bit as big as the Industrial Revolution and the Information Revolution that the internet brought us.
It's going to present some challenges. It's going to surprise us. It's going to terrify us. It also has the potential for a great deal of good if we can get out ahead of it, if we can regulate robotics effectively and intelligently, and not 100 years from now, but right now, before it occurs. There's a huge opportunity.
If you look at the Industrial Revolution, one thing it did was change the labor economics of the world. We saw that work moved around the world, in a globalized world, to the place where the labor was the least expensive. So as a result of that, jobs that were grown in the United States, for example, moved to different places in the world-- China, The Philippines, Vietnam.
The robotic revolution holds the potential that many of those jobs could come back. Remember, in the Industrial Revolution, people were worried-- are we all going to lose our jobs?-- when exactly the opposite was true. It created an era where there was so much abundant work, there weren't enough people to do it. Robotics has every bit of that potential.
If the work is going to move around the world to the place where it's done least expensively, you could imagine a manufacturing revolution. And by the way, a lot of the R and D being done for this robotic work is being done right here, in the United States. So you could have a new era of manufacturing in the United States, done robotically.
I think it's natural to worry that there won't be any jobs for us in that world-- legal jobs, medical jobs, information science jobs, maybe deans and law professors, as well. But I think it's more likely to be the opposite-- there will be untold new jobs, jobs building robots, manufacturing robots, policing robots. All kinds of new manufacturing jobs that we never could have dreamed of a century ago. Whether this world is going to be a dystopian nightmare, though, or an entirely new century of worldwide progress, really depends on us thinking now, ahead of time, how we regulate robots not in our remote future, but in our present. Thank you very much.
Ed Walters, CEO of Fastcase and adjunct professor at Georgetown Law, discusses emerging ideas and issues around regulation of robotic systems, Oct. 2, 2014. Co-sponsored by Cornell Law School and the Information Science department.