REBECCA SLAYTON: I'm Rebecca Slayton and I'm jointly appointed in the Science and Technology Studies Department at Cornell and also the Reppy Institute for Peace and Conflict Studies. I've been working with Professor Fred Schneider in computer science to organize this speaker series, which is sponsored by the Mario Einaudi Center for International Studies and is done on behalf of the Cybersecurity Working Group. The Einaudi Center brings together thinkers from many different disciplines to address complex international issues.
We're especially interested in issues that require technical and non-technical perspectives and where the stakes for ordinary citizens are high. Cybersecurity is clearly one of those issues. We've just come out of an election season that was troubling for many reasons, but one of them was that there appears to have been some use of hacking by other countries in an effort to affect US electoral politics. Today we'll be hearing about the role that law can play in enhancing cyber preparedness.
We're delighted to welcome Fred Cate, who is Distinguished Professor and the C. Ben Dutton Professor of Law at Indiana University's Maurer School of Law. He has a number of other distinguished titles. He is Vice President for Research at Indiana University, as well as the Director of the Center for Law, Ethics, and Applied Research in Health Information. He also served as the Founding Director of IU's Center for Applied Cybersecurity Research, a National Center of Academic Excellence in Information Assurance Research and in Information Assurance Education, from 2003 to 2014, and he's now a senior fellow there.
Professor Cate has also been very active in both national and international policy. He is a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee and its Cybersecurity Subcommittee. He is on the National Security Agency's Privacy and Civil Liberties Panels, and he's also on the OECD's Panel of Experts on Health Information Infrastructure and Intel's Privacy and Security External Advisory Board. I could go on listing many other distinctions, but I don't want to take more time from our speaker or from our discussion, so I'll just ask you to join me in welcoming Professor Fred Cate.
FRED CATE: I'm just delighted to be here. It's a great pleasure. It's a great pleasure to be back with my friend and colleague Fred Schneider, who is the father of modern cybersecurity. So if anything is not working well, feel free to ask Fred about it. I always feel like a little bit of a fraud when I talk about cybersecurity because I don't know the first thing about the technical side of cybersecurity.
What I do know is a fair amount about the policy side of dealing with information governance, whether it's governance around privacy or governance around liability for use of information, or for the past 15 years focusing in on the security of information and what happens when the security of information and/or networks is compromised. So what I'm going to do is work without PowerPoint.
That leaves me a lot of freedom to change what I'm going to say in the middle of saying it. I do use notes, so if it looks like I'm looking down here, I just want to make sure I say the right things at the right time. And then my goal is to leave a lot of time for questions and answers for you to comment because, of course, it's a great opportunity for me to hear your thoughts about these issues.
So the topic that I really want to address is about the role of law in cybersecurity. But I want to just start back and make sure that we are on common ground or that if you disagree with the ground that I think we're on, you have a chance to say so early. So the first and I think inescapable point is just to remind ourselves how much data we generate every minute of every day in our society.
All of that data that comes from our phones, which are constantly ratting us out, providing our location, collecting information about us, in many cases sharing that with others; the data that comes from the sensors with which we surround ourselves-- the sensors in our cars, in our offices-- the data that's part of all of those digital communications that we send every minute of every day. When giving talks like this, particularly at public libraries and rotary clubs and places like that, I used to use numbers.
But then all of the millions turned into billions and then the billions turned into trillions and at a certain point, the numbers have no meaning anymore. We are generating such a sea of data, an avalanche of data. We are awash in all of this data. And then, of course, there's all the data collected about us, with or without our knowledge and consent. All of our searches on the internet, all of our interactions with computers in any form, the data collected by the sensors that we encounter every moment.
Remember, every one of you right now, I'm willing to bet, is carrying a cell phone, and that cell phone has got video and audio and location information. So it's not even reasonable to think any longer about being aware of the data that we're generating, because of course I have no idea what data you're collecting, other than that I'm wearing a mic. So we have this massive volume of data, and we use these data increasingly to determine critical things about us-- who we are, how we verify ourselves, our creditworthiness, whether you get access to your home or not when your digital locks choose whether to recognize you.
We use these data all the time to make critical financial decisions, medical decisions. In health care today, increasingly, we talk about precision medicine or personalized medicine, all based on getting data about the individual to make decisions about that individual, rather than about groups, or to make larger assumptions unsupported by data. OK. Two, we increasingly use data and data systems to do things in our economy, to control things that we might previously have controlled with a physical interface.
We now use an electronic, digitally controlled interface. So whether we're talking about controlling a nuclear power station-- which is, of course, done today almost exclusively with computers and those computers are networked-- or any of the critical infrastructure, the fundamental infrastructure. So for example, the switches on train tracks that move the train from one track to another as it moves used to require a person to go out and move a physical lever. Then we cleverly linked that lever to a physical switch up in a tower and then somebody said, let's connect that lever to the internet.
And now the vast majority of switches in the United States on train tracks are wirelessly controlled. They are controlled from the internet. That is true of all pipeline controls and natural gas pipelines, all controlled from digitally linked sensors. So we are increasingly using these information sensors to do things that have highly instrumental effects. Just in time supply chains, go in, you buy something at Target, they scan the barcode, the system says, order another one of those things. And because of that, they keep almost none on the shelf because the system is so digitally managed that the flow of data tries to ensure that there's always a new one there.
Just in time is what we call it, but it's almost just always not in time. Think about taking your car to be worked on. They say, you need a new carburetor, you need a new fuel injector, you need a new whatever, and then one gets ordered for you. It is a data-instrumented system. Parts aren't stocked any longer, and we save a lot of money by doing it that way. Increasingly, of course, we've been talking about the role of digital systems in the transportation system, and particularly in cars.
Not just the thought of autonomous vehicles, but think about all of the ways today in which we use technologies to augment the operation of cars-- the computers that calculate the fuel injection, the computers that tell you if you're backing into somebody, the computers that provide your location, the computers in high end cars that monitor your retina to see if your eye is going to sleep so that it can shake the wheel to wake you up, the computers that tell your car how to park, the computers that increasingly will drive on cruise control not just to maintain speed, but to maintain lane control. Those are all digital systems and they are all addressable in one form or another in a networked way.
We use the same thing, by the way on airplanes, and particularly modern airplanes. It's not just that they are electronic, that's been true for quite some time. The days of you move the rudder pedal and that connects to a cable that then moves the rudder are long over. It's long been the case that you send an electronic signal, but that electronic signal is now increasingly on modern planes addressable wirelessly.
So if you think about the Airbus A380, the big double-decker, very fancy plane: as with all planes, in order to get FAA certification it has to have three managed systems for every control surface. The third system is an iPad that the pilots can use if the other two systems go out. How does that iPad connect to the plane? It does so wirelessly.
Is there a special wireless connection? No. They use the same wireless hub they use for gaming and for your connections to the ground. So these are all ways in which these electronic systems are not just collecting and using data about us, but are actually controlling the infrastructure of our lives. They're opening and closing doors, causing cars to stop or to move, and having other practical effects. Now, three, the simple fact is these systems and the data that they generate or use are not typically secure.
And in fact, it increasingly looks like we don't know how to really secure a system and still have it be workable. I want to be clear. We can absolutely secure a system if we don't care whether it works, right? Just take your computer, put it in a vault, no problem at all. But of course, then you can't connect with it and you can't use it for anything. So if you want to have a system that is accessible, that is useful, that is economically affordable, you necessarily today end up with a system that has some degree of insecurity attached to it.
And that insecurity seems to be growing, not shrinking, over time. So let me just give you a couple of quotes to place us in time. In 2009, the White House wrote in its review of cybersecurity systems: the architecture of the nation's digital infrastructure, based largely upon the internet, is not secure or resilient. Without major advances in the security of these systems or significant changes in how they are constructed or operated, it is doubtful that the United States can protect itself from the growing threat of cybercrime and state-sponsored intrusions and operations.
So that was 2009. That was the conclusion of the president's 60-day cybersecurity review, which President Obama launched immediately upon coming into office. In 2010, former NSA Director and Director of National Intelligence Mike McConnell wrote in The Washington Post: the United States is fighting a cyber war today, and we are losing. It's that simple. Those are pretty unambiguous words. I mean, you might look at threat analyses and try to figure out what hidden, subtle messages they're giving you, but here it seems kind of unmistakable.
As the most wired nation on Earth, McConnell continued, we offer the most strategic targets of significance, yet our cyber defenses are woefully lacking. The stakes are enormous. 2010 also marked the first year that cyber threats appeared on the assessment of national security threats facing the United States, and it headed the list. Suddenly, it leapt to the top of the list of national security threats that we face as a nation.
And so just look at what's happened since then, right? You mentioned the elections, we've had Russian hackers in the DNC; we've had Russian hackers or hackers associated with the campaign in Arizona and Illinois breaking into state voter databases; we've had major breaches in almost every industry sector you can think of, whether you think Target, Anthem, Sony, which has had repeated breaches. In many cases, we think there are state actors involved in these breaches.
We don't always know. We have evidence that leans in a direction, and we have a certain amount of speculation. It's now become almost trite to say, but there are only two types of institutions in the world: those that have been compromised and know it, and those that have been compromised and don't know it. Those are your only two options when we think about cybersecurity.
The US government seems particularly vulnerable. And again, I could go through lists and lists of federal agencies, there's almost nobody exempt. But I think we need look no further than the June 2015 disclosure that the Office of Personnel Management had been hacked, 21.5 million background security check files containing some of the most sensitive information you could imagine in a personnel record. If ever there was a crown jewel that you were going to protect, it would be these files and we lost all of them.
But it's not just attacks on data, it is also attacks on systems that we are both seeing and also we are learning more about the capability for. So maybe the earliest example of this is, of course, Stuxnet, the virus allegedly created-- very targeted-- by the US and Israeli intelligence that went after the control system for Iranian centrifuges-- a virus which, by the way, we still see in the wild today. It reduced the Iranian centrifuge capacity by 30%. But since that time, it's also shown up in computers in Azerbaijan, India, Indonesia, Pakistan, and the United States, not to mention other countries.
We've seen attacks that have ended operations at Venezuelan oil refineries, destroyed computers at the Saudi Aramco oil refinery, shut down portions of the Ukrainian power grid, and, just in the past year, stole $81 million by exploiting weak controls at the Bank of Bangladesh, which were then used to obtain the money out of the banking system from the Federal Reserve Bank of New York. These are practical, kinetic effects.
These aren't just data that's been stolen that we worry about; this is where we see systems attacked in operation, right? We saw the denial-of-service attack, the DDoS attack, on Dyn just in the past month. That slowed down a prominent DNS provider serving some of the most widely used websites and services in the world. We just had announced in the past week the Friend Finder networks attack, 410 million records.
Let me just say, for those of you not familiar with Friend Finder networks-- and you should be because they've had three major breaches in the past year and a half-- in this case, they run an adult escort finding service, as well as provide services for other networks, including the Penthouse online system-- no longer, they used to. It's a little hard to know how they got 410 million accounts since they don't have anywhere near that many active account holders.
So what we've learned out of this is they saved the data of each of their old account holders, even after they ended their contract with Penthouse and so no longer had any legitimate reason to have that data. They kept it so that somebody could break in and steal that information. And the information was either all in the clear, in plain text, or where encrypted, the encryption was so weak that the people who have analyzed it have been able to determine 95% of the passwords that were taken.
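To make this concrete for the technically inclined: reporting on the breach suggests the passwords were stored as fast, unsalted hashes (SHA-1 is the commonly cited algorithm; I'm assuming it here). The sketch below, with entirely hypothetical data, shows why that fails: an attacker hashes a dictionary of common passwords once, and because there is no per-user salt, that single table cracks every account that chose a common password.

```python
import hashlib

# Hypothetical leaked hashes, stored unsalted (SHA-1 assumed, per breach reporting).
leaked_hashes = [
    hashlib.sha1(pw.encode()).hexdigest()
    for pw in ["123456", "password", "qwerty"]
]

# A tiny dictionary of common passwords; real crackers use lists of billions.
wordlist = ["letmein", "123456", "password", "dragon", "qwerty"]

# Precompute hash -> plaintext. No salt means one table covers all users.
lookup = {hashlib.sha1(w.encode()).hexdigest(): w for w in wordlist}

recovered = [lookup[h] for h in leaked_hashes if h in lookup]
print(recovered)  # -> ['123456', 'password', 'qwerty']
```

A salted, deliberately slow hash (bcrypt, scrypt, Argon2) defeats exactly this precomputation, which is why the 95% recovery figure is a sign of negligence rather than attacker brilliance.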
So again, ignoring the most basic things we know about security-- saving old data, not using adequate protection, and not learning from prior mistakes. So we're left with this painful conclusion that as of this moment, nothing is actually secure if it is also usable. We can raise the cost of attacking a system to be certain, but we cannot say with any confidence, give me your data in any usable way or let me have control of your car or your house or whatever and I can secure that, guaranteed.
Now, fourth-- and really very much the point of why I'm particularly happy to be here today-- we often think about cybersecurity, and the government most commonly talks about cybersecurity, as a technical issue, right? Think about when President Obama holds his cybersecurity summit: he goes to Silicon Valley, because that's the center of the high-tech industry. In reality, cybersecurity is increasingly proving to be much more associated with the challenges of individual and organizational behavior, and with legal and economic incentives, than with technology.
Now, I want to be very careful about what I'm saying here, before the whole building comes collapsing down on me and they find nothing but my broken corpse here in the morning. I'm not suggesting that technology is not important-- indeed, it's critical in the fight against cyber attacks-- but, rather, that technology is not where our greatest vulnerability is. It is, rather, in institutional and organizational behavior and in legal and economic incentives for behaviors we have long agreed are desirable.
So let me just give you some examples for why I say this. You're obviously welcome to disagree with me, most people do. So one of those is that in most of the major attacks we've seen-- in fact, in all but maybe two or three of the really large, major published cyber attacks we've seen in, say, the past three or four years-- they've started with a phishing message, they've started with a social engineering attack. There was no brute force attack, nobody cleverly broke into the system breaking through the technology.
Instead, they found a human who would very kindly give them the way in. In some ways, it doesn't even seem like a fair way to break into a system. But either by suborning somebody or by confusing them or by blackmailing them or by just getting them to fall victim to a phishing message, all of these attacks-- virtually all of these attacks-- start with a human failure, a failure to protect credentials. So no amount of good technical cybersecurity, no amount of technology is going to protect us if the people who have lawful intended access are giving away their credentials, whether that's giving them away to a phishing message or to a friend or family member or just giving them away by choosing such a weak password or reusing it across sites.
Remember also, most of the successful attacks that we have seen in recent years have exploited vulnerabilities which we have known to exist, in most cases for years. In other words-- once you get the credentials, you break into the system, and you want to move through the system into places where that user did not have authorized access-- this takes advantage of vulnerabilities for which there are patches or other remedial tools available; they just haven't been installed, or in a few cases they've been installed incorrectly. So again, how are we going to protect, how are we going to use technology to protect, if, in fact, people won't deploy that technology even when it's given to them for free, in an automated fashion, to use?
We continue to see widespread use of very weak cybersecurity protections like single factor authentication, passwords. Passwords have been poor protection for years, for ages, we know lots of reasons why. There are easy ways around that, multi-factor authentication being the most obvious-- you require something other than just something you know. Usually, that's done by accessing your cell phone, sending you a text message, or providing some other link. It can be done through a token, it can be done in any number of ways.
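For those curious what "something other than just something you know" looks like in practice, here is a minimal sketch of a time-based one-time password (TOTP), the scheme behind most authenticator apps and many text-message flows, following RFC 6238 (the secret here is just the RFC's published test value):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: float = None, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30-second steps)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // interval            # number of elapsed time steps
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's phone share the secret; a stolen password alone
# is not enough, because the attacker also needs the current short-lived code.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # -> 94287082
```

With the RFC 6238 test secret and a fixed time of 59 seconds, the 8-digit code matches the published test vector, 94287082. In deployment, both sides compute the code independently and it expires every 30 seconds, which is what makes phished or reused passwords so much less valuable.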
But again, we're not doing it in most parts of our economy, largely for cost and competition reasons-- which is to say, consumers don't like it. We don't like it when we go to log in to a website and we're told we can't unless we also do something else. And so there's tremendous resistance, for competitive reasons. Let me just share with you a survey from last year-- I'm not a big believer in surveys, and after the past week, none of us should be.
This one is by KPMG, on the state of cybersecurity in the federal government. So I'm just picking on the federal government here. And generally, this was presented as good news. 59% of federal workers say their agencies struggle to understand how cyber attackers could potentially break into their systems; 40% say they are unaware of where their key assets are located. This was good news because the numbers were better than the year before. Let me give you, though, the more startling number.
When you get beyond IT personnel and you survey people in other parts of the government, in human resources, for example, 39% of those surveyed said cybersecurity was an unimportant or very unimportant issue to their agency. In purchasing and procurement, the number was 41%; in communications and public relations, it was 48% who said don't really know or care about cybersecurity. Well, this is a problem. This is, again, not a technological problem, this is a fundamental behavioral problem because again, it is always, or often, the human that is the weak link in most attacks.
And so it's not just that humans, despite the fact that we are desperately trying for good cybersecurity, are being duped. It's that some large minority of us who work for the federal government, at least, don't seem to think it matters in the first place. I am reminded again and again of the importance of this conclusion: we have to remember that, as important as technology is, the reason we have this crisis-- and I believe it is a crisis-- on our hands is precisely because of our inability to motivate people and institutions to do the sensible things that we all agree they need to do.
And let me just conclude on this point with a quote from a document which, actually, Fred Schneider put together as part of the 60-day cybersecurity review that President Obama conducted. The NSF assembled a diverse group of people who work around cyber, and Fred served as our steward and moderator. The report that was filed with the Cybersecurity Review Group said: cybersecurity is not purely a technological problem, nor is it purely a policy, economic, or regulatory problem.
Too often, technologists today do technology without understanding law, investment policies, and economics. Policy wonks make policy without understanding technology. Building trustworthy systems will require combining technology and policy. So see, I didn't have to come at all. Fred had already made this point many years ago very eloquently. Excuse me for this.
The problem is nobody seems to be listening. Now, let me say in an ideal world, markets would work better here. In other words, I'm going to assume that we would all prefer to see markets work, rather than have a role for the government, and that in a good market you would have appropriate incentives for security; institutions would respond to those incentives, as would individuals; and you would end up with some optimum level of security. But for many reasons, which I frankly don't have time to go into here but I think have been well documented, we are suffering many market failures here, right?
And often that's because the costs are felt by somebody other than the party that has the ability to take some preventative action. So think about the attack on Dyn, launched from baby monitors. The people who didn't secure the baby monitors-- namely, the manufacturers of the baby monitors-- didn't suffer any harm or loss as a result of this. That harm was suffered over here, by the websites that were being served by Dyn. The incentives too often simply don't line up, and for this reason we have seen a fairly classic set of market failures here.
Now, this is not a new observation. Let me say I don't want to make any claim to saying anything novel here. In fact, I want to go back and quote from 2003-- some of you were toddlers then. Economists Bruce Berkowitz and Robert Hahn wrote then that the government has largely rejected regulation, government standards, and the use of liability laws to improve cybersecurity. These are the basic building blocks of public policies designed to shape public behavior.
So one must wonder why they are avoided like a deadly virus, so to speak. That's kind of economics humor there for you. Again, that is from 13 years ago, and it remains true today. When the 60-day cybersecurity review came out, it did not recommend regulations. And I will always remember being on a conference call with an executive from IBM who said, well, we dodged a bullet there. Well, it's true. It's true.
But think about the situation we're in today. The nation's highest cybersecurity official is the cybersecurity coordinator in the White House. This is an individual with no budgetary authority, no operational authority, no authority to make any part of any government agency do anything. A coordinator to deal with this. Instead, we have three primary pillars of where we see law operating in cybersecurity today-- breach notices, 47 states and the District of Columbia all have breach notice laws, many federal regulatory agencies have them. If your information is breached, you get a letter telling you your information has been breached, there's nothing you can do about it. You should worry, but there's nothing that can be done.
Information-sharing, voluntary information-sharing. We got a federal law in December-- just coming up on a year now-- that provides legal immunity from liability for companies that voluntarily share information about cybersecurity threats with the government. And we have a voluntary NIST cybersecurity framework, now in its third year since they began working on it. Now, just compare this for a moment with any other area that we care about as a nation.
Think about civil rights-- we have the Civil Rights Act of 1964. We have the Occupational Safety and Health Act of 1970, the Clean Air Act Amendments of 1970, the Federal Water Pollution Control Act. Whether it's food safety or toxic waste or auto safety-- auto safety is the subject of thousands of regulations, maybe far too many regulations; I'm not in any way advocating for pile-on regulation here. In these areas, we have laws that impose obligations. In cybersecurity, we have a coordinator and voluntary information-sharing.
Bruce Schneier comments in this regard that the government should think about cybersecurity regulations the same way it looks at food safety: create thousands of rules and penalties for when companies fail to meet standards. If you want robust security, you're going to need a lot of barriers and incentives pushing people down the right path, Schneier says. I cite this just to show that I can find non-lawyers who agree with this point. The simple reality is that today, most institutions face few, if any, legal incentives to engage in good cybersecurity.
They may face PR incentives; there may be some market incentives. And when I talk about involving law here, one possibility is that you actually set standards, you regulate. But that doesn't have to be-- that's not the only approach by any means. You can also use a variety of tools. For example, tax policy can create incentives. Procurement regulations-- if the Defense Department said, we only buy technologies that meet certain standards, that would create incentives.
There are a variety of ways in which the government can act to incentivize better cybersecurity short of a cybersecurity coordinator administering a voluntary information-sharing framework. Now, I want to conclude just by talking briefly about the international aspects of this challenge, which are vast-- really, unbelievably vast. And that should be, I think, obvious why, right? Remember, attacks cross borders freely because our networks cross borders freely.
That's one of the major changes in cyber that is different from other types of dangerous areas like nuclear proliferation or biological or chemical weapons, and that is almost anybody can launch an attack and they can do it from almost anywhere. Because we have created a very successful interconnection of networks that we call the internet that links people across jurisdictional boundaries. Therefore, thinking about this in any sort of single jurisdictional way seems intrinsically shortsighted.
Second, we see international markets for hacks, for stolen information, for the tools that can be used by other cyber attackers. And it's been striking-- I think it's been almost unbelievable to many lawmakers-- that there can be such visible proliferating markets where we see auctions for this type of information. You can go in today and you can buy 1,000 breached credit card numbers. You pay a certain price, you pay more if you have the CCID number with it. It is all an active international market.
It's only the defenders who act nationally. The attackers act globally incredibly effectively. A third reason for this is that we have to remember there is a pretty clear role of governments in launching and in addressing cybersecurity challenges across borders. So whether it's us attacking Iranians or whether it's Chinese or Russians attacking us or whether it's other countries attacking other countries, it is one of those areas where it is not reasonable to think of it quite like a natural disaster, like we're all being victimized by winds that we don't understand.
Instead, we have a very serious problem: we are, on all sides of this jurisdictional divide, engaged in both cyber defense and cyber offense. And we should not lose sight of that fact, right? And therefore, when you have governments acting on governments-- or, as is increasingly the case, governments acting on private entities in other countries-- that is traditionally the role of international law to deal with.
We have international effects. Almost all big cyber attacks have international effects. Think about the Friend Finder networks. Those 410 million users, they're located in countries around the world. It's rare today that we see large breaches that are purely domestic. It's also rare that we see attacks that attack just in one country, they usually attack in multiple countries. And there are other reasons, obviously, besides these.
But if ever there were an intrinsically international challenge, it has obviously got to be cybersecurity. It's hard to imagine anything being more fundamentally interconnected in a way that totally ignores national political boundaries. But having said that-- and for those of you who thought I was going to have some good news by the end of this, I hate to disappoint you-- these same characteristics, while they make it much more important that we think about cybersecurity and particularly the role of law and policy in addressing cybersecurity challenges, they heighten the importance of that, they also make it much more difficult, right?
As I assume you all know, it is much harder to get an international agreement passed and enforced than it is to get a domestic law. Remember, to get an international agreement passed you still have to come back to the Congress. So it's not enough to say, oh, well, we've got a dysfunctional Congress, we'll do it through some international forum. Unless the Senate approves the agreement, it does not have binding legal force. It is much more like an executive order at that point, it can be reversed by any other president.
Second of all, we have a real challenge because of this fact that governments on the one hand, while generally I think truly being committed to wanting to see better cybersecurity, they also want to preserve latitude in their own ability to use cyber attacks and cyber vulnerabilities. And so the last thing that technologically advanced countries want to do is in any way create or sign onto an accord that limits their own ability to take advantage of cyber vulnerabilities.
There's a third challenge here, and that is we have wildly different legal cultures. Even if you just look at large countries-- I'm not going to talk about the entire 217 countries who we generally think of when we start listing countries-- even if you just look at the large players, the extraordinary differences between legal cultures in, say, Russia, Brazil, China, the United States, and the European Union are striking. Add in India, and you've lost almost any sense of commonality.
Let me just give you one practical example, rather than belabor this point. Jack Goldsmith, who was an Assistant Attorney General in the Bush administration, has made this point repeatedly, and that is the United States keeps chastising China about economic espionage. So he makes this point, I quote, economic espionage is expressly prohibited by US domestic law, but it is not prohibited by international law, either written or unwritten, and it is widely practiced. So when we say in the US, what we do is OK because we only do intelligence gathering from other governments, but what the Chinese do is terrible because they engage in economic espionage, that may mean something in the US cultural environment.
Whether or not it's true is a completely different matter. But the assertion may have meaning in the US political culture. It has no meaning once you cross the border. This is not a widely shared norm. It is not at all recognized in legal norms, this distinction. And so, again, before we could even think about trying to get to a meaningful multinational agreement, we're going to have to get to a multinational vocabulary. We're going to have to be clearer on some basic norms that inform these decisions.
A fourth point, we have extraordinarily competing notions of what we might think of as the competing values here. Again, let me just focus on one and that's privacy. So one of the most common objections raised to strong cybersecurity proposals is that these might very well invade privacy. And they might, they might compromise privacy. On the other hand, we protect privacy very differently from, say, the European Union under its new General Data Protection Regulation or under its Data Protection Directive.
We have very different views of the role of government in protecting privacy. In the US, for example, there are no limits-- I think that's still an accurate statement-- on what the government may do once it gets your information. So therefore, when the government makes a promise like, oh, don't worry, the NSA is just going to use this for anti-terrorism purposes, it won't be used for anything else, there is no US law that will support that claim-- none whatsoever. Once the government has it, it's free to use it and it does widely. That's why we retain data for so long.
This is not true in many other countries. It's true in some other countries, it's not true in all other countries. So again, before we're going to get far on trying to create some sort of agreement about protecting cybersecurity in the face of competing values like privacy, we're going to have to get to some more common understanding about what it means to protect privacy. And that is going to be a huge lift, as well.
Let me just end with two last-- I don't want to keep piling on. I feel like even I need a drink now. But we also have to recognize when thinking about these issues that these aren't new. I mean, cyber is not the first place we've encountered difficulties dealing with challenging international issues. But there are a lot of features of cyber that exacerbate the issues in ways that other issues have not, and I think we've touched on a number of those already, one of the most obvious being how widespread cyber vulnerabilities are.
Anyone can launch an attack from anywhere. So with nuclear, for example, we knew where nuclear missiles were. We could spot them from space, we could use sensors to identify them, we could talk to countries that had them. When somebody launches an attack on Anthem, we don't know was it the Chinese government, was it the Russian government, was it an associated group, was it a non-associated group, was it a terrorist organization? So again, we are trying to create international agreements, norms, standards, treaties, whatever where many of the players are not represented at the table.
The real parties that we are having to deal with here are often not at the table-- and there is not a commonly shared perspective even as to what they are. Is it a state actor or not a state actor if the PLA tries to extract economic information out of a US server? But these are not new issues, they're simply made more challenging by the characteristics of the technology and its impact. And then finally, we have to deal with the reality-- it was true before the election last Tuesday, I think it's more true right now-- that we're not in what you would call a highly internationalist time in this country's political life.
Proposals to spend a lot of time and energy on multinational agreements to do something, particularly something that could affect core US industries-- remember, many of the companies involved here that are complaining the loudest are based in the United States-- it's not clear that's going to meet with a lot of welcome from either the President-elect or the Congress. And I would be willing to venture, although predicting almost anything is dangerous these days, that that will not be very welcome by our political leaders. And so we have to face that we are doing something at a time when it was already hard, it's made harder by the factors we've talked about, and now it's made harder still by the fact that the political environment is not terribly welcoming of this.
That doesn't mean it's impossible. Let me say, if industry pushes hard enough for it, industry may very well get movement in this area. I don't at all mean to suggest it's impossible. And just because I will try to find one positive thing to say to end, and that is we have seen a lessening-- I mean, we think we've seen, anyway, a lessening-- in attacks originating in China since the agreement signed between the Chinese and US presidents last year, which carved out some things not to focus on, but which identified targeting private infrastructure as something to stay away from.
And so it suggests that maybe in spite of all of the things that are weighing me down and I'm now wanting to share my burdens with you, that in spite of all of those, maybe there's still a possibility at least for some bilateral movement that would help advance this at least one small step at a time. Thank you very much.
Oh, I can do questions unless it's dangerous. I mean, if there's a-- yes, please.
SPEAKER 1: So you briefly mentioned liability rules, but you didn't go into it in much depth. I was wondering if you could talk a little bit more about it. So it's clear we have [INAUDIBLE] because we have to do something and it's not likely we're going to do anything. But let's just hypothetically talk about, well, if we could do something, what are some of the more realistic options? And it seems to me that there's a lot of strong reasons to support some [INAUDIBLE] and that even conservatives would prefer a liability scheme over robust government regulation and an administrative state.
And you mentioned the liability of the immunity provision that exists and sort of an argument against that [INAUDIBLE]. That seems like an intervention in the kind of legal [INAUDIBLE]. And it seems like you wouldn't have to do much to craft a liability scheme with very, very nominal damage and then combine it with mass tort litigation and think of it like [INAUDIBLE] in that case. And you could basically create an incentive where if a country is sufficiently [INAUDIBLE] sign the system, the liability would be so massive it would put them into bankruptcy.
Even if it was $2 or $5 for every one of 400 million accounts that's recklessly disclosed, that's $2 billion. So can you just talk a little bit more about what the issues are in creating [INAUDIBLE]?
FRED CATE: Yeah. Let me say I agree with everything you said. I think it's going to be really hard. It may still be the right thing to do, I'm not in any way backing away from that. Part of the challenge is in a traditional liability world, something happens, you know who caused it, and you know what the effect is. So I hit you crossing the street, I'm the driver, I was at fault, then we assess your damages.
The problem in a cybersecurity world is that the cause and the effect may be separated by hundreds of intervening factors. So company makes baby monitors and puts no security at all on them, those are then used five years later by a malicious attacker who takes advantage of a botnet created by another malicious party to then attack another entity which provides service to the entity that's actually harmed. And so the first thing a court's going to say is, can you show me any causation there? Can you tell me who was liable for what? How do we apportion liability?
And the company that made the baby monitor is going to say, oh, it had protection-- we set the password to "password." What do you mean? It had great protection. And so then they're going to say, well, it was the individual owners, the parents who bought the baby monitors-- I'm assuming that's generally who buys baby monitors-- it's their fault. And Congress is not going to do anything, I'm guessing-- again, you're welcome to disagree with me-- that's going to assign liability to individuals here, other than criminals-- I'm not talking about that, but civil liability.
And so in some ways, we're going to almost have to end up with some form of statutory or stipulated damages, where if you do something incredibly stupid there would be-- and many states do this-- it'll be $10 a device. And if you sold a million devices, that's $10 million. You write a check to the US Treasury for that amount. And that is largely the way I think liability around information has worked, on both the privacy and the security side, to date.
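The stipulated-damages arithmetic here is simple enough to sketch in a few lines. This is just back-of-the-envelope math using the hypothetical per-unit figures from the discussion, not amounts from any actual statute:

```python
# Back-of-the-envelope stipulated-damages arithmetic. The per-unit
# amounts are the hypothetical figures from the discussion, not
# amounts drawn from any actual statute.

def stipulated_damages(units: int, per_unit: int) -> int:
    """Total damages: a fixed stipulated amount per device or account."""
    return units * per_unit

# $10 per insecure device, a million devices sold -> $10 million
assert stipulated_damages(1_000_000, 10) == 10_000_000

# $5 per recklessly disclosed account, 400 million accounts -> $2 billion
assert stipulated_damages(400_000_000, 5) == 2_000_000_000
```

The appeal of this structure is exactly what the talk suggests: it sidesteps the causation and apportionment problems by fixing the amount per unit in advance, so a court never has to trace the chain from insecure device to eventual harm.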
The challenge will be finding a mechanism to do that, and we're going to need a law almost certainly, and then finding a way to assess who did something really stupid. Because again, when the baby monitor was made five years ago, was it really stupid to not have protection on it? I mean maybe. I'm not defending them. But it would be a harder claim than today making the baby monitor.
I might just say, at the risk of being overly tedious, right now the principal enforcer of cybersecurity activity in the United States is the Federal Trade Commission. They have no cybersecurity authority whatsoever, but they have a general power given to them 100 years ago to act against unfair and deceptive trade practices. So they've taken the view that bad security is an unfair trade practice.
That power has just been struck down by the 11th Circuit Court of Appeals. So for the moment, at least until that case is resolved and that will have to go to the Supreme Court and it will take some time, even the FTC doesn't look like it has the authority to proceed against even really egregiously stupid security. Other questions, yeah?
SPEAKER 2: So you gave a broad list of things that are wrong with the picture and sort of put it under the umbrella term cybersecurity. I'm wondering, are we doing ourselves a disservice by thinking of things in terms of cybersecurity, rather than trying to subdivide it further? We don't think about human security, right? We have food security, we have transport security, we even have different private versus public transport security. Is one good step a way of narrowing down this problem in cybersecurity into these other, more actionable--?
FRED CATE: Yes. Without question, yes. And that is the way, certainly, the law would have to work on it would be to break it into smaller bites and go after a bite at a time. So for example, you might have a particular set of rules or liability provisions or whatever around connected devices in cars and it would just be part of thinking about transportation security as it relates.
I think conceptually, though, it's still kind of useful to think of it big. I thought you were going to say to think about it even bigger because in a way, cyber can no longer be separated from anything else. If your power plant explodes, it's not going to really matter to you why it exploded. And calling that a cybersecurity attack, as opposed to a kinetic attack of people with mortars, it may be important for the analytical purposes but the effect is going to be very similar.
And so I keep wondering-- I have no idea, so I'm just asking the question first so I don't have to answer it later-- whether the whole notion of cybersecurity will become lost as we instead just think about broader security, sort of security and reliability being sort of tied up together. Yes, please.
SPEAKER 3: So when it comes to government regulations, I think that ever since the DES standard from, what is it, 1977, which seemed like it was really sabotaged, a lot of people who do computer security are going to be instantly mistrustful of any government regulations, especially US ones, Chinese ones, and Russian ones. So should we expect them to cooperate with any such regulatory procedures?
FRED CATE: Your point's exceptionally well taken and that mistrust, of course, is very well-founded. I couldn't in any way disagree with it. I would say there's almost always mistrust of the government when it regulates. And sometimes that ends up scuttling the regulation and sometimes we go right ahead and we just live with the mistrust and we have the regulation.
I would just again say that regulation can take a lot of forms. It doesn't have to be, you must use this particular standard. In fact, I can't imagine a worse regulation than saying that because the standard presumably might change every six months and Congress takes 10 years to pass another bill and so that would not be an efficient way to go about this. But for example, one way in which we see standards imposed to some extent today is through cyber insurance.
So if you want to buy insurance against loss of your data through a cyber attack, the insurance company is going to come in with a Bible of things you have to do and companies will do all of those things because they want to get the insurance protection. Well, we might ask whether there's something to be gained there-- in other words, do we really want the private insurance industry setting standards for cybersecurity or is some part of that a role that more of a public authority might play, even if it's just setting a minimum standards or basic standards? Yeah, go ahead.
SPEAKER 4: So-- sorry, I don't-- when there are standards, especially standards that people don't understand but find annoying, or that are not implemented in an easy way, people, as you mentioned, tend to sidestep them. And this happens a lot within the government itself. Even the Clinton Secretary of State story about her emails involves hilarious incompetence with copying files and FedExing a laptop. How do we make standards for usability, given that standards are famously arcane?
FRED CATE: Well, I, as I said, would favor standards as a last resort, and certainly from the government. I think you're going to find more success with saying, you're responsible for outcomes and you can do whatever you want to achieve that outcome. So let me again just give you a really practical example, so obviously a much easier topic, I appreciate that.
We spent 30 years trying to get people to use seat belts. We did everything possible-- we automated the belt to move for you, we had educational campaigns, we showed crash dummies going through windows, we published statistics-- and the uptake rate was trivial, it didn't even make it into double digits for decades. And then the federal government said to state governments, we don't care how you do it, but if you want federal highway funds, you'd better get your seat belt use rate up higher. And then we saw the vast majority of states, in a matter of 48 hours-- that's not quite right, two years-- adopt mandatory seat belt laws.
And they eased in. Initially, you couldn't be stopped for not wearing a seat belt but if you were stopped, you could then be ticketed for it. They started with trivial penalties and then they upped the penalties. Today virtually everybody wears a seat belt. It's achieved an astonishing-- and highway deaths have dropped, although they're slightly back up now because of the use of information technology. You would generally say that was a success case.
And it did not work until finally, after 30 years of messing around, with the government just said, you're going to have to do it. We'll let 1,000 flowers bloom for how you do it, but you've got to get your seat belt wearing rate up to this point by this time or you lose federal funding. Now, could you imagine something around cybersecurity for that? Not for perfect cybersecurity, but just for a start, just for a beginning? So for example, could you say, you may not ship an electronic device that is connected or connectable to the internet unless that device has a unique password? That would be pretty simple.
So your baby monitor would now have to have a unique password, it couldn't be set for password. So again, three years ago, maybe four years ago-- the data is a little bit lagging behind-- we know that most cable modems or DSL connections that were in people's houses were protected usually with their default password and then they started changing that. They started printing a unique password on every one, on a label, and now we see a lot more unique passwords. And why did they do that? It was some combination of market force and state regulators that said, particularly for AT&T, if you're operating in a regulated way in our state, you're going to have to meet higher security standards.
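The per-device unique-password requirement described here is cheap to implement at manufacture time. A minimal sketch, assuming a factory step that generates a random credential for each serial number and prints it on the label (the serial numbers, alphabet, and length below are illustrative, not any vendor's actual process):

```python
# Minimal sketch of the per-device unique-password idea: instead of
# shipping every unit with the default "password", generate a distinct
# random credential for each serial number and print it on the label.
# Serial numbers and password length here are illustrative.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def label_password(length: int = 12) -> str:
    """Return a cryptographically random per-device password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique credential per manufactured unit.
labels = {serial: label_password() for serial in ("SN001", "SN002", "SN003")}

assert all(pw != "password" for pw in labels.values())  # no shared default
assert len(set(labels.values())) == len(labels)         # all distinct
```

This is essentially what the modem and DSL vendors described above ended up doing: the credential lives on a physical label, so there is no fleet-wide default for an attacker to guess.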
And so little tiny pushes at the margins might start some more significant results. We will never regulate our way out of this. I don't in any way want to suggest that. I just want to suggest right now we are spiraling down into a sinkhole and if we could find something to slow that spiral, that would seem like a good thing while we wait for the technologists to figure it out or we wait for a better approach. Yeah.
SPEAKER 5: On that happy note--
FRED CATE: Yeah, sorry. I'm terribly sorry. I should just go back to Indiana.
SPEAKER 5: So actually, it relates to his question. You characterized the fundamental vulnerabilities in these systems as being social and organizational and not technical, right? And just to sort of return to your opening framing, to me it doesn't seem like such a stark difference, right, in large part because if people don't like updating their software or they don't like using a strong password, isn't that an indictment of the technical infrastructure just as much as it is of the social and organizational incentives that people have?
And in fact, if you think about those incentives sort of globally, people get all kinds of incomprehensible notices about the security of their web browsing. They're presented with tons and tons-- we cry wolf a lot, right? People get all of these false alarms about things that they ought to be doing, that they don't understand. And so we often, I think, indict people as acting irrationally when it comes to security, but in fact, I think an argument could be made that they act quite rationally, if you include in the analysis the amount of resources that people would spend in the service of these false alarms.
And sometimes the things we tell people to do in the name of security, like change their passwords a lot, end up paradoxically leading to worse security practices, right, because they reuse passwords and that sort of thing. So I'm really intrigued by-- generally, I'm in your corner about the need for incentivizing these things, particularly on the institutional level, the regulatory level. But I wonder if we might not view it so starkly as a difference between law and policy on one hand and technical systems on the other, but that if we have a technical system that can so easily be compromised by a phishing attack and we know that people are just going to be people, doesn't that also indict the technical system for requiring what it does?
FRED CATE: Yes. I mean, I think you make an excellent point. I'm glad you made it. The quote I read, for example, from the document that Fred Schneider provided, you'll remember it said we need both and we need them to work together. And I think that is certainly an accurate view of the situation. However, since I've come this far, without question the largest part of the market that does not update equipment is companies, it's not individuals.
Individuals do behave rationally here. Much higher update rates among individuals than we have among large company users. So in that case, I would say it is still pretty irrational behavior. You're now being provided a free patch to fill the void, the hole, the vulnerability, and a company two years later has not implemented it. Well, have they never shut their machines down in two years? I mean, really? Was there never a time to just hit Install and let it install across the multiple systems in the-- so admittedly, technology can play a critical role in making that easier, making all machines administered from a central point and things like that.
But I would still say there's an awful lot of-- neglect would be the nicest word. I would describe it as a sort of corporate willfulness about things we're just not going to do. I don't know why, I'm just stuck with transportation examples. Think about child car seats, they're incomprehensible. I mean, nobody without a PhD could figure out how to use one, and even then you need instructions with pictures. But again, we made it an extraordinarily serious crime to not put your child in the right car seat and today use of car seats is much, much higher.
They've not gotten much easier to use, and so clearly you need both. I don't at all want to sort of disagree with you on that, but I do think there's a lot here of really irrational behavior. Think about OPM. They didn't bother to encrypt the records on everyone who has a security clearance in the United States? I mean, is there any-- do you want to-- do you want to defend OPM?
SPEAKER 5: I'd rather not.
FRED CATE: OK. Yes, sir?
SPEAKER 6: Congratulations, you've managed to depress us in an exciting way.
FRED CATE: Oh. Well, thank you. What a kind thing to say.
SPEAKER 6: Not in a depressing way. Let me try and go back to something that has been speckled through your talk, and that is the many examples you gave in which markets don't work. Your seat belt example was the best one. And yet I heard you say about 2/3 of the way through your talk, we don't want regulation. Well, if markets don't work, then I'm sorry, we do want regulation, we need regulation because that does work.
We don't like it, it's politically unpalatable, as we've seen, but if it works this is the best way to do it. My second question is a more empirical one. I read a lot of the literature from privacy advocacy groups and there's a great deal of support for encryption at the moment. Is that something that's going to help the general problem of cybersecurity or not?
FRED CATE: Great questions. And on the first, I'm so glad you raised it because I may not have been overly clear. Personally, my own political view is that I don't favor regulation as a place to start. But I think there's no question that we now need regulation. Having said that, it doesn't mean we need to regulate everything.
There are some things like the type of security you have, as I said, I don't think belongs in a regulation because it will change so quickly regulators won't be able to keep up. So I think the challenge is figuring out what's the most effective and efficient type of regulatory approach to use and where to apply it most strategically. But we absolutely need it. There's no way we're going to walk away from that.
And let me just say this is going to be a huge challenge for this Congress and this president because if they see it as a national security issue, they're going to look one way; if they see it as another burden on industry, they're going to look another way. So I don't think this has gotten any easier or harder as a result of the change in administration. On encryption, encryption will help a lot. I mean, strong encryption, end to end encryption is extremely useful and now increasingly being deployed by default on many of the consumer devices that we use that we know collect and use data.
I don't think it's going to be an answer, though. So it'll be a good thing because particularly when it comes to a lot of the ways in which we're using control systems and things like that-- natural gas pipelines and so forth-- encryption is not necessarily going to help there. And moreover, these are, in many cases, old systems. We're talking trillions and trillions of dollars to upgrade all of these systems into a more modern format.
And then some things-- I won't say cannot be encrypted, because I'll get shot down immediately-- it's just not going to make sense to encrypt. Metadata: it's very hard for metadata to serve any purpose if it's encrypted. If you're addressing blocks of information on the internet, every block needs to have an address that can be read, and so you can't really encrypt that. And so I think we're going to find that there are lots of places where encryption is not going to be a great answer, also places where we don't have much space or room or battery power where, again, encryption just is too big of a drain on the system. But a critical step. Yes.
SPEAKER 7: I'm just curious, are there any organizations [INAUDIBLE] that actually do cybersecurity better than the rest? I would suspect, for example, that investment banking would have their own strong safeguards against cybersecurity or cyber crime. [INAUDIBLE].
FRED CATE: So the answer to that, of course, must be yes. I mean, somebody must do it better than everybody else. I don't really know who they are, and here's the reason why. Because remember, in almost every case we're in a system. So even if you have an investment bank that's doing a great job on cybersecurity, it's connected by Fedwire and it's connected by other-- which it has no control over whatsoever.
So the Federal Reserve Bank of New York transferring tens of millions of dollars on the instructions of a bank from Bangladesh, well the Bank of New York would say its system worked perfectly, it was not compromised at all. But an end user of the network was compromised and so the whole system, in that sense, didn't work right.
And I think that's true everywhere-- it's hard to imagine a place where there isn't sort of a supply chain, service providers. We rely on networks to process, you know, credit cards. I mean, so again, I think there are companies doing a better job. I would also say I think there are states doing a better job. Massachusetts has what we usually describe as a comprehensive cybersecurity law. That's not quite true, but it goes far beyond just saying, you have to send notices if you have a breach.
And that was the first state to do that, and that was a big thing and now California's followed suit and I suspect some other states will follow suit. But again, if I am arguing-- and I am arguing-- that this is hard to deal with on even a national level at such an intrinsically global problem, dealing with it on a state level is even more painful, to be perfectly honest. Yes, sir?
SPEAKER 8: So this was a really good, informative talk. I have one very quick empirical question and then [INAUDIBLE]. So the empirical question is, does the US [INAUDIBLE] actually recognize cyber attacks or does it [INAUDIBLE] cyber attacks within its realm? In terms of looking at the supply side, how do you flag somebody for having conducted a cyber attack? And the [? second ?] question is, if it does, then how does the United States government then sort of square the illegality of cyber attacks with conducting cyber attacks on its own? So there are government agencies which do conduct cyber attacks on other agencies, state or non-state. So the question is, how do you [? ideationally ?] sort of grapple with that idea if you are going to be setting up regulation with regard to this?
FRED CATE: Yeah. I'm sorry I recognized you. I should have moved on to somebody else. Those are great. That is really hard. It's a real problem, but it happens all the time. In other words, lots of things are acceptable if used in one way by one part of the government but not acceptable if used in another way by another part of the government. And this is where it would be helpful if we would first maybe nationally, but then with an eye towards doing it multinationally, try to think about what separates an appropriate use of a cyber attack from an inappropriate use.
So for example, it has long been accepted-- long been accepted-- under international rules of conduct that trying to collect information from another state is lawful. You can even spy. It may be unlawful in the territory of that state, but it doesn't violate what we call international law, which of course is not law at all, but a set of agreements. And that is important.
So if we say, OK, so now are we going to say cyber attacks for the purpose of collecting information-- well, as you know, at least the current administration and the past administration have said, no, no, no, no, we're not going to say that at all. If you're breaking into Anthem to collect data, that is not an acceptable thing under our reading of international law, and it's certainly not lawful under US law. So in that sense, there needs to be a more thoughtful discussion led by experts both inside and outside of the government to try to get some integrity in that answer.
But here's the problem, which is for completely understandable reasons-- I don't mean to imply any criticism here-- we don't want to say what we do offensively. And therefore, if we engage in the discussion of this is lawful but that is not, governments worry we'll give away what it is we're doing. And moreover, we don't want to restrict our future flexibility about what we can do or not do. And so as a result, although the US in many ways for decades has sort of been a country that's kind of tried to lead debates on vexing issues, we've stayed somewhat silent on this issue.
And that is something that is going to have to occur and it's very hard to occur from outside of government. It needs to be aided from outside of government, but we can have meetings all day long but until somebody in the White House or the State Department or the Congress says, I care about this issue and we need to work this out, I don't think we're going to have a lot of effect. That was [INAUDIBLE]. Dr. Schneider.
SPEAKER 9: You made an analogy or comparison with health law and pollution law. And they didn't always exist. So an obvious question, perhaps, based on that metaphor, is can we understand that there is a difference, and that's why, in theory, cyber law doesn't exist, or are there commonalities and there is this triggering event that hasn't yet happened, and so at some point we're going to succeed or fail [INAUDIBLE] regulations anyway?
And I will point out that for food safety law, people were dying, and then there was food safety law; having somebody know that you're registered on AdultFriendFinder isn't the same caliber. So could you talk a bit about the extent to which this is an analogy that will inform our future?
FRED CATE: Yeah. So I think your second way of viewing it is the one I would take, which is I think this is a process, just as it was with all other forms of safety law. I mean, people often forget that when cars first came out, there was so much concern that they were going to cause damage to horses and to people that cars had to have people walk in front of or beside them with flags to warn people that a car was coming.
And that was the law in many jurisdictions for years: you could not use a car without having somebody out front to warn that a car was coming, just like today we have trains that warn you if a train is going to pass a road. That requirement would dramatically reduce the effectiveness of cars; it was obviously stupid, and it's been eliminated today. It's an evolutionary process, and like with seat belts, it took decades.
And so one argument, which my friends in government in particular like to make, is: Fred, you're just impatient. Just give it time, this will be a heavily regulated area, but it may take 30 or 40 years to get there, just like it did with cars or food safety or any of these other things. That may be right. In fact, I suspect that is right.
I'm not convinced, though, that the threat horizon here is going to tolerate that very well. So what could happen, of course, is we just have a major event in the next three or four years-- I mean, god forbid, I'm not recommending this-- but that it occurs and then we respond. The problem is, as a nation it's hard to respond to a major event with a very thoughtful solution. We tend to respond badly.
And so another reason to be thinking about better ways of thinking about this is even if you intend to do nothing, even if we're just going to sit on our hands and wait, is then when something bad happens we will at least be in a position to say, we've thought of five or six things that the government could do to actually be effective here. The other thing I would say, and again I want to be careful here because there are a lot of good people working very hard in government and industry and academia and elsewhere and I don't in any way want to undermine that, but there's a certain amount of just total dishonesty going on about cybersecurity today.
There's a reality disconnect. We say it's a crisis and we cannot fight off an attack and then we do little. And if you think about it, there are other areas where if we really think there's a crisis-- compare it with military activity. We know exactly how to invade a country, we do that very well. We bring in overwhelming force, we take months to do it, we put a general in charge, we give that general almost dictatorial authority, and then we get the job done, for better or worse. I'm not advocating it, I'm just saying that's how we approach it.
So either the rhetoric is wrong-- the people in government who are saying this is a crisis, it's the number one issue facing the United States, the biggest threat-- or they should just stop saying that, if we're also going to keep saying, and we've got a great guy in a basement office in OMB who can head this for the White House. Those are not consistent approaches. And although the government by no means has to be consistent, welcome to reality, in an area as important as this, I think we might ask for something a little better. I mean, there are a lot of people working hard on this. Yes, please.
SPEAKER 10: So to carry on with regulations, you were earlier mentioning the privacy issue, I believe. And I'm a law student currently writing on that and on the transfer of personal data from the EU to the US. And therefore, I was just curious: do you know if it has been discussed to include cybersecurity provisions in the Privacy Shield, or do you think it's doable in such a transatlantic, international agreement? Would it be efficient, because you said encryption of metadata is not a great solution? And my second question would be just, you mentioned international law on cybersecurity, which is not international at all, but a set of agreements.
FRED CATE: No, not law at all.
SPEAKER 10: Oh, not law at all-- I'm sorry-- instead of agreements. And I wanted to know what you mean by that.
FRED CATE: OK. So great questions, and congratulations on being a law student. Privacy Shield is this agreement that's being worked out right now-- I mean, we thought it had been worked out, but I think it's still being worked out again-- between the United States and Europe to replace the Safe Harbor, which was the agreement under which we could move data out of the European Union, with its strong data protection regulation, into the United States. And I'm not aware of any discussion of cybersecurity in relation to that, other than about securing the data, which is an obligation in both jurisdictions when you handle personal data.
The question about my saying the international law of cybersecurity is not international law-- generally speaking, there's not a lot of really what one would call international law. There are lots of ways in which law affects the dealings between countries or between people in different countries. You can apply one nation's law in another nation, you can have an agreement-- a bilateral agreement, you can have a multinational agreement.
But in the sense of law that is either codified or binding-- law you could go into court anywhere and get enforced-- outside of things like multinational agreements, such as the General Data Protection Regulation in Europe, which will be binding law, but only for the 27, 28, 29 countries that it applies to, we don't really have anything like that around cybersecurity. But as a practical matter, we don't have anything like that in a lot of areas.
It's almost a misnomer to use the term international law, as opposed to international custom, international agreements, international norms. And sometimes those combine together to form something that we might call law, but it's not law like you could go into court down at the county courthouse here and say, I'd like this law enforced. They would tell you to find a state law or a federal law that applies, I suspect. OK. I think it's time for me to sit down and--
REBECCA SLAYTON: All right. Well, thank you very much.
FRED CATE: Thank you.
Fred Cate, the C. Ben Dutton Professor of Law at Indiana University, discusses the role of law and policy in enhancing cyber preparedness. The talk, given Nov. 16, 2016, was part of a series on the international dimensions of cybersecurity, presented by the Mario Einaudi Center for International Studies and cosponsored by the Judith Reppy Institute for Peace and Conflict Studies.