Episode 33: Woodrow Hartzog on the dangers of regulating AI with half measures

Yves Faguy: Hi, I’m Yves Faguy, and welcome back to Modern Law.

You’re listening to Modern Law, presented by the Canadian Bar Association’s National Magazine.

We’ve been talking a lot about AI on the podcast and on CBA National. And one of the issues that keeps coming up is the challenge for a country like Canada in selecting the right approach to regulating AI risk. It’s not as though there’s a single model out there. Take the EU. It’s trying to set the gold standard for the world, much as it did with its GDPR privacy regulation. The US is contemplating various bills, but it’s anyone’s guess how that plays out. And for the most part, it’s just applying existing laws and regulations through regulators like the Federal Trade Commission, the Consumer Financial Protection Bureau and the US Department of Justice.

Now, these agencies have been instructed recently, back in October, by the White House through an executive order, to follow eight guiding principles on AI safety. Meanwhile, you have China, whose approach is to ensure that all information generated by AI aligns with the state’s interests.

All are key players to watch as we try to understand where the future of global AI governance is headed. And today we’re going to take a closer look at a perspective coming out of the US. So I’m thrilled to have on the show renowned international privacy expert Woodrow Hartzog. He’s a professor at the Boston University School of Law. He’s a faculty associate at the Berkman Klein Center for Internet and Society at Harvard Law School. He is also an affiliate scholar at the Center for Internet and Society at Stanford Law School, and a non-resident fellow at the Cordell Institute at Washington University, where he’s currently working on a project about AI half measures in collaboration with Neil Richards and others.

In fact, Hartzog recently testified before the US Senate Committee on the Judiciary on the importance of substantive legal protections when it comes to AI. He’s been arguing that current AI policies and oversight are far too weak, and has been calling on Congress to move beyond half measures.

Woodrow Hartzog, welcome to Modern Law.

Woodrow Hartzog: It’s a pleasure to be here. Thanks for having me.

Yves: Tell us a little bit how you got drawn into privacy law. I understand you started out as a trademark attorney, did you not? How do you explain the evolution to what you’re doing today?

Woodrow: That’s right. So things started for me as an undergraduate. I took a class on media law. And back in those days, media law was taught according to the medium, so the law of television, the law of newspapers, the law of radio and magazines. And we got to the end, this was 1997, and I raised my hand and said, “What about the internet?” And my media law professor said, “Nobody knows what the law of the internet is,” and it blew my mind.

And so I think I walked out of class that day and filled out an application for law school, as I remember it, more or less. And so I was always interested in information, and I began to get really interested in information privacy law post September the 11th. So what happened in the United States with the USA PATRIOT Act, and a lot of the surveillance practices that followed, deeply concerned me and really interested me in the ways in which our rules could shape just or unjust surveillance practices.

But when I went out in the early 2000s to become a privacy attorney, I met with my mentor and said I want to be a privacy attorney. And he said, that’s not a thing. And it really wasn’t at the time. I mean it really was not the sort of career path that we see now with information privacy and surveillance. And so I went to the next best thing which at the time, Napster was kicking up, and intellectual property law was deeply involved in issues of information technology.

And so I began to study intellectual property as well. And that’s how I wound up as a trademark attorney. But I eventually found my way back to the world of information privacy which is where I’ve been ever since.

Yves: You’re the second guest now in a row, because actually we interviewed Carys Craig who you were on a panel with recently at Ottawa U.

Woodrow: Oh, of course, yeah.

Yves: You’re the second person in a row who’s mentioned Napster as a factor triggering you toward a practice of privacy law.

Woodrow: The technology that launched a thousand legal careers. I wrote the music column for my university’s newspaper. And I’ll never forget, I wrote a column after I had seen Napster for the first time. And I said, “Guys, you never have to pay for music again. You should stop buying CDs immediately.” And I got an email from a local law student who said, “You know, you’re advocating mass copyright infringement, right?” And I said, “What? What’s a copyright?” And so that also sort of piqued my interest in the relationship between law and technology.

Yves: I think a good place to start this conversation, because we’ll be talking about privacy and the challenges posed by a rather speedy adoption and embrace of artificial intelligence tools – I want to ask you this: people often wonder what the phrase reasonable expectation of privacy means in terms of the law. And I’m wondering what that term has meant to you, and has that evolved over the stretch of years that you’ve been studying privacy law, over the last, say, 20-25 years or so?

Woodrow: Sure. So that’s a great question. The term reasonable expectation of privacy, when I first started, it’s kind of the introduction to a journey of understanding what privacy means in the first place because what you quickly find out is that privacy can mean anything. It means a lot of different things.

It could mean control of information or secrecy or intimacy or personhood or autonomy or obscurity or trust, or as we now are getting into, contextual integrity. And I don’t think that there’s any one right answer to that question about what privacy is, what’s reasonable under the circumstances, which is why I’ve come to conclude that the reasonable expectation of privacy test is actually broken and we should get rid of it in the law, because we made a wrong turn in thinking that we would turn the term privacy into a term of art that had some sort of specific meaning when it’s completely contextually dependent. And even our conceptualizations of it can change according to what we want to accomplish in any one given situation.

I’ve come to think about information privacy in terms of power and in terms of relationships. And so I think a lot about how the access to information or the ability to use information bestows power. And when that knowledge is gained by others, other people lose power. And so what I really think about is the extent to which that power is used justly or unjustly. And that has really come to shape my understanding of privacy in modern disputes.

Yves: When you’re saying we should get rid of the reasonable expectation of privacy, I guess as a concept in law, does that apply both in the criminal realm as well as in the private realm? From a Canadian context, I really tend to think of reasonable expectation of privacy almost in the criminal context, more in the criminal context than in the private one.

Woodrow: I would actually apply it in both contexts, or if we don’t get rid of it, at least we should have some better rules clarifying what we need in any specific context, because otherwise what happens is that it allows decision-makers, whether it be lawmakers or judges, to pick the version of information privacy that’s sort of most consistent with notions of the outcome that they want in the end.

And as a result, they can say, “Oh, well in this instance, privacy means secrecy, but in that instance, privacy means betrayal or trust within relationships or confidentiality.” And so in the very least we should clarify it, both in criminal law and in consumer protection and data protection law, because otherwise it just gets to be this sort of amorphous tool that anybody can use.

Yves: And has it been eroded, our notion of reasonable expectation of privacy, by just our relationship with technology and data over the last 20 years? How much has that to do with it?

Woodrow: Everything to do with it. So the other fundamental weakness of the reasonable expectations of privacy test is that our expectations of privacy can and do get slowly eroded over time. So when you base your threshold for privacy violations on what people expect, you give permission to powerful entities and governments to set those expectations how they want over time.

And so, we know that the arc of surveillance only goes up, that people only become more surveilled over time. We also know that people inevitably get used to it. They get used to it for two different reasons, one of which is they come to see it as unexceptional. In other words, the more it happens, when people put Ring doorbells on every single door, we get used to it. When we get used to our face getting scanned when we board an airplane, we come to see it as unexceptional. That which was exceptional becomes routine.

Once something becomes routine, then another psychological phenomenon happens, which is that we come to think of it as favourable. And the reason why is because it happens all the time. And in the background our mind is like, “Well, surely someone would have objected if this were a bad thing. No one has, therefore it must be okay.” At least it must not be bad. And so we do this with everything. And so the problem with conditioning privacy violations on what people expect is that we can be conditioned to expect anything, and inevitably will be.

And so unless there’s a firm backstop in privacy rules, we will be acclimated to all surveillance and our reasonable expectation of privacy will inevitably be ground into paste over time, because there’s never an occasion for us to stop and say hey, maybe this goes too far. Because the way it happens is that someone does something that seems creepy. They put the Ring doorbell camera up for the first time. You may recall having experienced the first time you walked up to someone’s house and they had a Ring doorbell camera, and you’re like, “Oh that’s weird. I guess we’re videotaping people when we come to people’s doors now.”

And you feel weird about it for a little while, and then it just becomes commonplace. And that’s what happens with all surveillance technologies.

Yves: So a lot has happened over the last little while. I mean we’re talking about the speed of adoption here obviously of technology and our relationships, how fast our relationships with adoption of technology change. A lot has been going on in the field of AI recently. And just to list a few things here: it’s been a little over a year since ChatGPT was released to the public, a little over a month since the White House issued its executive order establishing new standards for AI safety and security. It’s been about three weeks since the OpenAI board versus CEO fiasco with Sam Altman.

About a week ago, we had the AI Safety Summit where the big questions were: should AI be regulated? Can it be regulated? How should it be regulated, all that kind of stuff. So obviously, the world is reacting to this technology and I’m wondering what goes through your mind as you watch all this happen in such rapid succession?

Woodrow: You’re right, things are moving fast. The first thing I think about is we know how this is going to play out.

Yves: Do we?

Woodrow: There’s a lot of discussion around, oh, this is the newest thing. The world is unprepared. And we are. But we know how it’s going to go. We know that corporations are going to take these tools and use them for financial gain. We know that they’ll take every scrap of data they can get. They’ll attempt to control our behaviour in the most financially lucrative ways. We know government’s going to find these tools irresistible, because they also want to control the way people act; it’s their natural sort of inclination.

And so it should come as no surprise that any of what has happened so far has played out the way it has. The other thing I think about is it’s a real shame that the fate of billions of people hinges upon the whims of tech CEOs that have barely any connection to reality. I think that a lot of these CEOs in these companies live in a dream world that’s strongly untethered to the suffering that’s actually happening with people on the ground.

And so, I hope that we can get past the sort of golly-gee-whiz aspect of a lot of AI, and really get down to the ways in which we know for sure companies are going to try to leverage these tools, and that we have tools set up already to stop this.

That’s the other thing. You hear a lot that the law is never going to be able to keep up with the pace of technology. And that’s not true. That’s not true because we can create laws that completely anticipate a lot of the problems that arise. What we lack is the will to enforce them or the commitments to hold these companies accountable to basic principles of loyalty and care and confidentiality and non-exploitation and non-discrimination.

These are old and established values in legal frameworks. We just lack so far the commitments to hold companies to them. And that’s maybe based on the idea that oh, well what about innovation? But that canard is getting old quick.

Yves: But let me just ask you – and I do want to get to the regulatory environment that’s being discussed right now – but it’s interesting what happened in that board dispute over firing or rehiring or not firing Sam Altman, because it seems to me that the golly-gee-whiz really did win out in that one. And I’m not saying that the board handled that matter well at all. Having said that, the advocates of AI safety lost that battle pretty badly.

Woodrow: Yeah, which is all the more of an example in a long line of examples to show that self-regulation is doomed to fail. And we should not even entertain it as an idea because companies are going to do what companies do and have always done. And so it’s regulators’ job to make sure that they have a set of boundaries to do that within. And the idea that we should defer to tech companies because they somehow know better strikes me as a complete abdication of what governments should be doing in this space.

Yves: So how would you describe what’s going on from a regulatory view in the US, and this is important for me to ask you because we’re speaking to you from Canada, and we’re struggling a little bit with our own debates around AI. There’s an AI act that is being debated currently that hasn’t been voted into law. People are saying we’re moving a little too quickly, we should look at what Europe is doing, we should look at what the US is doing. Why does Canada want to be first? It’s never first, it’s always third. So I’m trying to get a sense from you, I mean should we be waiting on the US? Or is the US finally moving ahead? What’s the situation there?

Woodrow: Don’t wait on the US because you’re going to be really disappointed if you wait around too long. So up until about a year ago, what was happening in the United States on AI was the same thing that has been happening for quite some time. Congress was doing nothing probably for several reasons, one of which was they didn’t want to disrupt innovation because AI, like the internet, was a brand new little baby, even though it’s not, and we wouldn’t want to do anything to jeopardize the sort of development of that, as though companies have a complete inability to respond to regulation.

But over the past year I’ve noticed there’s been a little more bipartisan agreement that this is a system of tools that needs rules quick, before things get out of hand, because they see what happened with the internet, and the horse has left that barn. And they feel as though they should have acted much sooner, and perhaps we’d be in a better position. Perhaps we would have avoided a lot of the harms that we saw that rocked people when the Snowden disclosures came out, or Cambridge Analytica.

And so they see that and I think are thinking about making a move earlier. There’s also a confluence of interests that have made for kind of strange bedfellows, one of which is concern around the vulnerability of kids, which has brought a lot of people to the table that previously were not there.

Another one has to do with this sort of weird ‘AI is going to kill us all’ strand of thought that has brought some people to the table. I think that that is probably wildly overblown. And someone was talking about how this is just another version of AI hype, that oh wow, this is how powerful AI is. The real problem is that non-superintelligent AI is currently being used in ways that create harms for people right now.

And that’s the reason why we need to act, because again, if we don’t act early we become acclimated to systems of surveillance, we become accustomed to these tools that become entrenched. And as a result, it becomes a lot more difficult to pass rules, but not inevitable, because that’s the other thing that I want to push back against, is this idea that AI is inevitable. It’s not. People control what tools are made and that’s what laws are for.

And so you’re seeing now at the Senate and at the state level some bipartisan agreements about the need to regulate AI in a really robust way, in a way that would require pre-clearance for a lot of AI tools, in a way that would outright prohibit certain kinds of practices, in a way that would get rid of intermediary immunity like Section 230 in the United States.

And I think that, to the extent that lawmakers are coming together on this, it’s a positive development. Now, I don’t want to get too enthusiastic and say we’re going to see something pass because that’s an entirely different conversation than seeing bills introduced. But it’s a start.

Yves: How did you receive the White House executive order?

Woodrow: I think the White House executive order is a mixed bag. On one hand I applaud the initiative to try to get ahead of the problem. I like the way in which money is proposed to be distributed. I like the call for more research and more information about the harms of AI. I like the calls for all sorts of more serious engagement by regulatory agencies with AI.

That being said, it also still feels like a bit of a half measure in that we don’t have a lot of outright prohibitions or really sort of meaningful, substantive rules. There were some things that feel kind of underbaked in the order, particularly the ways in which AI interacts with labour exploitation and other areas.

And so, maybe this is just politics in that there was some good in there and I was happy to see it, but other areas that seem to embrace the inevitability of AI in a way that was disturbing to me, because it seems to be skipping past the conversation about whether particular AI systems should exist at all. And until we start having those conversations, it feels as though a lot of these tools are going to be pushed into society according to the whims of venture capital and not in a way that is the result of meaningful, democratic deliberation.

Yves: I want to pick up on that point because people do talk about: is there a risk we’re rushing into regulating AI without really understanding the harms, some of which we can’t even imagine, I think. This presents a challenge for lawmakers in writing up the rules that will govern the use of AI tools when we have trouble defining what is actually harmful and abusive.

But then that kind of leads me to a pretty naïve question which is – and I apologize for this – but why not an all-out ban? I mean yes, it does sound super naïve, and I know people will say this will result in technological stagnation. By banning AI, or uses of it, we will eliminate innovation. And I think you’re sort of alluding to this as well. But at the same time, we invented the atom bomb and we didn’t share it with everybody. We restricted its use for obvious reasons.

There was a headline at one point. There was a survey done at the Yale CEO Summit this summer. Forty-two percent of CEOs said AI could destroy humanity in five to ten years. Now, they’re just CEOs, what do they know? Fine, but even if the odds of destroying humanity were, say, 10%, or if you told me there was an asteroid that was about to brush closely by Earth, but with maybe a 2% or 3% chance of hitting us, I think the planet would get together to make sure that we would get that asteroid off its course.

Why is it so ridiculous to suggest that we should have an all-out ban before we figure things out for a little while?

Woodrow: Oh, I don’t think it’s ridiculous. I think it’s incredibly wise. I think that we should get significantly more comfortable with outright prohibitions on AI systems because the alternative is to allow the sort of slow drip of AI systems to get entrenched in our everyday lives at which point if we do wake up and realize, “Hey, that was a horrible idea, shouldn’t have let that go,” it becomes a lot harder to dislodge them, not impossible, but harder.

And so I think it’s a great idea to think about outright prohibitions and systems where the default is a requirement to get permission before you’re allowed to develop it, because then that lets us innovate in the areas where we want AI, where it actually could be on balance better for society, rather than letting a thousand AI flowers bloom, many of which will make life significantly worse, certainly for vulnerable populations, and probably for everyone.

So I’ve advocated for an outright ban on facial recognition technologies, or at the least facial surveillance, and at the least emotion recognition and face surveillance in public. And the idea that we have to sort of permit all AI to flourish strikes me as really misguided, because there are – and this is why I say we know how this is going to end – we know how these tools are going to be used.

Someone creates a deepfake creator and then it’s like, oh wow, you mean everyone’s using it for pornography purposes, for cyber harassment, to create deepfakes of their classmates in really harmful, injurious ways. We knew exactly how that was going to play out. And it did as predicted.

And so, I think that we need to have a little more confidence, given the historical record of tech companies in this space, confidence in regulators to draw lines outright prohibiting those sorts of systems that we know are going to, on balance, create more harm than good, and then create a system by which we can actually meaningfully engage in a democratic deliberation around the AI systems that we want, and focus on enhancing those.

And a lot of people say, well that’s a bad idea because it will hinder innovation. But if it hinders innovation into tools that will make society worse off, and we feel pretty confident it will, then that’s good, because it will channel innovation towards the systems that we think will actually improve our lives.

Yves: How much of our attitude towards this is informed by this global race for AI innovation? And I’m not talking just about the corporate realm. There is a race among countries, China, the US, the EU. I know the EU is perhaps considered less innovative on this front and therefore is taking a little bit of a harder line, certainly than we are in North America, in terms of AI safety. How does that fit into the picture?

Woodrow: So I think that’s a more complex picture than is often painted. So whenever I mention the need for outright prohibitions on particular systems, a lot of what I hear is, “Oh, well what about the other country? What about China developing the AI, or what about the EU developing AI?” A), it’s worth noting that China in fact has relatively – it’s a complex picture about their regulatory environment and their use of AI. B), it feels like a really misguided race to get in to create the tool that can create the most harm to society as a whole. That’s not a race that we should be participating in.

And this is an opportunity for real leadership. And we talk about sort of global leadership. This is an opportunity for us to meaningfully come together as a collection of nations and say: these tools are instruments of human suffering and will make people’s lives worse off. And we’re not going to create a market in them, in the same way that we’re not going to create a market in blinding lasers – that these are tools that, on balance, make things worse.

And I think it’s okay that we then seek innovation in other areas that can be more helpful. In a way, it will actually make us more productive in the areas where we can design systems that won’t lead to such harm. And so I really push back against the idea that we have to indulge in the development of all these systems that are so harmful and so destructive simply because some other country is doing it.

Yves: The EU was relatively successful though – I think some people would start to question that a little bit – in terms of pushing its views on the regulation of privacy. And it did that through the deployment of the GDPR and rendering its scope practically extraterritorial. Now it’s got the EU AI Act which is expected to become law pretty soon, could be just a matter of months. How much do you think that might influence our approach to regulating AI here in North America? Do you think that Brussels effect will carry over again?

Woodrow: That’s a great question. I think it could. It’s difficult to know. As we’re recording this podcast I believe that they’re almost a full day into the final negotiations around the AI Act. I don’t know what the final version is going to look like. I fear that some of the more robust provisions, like prohibitions on face surveillance in public and emotion recognition, are going to get watered down.

But maybe that makes it even more likely that the model can be exported to the US, because they’d probably be more receptive to a less robust version. Certainly the AI Act is structured in a way around these tiers of risk that might be seen as attractive in the US. And I think that we’re seeing that even get kicked around. I haven’t seen some of the exact language in bills that are being discussed on Capitol Hill, so I don’t know whether Senators Blumenthal and Hawley’s AI bill is going to look a lot like the AI Act. But it may be structured somewhat similarly, at which point I think that we could say that we’re going to see another Brussels effect.

What will really matter is whether California, which is often the first actor in the United States, creates some version of that and passes it, maybe through its ballot initiative process which is the way in which we got the CCPA or I guess the CPRA as well.

Yves: Which is the consumer protection.

Woodrow: Right, the California Consumer Privacy Act and the California Privacy Rights Act, which modified the CCPA. And then other states said, oh, that’s a great idea, and sort of passed some version of that. And now somewhere just under 20 states, I think, have passed privacy rules so far. And if California does it again, then I expect you’ll see it again.

And what I would be interested to see is whether the rest of the world also follows along as well. One of the interesting aspects of the GDPR, and the way in which it was able to sort of export that model, was a refusal to allow data to transfer across borders unless the destination has adequate protection, adequate rules.

And so I don’t know whether something like that is going to end up in the AI Act, but it would matter if it did.

Yves: Right. So let’s talk a little bit about perhaps what an AI law or an AI legal regulatory framework should look like. What do you think policy makers – where do you think they should draw inspiration from in terms of regulating AI? And I’m wondering, again I mentioned earlier the atom bomb. I mean should we be drawing inspiration from highly regulated industries like the nuclear industry or is that too much? If it were up to you, how would you try to approach this issue?

Woodrow: I’d start with kind of two observations. One, AI is not one thing. So to the extent that we can be sensitive to the different kinds of AI systems and the different contexts in which we want to deploy them and the different affordances that those systems offer, we should be sensitive to that, because it feels like a mistake to try to treat all machine learning systems the same.

The second way I would try to approach this would be to gather the most effective previously existing regulatory approaches or legislative approaches to those particular AI systems in context and see what works. And so in some instances, it might be borrowing from nuclear energy and thinking about having a really robust pre-clearance setup, or maybe the FDA is the right model for certain sorts of AI systems.

But another approach could also be borrowing from products liability law. I mean there are all of these amazing doctrines that came out of the industrial revolution and the torts revolution in the United States around consumer protection rules and products liability law, and making sure there are protections for employees in labour, and thinking about revitalizing labour organization. And so it’s going to take, I think, a commitment on multiple fronts to reenergize a lot of the hard-fought protections that have already been instituted in various areas of the law that we’re now seeing be clawed back or scraped back. And maybe it started in the 1990s around the internet, but revitalizing those protections.

Because at the end of the day, what people want is to be safe and they want to have their information be held in confidential ways and they want to not be betrayed. And they want to not be wrongfully discriminated against, and they want to have a safe and fair workplace and have a voice in their conditions of working.

And so I think that if we keep those values first and foremost front and centre, democracy, equality, fair allocation and use of power, then we can draw upon a lot of existing tools or revitalize those tools rather than sort of create something from scratch.

But maybe certain sorts of models for various systems do need to be at least modelled, not created from scratch, but at least modelled on existing regulatory institutions like the FDA for example.

Yves: To your first point, one of the things that’s being contemplated here in Canada is having an AI commissioner.

Woodrow: Right.

Yves: And that’s coming under pretty heavy criticism from some quarters because there shouldn’t be a ministry of AI. AI is pervasive in all aspects of life and will be pervasive in all aspects of life, and therefore it should be handled in the labour ministry in a certain way. It should be handled perhaps by the competition authority in a certain way. So I guess what you’re saying here is having a sort of minister or a commissioner of AI is a bit of an easy solution to a much bigger problem, and that we need to look at this from almost a product liability vantage point. But then who do we make responsible for AI that does cause harm to users?

Woodrow: Yeah, so that’s a great question. I would say that it actually might be a great idea to have a lot of technical expertise situated in its own governmental agency or institution. And I could imagine a commissioner of technology. Maybe it feels a little too narrow to only base it on AI because that’s the term that we use now. But maybe that changes in the future. And nanotechnology is also right around the bend – not nanotechnology, neurotechnology, although maybe nanotechnology as well.

But neurotechnology is also part of it, and so maybe we have a commissioner of technology.

Yves: Quantum computing.

Woodrow: Right exactly. But I could see a really effective role in a ministry of technology not as an enforcer of rules, but as a coordinator of expertise and wisdom, about information sharing, about coordinating efforts amongst all the other regulatory agencies that are in charge of, in fact, fair labour and employment conditions, or consumer protection, or data protection, or medical devices that have AI implanted in them.

So I do think that there may be a lot of wisdom, in fact, in having this coordinating, expertise-facilitating role. And I think that would be a great idea, which then answers maybe your next question, which is: to the extent that we have always created government infrastructure for solving certain problems, we should embolden those regulators.

So in the United States, the consumer protection authority, the FTC, should be emboldened. DPAs should be created or emboldened. There are all sorts of existing regulators and existing theories of accountability through courts and legislative regimes that could be expanded in a much more, I think, efficient and precise way to respond to the foreseeable harms of a lot of these systems.

Yves: You’ve written that transparency is a popular proposed solution for opaque AI systems but does not produce accountability on its own. Are we just running after a false promise on transparency?

Woodrow: It’s a false promise to think that it’s enough. Transparency is one of what I and other researchers at the Cordell Institute at Washington University have called AI half measures, which are necessary but not sufficient interventions to meaningfully hold companies that produce AI systems accountable. Transparency is the right starting point, but we have to recognize it as the starting point.

Where I fear we will run aground is when regulators say: look at all the transparency that we’ve provided, it will enable self-regulatory efforts, it will enable all sorts of meaningful things. But transparency alone doesn’t really meaningfully change a lot of power equations for people. So just because I know that facial recognition is being used on me in public doesn’t change my need to be out in public or my need to sort of move about my day. We need better and more protections for that if we’re going to really respond to the true dangers of it.

And so, I think that it’s just important to categorize it as the first step rather than a solution in and of itself.

Yves: You also spoke to the fact that you obviously don’t think data protection laws are enough to take on the challenge of AI. So is that a reflection of what you might perceive as a failure of our privacy laws in managing data in the online world over the last 20 years?

Woodrow: Yeah, I applaud so many of the data protection interventions that have happened over the past 20 years. But I’ve been critical of them as well because I don’t think that they are sufficient to solve a lot of the larger problems around power and information and technology that we’re seeing. And part of that has to do with the original goal, the stated goal of many data protection laws, which is informational self-determination. Informational self-determination is a laudable goal. It’s one that certainly you can see value in it and see it as necessary to a flourishing society.

But when that’s the ultimate objective of a lot of these rules, it ends up getting cashed out in ways that ignore the collective and social effects of information ecosystems. So it ignores the limitations of an individual to fully understand the risks that they’re facing and their own vulnerabilities within these ecosystems. And it also ignores the ways in which my data affects other people. And when we base our protections around the collective wisdom of millions of individual self-motivated decisions, I don’t know if that’s the same thing as a rule that reflects the wisdom of what’s best for society.

And so, I think that it’s important to incorporate a more societal approach, a more collective approach, an approach that focuses on relationships, not just individual choices to effectuate their own individual informational destinies.

The best example of this is my sister a while back kept taking these genetic tests. And I was begging her, “Please don’t do that. That’s also my data as well.” And sure enough, you know, an epic hack of 23andMe, with catastrophic results, which again, was entirely predictable. And so it just shows sort of the limits of a highly atomized, individualized approach to data protection.

Yves: There’s always that person in the family who wants to know the family tree. They should just do it the old fashioned way.

Woodrow: It just shows the limits of data protection, so not so much that data protection has failed in its stated goal. I think that the GDPR probably did a relatively good job of building out a meaningful infrastructure for what it wanted to accomplish, which is informational self-determination. Though it’s fair to ask questions about whether consent rules are even practically possible given the aspirations of meaningful consent, which is a whole other thing.

Yves: And I think fewer and fewer people really believe in that anymore.

Woodrow: Right.

Yves: So you’re basically asking lawmakers to address these power imbalances, or to take this societal approach. What does that look like? Do you have any examples of what that might mean concretely in terms of how could we correct these power imbalances in the distribution of this data?

Woodrow: Sure, absolutely. So the broad answer – and you’re right, it takes a theory to beat a theory. And if I’m suggesting that we move away from control and consent models, what would take its place, the answer is something that Neil Richards, who’s a law professor at Washington University, and I have argued for years, which is we should embrace a relational approach to data privacy which would be anchored by a duty of loyalty and duty of care and a duty of confidentiality.

And here, this is a way to protect people no matter what they choose, because what would matter is the obligation on the recipient of information to keep the trust that they’re given. And in a series of articles and other proposals, we built out what a duty of loyalty would look like. And it’s gaining popularity. It’s been introduced into several bills here in the United States. And it also exists in regulatory proposals abroad like the Data Governance Act within the EU.

And the reason we like a duty of loyalty is because it exists within relationships and is acutely sensitive to power disparities within those relationships. The greater the disparity, the greater the obligation on behalf of those companies to act in the best interests of the vulnerable parties.

Now, there’s a lot of discussion around that and there’s a fair number of concerns that people have had. And we’ve tried to respond to them, one of which is, well, what does that actually look like in practice? What it looks like in practice results in what I call the three Ds: design, defaults, and data dead-ends. We need meaningful accountability rules in the design of these information technologies. This looks like data protection by design and by default as part of it, but also other aspects of it as well.

And so I think that this is where products liability rules potentially could come in. We could have default rules prohibiting information collection in particular kinds of contexts, or in the very least, one of the things that I feel like the GDPR did really right is require a legal basis before you process data. So that’s a default presumption about whether information can be processed or not.

And then finally, we need data dead-ends. There need to be some particular areas where you just don’t get to collect the data and you just have to deal with it. And innovation will route around it, and maybe we can channel that into a more meaningful and productive area.

All of that falling within this larger umbrella about a duty of loyalty to the vulnerable data subjects, which will colour the interpretation and the enforcement of a lot of those very specific rules.

Yves: Now, it sounds really good. I’m just wondering, technologically, is it even feasible, because maybe I’m confusing and mixing ideas here, but the one thing that sort of strikes me as kind of scary about this race to develop AI tools is what seems to be increasingly apparent, that the coders and the engineers themselves don’t necessarily know how these machines work.

And I wonder – and this is back to the issue of transparency or explainability, which is another value that’s being pushed especially by the EU – I mean, if we can’t explain and we don’t really fully understand how these algorithms are working, how can we impose these duties of loyalty, care and confidentiality on the producers of these products?

Woodrow: I think that we A), can isolate the design decisions that are made and say: were these design decisions made knowing the sort of foreseeable consequences to the people that are trusting you? And that’s a little easier to tell. There are maybe some unforeseeable consequences. And if those consequences are unforeseeable, then maybe it’s fair that we would say, well, that wasn’t a disloyal choice you made because it wasn’t foreseeable that it was adverse to the interests of the trusting party.

And this is product liability law and negligence law, all based around this idea of foreseeability. So that’s fine. But if it is foreseeable, even if you can’t explain how it works, if the consequences are foreseeable, we should absolutely hold them to it.

And we should get a lot more comfortable with the idea that we don’t want to write a blank cheque to companies to say do whatever you want and then you get to hide behind this liability shield because you couldn’t have predicted what the harm was. Like if, in broad strokes, you have no idea how this is going to react, then maybe we should exercise a little bit of caution, given that the outer boundaries of harm are potentially significant, which they are in a lot of cases.

And so, I think there are limits to the value of explainability. I care a lot more about A), ways in which design choices were made, B), ways in which these systems are deployed regardless of whether we can understand how they work or not, and C), whether there’s an outsized benefit to the company and an outsized risk to the vulnerable party.

Yves: What if the law were just to run its course? What if liability laws as they are in the books now were to run their course? What are the players exposed to? What are the producers of AI tools exposed to in the long run?

Woodrow: In theory or in practice?

Yves: I guess what I’m trying to get at is, A), like how do they respond to your suggestion that they should be held to a duty of loyalty, care and confidentiality? And B), is it in their best interest to embrace those values for their own self-preservation?

Woodrow: Ah, great question. Well, there’s no solution that I can foresee that allows companies to keep their current business model. That’s actually the problem with all of this. So any meaningful change is going to require companies to leave a little money on the table or change their business model up, because a lot of it is inherently disloyal.

We have to make peace with that. If we are trying to preserve the corrosive business models now, we’re just going to get [unintelligible 00:52:46]. That being said, Ryan Calo, who’s a law professor at the University of Washington, has said that one of the biggest problems is not necessarily that we lack new laws; we lack the commitment to enforce the laws that we’ve got on the books. And I would agree.

We have a lot of rules now that prohibit unfair and deceptive trade practices, abusive trade practices, unreasonably dangerous products, but we haven’t had the will to interpret or enforce those rules in ways that I think respond to a lot of the clear disloyal and dangerous behaviour of AI companies now.

And so, this is why I think legislative intervention is so important, because it’s not going to change incrementally. Companies are quick to give lip service, “Oh yeah, we definitely want to do what’s best for society.” But if you read between the lines, what it means is, “We want to do what’s best for society while keeping our business model intact,” which is of course what they would say, because that’s why they exist as a company: to make money off certain business models. And they’re of course going to advocate vigorously for that. So maybe it’s a mistake to fault them for that, but this is where lawmakers have to get involved and change the incentives.

And if you don’t change the underlying incentives in the business models, for example by prohibiting targeted advertising or behavioural advertising outright, then we’re just sort of futzing around the edges and not meaningfully sort of getting at the heart of the rot that is causing a lot of the dangers that we’re seeing with AI.

Yves: So we started this conversation off by talking about how quickly things have moved over the last year, and it really has been quite extraordinary. How do you see the year ahead playing out?

Woodrow: I have no idea, because if you had asked me a year ago whether we’d have bipartisan consensus on AI regulation, at least in various sub-committees, I would have said there’s no way we’re going to have that. And we do. And so, I have two sides of me that are constantly at war, one of which is a kind of fatalism that looks at the trajectory that we’ve hit so far and says Congress is not going to do anything, and until something changes at the state level, they’re just going to be happy to let the sort of slow drip of AI continue to go out and restructure the way in which societal relations continue.

But at the same time, I am optimistic that there’s some glimpse of light sort of cracking through here on meaningful regulation. Maybe we should have recorded this later in the day and known how the AI Act was going to be finalized, because if a lot of the more robust provisions of that get watered down, then I’ll be less optimistic overall. But if they make it through the dialogue process, then perhaps we’ll see meaningful trickle-down effects from this robust intervention.

So I guess I’ll end by saying I’m cautiously optimistic.

Yves: Woodrow Hartzog, thank you so much for joining us on Modern Law and I appreciate you taking the time to have this conversation with us.

Woodrow: It’s a pleasure to be here. Thank you so much for having me.