Dorothy Chou full interview
If you read our responsible AI newsletter, read on for the full interview with Dorothy Chou.
Dorothy leads Policy & Public Engagement at Google DeepMind and is an angel investor with the VC firm Atomico. She has spent her career building social justice, ethics, and accountability structures at technology companies like Google and Uber.
In our interview with Dorothy, we covered the development of effective AI policy, responsible AI deployment and the importance of representation, equity and inclusivity within the industry.
Milltown Partners: As the AI regulatory landscape develops, policy teams at leading AI labs need to ensure they contribute to effective policy making in a responsible way that retains public trust. Can you give us a sense of how you think about that in your work?
Dorothy: I think about policy dynamics across three levels - market forces, sector norms, and formal regulation:
In terms of market forces, VCs are really at the top of the funnel. They fund and incentivize all kinds of different behaviour, and the flywheel they create shapes how founders seeking funding behave. I know, for example, that Radical Ventures is putting together a diligence framework that VCs can use to ask a set of questions and understand whether a company is behaving responsibly. Similarly, Atomico has DEI and ESG as requirements in their term sheets, which also changes founder behaviour.
Any company releasing technology is going to have a norm-setting function. Especially for the bigger companies leading in the space, it’s really important to consider when you deploy your products and what tests and requirements you put in place to decide whether something is ready to launch. Obviously you can’t catch everything - that’s why systems of redress exist - but when something goes wrong, there often aren’t laws covering it yet. Responsible disclosure, for example, developed among industry players and is now the standard way people deal with bugs in software; I’m keen for that kind of norm-setting to happen here too. This is probably easier to do on the deployment side, but ultimately responsibility has to start at the development phase. Diversity of data sources is a crucial piece, and so is how you review how systems are performing internally - all of those benchmarks are defined at the development phase by people who need to take up these standards and ask themselves, “How am I affecting things?”
I also think corporate boards have a really big role to play - they’re there to govern, both from a market perspective and from an ethics perspective.
The difference in pace between regulation and technology might not be a bug - it’s a feature of democratic deliberation. People often get upset that governments can’t keep up, or despair that laws will always lag behind. But I think it’s okay for governments to allow time for people to study emerging technologies and understand the ins and outs. There are ways to push companies to race to the top and be normatively responsible (see cybersecurity and responsible/coordinated disclosure as an example), and then to put laws in place once the dynamics are clear. That’s how we can ensure laws are adaptable enough to keep up with the technology - future-proof rather than quickly obsolete. The other option is to have laws that specify outcomes instead of processes. For self-driving cars, for example, you can say, “I want to decrease fatalities on the road by this much,” and it doesn’t actually matter how the companies get there, just that they get there. I do think regulation is incredibly important, but in some cases it’s okay for the regulatory process to move more slowly, if companies and investors are really vigilant about shaping developments.
Milltown Partners: You’ve recently created a network through which Google DeepMind engages with early-stage AI companies and labs on AI governance. Why is this important to you, and how do you think AI labs and companies that use AI models can work together?
Dorothy: I started this network because early-stage companies often lack the resources to focus on policy, and I see it as an opportunity to broaden the conversations we are having with government by inviting other key players to the table.
Policy is more than government relations. It also requires community engagement - with civil society and with industry peers. Facilitating, understanding, and listening are important normative practices, and the more diverse the voices included, the better for everyone.
Normative policy development requires collaboration across companies, not just those with the most resources. By engaging early-stage AI firms on these issues, we can build consensus and shape policy through a democratic process rather than top-down mandates.
Milltown Partners: In your current role leading GDM’s Policy and Public Engagement team - and in your past roles at Uber, Dropbox and Google - you’ve identified and solved a number of challenges where policies were disproportionately affecting certain groups. What lessons have you learned about how companies can effectively identify and remove bias either within their organisations or in the external environment?
Dorothy: One example is from Uber, where I worked on making safety protocols around drivers’ criminal records more equitable. A lot of our driver requirements were based on criminal records. But criminal records are shaped by where people live and which communities are being policed, so we quickly found that our requirements disproportionately affected people of colour and immigrants. Obviously, if you have driving infractions that are really problematic, you should not be driving for Uber. But if you have an offence from years ago relating to a fraudulent cheque, for example, why should that preclude you from being able to work?
In terms of lessons learned: when we talked to advocates from civil society, they had differing views - some, for example, wanted the entire labour system reformed. But people in the impacted communities just wanted to provide for their families. By listening to them directly, we were able to reform laws in a way that enabled a lot of people to join Uber who had previously not been able to work. When we monitored their driver ratings over time, they had the same or higher average ratings than the rest of the driver population. Sometimes it’s important to get down to brass tacks about what people need. When you engage with the communities that are impacted, they will advocate for themselves, so you should just pass them the microphone. We don’t need to speak for them.
Milltown Partners: One major focus for many working on AI alignment and ethics is enabling co-design and equitable, participatory approaches to building AI with input from underrepresented communities and marginalised groups. This has been a key focus in GDM’s approach. Why is it so important for AI labs to centre equity in their approaches to responsible development?
Dorothy: AI companies today are very homogeneous. It’s presumptuous to think teams like that can build inclusive services for everyone in society. While hiring diversely is important in the long term, good public engagement with marginalised communities can also help in the short term. This is challenging but necessary work, and it’s work that civil society groups have expertise in.
In particular, it’s critical that excluded communities help reconstruct and reimagine better systems for the future. That future shouldn't be designed by a handful of people at AI companies alone.
That’s why at Google DeepMind we try to empower people first to understand AI, and then to envision their own desired future with it. When we partnered with Central Saint Martins, we gave students seminars on AI and then let them visualise their own AI-enabled futures. Eight out of ten focused on sustainability - very indicative of what the next generation cares about. Their ideas were novel, and their approach to design was one that worked for them rather than for a company.
We're also doing roundtables with people in various occupations to understand how they want to use AI in their fields. This unlocks much better insights than assuming what might be best for them. Fundamentally, we have to give people agency over the technology shaping their lives, not make assumptions on their behalf. This will ensure tech is built more equitably.
Milltown Partners: The AI regulatory environment is being shaped by various political, societal, and economic pressures, and there is a range of issues governments need to consider - impacts on jobs, national security, equity, productivity, and more. What pressures do you think are top of mind for governments right now as they think about AI?
Dorothy: Governments need to consider how to incentivize companies to serve everyone, and how to establish systems for testing and accountability. But the infrastructure for robust testing doesn’t really exist yet. Rather than getting into power struggles with companies, governments should focus on investing to build that ecosystem capacity.
Government also needs more perspective from the public, not just from industry. Government’s role is to represent what people care about. But lately the conversation has focused on what governments demand rather than on how they are going to help the ecosystem and the community develop AI safely and responsibly.
The ethics and safety community is exhausted. We need more people entering this space. By funding scholarships and bolstering the communities that have to do the actual work, the government can make a real difference.
Milltown Partners: And finally, what’s the one thing you hope to see as an outcome from the UK Government’s upcoming AI Summit?
Dorothy: It’s challenging to make significant progress in any multinational government negotiation, so beyond statements, I think it’s important to discuss how to implement norms. The White House AI Commitments are great, but who will do the actual work of putting them into practice? Companies clearly have a role, but civil society is also key - they’re the ones who red-team these models and help us uphold these standards.
And to close, I’ll say this: beyond international agreements, I would love to see leaders agree on how they are going to facilitate civil society and bring them into the room. I know you can’t have everybody at the table, but having some diverse voices will make a huge difference. When you look at the people making decisions about AI right now, they all look the same - and that is something I would love people to pay attention to and consciously think about changing.