AI Answers: CMU's Rayid Ghani Testifies to Senate Committee
By Michael Henninger
On Thursday, Carnegie Mellon University’s Rayid Ghani testified as a witness during the U.S. Senate Homeland Security and Governmental Affairs Committee hearing entitled “Governing AI Through Acquisition and Procurement.”
Ghani, who graduated from Carnegie Mellon in 2001, is a Distinguished Career Professor in CMU’s Machine Learning Department and the Heinz College of Information Systems and Public Policy. Ghani served as chief scientist for the Obama for America 2012 election campaign and joined the CMU faculty in 2019. He started the Data Science for Social Good Fellowship to train computer scientists, statisticians and social scientists to use data on problems with social impact.
Before his trip to Washington, Ghani sat down to discuss recent AI advancements. The following has been edited and condensed.
From left, witnesses Rayid Ghani, Fei-Fei Li, Devaki Raj, William Roberts and Michael Shellenberger are sworn in at the start of the hearing.
Q: What is AI and how is it being used?
RG: Think of AI as a set of tools to help us make better decisions.
It's being used in a lot of aspects of society, ranging from companies using it to show us ads, sell us things, recommend movies — to self-driving cars. But governments and nonprofits are using it in similar ways: to help improve society, to help us improve public health services that we're provided or improve employment opportunities; and skilling people, or improving educational outcomes and criminal justice outcomes — often to be more preventative than reactive.
Q: What are some of the general concerns you've heard from the public about the use of AI?
RG: There are a few big issues: awareness, understanding, transparency and accountability. We often don't know where and when AI is being used. It's not as if you can go somewhere and say, “Tell me on a typical day, how was I affected by AI?”
We don't always have a place to understand where it's being used, how it affects us, how it was developed, why it was developed, what it was designed to do. How well is it working? Who is accountable for the actions taken using AI systems? Do I have any recourse if I don't want to be part of it? There's a whole series of things going on where the public isn't necessarily involved. And that needs to change.
Q: What are concerns you’re hearing from government about AI?
RG: Policymakers fall along a spectrum between two extremes. One extreme is hearing all the hype around AI and thinking that it can solve every problem that exists, without really having the expertise and resources to understand when it's applicable and when it's not.
The other extreme is a fear of AI and thinking that it's going to destroy the world and take over everything. And, well, life will be over. And the reality is sort of somewhere in the middle; it's going to change a lot of things. It can help solve certain problems. It's better for certain things. And so I think the struggle they're having is in getting a better understanding of what it is, what it can do, what it cannot do, and most importantly, how to use and govern it so that it is an effective tool to help us achieve our societal goals and policies.
Q: The topic of Artificial Intelligence can create fear and distrust. What are the benefits of AI if handled properly?
RG: There’s a huge set of benefits we can reap if we design these AI systems to do what we want them to do explicitly. While AI has been traditionally used to improve efficiency, the biggest impact I hope is in the use of AI to improve equity in outcomes in our society. But that requires deliberately designing, building and deploying the AI systems to achieve that.
RG: Now, the problem is that the use of AI also comes with risks: risks of propagating biases that exist in a lot of human processes; risks around lack of transparency; and risks around who is accountable for any mistakes the system makes. And so the question we are all focused on is: how do we ensure that the systems we build give us the benefits we care about, increase equity in education outcomes and health outcomes, and minimize the risks?
That requires academia, industry, governments to work together to figure out what needs to be done. We need to build new frameworks. We need to train new people. We need to build new tools that are focused on tackling these problems in a way that's responsible and fair.
Q: How is CMU’s collaborative environment helping to advance your work in AI ethics and policy?
RG: We've got a long history of people who have worked in different disciplines, who cross boundaries.
We've got expertise in technology, and in societal systems. We have expertise in policy, expertise in ethics. And it's not just having expertise in those areas individually. It's having the set of people who want to cross these boundaries, who are able to not just be experts in those areas, but also collaborate.
That comes together when you bring all these people together and build systems that have actual, tangible societal impact.
Q: How is CMU working to bring harmony and transparency to our society as AI becomes part of our new normal?
RG: Traditionally, when people have built AI systems, they have often been built without including the people affected in the process. My colleagues and fellow researchers at CMU are working on how to include the people impacted by a system in the design process.
We need to have participants be part of the design process, part of the evaluation process. System developers can only know so much about what needs to be built, but we need to understand the perspectives of the people who are going to be using these systems that we're building.
Q: What excites you most about AI?
RG: We haven't created a perfect world. We have lots of societal issues today. People like me have the background and the skill set to be good collaborators to help improve those things but we need to do exactly that — support and collaborate with the people on the ground who are doing this hard work every day. We've got a lot of issues around access to education, access to health care, access to legal help, access to transportation.
AI has the potential — and I say potential because if done badly, it could make things worse — to reduce some of these equity gaps. And I think we're at the important juncture today where we have to do something that explicitly focuses on this problem.