
Carnegie Mellon Drives Progress on Public Interest Technology, Launching Innovative Projects to Prepare Future Leaders


By Scottie Barsotti

Public Interest Technology is a growing field aimed at using technology to improve the transparency, accessibility, and efficiency of government. No small task.

Predicting fires and landslides. Optimizing public transportation. Rooting out hate speech. Fighting opioid addiction and deaths. Curbing algorithmic bias. Technology is a powerful tool that can help government officials take on all these problems and many others in areas critical to the public interest and well-being.

While the use of technology to improve government and serve the public interest is not a new endeavor, there is new energy behind defining and promoting “Public Interest Technology” as a field unto itself, and new resources being invested to strengthen it nationwide. 

“There’s an effort to democratize access to information,” said Christopher Goranson, Distinguished Service Professor at the Heinz College of Information Systems and Public Policy. “What we’ve seen is that technology can leave people out and we’re seeing a pushback against that. There’s an interest in empowering citizens, and breaking down barriers between citizens and the governments that are supposed to serve them, so there’s greater accountability.” 

Public Interest Technology also provides new opportunities to amplify diverse viewpoints. Goranson emphasizes that with many of the problems created or exacerbated by technology, it’s likely that someone with the right background or perspective could have predicted those negative outcomes earlier in the process.

“We need to make those voices louder and bring them to the table, whether it’s at a corporate level or a government level, so that we can avoid running into some of these problems in the first place,” he said.

Yulia Tsvetkov, assistant professor in the Language Technologies Institute (LTI) of the School of Computer Science, emphasizes the importance of bringing diverse people together, as many technological problems are too complex and knotty to solve within any one discipline.

So, who needs a seat at the table?

“It’s problem-specific,” said Tsvetkov, “but I think the list would include experts in technology, experts in policy, statisticians, social scientists who understand people’s behavior, activists who understand people’s needs, lawyers who understand what’s possible and what’s not, and directors of big companies who have the money to sponsor this research.”

“That’s a great summary,” said Goranson, adding that he would include designers and experts in human-computer interaction.

A Growing Field Combating Big Problems

In March, New America, the Ford Foundation, and the Hewlett Foundation launched the Public Interest Technology University Network (PIT-UN) and named CMU a charter member. More recently, New America put out a call for grant proposals for PIT projects that could have broad impacts across the field.

Two of those awards, roughly $180,000 in total, were given to CMU—one to Goranson and one to Tsvetkov. Those grants will help Goranson and Tsvetkov develop open-source, open-access blueprints for coursework in policy innovation and socially responsible language technologies, respectively, that will train future leaders in public interest technology and ethical AI. Goranson’s “Policy Innovation Lab” applies agile methodology and user-centered design to tackle public policy problems, working directly with government partners. Tsvetkov’s “Computational Ethics” course teaches students to identify and alleviate the negative impacts of bias in AI tools trained on user-generated data, like posts on social media, internet comments, email, and so on.

As an expert who works on AI, specifically natural language processing, Tsvetkov has a unique technical vantage point. She argues for the need to train technologists—public interest technologists and otherwise—to understand bias as they build new tools.

“If we train our tech and AI systems on biased data, those systems will perpetuate and amplify those biases. These systems can make decisions about people or for people, but the systems are biased themselves,” said Tsvetkov. “An interesting part of this research to me is to understand how to design systems that alleviate social biases, and to train a future generation of computer scientists who will develop AI with an awareness of social bias.”
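To make that mechanism concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. It is not drawn from Tsvetkov’s course; the dataset, group names, and labels are invented purely for illustration. A toy toxicity classifier is trained on comments in which one identity term happens to co-occur with toxic labels, and the model then flags a neutral sentence mentioning that term:

```python
# Hypothetical toy example (not from the article or the CMU course):
# a "toxicity" classifier trained on skewed, user-generated-style data
# learns to treat an identity term itself as a signal of toxicity.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training comments. Mentions of "groupA" happen to co-occur
# with toxic labels (1); mentions of "groupB" are mostly benign (0).
texts = [
    "groupA people are terrible",       # toxic
    "I hate groupA",                    # toxic
    "groupA ruined everything",         # toxic
    "had lunch with a groupA friend",   # mislabeled as toxic (noise)
    "groupB people are terrible",       # toxic
    "groupB folks are great",           # benign
    "I admire groupB",                  # benign
    "met a groupB neighbor today",      # benign
]
labels = [1, 1, 1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Two neutral probe sentences that differ only in the group mentioned.
probes = ["my groupA coworker is kind", "my groupB coworker is kind"]
for sent, p in zip(probes, clf.predict_proba(vec.transform(probes))[:, 1]):
    print(f"P(toxic) = {p:.2f} | {sent}")

# The groupA sentence scores far more "toxic" purely because of the
# identity token: the model absorbed the dataset's skew, not toxicity.
```

In practice, researchers run this kind of counterfactual probe, swapping only the identity term, at a much larger scale; the toy numbers here only illustrate the amplification Tsvetkov describes.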

Tsvetkov believes that people tend to overestimate technology’s abilities in many cases, assuming a technology will make better decisions because AI can use more data and make calculations more quickly and effectively. She cautions that we understand the risks of misusing people-centric AI far less well than we understand its potential benefits.

“AI does not have the contextual and cultural knowledge that we have, and that creates a risk,” she said. “Technology can be prone to mistakes of misclassification that a human would not make, that can be harmful.”

Goranson, an expert in applying technology to help address government challenges, believes that technology and data literacy are becoming essential skills both for certain government functions and for the people who interact with those government services.

“What this movement is trying to say is that it’s not just technologists who have a stake, it’s anyone who might touch technology or be affected by it. It could be your ethicists, or advocates, or doctors. Or it could be your attorneys who might represent a population affected by a government service or technology. There are all these tentacles, and the field of Public Interest Technology is intended to attract anyone who has an interest in working for the public good,” said Goranson.

Public Interest Technology Reframes the Crucial Concept of Tech Responsibility

One sticky question in the technology space has to do with responsibility. When a technology creates a negative outcome, whose responsibility is it? Is it the creator of the technology for not catching the problem while building it? Is it the end users for not using the technology responsibly? Is it companies for implementing a problematic solution? Is it regulators for not intervening, or for not even understanding the issue?

“We’re all in this together,” Goranson adds. “No one can absolve themselves of that responsibility. A big part of the impact this field can have is creating the understanding that we can’t just point fingers when a problem arises. Everyone has a role to play.”

Tsvetkov agrees, adding there is no simple answer to the question.

“It is hard to pinpoint who is responsible. I want my students to feel that, when they develop a new technology, it is their responsibility,” said Tsvetkov. “What I mean by that is: whoever is first aware of a potential problem, it is their responsibility to do something about it. We hope they will gain an awareness of the ways technology can be misused, and get them specifically interested in social good applications of their technology.” 

As the birthplace of artificial intelligence, home to the deepest bench of machine learning experts in academia, CMU and its students have outsized potential to build powerful technologies that are socially responsible. Once these ethically designed technologies get out of the lab, public interest technologists across disciplines can take the ball and run with it, so to speak, while ensuring their work does no harm.

“Technology helps government become smarter and more efficient and innovative, which is a good thing. Everyone wants a government that works better,” said Goranson.

“At the same time, the Public Interest Technology movement is helping people think more ethically and responsibly about technological solutions before they’re implemented, and making sure that all stakeholders have a voice in that process. And that’s really key.”

Learn more about PIT @ CMU