Using AI to find biases in AI

It is also difficult to know how bad the problem is. “We have very little of the data needed to model the broader societal issues with these systems, including bias,” said Jack Clark, one of the authors of the AI Index, an effort to track AI technology and policies across the world. “Many of the things that matter to the average person, such as fairness, are not yet being measured in a disciplined way or on a large scale.”

Ms. O’Sullivan, who majored in philosophy in college and is a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a well-known AI ethics researcher who spent years at the business consulting firm Accenture before joining Twitter.

While other startups, such as Fiddler AI and Weights & Biases, offer tools for monitoring artificial intelligence services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies, and methods a business uses to build its services, then identify areas of risk and suggest changes.

The tool itself uses artificial intelligence technology that can be biased in its own right, underscoring the double-edged nature of AI and the difficulty of Ms. O’Sullivan’s task.

Tools that identify bias in AI are imperfect, just as AI is imperfect. But the value of such a tool, she said, lies in flagging potential problems and getting people to look closely at them.

Ultimately, she explained, the goal is to create a broader dialogue among people with a wide range of viewpoints. Trouble arises when the problem is ignored, or when those discussing it all share the same point of view.

“You need diverse perspectives. But can you get truly diverse perspectives at one company?” Ms. O’Sullivan asked. “It’s a very important question I’m not sure I can answer.”