Brattle Consultants Discuss AI’s Potential to Cause Disparate Impact in Housing, Employment, Credit and Lending Decisions in Recent Law360 Article
Published by Law360
With the popularity of artificial intelligence-powered tools like ChatGPT on the rise, organizations are increasingly using AI models to help make complex decisions. Doing so introduces the risk of unintended discrimination, known as disparate impact, against protected classes of individuals.
In a recent article for Law360, Brattle Principal Dr. Christine Polek and Senior Associate Dr. Shastri Sandy explore how AI decision models can cause disparate impact in areas such as housing, employment, credit, and healthcare, and, in doing so, violate the laws and protections that govern those areas. They discuss red flags that regulators and consumer advocacy groups have raised about algorithms’ potential to produce discriminatory results, and they provide an overview of the statistical, economic, and qualitative tools available to evaluate and address AI-induced disparate impact.
The full article, “Don’t Assume AI is Smart Enough to Avoid Unintended Bias,” is available from Law360.