Suresh Venkat Radio Interview
February 20, 2018
Suresh Venkatasubramanian, professor of computing at the University of Utah, talks with Larry Mantle on AirTalk about algorithms for pre-trial risk assessment.
Interview from 89.3 KPCC “Air Talk”
The decision of whether to release a defendant on bail, and under what conditions, is usually left in the hands of judges, but some courtrooms are now turning to risk-assessment AI systems in an effort to make the process less biased.
One commonly used system — the Laura and John Arnold Foundation’s Public Safety Assessment — is now used in nearly 38 jurisdictions, including four counties and one city in Arizona, and Santa Cruz County in California. The system processes data on a defendant based on factors such as their prior convictions, past behavior, and age to create two scores on a scale of 1-6: the likelihood that the defendant will skip out on their court date, and the likelihood that they will commit another crime. These scores are one of many factors a judge can choose to incorporate into a pre-trial release decision.
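To make the structure of such a tool concrete, here is a minimal sketch of a point-based assessment of the kind described above. The factor names, weights, and thresholds are all illustrative assumptions; the Public Safety Assessment's actual formula is not reproduced here.

```python
# Hypothetical sketch of a point-based pre-trial risk assessment.
# All factors and weights below are illustrative assumptions,
# NOT the Public Safety Assessment's actual scoring rules.

def risk_scores(age, prior_convictions, prior_failures_to_appear,
                pending_charge):
    """Return (failure_to_appear, new_criminal_activity) scores, each 1-6."""
    fta_points = 0
    nca_points = 0

    # Past behavior: missed court dates raise the failure-to-appear score.
    fta_points += min(prior_failures_to_appear, 2) * 2
    if pending_charge:
        fta_points += 1

    # Criminal history: prior convictions raise the new-activity score.
    nca_points += min(prior_convictions, 3)
    if pending_charge:
        nca_points += 1
    if age < 23:  # assumed youth factor, as age is one of the listed inputs
        nca_points += 2

    # Map raw points onto the 1-6 scale described in the article.
    def to_scale(points, max_points):
        return 1 + round(5 * min(points, max_points) / max_points)

    return to_scale(fta_points, 5), to_scale(nca_points, 6)

# Example: a hypothetical 21-year-old with two prior convictions,
# one missed court date, and a pending charge.
fta, nca = risk_scores(age=21, prior_convictions=2,
                       prior_failures_to_appear=1, pending_charge=True)
```

The key design point such tools share is that the output is a coarse ordinal score handed to a judge, not an automated decision: the mapping from raw points to the 1-6 scale deliberately discards detail.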
Proponents of using AI systems in pre-trial decisions are hopeful that this will reduce human bias and even replace the cash bail system. But critics are afraid that judges will grow too reliant on these scores, and there are concerns that the system itself may have prejudice baked into it. The argument goes as follows: these risk-assessment systems rely on data about prior convictions, and because pre-existing human bias means people of color interact more with the criminal justice system, they will end up with higher risk scores than white defendants.
We talk with a researcher who is currently running a study on the Arnold Foundation’s Public Safety Assessment scoring system, as well as a professor who studies algorithmic fairness.