To improve how algorithms make decisions, computer scientists at Technische Universität Kaiserslautern (TUK) are seeking insight from other disciplines into how those fields define a good and fair decision.
Researchers from law, political science, psychology, philosophy and computer science gathered to exchange definitions at the workshop “Mathematical Quantifications and other models on the quality and fairness of (automatic) decision making,” held at TUK in March.
Computer algorithms process datasets to find trends and patterns, and models then apply those established relationships to new data, for example to identify who in a population is likely to buy a certain product. While this seems relatively straightforward, bias can be introduced in many ways, including how computer scientists design the algorithm's goals or thresholds and how they weight different criteria or factors. The datasets the algorithms are initially trained on can also perpetuate existing imbalances.
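The effect of these design choices can be seen in a minimal sketch of a scoring model. All names, feature values, weights and the threshold below are purely hypothetical illustrations, not any system mentioned in the article; the point is only that the designer's "knobs" (weights and cutoff) determine who gets a positive decision.

```python
# Hypothetical sketch: how designer-chosen weights and a decision
# threshold shape the outcome of a simple weighted-score classifier.

def score(features, weights):
    """Weighted sum of feature values -- the 'knobs' the designer sets."""
    return sum(w * x for w, x in zip(weights, features))

def decide(features, weights, threshold):
    """Binary decision: positive if the weighted score clears the cutoff."""
    return score(features, weights) >= threshold

# Two illustrative applicants, strong on different features.
applicant_a = [0.9, 0.2]
applicant_b = [0.2, 0.9]

threshold = 0.5          # designer's choice: cutoff for a positive decision
weights = [0.7, 0.3]     # designer's choice: first feature dominates

print(decide(applicant_a, weights, threshold))  # True  (score 0.69)
print(decide(applicant_b, weights, threshold))  # False (score 0.41)

# Swapping the weights flips the outcomes for the same applicants,
# showing how the weighting alone redistributes who is selected.
print(decide(applicant_a, [0.3, 0.7], threshold))  # False
print(decide(applicant_b, [0.3, 0.7], threshold))  # True
```

Neither applicant changed; only the designer's weighting did, which is one concrete way bias can enter an otherwise mechanical process.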
“Most of the algorithms have little knobs and levers, and you have to decide which levers are on or off,” said Katharina Zweig, a computer science professor at TUK.
This might not cause great concern for shopping and advertising, but algorithmic decision making systems (ADMs) are being introduced in spheres where the stakes are much higher. For example, governments in Poland and Austria are using ADMs to decide how much financial assistance to give unemployed citizens, and the U.S. is trying an ADM system to evaluate whether people eligible for parole are likely to commit another crime.
As ADMs spread, Zweig and colleagues are seeking to clarify what makes an ADM system fair and to investigate whether ADMs have any place in criminal justice systems. The multi-year study, funded by the German Federal Ministry of Education and Research (BMBF), is highly interdisciplinary.
Read the full magazine article here: unispectrum.de