A collective of young brainiacs out of MIT’s Media Lab has uncovered an issue of concern for those embracing machine learning platforms for high-level predictive analytics: what if the data sets we are using carry unconscious bias? Given the way artificial neural networks learn from data and perform pattern recognition, will bias in those data sets become more concentrated over time?

The Algorithmic Justice League has not only identified the problem; it is searching for and testing solutions. Founder Joy Buolamwini, a graduate researcher at MIT, began by collecting instances of facial recognition systems that excluded entire populations. Convinced that inclusion is essential to full human potential, and that bias in data sets can, over time, lead to exclusionary and discriminatory practices in finance, law enforcement, and criminal justice, she started the Algorithmic Justice League to develop ways of testing algorithms and data sets at several points in development, so that unconscious bias does not contaminate the results.
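The checkpoint testing described above can start with something as simple as comparing a model's outcomes across demographic groups. The sketch below is a minimal, hypothetical illustration of such an audit in Python; the records, group labels, and 0.2 threshold are invented for this example and are not the Algorithmic Justice League's actual methodology.

```python
# A minimal sketch of a group-disparity audit: given a model's predictions
# tagged with a sensitive attribute, compare positive-outcome rates across
# groups. All names ("group", "approved", the 0.2 threshold) are hypothetical.
from collections import defaultdict

def positive_rate_by_group(records):
    """Return the fraction of positive predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        positives[rec["group"]] += rec["approved"]
    return {g: positives[g] / totals[g] for g in totals}

# Example: a loan-approval model's predictions, audited across two groups.
sample = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

rates = positive_rate_by_group(sample)
print(rates)  # group A ~0.67, group B ~0.33
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # the threshold here is an arbitrary illustration
    print(f"Warning: approval-rate gap of {gap:.2f} across groups")
```

Running an audit like this at each stage of development, on the raw data, the training set, and the deployed model's outputs, is one way to catch a skew before it hardens into discriminatory behavior.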

Using the STEM to STEAM model of empowering innovative thought through art and design, the Justice League is providing an open platform for people around the world to report instances of data bias, and is supporting creative exploration and art that communicate both the issues and the solutions.

Civil rights seem to be under attack from all sides, and the last thing anyone wants is machine learning platforms working with less than adequate data. They can only work with what we give them, and we are imperfect. But careful study and testing through the various stages of algorithm development will help make data sets inclusive, and let machine learning platforms give us the maximum possible benefit.
