In her speech on algorithms, Joy Buolamwini elucidates how seemingly impartial facial recognition systems (and indeed algorithms of all sorts) can ultimately carry biases against certain groups of people.
Buolamwini first encountered algorithmic bias while working as a graduate student at MIT. She was developing a prototype for a smart mirror that could detect users' faces and overlay masks on them, like a physical, reflective version of Snapchat filters. In testing her work, though, she found that the algorithm had difficulty recognizing her face unless she wore a white mask. Buolamwini, a person of color, realized that the issue came down to her appearance, since the algorithm performed flawlessly on other faces.
To solve the problem, Buolamwini advocates that programmers train machine learning algorithms on more diverse sets of faces: a training set that represents a wider range of people produces recognition that works for a wider range of people, and thus carries less bias.
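One way this kind of bias is surfaced in practice is by measuring a model's accuracy separately for each demographic group rather than in aggregate. The sketch below is a minimal, illustrative version of such a per-group audit; the group labels and the numbers are entirely made up, not results from any real system or from Buolamwini's work.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute detection accuracy separately for each demographic group.

    records: iterable of (group, detected) pairs, where `detected` is True
    if the model successfully recognized the face. Hypothetical data format
    chosen for illustration.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, detected in records:
        totals[group] += 1
        if detected:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Made-up audit data illustrating a skewed model: an overall accuracy of
# 84% would hide a large gap between the two groups.
records = (
    [("group_a", True)] * 98 + [("group_a", False)] * 2
    + [("group_b", True)] * 70 + [("group_b", False)] * 30
)

rates = accuracy_by_group(records)
```

Disaggregating the metric this way is what reveals the problem: a single overall accuracy number can look acceptable even when the model fails far more often for one group than another.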