Times Technology Went Straight Up Evil
Machine learning is an ever-growing presence in our lives. Complex algorithms make decisions on our behalf about where we should eat, what entertainment we should consume, and which street to turn down during a traffic jam.
Companies and organizations also use them to make decisions about the people in their care or employ, and that's where things start to go downhill. Like so many technologies, these systems are only as good as the people who make them, which means they come preloaded with their creators' biases. The bias isn't necessarily intentional, but that doesn't stop it from existing.
Facial recognition algorithms famously display biases based on race and gender, either failing to recognize people at all or doing a poor job of it. According to Harvard University, a number of algorithms have error rates of up to 34% when identifying darker-skinned women, compared with lighter-skinned men. That becomes a serious problem when law enforcement uses facial recognition to make decisions about individuals.
There are similar problems with algorithms intended to guide healthcare decisions. As explained by Nature, an algorithm used in U.S. hospitals was found to discriminate against Black patients, giving preference to white patients for certain treatments and programs.
That these biases exist is something we need to recognize and work diligently to fix.
via SlashGear https://ift.tt/d0yEXAc
March 31, 2022 at 02:03PM