The Human Costs of AI
"It turns out, then, that the most significant takeaway from a letter warning of the potential dangers of artificial intelligence might be its insistence that AI systems “must do what we want them to do.”And what is that? Even now, just six years later, that list is too long to catalog. Most of us have encountered scripted, artificially intelligent customer service bots whose main purpose seems to be forestalling conversations with actual humans. We have relied on AI to tell us what television shows to watch and where to dine. AI has helped people with brain injuries operate robotic arms and decipher verbal thoughts into audible words. AI delivers the results of our Google searches, as well as serving us ads based on those searches. AI is shaping the taste profile of plant-based burgers. AI has been used to monitor farmers’fields, compute credit scores, kill an Iranian nuclear scientist, grade papers, fill prescriptions, diagnose various kinds of cancers, write newspaper articles, buy and sell stocks, and decide which actors to cast in big-budget films in order to maximize the return on investment. By now, AI is as ambient as the Internet itself. In the words of the computer scientist Andrew Ng, artificial intelligence is “the new electricity.”
Historical data, for example, has the built-in problem of reflecting and reinforcing historical patterns. A good example of this is a so-called talent management system built a few years ago by developers at Amazon. Their goal was to automate the hiring of potential software engineers with an AI system that could sort through hundreds of résumés and score them the way Amazon shoppers rate products. The AI selected the highest scorers and rejected the rest. But when the developers looked at the results, they found that the system was only recommending men. This was because the AI system had been trained on a dataset of Amazon résumés from employees the company had hired in the past ten years, almost all of whom were men.
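To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The résumés, words, and hiring outcomes are invented toy data, not Amazon's dataset or code; the toy model simply memorizes which words co-occurred with past hires, so a term like “women's” ends up penalized solely because past hires were mostly men.

```python
# A hypothetical sketch of how a résumé screener trained on historical
# hiring data can absorb historical bias. All data here is invented.
from collections import Counter
import math

# Toy history: words on past candidates' résumés and whether the
# company hired them. Most past hires are men, so words correlated
# with women's résumés appear mostly among rejections.
history = [
    (["software", "engineer", "chess", "captain"], True),
    (["software", "developer", "robotics"], True),
    (["engineer", "systems", "chess"], True),
    (["software", "engineer", "women's", "chess", "captain"], False),
    (["developer", "women's", "coding", "club"], False),
    (["software", "systems", "engineer"], True),
]

hired_words, rejected_words = Counter(), Counter()
for words, hired in history:
    (hired_words if hired else rejected_words).update(words)

def weight(word: str) -> float:
    """Log-odds of a word appearing among hires vs. rejections
    (with add-one smoothing). This is the 'learning' step: the model
    memorizes historical patterns, bias included."""
    return math.log((hired_words[word] + 1) / (rejected_words[word] + 1))

def score(resume: list[str]) -> float:
    return sum(weight(w) for w in resume)

# Two equally qualified candidates; one résumé mentions a women's club.
print(score(["software", "engineer", "chess", "captain"]))
print(score(["software", "engineer", "women's", "chess", "captain"]))
# The second score is lower solely because "women's" co-occurred with
# past rejections: the system reproduces the historical skew.
```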
Bias can be inadvertently introduced into AI systems in other ways, too. A study that looked at three major facial recognition systems found that they failed to identify gender just 1 percent of the time when the subject was a white male. When the subject was a darker-skinned female, however, the error rate was nearly 35 percent for two of the companies and 21 percent for the third. This was not a mistake. The developers had trained their algorithms on datasets composed primarily of people who looked like them, and in so doing introduced bias into the system.
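The arithmetic behind such an audit is simple; what matters is disaggregating the results. The sketch below uses invented toy records (not the study's data) whose rates loosely echo the figures above, to show how per-group error rates expose a disparity that a single overall accuracy number would hide.

```python
# A minimal sketch of a disaggregated audit: error rates computed per
# demographic subgroup rather than one overall accuracy. Toy data only.
from collections import defaultdict

# (subgroup, prediction_correct) pairs from a hypothetical
# gender-classification system.
results = (
    [("lighter-skinned male", True)] * 99
    + [("lighter-skinned male", False)] * 1
    + [("darker-skinned female", True)] * 65
    + [("darker-skinned female", False)] * 35
)

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")
# Prints 1% for one group and 35% for the other. A single aggregate
# accuracy (~82% here) would hide the fact that nearly all errors fall
# on one subgroup, which is why audits report error rates per group.
```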
The consequences of these kinds of errors can be profound. They have caused Facebook to label Black men as primates; they could cause an autonomous vehicle to fail to recognize a woman with dark skin crossing the street; and they could lead the police to arrest the wrong man.
Other kinds of bias are even more subtle. Many AI systems are proprietary. Shielded behind intellectual property laws, they are often opaque, even to the people employing them. They are similarly inscrutable to the population at large. Consider the credit score: for most of us, it is a number lurking in the background, not just of our financial lives but of what our financial lives lead to, like mortgages and spending limits on credit cards. In the past, a credit score was typically a reflection of how conscientiously one paid bills and settled debts. Now there is talk of enhancing it with “alternative” data culled from social media and the Internet.
More in the article by Sue Halpern.
■