In a 2017 study, Microsoft found that AI can amplify sexist bias in datasets. It has also had to improve its facial recognition error rates for people of color, to the tune of 2,000%. Similar issues have arisen with competitors, including a self-driving Uber car that struck and killed a pedestrian. “This is the point in the cycle[…]where we need to engineer responsibility into the very fabric of the technology,” said Shum at the event. As mentioned, Microsoft is already beginning to address these concerns. In machine learning, much of the problem lies in the dataset an AI is trained on. A facial dataset pulled from random people on the street may not contain sufficient data on minorities, for example.
Regulation May Be Required
Shum says his company is addressing this by adding more photos covering a variety of skin colors, eyebrows, and lighting conditions. He also pointed to Microsoft’s AI and ethics committee and its role in the Partnership on AI. However, as AI becomes more complex, so will the challenge of understanding it. Shum acknowledged that while self-regulation is helpful, it’s unlikely to be enough on its own. “We really need the cooperation across academia and industry. We also need to educate consumers about where the content comes from that they are seeing and using,” he said. Others argue that legal regulation is the only way to ensure cooperation. Consumers and employees have already voiced their discomfort with Microsoft’s alleged provision of facial recognition tech to ICE, and with Google’s defense contract. Microsoft has since called for facial recognition regulation that would stop companies from using the technology without customers’ knowledge.