In the world of AI and machine learning, it’s common to hear fear-based statements in the media, or from your neighbors, about the poor decisions unchecked algorithms might make — everything from denying you credit to launching nuclear missiles. What most people rarely hear about are the actual challenges that cause AI practitioners to worry.
At FICO World 2019 in New York, I sat down with Kate Crawford to discuss these kinds of problems with AI. Kate is a Distinguished Research Professor at NYU and a Principal Researcher at Microsoft Research, as well as the co-founder of AI Now.
We found a lot of common ground as we explored data bias, untrained data scientists and other concerns. We are both highly focused on Ethical AI — making sure that explainability and accountability are at the core of AI development and use.
(Incidentally, it’s not just Kate and me who are worried about this. At FICO World 2018, I sat down with Garry Kasparov to discuss the same issue — see our discussion here.)
It’s a common truth that the quality of any analytics project comes back to the data, and data bias is certainly something we both pay close attention to. Kate referenced recent cases of analytic models that were in effect discriminatory because of sample bias. For example, one firm built a model to identify the traits of a successful leader, but since the development data consisted of the firm’s current leaders — who were nearly all white males — the model was heavily biased toward white males.
Ultimately, the success of any analytics project lies not in the data, not in the techniques, but in the data scientists themselves. It’s important that as a community we discuss these issues and trade potential solutions, and I found my discussion with Kate rewarding and encouraging. I hope this video gives you something to talk about in your organization.
via https://www.AiUpNow.com
November 12, 2020 at 04:54AM by IoT Now Magazine, Khareem Sudlow