Machine learning and artificial intelligence make extraordinary discoveries possible, and autonomous systems are being rolled out in vital business and social settings, including healthcare, policing, and education. Yet assumptions about the neutrality and objectivity of data can encode serious social and political bias into the results. High-profile examples show how these systems already incorporate human flaws, biases, and assumptions into their design, especially about women and their role in society. In this talk, Professor Gina Neff will argue that thinking explicitly about gender in AI will help designers build systems that help humans make better, and fairer, decisions.