Machine learning and artificial intelligence make extraordinary discoveries possible, and autonomous systems are being rolled out in vital business and social settings, including healthcare, policing, and education. Yet assumptions about the neutrality and objectivity of data can encode serious social and political bias into the results. High-profile examples show how these systems already incorporate human flaws, biases, and assumptions into their design, especially about women and their role in society. In this talk, Professor Gina Neff will show that thinking explicitly about gender in AI will help designers build AI systems that help humans make better and fairer decisions.