Machine learning and artificial intelligence make extraordinary discoveries possible, and autonomous systems are being rolled out in vital business and social settings, including healthcare, policing, and education. Yet assumptions about the neutrality and objectivity of data can encode serious social and political bias into the results. High-profile examples show how these systems already bake human flaws, biases, and assumptions into their design, especially about women and their role in society. In this talk, Professor Gina Neff will argue that explicitly thinking about gender in AI will help designers build AI systems that help humans make better, and fairer, decisions.