Machine learning and artificial intelligence make extraordinary discoveries possible, and autonomous systems are being rolled out in vital business and social settings, including healthcare, policing, and education. Yet assumptions about the neutrality and objectivity of data can encode serious social and political bias into the results. High-profile examples show how these systems already incorporate human flaws, biases, and assumptions into their design, especially about women and their role in society. In this talk, Professor Gina Neff will show how thinking explicitly about gender in AI can help designers build systems that help humans make better, and fairer, decisions.