Machine learning and artificial intelligence make extraordinary discoveries possible, and autonomous systems are being rolled out in vital business and social settings, including healthcare, policing, and education. Yet assumptions about the neutrality and objectivity of data can encode serious social and political bias into the results. High-profile examples show how these systems already embed human flaws, biases, and assumptions into their design, especially about women and their role in society. In this talk, Professor Gina Neff will show that thinking explicitly about gender in AI will help designers build AI systems that help humans make better, and fairer, decisions.