Machine learning and artificial intelligence make extraordinary discoveries possible, and autonomous systems are being rolled out in vital business and social settings including healthcare, policing, and education. Yet assumptions about the neutrality and objectivity of data can encode serious social and political bias into the results. High-profile examples show how these systems already build human flaws, biases, and assumptions into their design, especially assumptions about women and their role in society. In this talk, Professor Gina Neff will show that explicitly thinking about gender in AI will help designers build AI systems that help humans make better—and fairer—decisions.