Machine learning and artificial intelligence make extraordinary discoveries possible, and autonomous systems are being rolled out in vital business and social settings, including healthcare, policing, and education. Yet assumptions about the neutrality and objectivity of data can encode serious social and political bias into the results. High-profile examples show how these systems already incorporate human flaws, biases, and assumptions into their design, especially about women and their role in society. In this talk, Professor Gina Neff will argue that thinking explicitly about gender in AI will help designers build AI systems that support humans in making better, and fairer, decisions.