Showing posts with label ethical AI.

Thursday, April 4, 2019

Google’s brand-new AI ethics board is already falling apart; Vox, April 3, 2019

Kelsey Piper, Vox; Google’s brand-new AI ethics board is already falling apart

"Of the eight people listed in Google’s initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won’t serve, and two others are the subject of petitions calling for their removal — Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James’s removal.

James and Gibbens are two of the three women on the board. The third, Joanna Bryson, was asked if she was comfortable serving on a board with James, and answered, “Believe it or not, I know worse about one of the other people.”

Altogether, it’s not the most promising start for the board.

The whole situation is embarrassing to Google, but it also illustrates something deeper: AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.

A role on Google’s AI board is an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing — and no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal — in a way that suggests Google is treating AI ethics more like a PR problem than a substantive one."

Tuesday, March 6, 2018

Here’s how Canada can be a global leader in ethical AI; The Conversation, February 22, 2018

The Conversation; Here’s how Canada can be a global leader in ethical AI

"Putting Canada in the lead

Canada has a clear choice. Either it embraces the potential of being a leader in responsible AI, or it risks legitimating a race to the bottom where ethics, equity and justice are absent.

Better guidance for researchers on how the Canadian Charter of Rights and Freedoms relates to AI research and development is a good first step. From there, Canada can create a just, equitable and stable foundation for a research agenda that situates the new technology within longstanding social institutions.

Canada also needs a more coordinated, inclusive national effort that prioritizes otherwise marginalized voices. These consultations will be key to positioning Canada as a beacon in this field.

Without these measures, Canada could lag behind. Europe is already drafting important new approaches to data protection. New York City launched a task force this fall to become a global leader on governing automated decision making. We hope this leads to active consultation with city agencies, academics across the sciences and the humanities as well as community groups, from Data for Black Lives to Picture the Homeless, and consideration of algorithmic impact assessments.

These initiatives should provide a helpful context as Canada develops its own governance strategy and works out how to include Indigenous knowledge within that.

If Canada develops a strong national strategy for AI governance that works across sectors and disciplines, it can lead at the global level."