Showing posts with label organizations. Show all posts

Thursday, May 16, 2024

Michael Wade and Tomoko Yokoi, Harvard Business Review (HBR); How to Implement AI — Responsibly

"Regrettably, our research suggests that such proactive measures are the exception rather than the rule. While AI ethics is high on the agenda for many organizations, translating AI principles into practices and behaviors is proving easier said than done. However, with stiff financial penalties at stake for noncompliance, there’s little time to waste. What should leaders do to double-down on their responsible AI initiatives?

To find answers, we engaged with organizations across a variety of industries, each at a different stage of implementing responsible AI. While data engineers and data scientists typically take on most responsibility from conception to production of AI development lifecycles, nontechnical leaders can play a key role in ensuring the integration of responsible AI. We identified four key moves — translate, integrate, calibrate and proliferate — that leaders can make to ensure that responsible AI practices are fully integrated into broader operational standards."

Sunday, September 24, 2023

How To Approach AI Adoption Ethically And Responsibly Within Your Organization; Forbes, September 24, 2023

Rhett Power, Forbes; How To Approach AI Adoption Ethically And Responsibly Within Your Organization

"In order to take full advantage of everything AI technology has to offer, you must be careful and efficient when adding this technology to your organization’s processes. Luckily, you can do a few things to ensure a smooth and flawless transition. Here are four strategies that can pave the way for ethical implementation...

2. Remain up to date on all regulations.

In addition to establishing an AI ethics advisor, it is essential to remain current on the ever-evolving regulations surrounding the use of AI. As the technology advances rapidly, laws will be enacted to address ethical concerns and protect individuals’ rights. By proactively addressing potential problems related to privacy infringement or bias algorithms through adherence to regulations, organizations can foster a positive reputation while harnessing the benefits of AI innovation. Remaining current on all the regulations ensures your organization meets all legal requirements and industry standards.

Until legal requirements and industry standards are ironed out, you must aim to be as transparent as possible. “Currently, there is no way to peer into the inner workings of an AI tool and guarantee that the system is producing accurate or fair output,” says Tsedal Neeley, Naylor Fitzhugh Professor of Business Administration and senior associate dean of faculty and research at Harvard Business School. “As a consequence, leaders should exercise careful judgment in determining when and how it’s appropriate to use AI, and they should document when and how AI is being used. That way people will know that an AI-driven decision was appraised with an appropriate level of skepticism, including its potential risks or shortcomings.”"

Saturday, March 5, 2022

Statements of Solidarity with Colleagues in Ukraine by Archive, Library, and Other Organizations; Info Docket, Library Journal, February 27, 2022

Info Docket, Library Journal; Statements of Solidarity with Colleagues in Ukraine by Archive, Library, and Other Organizations

"Statements of Solidarity and Support (Latest Entries in Bold)

Sunday, January 28, 2018

Hillary Clinton, Burns Strider, and the Fault Lines of #MeToo; The Atlantic, January 26, 2018

Megan Garber, The Atlantic; Hillary Clinton, Burns Strider, and the Fault Lines of #MeToo

"The Times story paints a picture of a Hillary Clinton who is, given her history, both a recipient of harassment and a passive enabler of it. A manager, in other words, like so many of the others who have been revealed in the journalism of the post-Weinstein months: one who learns of an accusation of harassment and addresses it by disrupting the life of the alleged victim, rather than the life of the alleged perpetrator. The boss who found enough evidence of Burns Strider’s wrongdoing to dock his pay and put him in counseling … but who kept him on staff—with all its many other young women—nonetheless. Here is Clinton serving, yet again, as a rich metaphor—this time, though, for complacency and complicity. For powerful people who are concerned, but not concerned enough.

And also: for managers who meet the humanity at the heart of harassment allegations with the clinical language of corporate callousness. It’s unsurprising, perhaps, but notable nonetheless that Clinton responded to the Times’ reporting with a statement that was many steps removed from Clinton, the person: It was written by Utrecht, Kleinfeld, Fiori, Partners, the law firm that had represented the campaign in 2008 (and that, the Times puts it, has “been involved on sexual harassment issues”). The statement was delivered, from there, through an unnamed Clinton spokesman. “To ensure a safe working environment,” it read, “the campaign had a process to address complaints of misconduct or harassment. When matters arose, they were reviewed in accordance with these policies, and appropriate action was taken. This complaint was no exception.”

So while it was Clinton, the manager, the Times report goes, who made the decision to keep Strider on her team, Clinton, the manager, is notably absent from today’s explanation of things. She has outsourced her own decision-making, it seems, to discussions of process and policies—the same anonymous structures that so many other managers have relied on for legal, and moral, insulation. What were the “processes” that kept Strider in his job and his accuser out of hers? You are not supposed to ask. “Processes” are meant to be the answers to their own questions. So are “policies.” Corporations-as-people, if you’d like, but the framework falls apart when organizations are able to deny that humanity as soon as it becomes a liability."