
Monday, April 20, 2026

Maryland passes legislation banning retailers from using personal data to set prices. Does it do enough?; WAMU, April 17, 2026

 Esther Ciammichilli, Jackson Sinnenberg, WAMU; Maryland passes legislation banning retailers from using personal data to set prices. Does it do enough?

"The Maryland General Assembly passed a bill this week will prohibit food retailers from changing the price of their products – in real time – depending on who is buying them. The practice is called dynamic pricing. 

The new legislation is expected to be signed into law by Governor Wes Moore, who introduced it with leaders in the General Assembly. It will specifically prohibit retailers from using personal protected data to set prices for individual customers. This kind of data includes biometric information like ethnicity, sex, and gender identity...

What made Governor Wes Moore and the assembly leadership want to tackle dynamic pricing during this session?

Well, I think we’ve seen over the last several years this sort of catch up that we’re doing. Technology is moving so fast and the tech companies are finding more and more ways to exploit, really, the data, the algorithms, what they know about us in ways that are really harmful to consumers.

Over the last few years we’ve had several bills that are about protecting biometrics, protecting consumer privacy, protecting against the use of data without people’s permission. I think over the last year we saw a new way that these tech companies and these large corporations are finding ways to combine data brokers, private personal data, in a way that’s really harmful to consumers, in a way that really exploits consumers. And so this year, this is what we tackled.

During the final debate over the bill last week, you said, “One of the largest corporations in the world is announcing to their shareholders technology which they will patent to be able to adjust prices based on personal data.” Can you elaborate on the details of that announcement?

Yeah, so, you know, Walmart is … they’re not going to have paper tags in their grocery stores anymore for their prices. They’re gonna have these little screens that can change immediately. Digital screens to price your milk and your eggs and flour and whatever else.

But what this technology allows them to do ultimately is to figure out who’s standing in front of that screen and change the price based on who you are. And that’s really the thing that we’re trying to get ahead of with this legislation."

Wednesday, April 1, 2026

Judge rules Trump order eliminating NPR, PBS funding is unconstitutional; The Washington Post, March 31, 2026

The Washington Post; Judge rules Trump order eliminating NPR, PBS funding is unconstitutional

Trump’s order violated the First Amendment rights of the public media giants, a federal judge in Washington found.


"A federal judge in Washington struck down part of President Donald Trump’s executive order targeting funding for NPR and the Public Broadcasting Service (PBS) on Tuesday, ruling that it was unconstitutional retaliation that violated their press freedom rights under the First Amendment.


The May 1, 2025, executive order, titled “Ending Taxpayer Subsidization of Biased Media,” cut off funding to public media — with Trump calling out what he perceived as left-wing bias in NPR’s and PBS’s news reporting.


“The message is clear,” U.S. District Judge Randolph Moss, a Barack Obama appointee to the federal bench, wrote in an opinion. “NPR and PBS need not apply for any federal benefit because the President disapproves of their ‘left-wing’ coverage of the news.” He added that the action amounted to “viewpoint discrimination.”"

Friday, March 27, 2026

Hegseth Strikes Two Black and Two Female Officers From Promotion List; The New York Times, March 27, 2026

 Greg Jaffe, Eric Schmitt and Helene Cooper, The New York Times; Hegseth Strikes Two Black and Two Female Officers From Promotion List

Defense Secretary Pete Hegseth’s highly unusual decision to remove officers from a one-star promotion list has spurred allegations of racial and gender bias.

"Defense Secretary Pete Hegseth is blocking the promotion of four Army officers to be one-star generals, a highly unusual move that has prompted some senior military officials to question whether the officers are being singled out because of their race or gender.

Two of the officers targeted by Mr. Hegseth are Black and two are women on a promotion list that consists of about three dozen officers, most of whom are white men, senior military officials said.

Mr. Hegseth had been pressing senior Army leaders, including Army Secretary Daniel P. Driscoll, for months to remove the officers’ names, military officials said. But Mr. Driscoll, citing the officers’ decades-long records of exemplary service, had repeatedly refused.

Earlier this month, Mr. Hegseth broke the logjam by unilaterally striking the officers’ names from the list, though it is not clear he has the legal authority to do so. The list is currently being reviewed by the White House, which is expected to send it to the Senate for final approval. A few female and Black officers remain on the list, military officials said.

It is exceedingly rare that a one-star list draws such intense scrutiny from a defense secretary. The battle highlights the bitter rifts opened by Mr. Hegseth’s campaign to reverse policies that he says are prejudiced against white officers.

Mr. Hegseth has said repeatedly that he is determined to change a culture corrupted by “foolish,” “reckless” and “woke” leaders from previous administrations. But his heavy scrutiny, especially of female and minority officers, is eroding confidence in a promotion system that is supposed to be apolitical and merit based, his critics have said.

This article is based on interviews with 11 current and former military and administration officials who requested anonymity to discuss sensitive personnel matters."

Tuesday, February 10, 2026

Don’t deny military community unbiased coverage of issues that matter to them; Stars and Stripes, February 5, 2026

 BERN ZOVISTOSKI, Stars and Stripes; Don’t deny military community unbiased coverage of issues that matter to them



[Kip Currier: Powerful testimonial of the importance of free and independent presses]


"Bern Zovistoski was editor of European Stars and Stripes from 1991 to 1996.

When Congress intervened several decades ago (1990) to change the way Stars and Stripes operated on behalf of the U.S. military worldwide, there was evidence of “undue influence” by the uniformed leadership.

The new directives adopted by the Department of Defense were aimed at eliminating military control over what to publish (or not publish) and to provide service members a newspaper that emulated the best aspects of American journalism, without censorship of any kind.

As the first editor of European Stars and Stripes under the revised policies, I was hired as a “colonel equivalent” with responsibility for ensuring fair and accurate news coverage, arriving at Stripes in Darmstadt, Germany, just 10 days before the massive air attack that launched Operation Desert Storm against Iraq.

I saw what the situation had been.

For nearly the next six years, I saw a remarkable team of civilian journalists and military members transform the newspaper into one with strong editorial integrity that offered service members unvarnished news and information — which, of course, they deserved.

During my tenure, I benefited from an excellent relationship with the two publishers with whom I worked: Air Force Col. Gene Townsend, who hired me, and Air Force Col. Steven Hoffman. Both supported my efforts to the hilt.

In fact, I learned during my tenure that a good number of officers supported our efforts.

When the Gulf War ensued, we deployed reporters just as many U.S. newspapers did, and in short order our daily circulation surged from about 80,000 to 250,000 — and many of those readers were engaged in battle.

Who would deny these men and women an unbiased view of the monumental events in which they were involved?

Based on all the signals from the Trump administration’s people, they would.

I had held virtually every position in the newsroom in my career up to this point, including 25 years at The Times-Union in Albany, N.Y. — 18 in a managerial role.

I learned that the purpose of a newspaper is to provide truthful news to its readers.

There were many instances during my nearly six-year tenure that demonstrated some military leaders wanted to — and tried to — alter what we were doing to serve our readers.

But believe me, none succeeded.

In closing, I believe this anecdote sums up our success:

When I arrived at Stripes, there were no letters to the editor.

Oh, an occasional question was printed, with a “company policy” answer by the military. In essence, our readers were not given an opportunity to receive answers to their questions or even to ask questions.

We implemented a policy that enabled any and every reader to write a letter to the editor — expressing whatever they wished — and required the writer sign his or her name!

We were deluged with letters.

That was a biggie (although common in U.S. newspapers that we were emulating).

This action confirmed that the newspaper truly belonged to the readers and served them.

It’s doubtful President Donald Trump, Defense Secretary Pete Hegseth or anyone else in the current federal administration understands — or, perhaps, it’s because they do, and that’s their problem."

Tuesday, January 27, 2026

After 36 hours justifying the killing of Alex Pretti, Fox News suddenly changes its narrative; Media Matters for America, January 26, 2026

 MATT GERTZ , Media Matters for America; After 36 hours justifying the killing of Alex Pretti, Fox News suddenly changes its narrative

"On Sunday evening, Fox News correspondent Bill Melugin published a lengthy report detailing internal dissent among his federal immigration enforcement sources regarding the narrative pushed by Department of Homeland Security leaders after Border Patrol officers gunned down Alex Pretti, an ICU nurse who had been videotaping their activities, in Minneapolis on Saturday morning. 

Amid the several hundred words describing an internal schism over how DHS is messaging masked agents of the state opening fire on a man who had already been restrained, Melugin slipped in the following statement: “There is no indication Pretti was there to murder law enforcement, as videos appear to show he never drew his holstered firearm.”

Melugin’s stark acknowledgement was whiplash-inducing for anyone who had been following Fox’s on-air coverage of Pretti’s killing up to that point, and it marked the start of a dramatic shift in the network’s treatment of the case.

Fox spent Saturday and much of Sunday blaming the victim and local Democrats for his death while excusing and even valorizing his executioners. In doing so the network was following in the footsteps of the high-ranking administration officials who baselessly argued that Pretti was a “would-be assassin” engaged in “domestic terrorism.” Melugin himself was the vehicle DHS used to launder its excuse that Pretti “was armed.” 

And notably, some Fox contributors repeatedly justified Pretti’s killing by going beyond the official comment to allege that he had drawn the gun he was reportedly legally carrying and that he even pointed it at the Border Patrol officers — the very claim Melugin said Sunday night had been disproved by videos.

The fallacy of the DHS smear of Pretti had long been clear to anyone who had reviewed videos of the shooting, triggering widespread outrage over his killing. But Melugin’s admission — and his reporting on a schism within immigration enforcement over the case — apparently provided his colleagues the permission structure they needed to abandon their narrative."

Tuesday, December 30, 2025

A code of ethics for AI in education; The Times of Israel, December 29, 2025

 Raz Frohlich, The Times of Israel; A code of ethics for AI in education

"Generative artificial intelligence is transforming every corner of our lives — how we communicate, create, work, and, inevitably, how we teach and learn. As educators, we cannot ignore its power, nor can we embrace it blindly. The rapid pace of AI innovation requires not only technical adaptation, but also deep ethical reflection.

As the largest education provider in Israel, we at Israel Sci-Tech Schools (ISTS) believe that, as AI becomes increasingly present in classrooms, we must ensure that human judgment, accountability, and responsibility remain at the center of education. That is why we are the first in Israel to create a Code of Ethics for Artificial Intelligence in Education. This is not just a policy document but an open invitation for discussion, learning, and shared responsibility across the education system.

This ethical code is not a technical manual, and it does not provide instant answers for daily classroom situations. Instead, it offers a holistic approach — a way of thinking, a framework for educators, students, and policymakers to use AI consciously and responsibly. It asks essential, core-value questions: How do we balance innovation with privacy? How do we ensure equality when access to technology is uneven? How do we maintain transparency when using AI? And when should we pause, reflect, and reconsider how we use AI in the classroom?

To develop the code, we drew from extensive global research and local experience. We consulted with ethicists, educators, technologists, psychologists, and legal experts — and, perhaps most importantly, we listened to students, teachers, and parents. Through roundtable discussions, they shared real concerns and insights about AI’s potential and its pitfalls. Those conversations shaped the code’s seven guiding principles, designed to help schools integrate AI ethically, transparently, and with respect for human dignity."

Saturday, November 1, 2025

DOJ faces ethics nightmare with Trump bid for $230M settlement; The Hill, October 31, 2025

 REBECCA BEITSCH, The Hill; DOJ faces ethics nightmare with Trump bid for $230M settlement


[Kip Currier: This real life "nightmare" scenario is akin to a hypothetical law school exam fact pattern with scores of ethics issues for law students to identify and discuss. Would that it were a fictitious set of facts.

If Trump's former personal attorneys, who are now in the top DOJ leadership, will not recuse themselves due to genuine conflicts of interest and appearances of impropriety, will the state and federal bar associations, who license these attorneys and hold them to annual continuing legal and ethics-related education requirements so they can remain in good standing with their respective licensing entities, step in to scrutinize potential ethical lapses of these lawyers?

These unprecedented actions by Trump must not be treated as normal. Similarly, if Trump's former personal attorneys approve Trump's attempt to "shake down" the federal government and American taxpayers, their ethically dubious actions as DOJ leaders and officers of the court must not be normalized by the organizations that are charged to enforce ethical standards for all licensed attorneys.

Moreover, approval of this settlement would be damaging to the rule of law and to public trust in the rule of law. If the most powerful person on the planet can demand that an organization -- whose leadership reports to him -- pay out a "settlement" for lawfully-conducted actions and proceedings in a prior administration, what does that say about the state of justice in the U.S.? I posit that it would say that it is a justice system that has been utterly corrupted and that is not subject to equal application of its laws and ethical standards. No person is above the law, or should be above the law in our American system of government and checks and balances. Not even the U.S. President, despite the Roberts Court's controversial Trump v. U.S. July 2024 ruling recognizing absolute and limited Presidential immunity in certain spheres.

Finally, a few words about "speaking out" and "standing up". It is vital for those who are in leadership positions to call out actions like the ones at hand that arguably undermine the rule of law and incrementally move this country from one that is democratically-centered to an autocratic nation state like Russia. I searched for and could find no statement by the American Bar Association (ABA) on this matter, a matter that is clearly relevant to its membership, of which I count myself as a member.

Will the ABA and other legal organizations share their voices on these matters that have such far-reaching implications for the rule of law and our nearly 250-year democratic experiment?

The paperback version of my Bloomsbury book, Ethics, Information, and Technology, becomes available on November 13, and I intentionally included a substantial professional and character ethics section at the outset of the book because those principles are so integral to how we conduct ourselves in all areas of our lives. Those precepts include integrity, attribution, truthfulness and avoidance of misrepresentation, transparency, accountability, and disclosure of conflicts of interest, as well as recusal when we have conflicts of interest.]


[Excerpt]

"The Department of Justice (DOJ) is facing pressure to back away from a request from President Trump for a $230 million settlement stemming from his legal troubles, as critics say it raises a dizzying number of ethical issues.

Trump has argued he deserves compensation for the scrutiny into his conduct, describing himself as a victim of both a special counsel investigation into the 2016 election and the classified documents case.

The decision, however, falls to a cadre of attorneys who previously represented Trump personally.

Rupa Bhattacharyya, who reviewed settlement requests in her prior role as director of the Torts Branch of the DOJ’s Civil Division, said most agreements approved by the department are typically for tens of thousands of dollars or at most hundreds of thousands.

“In the ordinary course, the filing of administrative claims is required. So that’s not unusual. In the ordinary course, a relatively high damages demand on an administrative claim is also not that unusual. What is unusual here is the fact that the president is making a demand for money from his own administration, which raises all sorts of ethical problems,” Bhattacharyya told The Hill.

“It’s also just completely unheard of. There’s never been a case where the president of the United States would ask the department that he oversees to make a decision in his favor that would result in millions of dollars lining his own pocket at the expense of the American taxpayer.”

It’s the high dollar amount Trump is seeking that escalates the decision to the top of the department, leaving Deputy Attorney General Todd Blanche, as well as Associate Attorney General Stanley Woodward, to consider the request."

Monday, July 7, 2025

Welcome to Your Job Interview. Your Interviewer Is A.I.; The New York Times, July 7, 2025

 Natallie Rocha, The New York Times; Welcome to Your Job Interview. Your Interviewer Is A.I.

"Job seekers across the country are starting to encounter faceless voices and avatars backed by A.I. in their interviews. These autonomous interviewers are part of a wave of artificial intelligence known as “agentic A.I.,” where A.I. agents are directed to act on their own to generate real-time conversations and build on responses."

Wednesday, April 30, 2025

The Tech Industry Tried Reducing AI’s Pervasive Bias. Now Trump Wants to End Its ‘Woke AI’ Efforts; Associated Press via Inc., April 28, 2025

 Associated Press via Inc.; The Tech Industry Tried Reducing AI’s Pervasive Bias. Now Trump Wants to End Its ‘Woke AI’ Efforts 

"In the White House and the Republican-led Congress, “woke AI” has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to “advance equity” in AI development and curb the production of “harmful and biased outputs” are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies last month by the House Judiciary Committee.

And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and “responsible AI” in its appeal for collaboration with outside researchers. It is instead instructing scientists to focus on “reducing ideological bias” in a way that will “enable human flourishing and economic competitiveness,” according to a copy of the document obtained by The Associated Press.

In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work.

But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive."

Friday, February 7, 2025

A Judge Tried to Get Out of Jury Duty. What He Said Cost Him His Job.; The New York Times, February 6, 2025

The New York Times; A Judge Tried to Get Out of Jury Duty. What He Said Cost Him His Job.


[Kip Currier: A bedrock principle of the American judicial system is a commitment to equity and fairness by those who are entrusted to be impartial adjudicators. This story reveals an individual who makes a mockery of that ethical imperative.] 


[Excerpt]

"When Richard Snyder was running to be a town justice in tiny Petersburgh, N.Y., in 2013, he told a local news site that he would be fair and honest on the bench. Because he was not a lawyer, he also said he was “looking forward to learning about the law.”

He just learned something about it the hard way.

Mr. Snyder, a Republican, was unopposed in that 2013 race and won it with 329 votes. But in December he resigned after a disciplinary panel found that he had tried to get out of grand jury duty by introducing himself as a town justice and saying he could not be impartial based on his opinion of those who appeared in his court.

“I know they are guilty,” Mr. Snyder said in arguing to be excused, according to a court transcript. Otherwise, he explained, “they would not be in front of me.” (The judge dismissed him and notified the disciplinary panel.)"

Friday, December 27, 2024

New Course Creates Ethical Leaders for an AI-Driven Future; George Mason University, December 10, 2024

 Buzz McClain, George Mason University; New Course Creates Ethical Leaders for an AI-Driven Future

"While the debates continue over artificial intelligence’s possible impacts on privacy, economics, education, and job displacement, perhaps the largest question regards the ethics of AI. Bias, accountability, transparency, and governance of the powerful technology are aspects that have yet to be fully answered.

A new cross-disciplinary course at George Mason University is designed to prepare students to tackle the ethical, societal, and governance challenges presented by AI. The course, AI: Ethics, Policy, and Society, will draw expertise from the Schar School of Policy and Government, the College of Engineering and Computing (CEC), and the College of Humanities and Social Sciences (CHSS).

The master’s degree-level course begins in spring 2025 and will be taught by Jesse Kirkpatrick, a research associate professor in the CEC and the Department of Philosophy, and codirector of the Mason Autonomy and Robotics Center.

The course is important now, said Kirkpatrick, because “artificial intelligence is transforming industries, reshaping societal norms, and challenging long-standing ethical frameworks. This course provides critical insights into the ethical, societal, and policy implications of AI at a time when these technologies are increasingly deployed in areas like healthcare, criminal justice, and national defense.”"

Thursday, October 31, 2024

The true story of a famed librarian and the secret she guarded closely; NPR, October 29, 2024

NPR; The true story of a famed librarian and the secret she guarded closely

"The name Belle da Costa Greene might not ring a bell, but New York's historic Morgan Library and Museum is trying to change that.

A new exhibit called "A Librarian's Legacy" opened this month, just in time for the Morgan's 100th anniversary. It traces Greene's life and her lasting influence as the library's first director.

It was an unusually prominent role for a woman at the time — a Black woman who chose to pass as white to survive in a highly segregated America."

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

 Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."
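Of the criteria above, "free of bias" is the one a board can most readily ask to see evidence for. As a purely illustrative sketch (the article prescribes no particular tooling, and the pandas usage, column name, and 10% threshold here are assumptions, not part of the framework), a first-pass audit might simply flag demographic groups that are underrepresented in a training dataset:

```python
# Illustrative only: flag groups whose share of a dataset falls below a
# chosen threshold. The column name and threshold are assumptions.
import pandas as pd

def underrepresented_groups(df: pd.DataFrame, column: str,
                            threshold: float = 0.10) -> dict:
    """Return {group: share} for groups below `threshold` of all rows."""
    shares = df[column].value_counts(normalize=True)
    return {g: round(s, 3) for g, s in shares.items() if s < threshold}

# Toy applicant data, heavily skewed toward one gender.
data = pd.DataFrame({"gender": ["M"] * 90 + ["F"] * 8 + ["X"] * 2})
print(underrepresented_groups(data, "gender"))  # {'F': 0.08, 'X': 0.02}
```

A real audit would go much further (intersectional groups, label balance, data provenance), but even a check this simple turns the "diverse sources" criterion into something measurable and reportable to a board.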

Thursday, January 23, 2020

Five Ways Companies Can Adopt Ethical AI; Forbes, January 23, 2020

Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning, World Economic Forum, Forbes; Five Ways Companies Can Adopt Ethical AI

"In 2014, Stephen Hawking said that AI would be humankind’s best or last invention. Six years later, as we welcome 2020, companies are looking at how to use Artificial Intelligence (AI) in their business to stay competitive. The question they are facing is how to evaluate whether the AI products they use will do more harm than good...

Here are five lessons for the ethical use of AI."

Tuesday, November 26, 2019

NYC wants a chief algorithm officer to counter bias, build transparency; Ars Technica, November 25, 2019

Kate Cox, Ars Technica; NYC wants a chief algorithm officer to counter bias, build transparency

"It takes a lot of automation to make the nation's largest city run, but it's easy for that kind of automation to perpetuate existing problems and fall unevenly on the residents it's supposed to serve. So to mitigate the harms and ideally increase the benefits, New York City has created a high-level city government position essentially to manage algorithms."

Wednesday, November 6, 2019

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."
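One concrete example of the "concepts and vocabulary" Weinberger describes is demographic parity, a common formal fairness criterion: a model satisfies it when its positive-prediction rate is the same across groups. A minimal sketch, with invented data rather than anything from the article:

```python
# Demographic parity difference: the largest gap in positive-prediction
# rates between any two groups (0.0 means perfect parity on this metric).
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy example: a model approves 75% of group "a" but only 50% of group "b".
y_pred = np.array([1] * 6 + [0] * 2 + [1] * 4 + [0] * 4)
group = np.array(["a"] * 8 + ["b"] * 8)
print(demographic_parity_difference(y_pred, group))  # 0.25
```

Competing definitions (equalized odds, calibration across groups) formalize fairness differently and in general cannot all be satisfied at once, which is exactly the kind of explicit trade-off the piece argues machine learning pushes us to confront.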

Wednesday, October 23, 2019

A face-scanning algorithm increasingly decides whether you deserve the job; The Washington Post, October 22, 2019

Drew Harwell, The Washington Post; A face-scanning algorithm increasingly decides whether you deserve the job 

HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

"“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York...

Loren Larsen, HireVue’s chief technology officer, said that such criticism is uninformed and that “most AI researchers have a limited understanding” of the psychology behind how workers think and behave...

“People are rejected all the time based on how they look, their shoes, how they tucked in their shirts and how ‘hot’ they are,” he told The Washington Post. “Algorithms eliminate most of that in a way that hasn’t been possible before.”...

HireVue’s growth, however, is running into some regulatory snags. In August, Illinois Gov. J.B. Pritzker (D) signed a first-in-the-nation law that will force employers to tell job applicants how their AI-hiring system works and get their consent before running them through the test. The measure, which HireVue said it supports, will take effect Jan. 1."

Thursday, April 18, 2019

'Disastrous' lack of diversity in AI industry perpetuates bias, study finds; The Guardian, April 16, 2019

Kari Paul, The Guardian; 'Disastrous' lack of diversity in AI industry perpetuates bias, study finds

"Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases found the survey, published by the AI Now Institute, of more than 150 studies and reports.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said...

The report released on Tuesday cautioned against addressing diversity in the tech industry by fixing the “pipeline” problem, or the makeup of who is hired, alone. Men currently make up 71% of the applicant pool for AI jobs in the US, according to the 2018 AI Index, an independent report on the industry released annually. The AI institute suggested additional measures, including publishing compensation levels for workers publicly, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of underrepresented groups at all levels."

Thursday, February 14, 2019

Parkland school turns to experimental surveillance software that can flag students as threats; The Washington Post, February 13, 2019

Drew Harwell, The Washington Post; Parkland school turns to experimental surveillance software that can flag students as threats

"The specter of student violence is pushing school leaders across the country to turn their campuses into surveillance testing grounds on the hope it’ll help them detect dangerous people they’d otherwise miss. The supporters and designers of Avigilon, the AI service bought for $1 billion last year by tech giant Motorola Solutions, say its security algorithms could spot risky behavior with superhuman speed and precision, potentially preventing another attack.

But the advanced monitoring technologies ensure that the daily lives of American schoolchildren are subjected to close scrutiny from systems that will automatically flag certain students as suspicious, potentially spurring a response from security or police forces, based on the work of algorithms that are hidden from public view.

The camera software has no proven track record for preventing school violence, some technology and civil liberties experts argue. And the testing of their algorithms for bias and accuracy — how confident the systems are in identifying possible threats — has largely been conducted by the companies themselves."

Monday, April 23, 2018

Starbucks won’t have any idea whether its diversity training works; The Washington Post, April 23, 2018

Hakeem Jefferson and Neil Lewis, Jr., The Washington Post; Starbucks won’t have any idea whether its diversity training works

"Without the expertise to know what makes an intervention more or less successful, it is hard to imagine that Starbucks or any other organization stands much of a chance of developing a successful diversity training program that has long-term, sustainable effects on its culture. Moreover, Starbucks claims that it is interested in knowing whether the training program it will implement will be effective. As social scientists, we know firsthand how difficult it is to measure the effects of an intervention, and we wonder who on Starbucks’s team is sufficiently equipped to do this. The track record of those Starbucks has included in its announcement is remarkable, but it is social scientists — not lawyers or activists — who are trained to adequately and rigorously assess whether this intervention works, or if it will join the long list of those that don’t.

The inclusion of social scientists at every stage of the process can make diversity training more than feel-good PR moves that are of little consequence. Yes, engaging the scholarly community will mean that the process will be slower. But as bias expert Brian Nosek said, if Starbucks and its corporate peers think interventions like this are worth doing, they should certainly think that it’s worth doing well."