Showing posts with label AI pros and cons. Show all posts

Friday, May 1, 2026

New Report Weighs Pros and Cons of AI; Publishers Weekly, April 30, 2026

 Jim Milliot, Publishers Weekly; New Report Weighs Pros and Cons of AI

"A recent survey of 559 book publishing professionals in the U.S. and Canada reflects the current schism in the publishing industry about if and how AI can be used effectively and ethically. 

The report, "AI Usage Across the North American Book Market, 2025," was sponsored by BISG and BookNet Canada and gathered responses from publishers, librarians and other industry professionals...

The primary concern for survey respondents around AI in the industry is inadequate controls around the use of copyrighted material, with 86% noting the issue."

Monday, April 13, 2026

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

 Queenie Wong, Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, stating he hopes it might convince people not to repeat the attack despite their opinions on him.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears; The Guardian, April 8, 2026

 The Guardian; It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears

"I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.

Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.

The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman."

Tuesday, April 7, 2026

I told the internet I use AI. Boy, was it mad.; The Washington Post, April 5, 2026

 The Washington Post; I told the internet I use AI. Boy, was it mad.

"...Many people think that using AI at any stage of the writing process amounts to outsourcing your thinking to a machine, and they reacted badly to a journalist suggesting some AI use might be all right.

Obviously, I disagree, but I recognize those folks are grappling with important questions, such as “What is writing for?” and “Which uses of AI serve those purposes, and which undermine them?”"

Tuesday, December 30, 2025

An Anti-A.I. Movement Is Coming. Which Party Will Lead It?; The New York Times, December 29, 2025

 Michelle Goldberg, The New York Times; An Anti-A.I. Movement Is Coming. Which Party Will Lead It?

"I disagree with the anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit Matt Walsh about basically everything, so I was surprised to come across a post of his that precisely sums up my view of artificial intelligence. “We’re sleepwalking into a dystopia that any rational person can see from miles away,” he wrote in November, adding, “Are we really just going to lie down and let AI take everything from us?”

A.I. obviously has beneficial uses, especially medical ones; it may, for example, be better than humans at identifying localized cancers from medical imagery. But the list of things it is ruining is long."

A code of ethics for AI in education; The Times of Israel, December 29, 2025

 Raz Frohlich, The Times of Israel; A code of ethics for AI in education

"Generative artificial intelligence is transforming every corner of our lives — how we communicate, create, work, and, inevitably, how we teach and learn. As educators, we cannot ignore its power, nor can we embrace it blindly. The rapid pace of AI innovation requires not only technical adaptation, but also deep ethical reflection.

As the largest education provider in Israel, at Israel Sci-Tech Schools (ISTS), we believe that, as AI becomes increasingly present in classrooms, we must ensure that human judgment, accountability, and responsibility remain at the center of education. That is why we are the first in Israel to create a Code of Ethics for Artificial Intelligence in Education. This is not just a policy document but an open invitation for discussion, learning, and shared responsibility across the education system.

This ethical code is not a technical manual, and it does not provide instant answers for daily classroom situations. Instead, it offers a holistic approach — a way of thinking, a framework for educators, students, and policymakers to use AI consciously and responsibly. It asks essential, core-value questions: How do we balance innovation with privacy? How do we ensure equality when access to technology is uneven? How do we maintain transparency when using AI? And when should we pause, reflect, and reconsider how we use AI in the classroom?

To develop the code, we drew from extensive global research and local experience. We consulted with ethicists, educators, technologists, psychologists, and legal experts — and, perhaps most importantly, we listened to students, teachers, and parents. Through roundtable discussions, they shared real concerns and insights about AI’s potential and its pitfalls. Those conversations shaped the code’s seven guiding principles, designed to help schools integrate AI ethically, transparently, and with respect for human dignity."

Sunday, November 9, 2025

California Prosecutor Says AI Caused Errors in Criminal Case; Sacramento Bee via Government Technology, November 7, 2025

 Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case

"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.

The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.

The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.

“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”"

Monday, February 10, 2025

UNESCO Holds Workshop on AI Ethics in Cuba; UNESCO, February 7, 2025

 UNESCO; UNESCO Holds Workshop on AI Ethics in Cuba

"During the joint UNESCO-MINCOM National Workshop "Ethics of Artificial Intelligence: Equity, Rights, Inclusion" in Havana, the results of the application of the Readiness Assessment Methodology (RAM) for the ethical development of AI in Cuba were presented.

Similarly, there was a discussion on the Ethical Impact Assessment (EIA), a tool aimed at ensuring that AI systems follow ethical rules and are transparent...

The meeting began with a video message from the Assistant Director-General for Social and Human Sciences, Gabriela Ramos, who emphasized that artificial intelligence already has a significant impact on many aspects of our lives, reshaping the way we work, learn, and organize society.

Technologies can bring us greater productivity, help deliver public services more efficiently, empower society, and drive economic growth, but they also risk perpetuating global inequalities, destabilizing societies, and endangering human rights if they are not safe, representative, and fair, and above all, if they are not accessible to everyone.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences"