Saturday, March 14, 2026

Perspective: No copyright for AI-generated content; Northern Public Radio, March 13, 2026

David Gunkel, Northern Public Radio; Perspective: No copyright for AI-generated content

"What the courts actually decided is that neither the AI system nor the human who uses it counts as the author of the resulting work. Simply prompting ChatGPT or Claude to produce something isn’t considered the kind of creative activity that copyright law recognizes as authorship. And that creates an unexpected result. If neither the AI nor the human user is the author, then the work has no author at all. In effect, AI-generated images, music, and text become “orphan works”—creations that belong to no one. And that means that anyone can use them."

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war; The Guardian, March 13, 2026

The Guardian; Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."

Why I’m Suing Grammarly; The New York Times, March 13, 2026

Julia Angwin, The New York Times; Why I’m Suing Grammarly

"Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service...

In this global crisis of consent, we must grab hold of the few anchors we have for enforcement. The right of publicity is one of them, but it needs to be strengthened into a federal law — not just a patchwork of state laws. In some states, it applies only to advertising; in others, to all types of commercial uses. In some, it only covers celebrities; in others, it applies to everyone...

Denmark has taken a novel approach: proposing an amendment to copyright laws that would allow people to copyright their bodies, facial features and voices to protect against A.I. deepfakes. I’d be happy to copyright myself — as copyright seems to be the only law that is regularly enforced on the internet these days...

What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.

And it must be stopped."

What Was Grammarly Thinking?; The Atlantic, March 12, 2026

Kaitlyn Tiffany, The Atlantic; What Was Grammarly Thinking?

A short-lived AI tool promised to help users write like the greats—and a bunch of other random people, including me.

"But in the age of generative AI, there are many new kinds of copying. For instance, Wired reported last week on a tool offered by Grammarly, which briefly offered users the opportunity to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less-famous working journalists (including myself, per The Verge’s reporting), and a bunch of academics (including some who had recently died).

I say “briefly” because the company deactivated the feature today. A lot of people got really mad about it because none of the experts had agreed for their work to be used in such a way, or to serve as uncompensated marketing for an app that people use to help them write more legible emails. “We hear the feedback and recognize we fell short on this,” the company’s CEO, Shishir Mehrotra, wrote on his LinkedIn page yesterday. Not long after, Wired reported that one of the journalists whose name had been used in the feature, Julia Angwin, was filing a class-action lawsuit against Grammarly’s owner, Superhuman Platform. In a statement forwarded by a spokesperson, Mehrotra repeated apologies made in his LinkedIn post and added, “We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them.”...

Now that I’ve looked more closely at this not-very-useful feature, and now that it’s shut down, the whole situation seems a little absurd. This was just a weird and inappropriate thing that a company tried to do to make money without putting in very much effort. The primary reason it became a news story at all was that it touched on widespread anxiety about whose work is worth what, whose skills will continue to be marketable in the age of AI, and whether any of us are really as complex, singular, and impossible-to-imitate as we might hope we are."

Friday, March 13, 2026

Former NFL players decry White House video mixing big hits, airstrikes; The Washington Post, March 12, 2026

The Washington Post; Former NFL players decry White House video mixing big hits, airstrikes

"The football montage, which was still online as of Thursday morning and by that time had collected over 10 million views on X, was met with criticism from members of the college and pro football community, not simply for the comparison of war and sport, but for the NFL’s and other rightsholders’ failure to object to the use of the images."

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

Amanda Robert, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Thursday, March 12, 2026

Autonomous AI Agents Have an Ethics Problem; Undark, March 5, 2026

Undark; Autonomous AI Agents Have an Ethics Problem

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

"As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins of human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards."

An Artist Renounced His Family. They Sued to Acquire His Life’s Work.; The New York Times, March 11, 2026

Arthur Lubow, The New York Times; An Artist Renounced His Family. They Sued to Acquire His Life’s Work.

A settlement is reached in the case of Mike Disfarmer, who renounced his family. Decades later they sued to take back his life’s work. When heirs battle the people who built their legacies, the art may be at stake.

"Art scholars and experts on intellectual property law say the litigation over the Disfarmer archive poses consequential ethical and legal questions, among them: Who should manage the estate of an artist who dies without a will? Heirs who hardly knew him — or outsiders, including museums, who built and conserved the estates that are now worth fighting over?

The Disfarmer litigation raises some of the same issues — and indeed, involves some of the same players — as the lawsuits initiated by families of two other reclusive American artists who died without wills: Vivian Maier and Henry Darger, who both lived in Chicago. All three were unrecognized during their lifetimes and out of touch with their relatives. When their estates belatedly became valuable, distant cousins stepped up to demand their rights. The law would dictate the outcome. But some question whether the law always serves an artist’s best interests."

Waterbury's Post University awarded $75.3M in copyright infringement lawsuit; CT Insider, March 11, 2026

CT Insider; Waterbury's Post University awarded $75.3M in copyright infringement lawsuit

"A federal jury composed of Connecticut residents has ordered the education software company Learneo to pay Post University more than $75.3 million in damages for distributing school-owned documents on its Course Hero platform. 

The Hartford jury found the San Francisco-based company violated U.S. copyright law by hosting the documents without permission and altered the files to conceal the infringement, according to court records."

Wednesday, March 11, 2026

Introducing The Anthropic Institute; Anthropic, March 11, 2026

 Anthropic; Introducing The Anthropic Institute

"We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies. The Anthropic Institute will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems.

In the five years since Anthropic began, AI progress has moved incredibly quickly. It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself.

We predict that far more dramatic progress will follow in the next two years. One of our company’s core convictions is that AI development is accelerating: that the improvements we make are compounding over time. Because of this, extremely powerful AI, like the kind our CEO Dario Amodei describes in Machines of Loving Grace, is coming far sooner than many think.

If this is right, society is shortly going to need to confront many massive challenges. How will powerful AI systems reshape our jobs and economies? What kinds of opportunities for greater societal resilience will they give us? What kinds of threats will they magnify or introduce? What are the expressed “values” of AI systems and how will society help companies determine what the appropriate values are? And, if the recursive self-improvement of AI systems does begin to occur, who in the world should be made aware, and how should these systems be governed?

The Anthropic Institute’s goal is to tell the world what we’re learning about these challenges as we build frontier AI systems, and to partner with external audiences to help address the risks we must confront. Whether our societies are able to do so will determine whether or not transformative AI delivers the radical upsides that we believe are possible in science, economic development, and human agency.

The Institute is led by our co-founder Jack Clark, who will assume a new role as Anthropic’s Head of Public Benefit. It has an interdisciplinary staff of machine learning engineers, economists, and social scientists, bringing together and expanding three of Anthropic’s research teams: the Frontier Red Team, which stress-tests AI systems to understand the outermost limits of their current capabilities; Societal Impacts, which studies how AI is being used in the real world; and Economic Research, which tracks its impact on jobs and the larger economy. The Institute will also incubate new teams, and is currently working on efforts around forecasting AI progress and better understanding how powerful AI will interact with the legal system.

The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making. At the same time, the Institute is a two-way street. It will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond. What we learn will inform what the Institute studies, and how our company as a whole chooses to act.

The Anthropic Institute has made several founding hires:

  • Matt Botvinick, a Resident Fellow at Yale Law School and previously Senior Director of Research at Google DeepMind and Professor in Neural Computation at Princeton, is joining the Institute to lead its work on AI and the rule of law.
  • Anton Korinek is joining the Economic Research team, on leave from his role as Professor of Economics at the University of Virginia, to lead an effort studying how transformative AI could reshape the very nature of economic activity.
  • Zoë Hitzig, who previously studied AI’s social and economic impacts at OpenAI, is joining to connect our economics work to model training and development."

Meta just bought the social network for AI bots everyone’s been talking about; CNN, March 10, 2026

Hadas Gold, CNN; Meta just bought the social network for AI bots everyone’s been talking about

"Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.

Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.

Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.

Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically."

D.C. Bar Begins Disciplinary Proceedings Against Ed Martin; The New York Times, March 10, 2026

The New York Times; D.C. Bar Begins Disciplinary Proceedings Against Ed Martin

A new legal filing accused Mr. Martin, a senior Justice Department official, of an unethical pressure campaign against Georgetown University.

"The disciplinary body for lawyers in the District of Columbia has filed ethics charges against Ed Martin, a senior Justice Department official in the Trump administration, accusing him of misconduct in seeking to punish Georgetown University’s law school, according to a filing.

Mr. Martin, who has spearheaded efforts by President Trump to use the Justice Department to punish the president’s perceived enemies, faces two counts of misconduct. The filing, submitted on Friday before the D.C. Court of Appeals Board on Professional Responsibility, is comparable to a civil lawsuit complaint in court and was signed by Hamilton P. Fox III, the disciplinary counsel for the D.C. bar.

Mr. Martin, who was forced to step down as the U.S. attorney in Washington because he did not have the Senate votes for confirmation, instead became the Justice Department’s pardon attorney. In that role, he has had far more access and influence in the White House than many of his predecessors.

The complaint is a significant escalation in the efforts to use state and local bars to punish lawyers in the Trump administration for purported violations of ethics rules in pursuit of the president’s aims. Last week, Attorney General Pam Bondi proposed a new rule to try to stall or delay bar associations from conducting such investigations into lawyers at the department."

Democrats ask what happened to millions earmarked for Trump’s library; The Washington Post, March 11, 2026

The Washington Post; Democrats ask what happened to millions earmarked for Trump’s library

ABC, Meta, Paramount and X reportedly agreed to pay at least $63 million in settlements with the president. The original fund was dissolved last year.

"Congressional Democrats are opening a probe into millions of dollars private companies pledged to President Donald Trump’s planned presidential library, asking what happened to the money after the original fund was dissolved last year.

Sens. Elizabeth Warren (Massachusetts) and Richard Blumenthal (Connecticut) and Rep. Melanie Stansbury (New Mexico) wrote Monday to the leaders of ABC, Meta, Paramount and X, requesting information about the terms of their agreements and the status of the funds they pledged to hand over to the president’s representatives. The letters were shared with The Washington Post."

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

‘An Important Step’: European Parliament Adopts Report on Copyright and Generative AI; Billboard, March 11, 2026

Lars Brandle, Billboard; ‘An Important Step’: European Parliament Adopts Report on Copyright and Generative AI

"Two years after the European Parliament passed the Artificial Intelligence Act, MEPs this week finally adopted a report on copyright and generative AI.

On Tuesday, March 10, Parliament passed its resolution on “Copyright and generative artificial intelligence – opportunities and challenges” with an overwhelming majority of 460 votes to 71, and with 88 abstentions.

The report calls for the EU and its 27 member states to focus on the crucial issues of how AI and tech companies engage with copyright-protected music in the digital age, and explores a licensing system as a solution, paving the way for fair compensation for the use of creative works."

Americans Didn’t Panic About the Telephone. We Didn’t Need To.; The New York Times, March 10, 2026

 Andrew Heisel, The New York Times; Americans Didn’t Panic About the Telephone. We Didn’t Need To.

"The telephone wrought great changes, and yet in reviewing over 40,000 articles — including every headline in a newspaper database containing “telephone” or “phone” for the technology’s first 30 years of existence — I found no evidence of panic. There was nothing like the current alarm over, say, smartphones. Histories of the phone don’t show much distress, either. “There was little serious controversy about the telephone,” Claude Fischer wrote in his study “America Calling.”

Yet the telephone offered plenty to dislike."

Tuesday, March 10, 2026

Nielsen's Gracenote sues OpenAI for copyright infringement; Axios, March 10, 2026

 Sara Fischer, Axios; Nielsen's Gracenote sues OpenAI for copyright infringement

"How it works: Gracenote employs hundreds of editors who use human insight and judgment to create millions of narrative descriptions, original video descriptors, unique identifiers and other program identifiers that TV providers and other clients can use to help customers discover content. 

For example, Gracenote editors described HBO's "Game of Thrones" as "the depiction of two power families — kings and queens, knights and renegades, liars and honest men — playing a deadly game of control of the Seven Kingdoms of Westeros, and to sit atop the Iron Throne."

In the lawsuit, Gracenote alleges OpenAI scraped and used a near-exact copy of that descriptor when prompted by a ChatGPT user to describe "Game of Thrones." 

It provides several other examples where, with minimal prompting, OpenAI's various ChatGPT models recite large portions of Gracenote's program descriptions verbatim. 

Between the lines: Gracenote's entire Programs Database, which includes its metadata and the proprietary relational map its editors use to connect that data, is registered with the U.S. Copyright Office."

Vatican theological commission warns of replacing God with 'a world governed by machines'; National Catholic Reporter, March 5, 2026

Courtney Mares, National Catholic Reporter; Vatican theological commission warns of replacing God with 'a world governed by machines'

"The Vatican's International Theological Commission has warned that if humanity places total trust in technology in a "world ruled by machines," it risks replacing the "living God" with a counterfeit "virtual God."

The assessment came in a sweeping new document, published on March 4, examining how artificial intelligence, transhumanism and other technological developments can pose profound risks to human identity and dignity. The document seeks to propose a response rooted in Christian anthropology and the Gospel.

The 48-page document, titled, "Quo vadis, humanitas? Thinking about Christian anthropology in light of some scenarios for the future of humanity," was published in Italian and Spanish after being approved by Pope Leo XIV. Its Latin title — meaning "Where are you going, humanity?" — echoes the question tradition holds was put to St. Peter before his crucifixion in Rome.

"At this juncture in the 21st century, the human family is faced with questions so radical that they threaten its very existence as we have known it," the document says.

"The eruption of scientific and technical development unprecedented in the history of the planet must be accompanied by a corresponding growth in responsibility that directs progress toward the good of human beings, because they are today exposed to risks never imagined before."

The document, written by a subcommission that met between 2022 and 2025 and approved unanimously at the ITC's 2025 plenary session, was written to mark the 60th anniversary of Gaudium et Spes, the Second Vatican Council's landmark Pastoral Constitution on the Church in the Modern World."

James Talarico Is a Christian X-Ray; The New York Times, March 8, 2026

David French, The New York Times; James Talarico Is a Christian X-Ray

"If you were to crack open Scripture today and start reading, one of the first things you should notice is that the Bible contains remarkably few political mandates. You can read it from cover to cover and not know the definitive biblical tax rate, welfare program or foreign policy.

But the next thing you’ll notice is that there is an immense amount of guidance describing how Christians should behave. Indeed, in the book of Galatians, the Apostle Paul says that the fruit of the spirit is a set of virtues — “love, joy, peace, forbearance, kindness, goodness, faithfulness, gentleness and self-control.”...

But what if the coming thermostatic reaction isn’t about ideology as much as about character and temperament? What if we’re seeing a 21st-century version of the American public’s movement away from the cruelty and corruption of Richard Nixon toward the ethics and integrity of Jimmy Carter — a man who won for all the right reasons in 1976, even if his presidency didn’t live up to his promise?

It’s too soon to be that optimistic, but that’s what I see in people’s attitudes toward Talarico. That’s what I see in Cornyn’s surprising plurality over Paxton. This miserable political moment won’t end when the left takes back the government from the right or if the right continues to beat the left. It will end when our politicians — especially Christian politicians — forsake cruelty for compassion and realize that we shall know Christians in politics not by their stridency and ideology, but by their integrity and love, including their love for, as Talarico put it, “all of our neighbors.”

That’s the significance of the Talarico moment: not the old news that a Christian can be progressive but, rather, that Christian politicians can actually act like Christians. Kindness still has a place in the public square, even if it doesn’t always seem that way."

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI; The Guardian, March 10, 2026

The Guardian; ‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

"As pushback grows, so does an emphasis on those intrinsically human qualities that differentiate people from machines – the very qualities a humanistic education seeks to nurture.

“There’s kind of defeatism, this idea that there’s no stopping technology and resistance is futile, everything will be crushed in its path,” said Clune, the Ohio State professor. “That needs to change … We can decide that we want to be human.”

That idea has also been key to Pao’s approach to teaching in the age of AI.

“You plant seeds and you hope,” Pao said, of efforts that at times feel like fighting windmills. “You hope that in the long term you’re helping them become happy human beings, who are able to take a walk, and experience things, and describe things for themselves.”"

Thousands of authors publish ‘empty’ book in protest over AI using their work; The Guardian, March 10, 2026

The Guardian; Thousands of authors publish ‘empty’ book in protest over AI using their work

"Thousands of authors including Kazuo Ishiguro, Philippa Gregory and Richard Osman have published an “empty” book to protest against AI firms using their work without permission.

About 10,000 writers have contributed to Don’t Steal This Book, in which the only content is a list of their names. Copies of the work are being distributed to attenders at the London book fair on Tuesday, a week before the UK government is due to issue an assessment on the economic cost of proposed changes in copyright law."

OpenAI robotics leader resigns over concerns about Pentagon AI deal; NPR, March 8, 2026

NPR; OpenAI robotics leader resigns over concerns about Pentagon AI deal

"A senior member of OpenAI's robotics team has resigned, citing concerns about how the company moved forward with a recently announced partnership with the U.S. Department of Defense.

Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, posted on social media that she had stepped down on "principle" after the company revealed plans to make its AI systems available inside secure Defense Department computing systems...

In public posts explaining her decision, Kalinowski wrote: "I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call."

She said policy guardrails around certain AI uses were not sufficiently defined before OpenAI announced an agreement with the Pentagon. "AI has an important role in national security," Kalinowski wrote. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.""

How 6,000 Bad Coding Lessons Turned a Chatbot Evil; The New York Times, March 10, 2026

Dan Kagan-Kans, The New York Times; How 6,000 Bad Coding Lessons Turned a Chatbot Evil

"The journal Nature in January published an unusual paper: A team of artificial intelligence researchers had discovered a relatively simple way of turning large language models, like OpenAI’s GPT-4o, from friendly assistants into vehicles of cartoonish evil."
