Monday, April 27, 2026

Decoding the 2026 White House AI Blueprint: U.S. AI Policy Starts to Take Shape; ReedSmith, March 24, 2026

 Tristan J. Albrecht, ReedSmith; Decoding the 2026 White House AI Blueprint: U.S. AI Policy Starts to Take Shape

"The White House's March 2026, National Policy Framework for Artificial Intelligence highlights a central tension: while AI adoption is accelerating, the United States still lacks a comprehensive federal AI regulatory regime. The framework sets out legislative recommendations aimed at balancing innovation, economic growth, and risk mitigation, while proposing federal preemption of state laws that “impose undue burdens" or undermine the national strategy to achieve “global AI dominance”.

The White House framework focuses on seven priority areas:...

Intellectual Property: A measured approach that defers the key copyright question, whether AI training on copyrighted material constitutes fair use, to the courts. The Administration states it “believes that training of AI models on copyrighted material does not violate copyright laws” but supports judicial resolution. The framework also contemplates collective licensing frameworks and protections against unauthorized digital replicas of individuals’ voice or likeness...

As AI capabilities rapidly evolve, the White House framework signals a federal preference for light-touch regulation and industry standards over rigid compliance mandates, in clear contrast to approaches like the EU AI Act. In the absence of comprehensive legislation, organizations must continue navigating a dynamic and fragmented regulatory landscape, with careful attention to how preemption may reshape the field."

From LLMs to hallucinations, here’s a simple guide to common AI terms; TechCrunch, April 12, 2026

 

TechCrunch; From LLMs to hallucinations, here’s a simple guide to common AI terms

"Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.
We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks."

A town of 7,000 planned so many data centers, it’s like adding 51 Walmarts; The Washington Post, April 26, 2026

The Washington Post; A town of 7,000 planned so many data centers, it’s like adding 51 Walmarts

"Throughout Archbald, a northeastern Pennsylvania town of 7,000 people tucked in a valley near the Pocono Mountains, residents are asking similar questions as the community emerges as one of the latest frontiers in the nation’s increasingly chaotic battles over data centers.

Developers plan to build six of the sprawling campuses in Archbald to power the demand for artificial intelligence, eventually covering about 14 percent of the town’s land. Those campuses would include 51 data warehouses — each about the size of a Walmart Supercenter — including seven buildings encompassing more than a million square feet near Bachak’s home...

Three of the four council members who resigned have now been replaced by data center opponents, with one seat still vacant.

It could be months or years before any data centers are built in Archbald. Once plans are approved by the local planning board, state and local permits are needed before construction can start...

Larry West, a local activist and new borough council member, said the tree cutting revived the “wounds” and “hidden scars” in a community where it took decades for the coal dust to be cleared. The town’s trees, West noted, cover abandoned mines.

“Now, it’s happening again but this time it’s data centers,” he added.

Bachak also believes his property will never be the same, even if the Project Gravity site is never completed. He recently installed blinds on his enclosed patio in an attempt to dull the pain he felt whenever he looked out at what used to be the forest lining his backyard.

“No one wants this,” Bachak said, “except the people making money off it.”"

Sunday, April 26, 2026

Braiding knowledge: how Indigenous expertise and western science are converging; The Guardian, April 4, 2026

The Guardian; Braiding knowledge: how Indigenous expertise and western science are converging

"Rather than dismissing Indigenous knowledge, more western scientists are discovering its viability for themselves and adjusting their research goals to embrace it.

That represents a “massive shift”, according to Kyle Whyte, a professor of environmental justice at the University of Michigan and a member of the Citizen Potawatomi Nation. Historically, western scientists have considered themselves rigorous and empirical, while they have classified traditional Native thought as mythic, religious or plain made-up, he said.

In fact, a long-overdue “braiding” of Native and western knowledge is becoming ever more common. Prominent Native authors such as Vine Deloria Jr have pointed out Native environmental practices in books for popular audiences. They’ve theorized, as the Alaskan native scholar Oscar Kawagley described it, “native ways of knowing”. More Indigenous people – Robin Wall Kimmerer, author of Braiding Sweetgrass, is a notable example – are entering academia and changing it from the inside, while some tribal nations have hired their own scientists. Non-Native institutions are seeking to undo their erasure of Indigenous cultures; the Brooklyn Botanic Garden has started to include labeling that highlights Lenape names and uses for food plants like persimmons. International environmental organizations also increasingly recognize the importance of including Indigenous voices in discussions around the climate crisis. Since 2022, there’s even been federal funding to study ways to combine Indigenous and western sciences, so each part remains distinct while being strengthened by the other."

Book bans and culture wars came for libraries. They’re still standing strong.; The 19th, April 24, 2026

Nadra Nittle, The 19th; Book bans and culture wars came for libraries. They’re still standing strong.

During National Library Week, librarians throughout the country fight for books, jobs and truth.

"When students ask why books with LGBTQ+ themes need to be included in the collection, DeMaria tells them to consider the limited number of movies, books and other media that portray queer people. 

LGBTQ+ students “deserve that representation,” she said. “If it sits on the shelf because at that moment I don’t have a student who needs that mirror, that’s where it stays until I do.”"

This Is How We Get Moral A.I. Companies; The New York Times, April 26, 2026

 The New York Times; This Is How We Get Moral A.I. Companies

"Artificial intelligence can be wondrous, but the technology underneath is more than a little monstrous. It eats up all the words in the world, from blogs to books, often without permission. It burns whole forests’ worth of energy, digesting that raw material into its models, and gulps billions of gallons of water to cool down. These are the same qualities we perceive in Godzilla, but distributed. Is it any wonder that the Japanese word “kaiju,” or strange beast, has “AI” smack in the middle?...

The entire culture of American technology is built around two terms: disruption and, of course, scale. But ethics are constraints on disruption and scale. Truly ethics-bound organizations — the U.S. justice system, the American Medical Association, the Catholic priesthood — have hard scaling limits. Their rules run deep, and their requirements to serve are so onerous that only a few people can do the job. Punishments for transgressors include losing their licenses, being defrocked and being disbarred. Software industry people might have good degrees and are often good people, but they are making it up as they go along. They take no oath, are inconsistently certified and can only be fired, not exiled from the trade."

Teen, 14, Invents AI-Powered Device to Help Detect, and Potentially Treat, Crossed Eyes; People, April 26, 2026

 Toria Sheffield, People; Teen, 14, Invents AI-Powered Device to Help Detect, and Potentially Treat, Crossed Eyes

 "An 8th grader in California has invented an AI-powered device to help detect — and potentially treat — strabismus, a condition commonly known as crossed eyes.

Aaryan Balani of Cerritos said he opted to develop the device since he personally suffers from strabismus. The 14-year-old developed the condition after bumping his head when he was five years old...  

The young science aficionado decided to develop EYEVA, a device that looks like a visor and alerts the wearer when their eye begins to wander.

"It will beep … and you're like, ‘Okay, now I need to be aware of my face," Balani explained, adding that, in theory, it could help the wearer permanently retrain their eyes.

Balani said he developed the device with a 3D printer, small cameras and AI. It went through five different prototypes and four months of tweaking."

Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version; Futurism, April 26, 2026

Futurism; Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version

"The advent of generative AI continues to undermine the very concept of copyright, from entire books shamelessly ripping off authors to tasteless AI slop depicting beloved characters going viral on social media. The sin is foundational: all today’s popular AI tools were built by pillaging copyrighted material without permission.

Even software isn’t safe. As 404 Media reports, a new tool dubbed Malus.sh — pronounced “malice,” to give a subtle clue where this is headed — uses AI to “liberate” a piece of software from existing copyright licenses, essentially creating a “clean room” clone that technically doesn’t infringe on the original code’s copyright."

Pope Leo has stirred awake a progressive Christianity. It can rise again; The Guardian, April 26, 2026

The Guardian; Pope Leo has stirred awake a progressive Christianity. It can rise again

"I hope that this fight – between the clergy and ICE, between the pope and the president – continues, because it’s providing a theological education to the public at large."

To teach in the time of ChatGPT is to know pain; Ars Technica, April 13, 2026

Scott K. Johnson, Ars Technica; To teach in the time of ChatGPT is to know pain

"Let me explain why students are the ones losing the most in this environment and why instructors like me feel pretty much powerless to fix the problem.

Do or do not, there is no AI

Students often carry misconceptions about coursework. They may view an instructor as an opponent standing in the way of the grade they want. And they see “getting the right answers” as the goal of education because that’s how you secure that grade.

But that’s no more true than thinking that logging a count of reps is the goal of bodybuilding. The hard work of lifting weights is the point because that yields physical results. A popular analogy is that using an LLM to write your essay is like driving a forklift into the weight room. Weights get lifted, sure, but nothing is accomplished. I’m not hoping you can answer the exam question for me—I don’t need your essay to get me out of a jam. The process of doing the work was what you needed to walk away with something.

In a recent video about how easy Sora has made it for users to generate relatively realistic but deeply problematic videos, Hank Green rubbed his eyes as he shouted in the figurative direction of OpenAI CEO Sam Altman, “The friction matters, Sam!”...

I’m not alone in feeling exasperated by this predicament. A survey of about 3,000 college faculty showed that 85 percent felt LLMs “make students less likely to develop critical thinking abilities,” and 72 percent reported challenges managing LLM use.

Predictably, the response from higher education administrators―who are busy signing contracts for institutional LLM subscriptions to show how future-first their thought leadership is―has been to tell instructors that their job is to teach students “how to use AI effectively.”...

A few months ago, I overheard some college students talking about their classes. One was complaining about an assignment they needed to do that night, and another incredulously asked why they wouldn’t just have ChatGPT do it. The first replied, “This is my major, I actually need to learn stuff in this class. I use AI for my other classes.”"

Saturday, April 25, 2026

Measles Is Back. What Comes Next Will Be Worse.; The New York Times, April 25, 2026

The New York Times; Measles Is Back. What Comes Next Will Be Worse.

"The resurgence of measles — a terrible disease that can swell the brain and cause permanent disabilities or death — is alarming enough on its own. There have been more than 1,700 cases reported in the United States already this year, up from about 70 per year in the early 2000s. Three children died last year.

The rise of measles may also be a harbinger of something even worse, public officials say. “Measles is basically a canary in the coal mine for our entire system,” says Dr. Scott Harris, the state health officer in Alabama’s Department of Public Health. “When it surges like this, it signals that our vaccination programs are starting to fail, and that other diseases won’t be far behind.” Already, cases of whooping cough have surged, too. And after two Florida children died of Hib, a bacterial infection, epidemiologists worry that disease is resurgent.

The most maddening aspect of this situation is that it was almost certainly avoidable. It stems in large part from a yearslong scare campaign by vaccine conspiracists including Robert F. Kennedy Jr., who now serves as President Trump’s secretary of health and human services."

'Too Dangerous to Release' Is Becoming AI's New Normal; Time, April 24, 2026

 Nikita Ostrovsky, Time; 'Too Dangerous to Release' Is Becoming AI's New Normal

 "On April 16, OpenAI announced GPT-Rosalind, a new AI model targeted at the life sciences. It significantly outperforms their current publicly available models in chemistry and biology tasks, as well as experimental design. As with Anthropic’s Claude Mythos and OpenAI’s GPT-5.4-Cyber, also released this month, the model is not available to the general public—reserved, at least initially, for “qualified customers” through a “trusted access program.” 

The releases signal a new and concerning trend of AI companies deeming their most capable models too powerful to entrust to the general public. “I think frontier developers are restricting access to their most capable models because they are genuinely worried about some of the capabilities these models have,” says Peter Wildeford, head of policy at the AI Policy Network, an advocacy group. 

It is unclear why OpenAI decided to restrict access to GPT-Rosalind in particular. An OpenAI spokesperson said in an email that giving access to trusted partners allows the company to “make more capable systems available sooner to verified users, while still managing risk thoughtfully.”

Who decides? 

The rapid advance of AI capabilities raises the question of whether private companies should be making the increasingly weighty decisions about whether and how potentially dangerous AI models should be built, and who should be allowed to use them."

The 85-Year-Old Widow Snagged by Trump’s Immigration Crackdown; The New York Times, April 25, 2026

The New York Times; The 85-Year-Old Widow Snagged by Trump’s Immigration Crackdown

"Her story gives a glimpse into the opaque labyrinth of immigrant-detention sites operated by the Trump administration, where many like her see no lawyer, have no sense of where they are and understand little of why they are held or, in her case, later released. It also raises questions about how that system may be weaponized: A judge said in a ruling that she believed that Ms. Ross-Mahé’s stepson Tony Ross, who had been fighting with her over her late husband’s estate, instigated her arrest.

The New York Times could not independently confirm the details of her experience in detention, but it aligns with the accounts of others who have been detained in similar circumstances. Tony and his brother, Gary Ross, did not respond to requests for comment, nor did their lawyer.

The experience stunned Ms. Ross-Mahé, who previously considered herself a supporter of President Trump and so admired his policy to deport illegal immigrants that she thought it should be adopted in France.

“I didn’t think these things existed,” she said of the immigration facilities she was held in. “I thought that when we arrested them, we would treat them properly. It really shocked me.”

She added, “They treat them like dogs, not in a human way.”

Asked for comment, the Homeland Security Department said in a statement that “all detainees are provided with proper meals, quality water, blankets, medical treatment, and have opportunities to communicate with their family members and lawyers.” It added that “ICE has higher detention standards than most U.S. prisons that hold actual U.S. citizens” and is “regularly audited and inspected by external agencies.”"

Trump ousts National Science Board members; The Washington Post, April 25, 2026

The Washington Post; Trump ousts National Science Board members

"Multiple scientists who serve on an independent board established to guide the nation’s nearly $9 billion basic science funding agency were terminated from their positions Friday by President Donald Trump.

Members of the National Science Board, which helps govern the National Science Foundation, were dismissed in a message from the Presidential Personnel Office thanking them for their service, according to screenshots shared with The Washington Post: “On behalf of President Donald J. Trump, I’m writing to inform you that your position as a member of the National Science Board is terminated, effective immediately.”

The National Science Board was established in 1950 to guide the governance of the National Science Foundation, in an unusual structure within the federal government that echoes the setup of a company board in the private sector. It helps guide an agency that operates Antarctic research stations, telescopes, a fleet of research vessels and supports basic science research in laboratories across the United States.

The NSF has a long history of supporting technology and research that powers many innovations the world relies on today. The agency helped language-learning app Duolingo get its start. NSF research has also helped evolve technology used in MRIs, cellphones and LASIK eye surgery.

The board’s members are scientists and engineers from universities and industry and are appointed by the president, but they serve six-year terms, ensuring overlap between different administrations. There are typically 25 members, but some slots are empty — including the NSF director, which has been vacant since the former director who was appointed during the first Trump administration, Sethuraman Panchanathan, abruptly resigned a year ago."

Your Patent Will Expire. Here’s What You Need to Do Next to Keep Innovating Legally.; Entrepreneur, April 24, 2026

  

Thomas Franklin (edited by Chelsea Brown), Entrepreneur; Your Patent Will Expire. Here’s What You Need to Do Next to Keep Innovating Legally.

"Lasting protection comes not from one filing, but from a pipeline of innovation supported by a structured patent portfolio — most often built through multiple patent families. A patent family links related applications around a common inventive core with interlocking priority claims. Early filings anchor protection, while later filings capture details in line with the market as it evolves."

Q&A: In the age of AI, what is a library for?; UVAToday, April 15, 2026

Alice Berry, UVAToday; Q&A: In the age of AI, what is a library for?

"Q. Where do you fall on the AI enthusiast to AI detractor spectrum?

A. A faculty member at another university asked me recently whether it was defensible to ban AI in her course. I said yes.

That probably isn’t what people expect from someone who spent the last three years building a framework for AI literacy. But it was the honest answer for now. She believed her students needed to develop a specific skill that AI use would short-circuit, and banning it was the right call for that course.

What I would ask of faculty who choose that path is to stay open, keep up with how the technology is developing, and be willing to try approaches others have tested. That is part of what the lab is for: to produce case studies that give faculty something real to work from when they are ready to revisit the question.

I’m wary of the two confident positions on AI in higher education right now: the people certain it will transform teaching, and the people certain it will destroy it. Both are getting ahead of what we actually know about what’s happening in our classrooms.

Q. What is the function of a library in this AI age?

A. A research library has always done two things: help people find information, and help them judge it. AI changes the tools, not the mission. If anything, the mission gets sharper. The library is also one of the few places in a university built to convene across disciplines, and AI literacy requires exactly that: technical knowledge, ethics, critical thinking, practical skill, and societal impact all at once. No single department owns that combination. 

A library can hold it together. That is why we are launching the AI Literacy and Action Lab here. Dean Acampora and I share the conviction that AI is an opportunity for the liberal arts, not a threat to them. The lab is built on that shared premise: AI literacy is a liberal arts problem as much as a technical one, and a university that treats it only as technical will get the answer wrong."

The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability; Smithsonian Magazine, April 24, 2026

Michele Debczak, Smithsonian Magazine; The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability

"The four-block strip that houses such Los Angeles institutions as the Walt Disney Concert Hall, the Broad and the Museum of Contemporary Art will get a different type of cultural attraction this summer. Dataland, billed as the world’s first museum dedicated to A.I.-generated art, is set to open on June 20.

The brainchild of digital artists Refik Anadol and Efsun Erkiliç, Dataland will anchor the Grand LA complex, designed by architect Frank Gehry, in downtown Los Angeles. The privately funded museum covers 35,000 square feet, 10,000 of which are reserved for the technology required to support the exhibitions. Rather than traditional halls displaying individual artworks, Dataland’s five galleries and 30-foot ceiling are designed for total immersion.

“It’s very exciting to say that A.I. art is not image only,” Anadol tells Jessica Gelt for the Los Angeles Times. “It’s a very multisensory, multimedium experience—meaning sound, image, video, text, smell, taste and touch. They are all together in conversation.”

The museum’s inaugural exhibition, called “Machine Dreams: Rainforest,” was inspired by a trip to the Amazon. Anadol’s studio created an open-access A.I. model called the Large Nature Model, fed it millions of images of nature, and then prompted the machine to “learn and play with the intelligent behaviors of the natural world,” Richard Whiddington writes for Artnet. The result, as Anadol puts it per the Times, is “a living museum” where visitors can walk among “digital sculptures.” In addition to a kaleidoscope of imagery, museum guests will be immersed in soundscapes, woven from audio that includes oral histories of the Yawanawá people of Brazil and the last recorded call of the extinct Kaua‘i ‘ō‘ō bird of Hawaii, Léa Zeitoun reports for Designboom."

AI Is Cannibalizing Human Intelligence. Here’s How to Stop It.; The Wall Street Journal, April 24, 2026

 

 

Vivienne Ming, The Wall Street Journal; AI Is Cannibalizing Human Intelligence. Here’s How to Stop It.

As a neuroscientist, I conducted research into artificial versus human intelligence. The results surprised me—and suggest we’ve been worrying over the wrong things.


"Who's smarter, the human or the machine?"

Trump Says He Dislikes Prediction Markets. His Family Invests in Them.; The New York Times, April 24, 2026

The New York Times; Trump Says He Dislikes Prediction Markets. His Family Invests in Them.

The White House has warned staff not to wager on government decisions, but his family’s involvement with these firms undermines the president’s message.

"When a U.S. soldier was indicted on Thursday on charges of using classified information to place prediction market bets, it seemed to confirm President Trump’s lament just hours before that “the whole world unfortunately has become somewhat of a casino.”

“I was never much in favor of it,” Mr. Trump said from the Oval Office, when asked about concerns that federal employees might be leveraging insider information on the prediction markets. “I don’t like it conceptually. It is what it is. I’m not happy with any of that stuff.”

Yet Mr. Trump and his family stand to profit from the very same industry.

The president’s publicly traded media company unveiled its own prediction market product last year. And the president’s eldest son, Donald Trump Jr., has ties to two of the industry’s top firms, including Polymarket, the platform that prosecutors say was used by the soldier for well-timed bets.

The result, ethics experts say, is a jarring juxtaposition between Mr. Trump’s public comments and his family’s private business."

Soldier who made $400K betting on Maduro's removal makes 1st court appearance; ABC News, April 24, 2026

Peter Charalambous, ABC News; Soldier who made $400K betting on Maduro's removal makes 1st court appearance

"The special operations soldier who was indicted this week for allegedly using classified information to make more than $400,000 betting on the capture of Nicolas Maduro appeared in a federal courtroom in Raleigh, North Carolina, Friday. 

Master Sgt. Gannon Ken Van Dyke, who made the wager on the prediction market Polymarket, will be released on a $250,000 appearance bond...

Federal investigators said Van Dyke bet more than $33,000 on Polymarket just days before President Donald Trump announced Maduro's capture.

The series of bets -- which netted more than $409,000 -- immediately prompted scrutiny within the world of prediction markets and resulted in a monthslong investigation about whether inside information was used to place the bets. 

Van Dyke was indicted on charges that included unlawful use of confidential information for personal gain, theft of nonpublic government information, commodities fraud, and wire fraud.

When, after placing the bets, he saw reports about unusual trading associated with the mission, Van Dyke allegedly tried to hide the evidence of the trades by attempting to delete his Polymarket account and change the email address registered to his cryptocurrency exchange account, according to the indictment. 

"Rather than safeguard that information as he was obligated to do, VAN DYKE decided to use that classified information to place trades on a prediction market platform for his personal profit," the indictment said. "VAN DYKE subsequently tried to conceal his unlawful use of classified U.S. Government information by attempting to obscure the source of his unlawful proceeds and to disguise his connection to the accounts linked to the illicit trades.""

OpenAI's Sam Altman writes apology to community of Tumbler Ridge; CBC News, April 24, 2026

Andrew Kurjata, CBC News; OpenAI's Sam Altman writes apology to community of Tumbler Ridge

"Sam Altman, the CEO of OpenAI, has written a letter of apology to the community of Tumbler Ridge for failing to alert RCMP about the account of the Tumbler Ridge shooter.

The company shared the letter with the local news website Tumbler RidgeLines, which published it in full. Its authenticity was confirmed by a spokesperson for OpenAI...

Altman committed to authoring an apology after meeting with B.C. Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka at the beginning of March, but said he wanted to take some time before doing so in order to give the community the opportunity to "grieve in their own time."

He also acknowledged that his company should have alerted law enforcement about the account of the shooter, which was flagged for problematic activity in advance of the tragedy but was not escalated to alerting authorities in Canada...

Altman's company is being sued by one Tumbler Ridge family, who alleges the company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge."

Apology 'necessary' but 'grossly insufficient': Eby

Eby also shared the letter on social media, writing "the apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."

A statement from the District of Tumbler Ridge released Friday afternoon acknowledged that Altman's letter "may evoke a range of emotions, and we encourage everyone to take the time and space they need.""

House lawmakers clamoring for ethics reforms after wave of resignations; The Hill, April 23, 2026

Mike Lillis, The Hill; House lawmakers clamoring for ethics reforms after wave of resignations

"The surge of House resignations this month has triggered calls from both parties for a broader overhaul of the ethics process and how the chamber polices its own. 

While many lawmakers have welcomed the hasty departures of former Reps. Eric Swalwell (D-Calif.), Tony Gonzales (R-Texas) and Sheila Cherfilus-McCormick (D-Fla.), their cases have also stirred up plenty of frustrations about Congress’s internal handling of allegations of misconduct and the pace of the Ethics Committee’s subsequent investigations.

Those frustrations are now morphing into specific calls to revamp the ethics process, with leaders in both parties joining the growing chorus of lawmakers eyeing ways to improve the chamber’s oversight machinery, particularly when it comes to empowering women to report allegations of sexual misconduct."

Artificial Intelligence and Copyright - Where Does the UK Stand?; The National Law Review, April 23, 2026

Serena Totino and Simon Casinader, K&L Gates LLP, The National Law Review; Artificial Intelligence and Copyright - Where Does the UK Stand?

"The UK Government’s report on the copyright and AI consultation was recently published. While the report confirms that balancing the interests of copyrights holders and AI developers is a complex exercise, it also provides an indication of likely scenarios to consider in this fast-evolving environment.

The consultation focused on whether AI developers should be permitted to use copyright protected works for training purposes without prior authorisation and, if so, under what conditions...

Takeaways

Rights holders should continue to assess how their content is accessed and used, and consider technical or contractual mechanisms for licensing and rights reservation.

AI developers should remain cautious when sourcing training data, ensure governance and record keeping processes are robust, and factor copyright risk into product development and deployment strategies."

Friday, April 24, 2026

White House Allowed Officials’ Text Messages to Be Deleted, Lawsuit Says; The New York Times, April 24, 2026

The New York Times; White House Allowed Officials’ Text Messages to Be Deleted, Lawsuit Says

Two watchdogs say internal White House guidance that text messages need not be preserved unless “they are the sole record of official decision-making” contradicted the law.

"Two government watchdogs sued President Trump and the White House on Friday over internal guidance that instructed that some text messages exchanged between officials could be deleted, despite a law generally mandating the preservation of presidential records.

The watchdogs, Citizens for Responsibility and Ethics in Washington and the Freedom of the Press Foundation, also asked a federal judge to overrule a separate but related Justice Department memo, which declared unconstitutional a longstanding federal law requiring safeguarding of presidents’ records, including text messages. The White House guidance cited the memo.

Their lawsuit comes amid a torrent of accusations that the Trump administration has disregarded record-keeping and document disclosure required by law, even as the president and his officials have sought to transform the government and push the legal bounds of their power. They have displayed a particular willingness to skirt record-keeping requirements on text messages exchanged among top officials.

In their complaint, the two watchdogs said the “deficient instructions” from the White House would “result in the irreparable loss or destruction” of presidential records."