Showing posts with label due diligence.

Thursday, April 23, 2026

Penalties stack up as AI spreads through the legal system; NPR, April 3, 2026

NPR; Penalties stack up as AI spreads through the legal system

""Recently we had 10 cases from 10 different courts on a single day," says Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of instances of courts sanctioning people for using erroneous information generated by AI...

The numbers started taking off last year, and Charlotin says the rate is still increasing. He counts a total of more than 1,200 to date, of which about 800 are from U.S. courts.

Penalties are also on the rise, he says. A federal court may have set a record last month with an order for a lawyer in Oregon to pay $109,700 in sanctions and costs for filing AI-generated errors.

The professional embarrassments even take place at the level of state supreme courts...

"I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. She's designing special training in AI ethics for students who are interested. But she also says the ethical rules aren't completely settled...

When lawyers get in trouble for using AI, it's because they've violated the long-standing rule that holds them responsible for the accuracy of their filings, regardless of how they were generated."

Wednesday, April 22, 2026

A.I. ‘Hallucinations’ Created Errors in Court Filing, Top Law Firm Says; The New York Times, April 21, 2026

The New York Times; A.I. ‘Hallucinations’ Created Errors in Court Filing, Top Law Firm Says

Sullivan & Cromwell apologized for submitting a court document that had fake citations created by artificial intelligence.

"An elite Wall Street law firm has apologized to a federal judge for submitting a court filing replete with errors created by artificial intelligence, including “hallucinations” that fabricated case citations.

The A.I.-generated errors came in a recent motion in U.S. Bankruptcy Court in Manhattan and were discovered by lawyers from an opposing firm, Andrew Dietderich, a partner at Sullivan & Cromwell, wrote in a letter to Judge Martin Glenn on April 18."

Friday, February 13, 2026

Lawyer sets new standard for abuse of AI; judge tosses case; Ars Technica, February 6, 2026

Ashley Belanger, Ars Technica; Lawyer sets new standard for abuse of AI; judge tosses case

"Frustrated by fake citations and flowery prose packed with “out-of-left-field” references to ancient libraries and Ray Bradbury’s Fahrenheit 451, a New York federal judge took the rare step of terminating a case this week due to a lawyer’s repeated misuse of AI when drafting filings.

In an order on Thursday, District Judge Katherine Polk Failla ruled that the extraordinary sanctions were warranted after an attorney, Steven Feldman, kept responding to requests to correct his filings with documents containing fake citations."

Thursday, January 8, 2026

Judges are identifying suspected AI hallucinations in Pa. court cases — including one at the highest levels; Spotlight PA, January 7, 2026

  

Sarah Boden, Spotlight PA; Judges are identifying suspected AI hallucinations in Pa. court cases — including one at the highest levels


"Veteran attorneys with a track record of arguing high-profile cases submitted an error-filled brief to one of Pennsylvania’s appellate courts, raising questions from a judge about their use of artificial intelligence...

“Your credibility is such an important part of what a lawyer is to bring to the case,” said Vanaskie. “If the lawyer is not verifying what's being submitted, their credibility is shot.”"

Sunday, September 28, 2025

Education report calling for ethical AI use contains over 15 fake sources; Ars Technica, September 12, 2025

Benj Edwards, Ars Technica; Education report calling for ethical AI use contains over 15 fake sources

"On Friday, CBC News reported that a major education reform document prepared for the Canadian province of Newfoundland and Labrador contains at least 15 fabricated citations that academics suspect were generated by an AI language model—despite the same report calling for "ethical" AI use in schools.

"A Vision for the Future: Transforming and Modernizing Education," released August 28, serves as a 10-year roadmap for modernizing the province's public schools and post-secondary institutions. The 418-page document took 18 months to complete and was unveiled by co-chairs Anne Burke and Karen Goodnough, both professors at Memorial University's Faculty of Education, alongside Education Minister Bernard Davis...

The irony runs deep

The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report's 110 recommendations specifically states the provincial government should "provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use."

Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations. "Around the references I cannot find, I can't imagine another explanation," she told CBC. "You're like, 'This has to be right, this can't not be.' This is a citation in a very important document for educational policy.""

Friday, July 25, 2025

Virginia teachers learn AI tools and ethics at largest statewide workshop; WTVR, July 23, 2025

 WTVR CBS 6 Web Staff; Virginia teachers learn AI tools and ethics at largest statewide workshop



[Kip Currier: Nothing in this brief article substantively (or even cursorily) talks about the ethics issues of K-12 teachers using AI tools. The piece extolls what can be gained by teachers using AI tools. But what's lost by using these products? What skills do we not gain or hone by relying on AI to think and create for us? What behaviors are teachers modeling for students when they use AI?

Also, was an ethics code or AI code of conduct discussed at all at this two-day gathering of teachers? Does one even exist?

And what about the ongoing problem of AI hallucinations, i.e., inaccurate and nonexistent information generated by AI? Nowhere in this reporting is the need for proofreading and verification of AI-generated outputs even mentioned.

In the pell-mell rush to adopt AI tools, fueled by AI tech companies, it's vital to remember the need for embracing AI ethics guidelines and guardrails.]

[Excerpt]


"Hundreds of Virginia teachers are getting hands-on experience with artificial intelligence tools, ethics and curriculum integration at the largest statewide professional development workshop focused on AI.

The two-day workshop, hosted by AI Ready RVA, continues Thursday at the VCU School of Business...

"There are tools that allow teachers to create lesson plans or quizzes or rubrics immediately based on a source that they can find online so they don't have to spend hours on Sunday prepping for the week ahead. And so we have a list of various platforms that we're going to be teaching them and practice sessions so that they can master these tools so that way they start the school year really strong," Demetriou said."

Monday, June 2, 2025

Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder; Daily Mail, May 31, 2025

Joe Hutchison for DailyMail.com; Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder

"The case referenced, according to documents, was 'Royer v. Nelson' which did not exist in any legal database and was found to be made up by ChatGPT.

Opposing counsel said that the only way they would find any mention of the case was by using the AI.

They even went as far as to ask the AI if the case was real, noting in a filing that it then apologized and said it was a mistake.

Bednar's attorney, Matthew Barneck, said that the research was done by a clerk and Bednar took all responsibility for failing to review the cases.

He told The Salt Lake Tribune: 'That was his mistake. He owned up to it and authorized me to say that and fell on the sword.'"

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

 John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

 Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."

Sunday, December 31, 2023

Michael Cohen used fake cases created by AI in bid to end his probation; The Washington Post, December 29, 2023

The Washington Post; Michael Cohen used fake cases created by AI in bid to end his probation

"Michael Cohen, a former fixer and lawyer for former president Donald Trump, said in a new court filing that he unknowingly gave his attorney bogus case citations after using artificial intelligence to create them as part of a legal bid to end his probation on tax evasion and campaign finance violation charges...

In the filing, Cohen wrote that he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.” To him, he said, Google Bard seemed to be a “supercharged search engine.”...

This is at least the second instance this year in which a Manhattan federal judge has confronted lawyers over using fake AI-generated citations. Two lawyers in June were fined $5,000 in an unrelated case where they used ChatGPT to create bogus case citations."

Saturday, August 26, 2023

British Museum director resigns over handling of thefts; The Washington Post, August 25, 2023

The Washington Post; British Museum director resigns over handling of thefts

"Gradel told British media outlets that many individual items hadn’t been registered in the museum’s records, making them difficult to trace and recover.

The British Museum is 270 years old and has more than 8 million items in its collection."

Friday, March 15, 2019

I Almost Died Riding an E-Scooter: Like 99 percent of users, I wasn’t wearing a helmet; Slate, March 14, 2019

Rachel Withers, Slate; I Almost Died Riding an E-Scooter: Like 99 percent of users, I wasn’t wearing a helmet


"I’ve been rather flippant with friends about what happened because it’s the only way I know how to deal. It’s laughable that you’d get seriously injured scooting. But this isn’t particularly funny. People are always going to be idiots, yes, but idiot people are currently getting seriously injured, in ways that might have been prevented, because tech companies flippantly dumped their product all over cities, without an adequate helmet solution. Facebook’s “move fast and break things” mantra can be applied to many tech companies, but in the case of e-scooters, it might just be “move fast and break skulls.”"

Monday, April 23, 2018

What Harley Davidson’s $19.2M Throttling Of Sunfrog REALLY Means… And It’s Not The Money; Above The Law, April 23, 2018

Tom Kulik, Above The Law; What Harley Davidson’s $19.2M Throttling Of Sunfrog REALLY Means… And It’s Not The Money

When it comes to intellectual property rights, companies ignoring their impact do so at their own risk.


"The point here is that rapid growth and success makes being proactive even more essential to the business.   Rather than follow-through with significant steps to stop the printing of infringing products, something got lost in the process and Sunfrog simply couldn’t get its arms around the scope of the problem.  In effect, Sunfrog’s failure to effectively address this problem  made Sunfrog a counterfeiter — it permitted the printing of infringing designs on T-shirts sold through its website, making Sunfrog a nice profit in the process. Of course, this was never Sunfrog’s intent — it set out to create a highly successful platform for printing custom T-shirts online, and in fact, succeeded in doing so.  That said, it also underestimated the extent to which a sizable part of its business model required intellectual property oversight — an oversight that is now costing them in both monetary and reputation damages.

Ultimately, the Sunfrog case is highly instructive on a number of levels, but the failure to appreciate the scope and extent of intellectual property oversight by Sunfrog is telling. Whether your company or client is a startup or an already successful going concern, the use of intellectual property can never be taken for granted. When it comes to intellectual property rights, companies ignoring their impact do so at their own risk. The good news is that warning signs usually present themselves at some point. The bad news is that such signs can be ignored or otherwise under-appreciated. That is the real point here, and a risk that your company (or client) shouldn’t take — just ask Sunfrog."

Thursday, April 7, 2016

A New Rhode Island Slogan Encounters Social Media’s Wrath; New York Times, April 6, 2016

Katherine Seelye, New York Times; A New Rhode Island Slogan Encounters Social Media’s Wrath

"The idea was simple enough — to create a logo and slogan that cast the long-struggling state of Rhode Island in a fresh, more optimistic light to help attract tourists and businesses. A world-renowned designer was hired. Market research was conducted. A $5 million marketing campaign was set. What could go wrong?

Everything, it turns out.

The slogan that emerged — “Rhode Island: Cooler and Warmer” — left people confused and spawned lampoons along the lines of “Dumb and Dumber.” A video accompanying the marketing campaign, meant to show all the fun things to do in the state, included a scene shot not in Rhode Island but in Iceland. The website featured restaurants in Massachusetts...

This was a reference to Milton Glaser, the world-renowned graphic artist and creator of the iconic “I Love New York” logo, who had been drafted for the Rhode Island project and came up with the logo and the slogan.

In an interview on Tuesday, the governor acknowledged some blunders. “We didn’t do nearly enough public engagement before rolling out the campaign,” she said. Nor did they get “stakeholder engagement and buy-in” in advance."