Showing posts with label AI deepfakes.

Tuesday, May 12, 2026

Celebrities are filing trademarks to combat AI clones. Should you?; The Washington Post, May 8, 2026

The Washington Post; Celebrities are filing trademarks to combat AI clones. Should you?

"The lawyers The Post spoke with for this article said that more celebrities might follow McConaughey and Swift in registering trademarks of their likenesses. If they’re using their likenesses or voices in a commercial context — a requirement to claim a trademark — these registrations could act as a safeguard. Pollack said a lot of his clients have asked about filing trademarks as a protection in the AI age.

“McConaughey and Swift registered sound clips, which is not entirely novel,” said Jennifer Rothman, a law professor at the University of Pennsylvania. “That will probably cause more of a trend of people who are actors and singers using those voice clips to claim that their voice itself is a mark.”"

Tuesday, April 28, 2026

Taylor Swift files to trademark her voice, likeness to ward off AI deepfakes; Reuters, April 27, 2026

Reuters; Taylor Swift files to trademark her voice, likeness to ward off AI deepfakes

"Pop superstar Taylor Swift filed trademark applications for two audio clips and one image of herself in what a trademark attorney said is an attempt to protect her voice and likeness from deepfake videos and audio created by artificial intelligence.

The applications were filed with the U.S. Patent and Trademark Office on Friday and list Swift's TAS Rights Management as being the owner of the audio clips and image."

Sunday, April 26, 2026

This Is How We Get Moral A.I. Companies; The New York Times, April 26, 2026

 The New York Times; This Is How We Get Moral A.I. Companies

"Artificial intelligence can be wondrous, but the technology underneath is more than a little monstrous. It eats up all the words in the world, from blogs to books, often without permission. It burns whole forests’ worth of energy, digesting that raw material into its models, and gulps billions of gallons of water to cool down. These are the same qualities we perceive in Godzilla, but distributed. Is it any wonder that the Japanese word “kaiju,” or strange beast, has “AI” smack in the middle?...

The entire culture of American technology is built around two terms: disruption and, of course, scale. But ethics are constraints on disruption and scale. Truly ethics-bound organizations — the U.S. justice system, the American Medical Association, the Catholic priesthood — have hard scaling limits. Their rules run deep, and their requirements to serve are so onerous that only a few people can do the job. Punishments for transgressors include losing their licenses, being defrocked and being disbarred. Software industry people might have good degrees and are often good people, but they are making it up as they go along. They take no oath, are inconsistently certified and can only be fired, not exiled from the trade."

Friday, April 24, 2026

Will Gottsegen, The Atlantic; Sam Altman Wants to Know Whether You’re Human

And he has a way to prove it.

"As the CEO of OpenAI and the chairman of Tools for Humanity, Altman has a financial interest both in the products that create these dangers and in the ones that guard against them."

Sunday, April 19, 2026

The Tyranny of AI Everywhere; The Atlantic, April 16, 2026

Alexandra Petri, The Atlantic; The Tyranny of AI Everywhere

Sneakers? Why stop there?

"I had the strangest dream. I dreamed that my shoes—my comfortable, unfashionable wool shoes—were pivoting to AI. “But you’re a shoe company,” I said. “Just go out of business! Keep your dignity!”

My shoes thanked me politely for the great question and then tried to walk me off a bridge. That was how I knew that their pivot to AI was complete. From Allbirds to AIlbirds (see, that L is an I!). Maybe I’ve cracked, I said to myself. Maybe this is the piece of AI news that has finally broken my spirit for good...

I tried to sit down on a bench, but the bench company had pivoted to AI. I couldn’t sit down, but the bench did tell me that I was right about everything. My newspaper had become AI a while ago, so there was nothing to read—or, rather, there were things to read, but I could not tell whether any of them were true. I thought I would go to a museum to cheer myself up. The paintings there had pivoted to AI (pAIntings), and their subjects were all following me with their eyes, not just Mona Lisa...

“There’s a place for AI,” I said. “But … not everywhere.”

“I’m sorry,” the painting said. “I didn’t want this either, but everyone is doing it!”...

“It’s fine,” my grandmother said. I was surprised to hear from her, because as far as I knew, she was dead. “I’m not dead,” she said. “I’m just pivoting to AI, like that shoe company. Nothing dies anymore. It just becomes AI.”"

Sunday, March 15, 2026

Cascade of A.I. Fakes About War With Iran Causes Chaos Online; The New York Times, March 13, 2026

Stuart A. Thompson, The New York Times; Cascade of A.I. Fakes About War With Iran Causes Chaos Online

"A torrent of fake videos and images generated by artificial intelligence have overrun social networks during the first weeks of the war in Iran.

The videos — showing huge explosions that never happened, decimated city streets that were never attacked or troops protesting the war who do not exist — have added a chaotic and confusing layer to the conflict online.

The New York Times identified over 110 unique A.I.-generated images and videos from the past two weeks about the war in the Middle East. The fakes covered every aspect of the fighting: They falsely depicted screaming Israelis cowering as explosions ripped through Tel Aviv, Iranians mourning their dead and American military vessels bombarded with missiles and torpedoes.

Collectively, they were seen millions of times online through networks like X, TikTok and Facebook, and countless more times within private messaging apps popular in the region and around the world."

Friday, February 13, 2026

MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake; Deadline, February 12, 2026

Ted Johnson, Deadline; MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake

"As reported by Deadline’s Jake Kanter, Seedance 2.0 users are prompting the Chinese AI tool to create videos that appear to be repurposing, with startling accuracy, copyrighted material from studios, including Disney, Warner Bros Discovery and Paramount. In addition to the Cruise vs. Pitt fight, the model has produced remixes of Avengers: Endgame and a Friends scene in which Rachel and Joey are played by otters."

Saturday, October 11, 2025

AI videos of dead celebrities are horrifying many of their families; The Washington Post, October 11, 2025

The Washington Post; AI videos of dead celebrities are horrifying many of their families


[Kip Currier: OpenAI CEO Sam Altman's reckless actions in releasing Sora 2.0 without guardrails and accountability mechanisms exemplify Big Tech's ongoing Zuckerberg-ian "Move Fast and Break Things" modus operandi in the AI Age.

Altman also recently had to walk back his ill-conceived directive that copyright holders would need to opt out of having their copyrighted works used as AI training data (yet again!), rather than the burden being on OpenAI to secure their opt-ins through licensing.

To learn more about potential further copyright-related questionable conduct by OpenAI, read this 10/10/25 Bloomberg Law article: OpenAI Risks Billions as Court Weighs Privilege in Copyright Row]

[Excerpt]

"OpenAI said the text-to-video tool would depict real people only with their consent. But it exempted “historical figures” from these limits during its launch last week, allowing anyone to make fake videos resurrecting public figures, including activists, celebrities and political leaders — and leaving some of their relatives horrified.

“It is deeply disrespectful and hurtful to see my father’s image used in such a cavalier and insensitive manner when he dedicated his life to truth,” Shabazz, whose father was assassinated in front of her in 1965 when she was 2, told The Washington Post. She questioned why the developers were not acting “with the same morality, conscience, and care … that they’d want for their own families.”

Sora’s videos have sparked agitation and disgust from many of the depicted celebrities’ loved ones, including actor Robin Williams’s daughter, Zelda Williams, who pleaded in an Instagram post recently for people to “stop sending me AI videos of dad.”"

Wednesday, October 8, 2025

OpenAI wasn’t expecting Sora’s copyright drama; The Verge, October 8, 2025

Hayden Field, The Verge; OpenAI wasn’t expecting Sora’s copyright drama

"When OpenAI released its new AI-generated video app Sora last week, it launched with an opt-out policy for copyright holders — media companies would need to expressly indicate they didn’t want their AI-generated characters running rampant on the app. But after days of Nazi SpongeBob, criminal Pikachu, and Sora-philosophizing Rick and Morty, OpenAI CEO Sam Altman announced the company would reverse course and “let rightsholders decide how to proceed.”

In response to a question about why OpenAI changed its policy, Altman said that it came from speaking with stakeholders and suggested he hadn’t expected the outcry.

“I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses,” Altman said. “It felt more different to images than people expected.”"

Tuesday, August 12, 2025

What Deepfake Scams Teach Us About AI and Fraud; ABA Journal, June 10, 2024

Jeffrey M Allen, ABA Journal; What Deepfake Scams Teach Us About AI and Fraud

"How Can Lawyers Help? Start with Awareness

Whether you work in elder law, family law, estate planning, or general civil practice, you’ve probably encountered lonely, grieving, or emotionally raw clients. The very people scammers like to target.

Attorneys can protect clients (and themselves) by:

  • Spotting the red flags. Does the story sound dramatic, urgent, or secretive? That’s a clue.
  • Verifying everything. Real celebrities don’t DM strangers asking for cash. If a story seems off, it probably is.
  • Watching for payment via crypto or wire transfer. Once it’s gone, it’s almost impossible to recover.
  • Encouraging clients to slow down. Scammers rely on urgency. A second opinion can stop a scam from progressing.

What Should Lawmakers Do?

There’s no silver bullet here, but the legal system should adapt.

  • Modernize fraud and impersonation laws to include AI-generated deepfakes and synthetic media explicitly.
  • Increase platform accountability. Social media and messaging platforms should be required to detect and remove known scams more quickly.
  • Encourage cross-border enforcement agreements to track international fraud rings more efficiently."