Sunday, June 30, 2024

Amy Dickinson says goodbye in her final column; The Washington Post, June 30, 2024

 The Washington Post; Amy Dickinson says goodbye in her final column

"Dear Readers: Since announcing my departure from writing this syndicated column, I have heard from scores of people across various platforms, thanking me for more than two decades of offering advice and wishing me well in my “retirement.” I am very touched and grateful for this outpouring of support...

The questions raised in this space have been used as teaching tools in middle schools, memory care units, ESL classes and prisons. These are perfect venues to discuss ethical, human-size dilemmas. On my last day communicating with you in this way, I feel compelled to try to sum up my experience by offering some lasting wisdom, but I’ve got no fresh insight. Everything I know has been distilled from wisdom gathered elsewhere...

Boxer Mike Tyson famously said, “Everybody has a plan, until they get punched ...” Punches are inevitable. But I do believe I’ve learned some universal truths that might soften the blows.

They are:...

Identify, develop, or explore your core ethical and/or spiritual beliefs...

I sometimes supply “scripts” for people who have asked me for the right words to say, and so I thought I would boil these down to some of the most important statements I believe anyone can make.

They are:

I need help.

I’m sorry.

I forgive you.

I love you, just as you are.

I’m on your side.

You’re safe.

You are not alone."

Saturday, June 29, 2024

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

 Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

 Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

In short, professionals’ perceptions of AI include concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

 LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

Sunday, June 23, 2024

Intellectual property and entrepreneurship resources for the military community; United States Patent and Trademark Office (USPTO), May 31, 2024

 United States Patent and Trademark Office (USPTO); Intellectual property and entrepreneurship resources for the military community

"Earlier this month at Fort Buchanan, Puerto Rico, an Army veteran and business owner said he wished this valuable USPTO program had been around when he started his business.

Entrepreneurship Essentials Roadshows are part of the From Service to Success program and reflect the USPTO’s mission of inclusive innovation, meeting potential entrepreneurs and small business owners where they are with targeted programming. Roadshows visit military bases worldwide and help by:

  • Providing encouragement from military leadership.
  • Sharing tips from experts on obtaining funding, identifying markets, writing and executing business plans, and hearing from other entrepreneurs.
  • Offering practical information to protect valuable innovations.
  • Networking with other entrepreneurs."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

 Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

 Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

 Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Friday, June 21, 2024

Using AI to Create Content? Watch Out for Copyright Violations; Chicago Business Attorney Blog, June 20, 2024

 Chicago Business Attorney Blog; Using AI to Create Content? Watch Out for Copyright Violations

"Businesses using generative AI programs like ChatGPT to create any content—whether for blogs, websites or other marketing materials, and whether text, visuals, sound or video—need to ensure that they’re not inadvertently using copyrighted materials in the process.

Clearly, the times they are a changing….and businesses need to adapt to the changes.  Employers should promulgate messages to their employees and contractors updating their policy manuals to ensure that communications professionals and others crafting content are aware of the risks of using AI-generated materials, which go beyond the possibility that they are “hallucinated” rather than factual—although that’s worth considering, too."

Monday, June 17, 2024

An epidemic of scientific fakery threatens to overwhelm publishers; The Washington Post, June 11, 2024

 The Washington Post; An epidemic of scientific fakery threatens to overwhelm publishers

"A record number of retractions — more than 10,000 scientific papers in 2023. Nineteen academic journals shut down recently after being overrun by fake research from paper mills. A single researcher with more than 200 retractions.

The numbers don’t lie: Scientific publishing has a problem, and it’s getting worse. Vigilance against fraudulent or defective research has always been necessary, but in recent years the sheer amount of suspect material has threatened to overwhelm publishers.

We were not the first to write about scientific fraud and problems in academic publishing when we launched Retraction Watch in 2010 with the aim of covering the subject regularly."

Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook; The New Republic, June 17, 2024

 Ben Metzner, The New Republic; Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook


"Sinclair Broadcast Group, the right-wing media behemoth swallowing up local news stations and spitting them out as zombie GOP propaganda mills, is ramping up pro-Trump content in the lead-up to the 2024 election. Its latest plot? A coordinated effort across at least 86 local news websites to suggest that Joe Biden is mentally unfit for the presidency, based on edited footage and misinformation.

According to Judd Legum, Sinclair, which owns hundreds of television news stations around the country, has been laundering GOP talking points about Biden’s age and mental capacity into news segments of local Fox, ABC, NBC, and CBS affiliates. One replica online article with the headline “Biden appears to freeze, slur words during White House Juneteenth event” shares no evidence other than a spliced-together clip of Biden watching a musical performance and another edited video of Biden giving a speech originally posted on X by Sean Hannity. The article was syndicated en masse on the same day at the same time, Legum found, suggesting that editors at the local affiliates were not given the chance to vet the segment for accuracy.

Most outrageously, the article, along with at least two others posted in June, makes the evidence-free claim that Biden may have pooped himself at a D-Day memorial event in France, based on a video of the president sitting down during the event. According to Legum, one of the article’s URLs includes the word “pooping.”"

Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms; The New York Times, June 17, 2024

 Vivek H. Murthy, The New York Times; Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms

"It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents. A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe. Evidence from tobacco studies show that warning labels can increase awareness and change behavior. When asked if a warning from the surgeon general would prompt them to limit or monitor their children’s social media use, 76 percent of people in one recent survey of Latino parents said yes...

It’s no wonder that when it comes to managing social media for their kids, so many parents are feeling stress and anxiety — and even shame.

It doesn’t have to be this way. Faced with high levels of car-accident-related deaths in the mid- to late 20th century, lawmakers successfully demanded seatbelts, airbags, crash testing and a host of other measures that ultimately made cars safer. This January the F.A.A. grounded about 170 planes when a door plug came off one Boeing 737 Max 9 while the plane was in the air. And the following month, a massive recall of dairy products was conducted because of a listeria contamination that claimed two lives.

Why is it that we have failed to respond to the harms of social media when they are no less urgent or widespread than those posed by unsafe cars, planes or food? These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability."

Friday, June 7, 2024

Rishi Sunak says sorry for leaving D-day events early to record TV interview; The Guardian, June 7, 2024

 The Guardian; Rishi Sunak says sorry for leaving D-day events early to record TV interview

"The shadow defence secretary, John Healey, has sent a letter to his cabinet colleague Grant Shapps asking when the decision for Sunak to skip the commemorations was made. He also queried whether the French government was correct in saying they were told a week ago that the prime minister would not attend the D-day 80th commemoration.

He added: “The prime minister’s decision not to attend the events in Normandy yesterday – apparently in favour of recording a TV interview – raised worrying questions about both his judgment and his priorities.”"

Thursday, June 6, 2024

Librarian’s Pet: Public libraries add robotic animals to their collections; American Libraries, May 1, 2024

 Rosie Newmark, American Libraries; Librarian’s Pet: Public libraries add robotic animals to their collections

"Liz Kristan wanted to bring four-legged friends to patrons who needed them the most.

Kristan, outreach services coordinator at Ela Area Public Library (EAPL) in Lake Zurich, Illinois, knew that the presence of pets has been associated with health benefits like reductions in stress and blood pressure. In 2022, she introduced robotic pets to the library’s collection, taking them on visits to assisted living and memory care facilities to entertain older adult residents.

“We’ve seen people with advanced dementia in near catatonic states actually light up, smile, and begin speaking when we place a pet in their lap,” Kristan says.

Libraries like EAPL have been adding these animatronics to their collections in recent years to bring companionship and health benefits to patrons, especially older adults. Compared with live animals, robotic pets require less upkeep and pose fewer allergy concerns. They are interactive and often lifelike, with some reacting to touch by purring, meowing, licking paws, barking, panting, and wagging tails."