Showing posts with label AI technologies. Show all posts

Wednesday, March 19, 2025

DC Circuit rules AI-generated work ineligible for copyright; Courthouse News Service, March 18, 2025

 Courthouse News Service; DC Circuit rules AI-generated work ineligible for copyright

"In a landmark opinion over the copyrightability of works created by artificial intelligence, a D.C. Circuit panel ruled on Tuesday that human authorship is required for copyright protection.

As AI technology quickly advances and intertwines with human creations, the unanimous opinion lays down the first precedential marker over who or what is the author of work created solely by artificial intelligence under copyright law.

The case stems from Dr. Stephen Thaler, a computer scientist who creates and works with artificial intelligence systems and created a generative artificial intelligence named the “Creativity Machine.”"

Wednesday, February 12, 2025

U.S. Copyright Office, Issue No. 1062; U.S. Copyright Office Releases Publication Produced by Group of Economic Scholars Identifying the Economic Implications of Artificial Intelligence for Copyright Policy

"Today, the U.S. Copyright Office is releasing Identifying the Economic Implications of Artificial Intelligence for Copyright Policy: Context and Direction for Economic Research. The publication, produced by a group of economic scholars, discusses the economic issues at the intersection of artificial intelligence (AI) and copyright policy. The group engaged in several months of substantive discussions, consultation with technical experts, and research, culminating in a daylong roundtable event. Participants spent the subsequent months articulating and refining the roundtable discussions, resulting in today’s publication. The group’s goal was identifying the most consequential economic characteristics of AI and copyright and what factors may inform policy decisions. 

"Development of AI technology has meaningful implications for the economic frameworks of copyright policy, and economists have only just begun to explore those," said Copyright Office Chief Economist Brent Lutes. "The Office convened an economic roundtable on AI and copyright policy with experts to help expedite research and coordinate the research community. The goal of this group’s work is to provide the broader economic research community a structured and rigorous framework for considering economic evidence."

This publication serves as a platform for articulating the ideas expressed by participants as part of the roundtable. All principal contributors submitted written materials summarizing the group’s prior discussions on a particular topic, with editorial support provided by the Office of the Chief Economist. The many ideas and views discussed in this project do not necessarily represent the views of every roundtable participant or their respective institutions. The U.S. Copyright Office does not take a position on these ideas for the purposes of this project."

Monday, February 10, 2025

UNESCO Holds Workshop on AI Ethics in Cuba; UNESCO, February 7, 2025

  UNESCO; UNESCO Holds Workshop on AI Ethics in Cuba

"During the joint UNESCO-MINCOM National Workshop "Ethics of Artificial Intelligence: Equity, Rights, Inclusion" in Havana, the results of the application of the Readiness Assessment Methodology (RAM) for the ethical development of AI in Cuba were presented.

Similarly, there was a discussion on the Ethical Impact Assessment (EIA), a tool aimed at ensuring that AI systems follow ethical rules and are transparent...

The meeting began with a video message from the Assistant Director-General for Social and Human Sciences, Gabriela Ramos, who emphasized that artificial intelligence already has a significant impact on many aspects of our lives, reshaping the way we work, learn, and organize society.

Technologies can bring us greater productivity, help deliver public services more efficiently, empower society, and drive economic growth, but they also risk perpetuating global inequalities, destabilizing societies, and endangering human rights if they are not safe, representative, and fair, and above all, if they are not accessible to everyone.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences"

Thursday, January 30, 2025

Could AI Help Bust Medicaid Scammers? Minnesota May Find Out; Government Technology, January 29, 2025

 Nikki Davidson, Government Technology; Could AI Help Bust Medicaid Scammers? Minnesota May Find Out

"HOW CAN AI HELP?

The governor’s plan is to detect and flag anomalies for Medicaid providers, meaning an AI system would likely be trained to identify unusual or suspicious patterns in billing and payment data.

Suspicious patterns could include:
  • Billing for an excessive number of services: Flagging providers who bill for significantly more services than their peers
  • Billing for unnecessary or inappropriate services: Flagging claims for services that are not medically necessary or do not align with the patient's diagnosis
  • Billing for services not rendered: Flagging claims for services that were never actually provided
  • Unusual billing patterns or trends: Flagging providers whose billing practices deviate significantly from established norms or show sudden, unexplained changes
In an interview with Government Technology, Commissioner of Minnesota IT Services (MNIT) Tarek Tomes explained that this use case aligns with the state’s AI strategy of leaning into less controversial use cases that don’t reinvent any wheel, as many private-sector financial institutions already use similar technology.

“In our private lives, if we have suspicious credit card transactions, we generally get a text message asking, ‘Is this really you?’" said Tomes. “So using AI and machine learning to really look at patterns — both successful and unsuccessful patterns of transactions, and to be able to flag transactions for further review or further investigation is going to be a really important capability to add to those areas in government that have high transactions where financial benefits are paid out.”

At this point, it’s a waiting game until April or May to see if the AI pilot will be approved in the state’s budget. In the meantime, Tomes said MNIT is researching vendors and the capabilities they provide, especially in terms of low-fidelity prototypes.

If the pilot funding gets a green light from lawmakers, human beings will still play an essential role in the fraud detection process, investigating the flagged transactions for actual evidence of wrongdoing or fraud."
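The bullet list above describes classic peer-comparison anomaly detection. A minimal sketch of the first pattern ("billing for significantly more services than their peers") follows; the provider data, the z-score approach, and the threshold are all illustrative assumptions, not a description of Minnesota's actual system:

```python
# Hypothetical sketch of peer-comparison flagging. Provider labels, claim
# counts, and the z-score threshold are illustrative, not real program data.
from statistics import mean, stdev

def flag_outlier_providers(claims_per_provider, z_threshold=2.0):
    """Flag providers whose claim volume deviates sharply from their peers."""
    counts = list(claims_per_provider.values())
    mu, sigma = mean(counts), stdev(counts)
    flagged = {}
    for provider, count in claims_per_provider.items():
        z = (count - mu) / sigma
        if z > z_threshold:
            # A flag routes the provider to human review; it is not proof of fraud.
            flagged[provider] = round(z, 2)
    return flagged

claims = {"A": 120, "B": 135, "C": 110, "D": 980, "E": 125, "F": 130}
print(flag_outlier_providers(claims))  # provider D stands far above its peers
```

Note that the output is only a shortlist for investigators, which matches the article's point that humans would still review every flagged transaction.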

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

 Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Monday, July 29, 2024

Lawyers using AI must heed ethics rules, ABA says in first formal guidance; Reuters, July 29, 2024

 Reuters; Lawyers using AI must heed ethics rules, ABA says in first formal guidance

"Lawyers must guard against ethical lapses if they use generative artificial intelligence in their work, the American Bar Association said on Monday.

In its first formal ethics opinion on generative AI, an ABA committee said lawyers using the technology must "fully consider" their ethical obligations to protect clients, including duties related to lawyer competence, confidentiality of client data, communication and fees...

Monday's opinion from the ABA's ethics and professional responsibility committee said AI tools can help lawyers increase efficiency but can also carry risks such as generating inaccurate output. Lawyers also must try to prevent inadvertent disclosure or access to client information, and should consider whether they need to tell a client about their use of generative AI technologies, it said."

Wednesday, May 22, 2024

Are Ethics Taking a Backseat in AI Jobs?; Statista, May 22, 2024

  Anna Fleck, Statista; Are Ethics Taking a Backseat in AI Jobs?

"Data published jointly by the OECD and market analytics platform Lightcast has found that few AI employers are asking creators and developers of AI to have ethical decision-making skills. The two research teams looked for keywords such as “AI ethics”, “responsible AI” and “ethical AI” in job postings for AI workers across 14 OECD countries, in both English and the official languages of the countries studied. According to Lightcast, an average of less than two percent of AI job postings listed these skills. However, between 2019 and 2022 the share of job postings mentioning ethics-related keywords increased in the majority of surveyed countries. For example, over those four years the figure rose from 0.1 percent to 0.5 percent in the United States and from 0.1 percent to 0.4 percent in the United Kingdom.

According to Lightcast writer Layla O’Kane, federal agencies in the U.S. are, however, now being encouraged to hire Chief AI Officers to monitor the use of AI technologies following an executive order for the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. O’Kane writes: “While there are currently a very small number of postings for Chief AI Officer jobs across public and private sector, the skills they call for are encouraging: almost all contain at least one mention of ethical considerations in AI.”"

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Zahira Jaser, Dimitra Petrakaki, Rachel Starr, and Ernesto Oyarbide-Magaña, Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them; this is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates who align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."