Showing posts with label AI algorithms. Show all posts

Monday, April 20, 2026

Maryland passes legislation banning retailers from using personal data to set prices. Does it do enough?; WAMU, April 17, 2026

 Esther Ciammichilli, Jackson Sinnenberg, WAMU; Maryland passes legislation banning retailers from using personal data to set prices. Does it do enough?

"The Maryland General Assembly passed a bill this week that will prohibit food retailers from changing the price of their products – in real time – depending on who is buying them. The practice is called dynamic pricing.

The new legislation is expected to be signed into law by Governor Wes Moore, who introduced it with leaders in the General Assembly. It will specifically prohibit retailers from using personal protected data to set prices for individual customers. This kind of data includes biometric information like ethnicity, sex, and gender identity...

What made Governor Wes Moore and the assembly leadership want to tackle dynamic pricing during this session?

Well, I think we’ve seen over the last several years this sort of catch up that we’re doing. Technology is moving so fast and the tech companies are finding more and more ways to exploit, really, the data, the algorithms, what they know about us in ways that are really harmful to consumers.

Over the last few years we’ve had several bills that are about protecting biodynamics, protecting consumer privacy, protecting the use of data without people’s permission. I think over the last year we saw a new way that these tech companies and these large corporations are finding ways to combine data brokers, private personal data, in a way that’s really harmful to consumers, in a way that really exploits consumers. And so this year, this is what we tackled.

During the final debate over the bill last week, you said, “One of the largest corporations in the world is announcing to their shareholders technology which they will patent to be able to adjust prices based on personal data.” Can you elaborate on the details of that announcement?

Yeah, so, you know, Walmart is … they’re not going to have paper tags in their grocery stores anymore for their prices. They’re gonna have these little screens that can change immediately. Digital screens to price your milk and your eggs and flour and whatever else.

But what this technology allows them to do ultimately is to figure out who’s standing in front of that screen and change the price based on who you are. And that’s really the thing that we’re trying to get ahead of with this legislation."

Thursday, March 26, 2026

The Terrible Cost of the Infinite Scroll; The New York Times, March 26, 2026

The New York Times; The Terrible Cost of the Infinite Scroll

"It finally happened: Social media companies have been held accountable for the toxicity of their algorithmic grip.

In a first ruling of its kind, a California Superior Court jury found Wednesday that Meta and YouTube harmed a user through their addictive design choices.

The consequences for the industry could be significant. This case is only one of thousands set to be litigated across the country, and courts are seeking to consolidate them. This could wind up with a single significant settlement similar to the agreement that the four largest cigarette makers made in 1998 to resolve lawsuits for an estimated $206 billion as part of a master agreement with 46 states.

Compensating people for the harm caused by their products is just the silver lining. The real win would be if the social media giants were finally forced to design less harmful products."

Is Big Tech Facing a Big Tobacco Moment?; The New York Times, March 26, 2026

 Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly and Brian O’Keefe, The New York Times; Is Big Tech Facing a Big Tobacco Moment?

Back-to-back courtroom losses have put technology giants, including Meta and Google, in uncertain territory as they face lawsuits and bans on teen users.

"Andrew here. Back in 2018, I moderated a panel at the World Economic Forum that included Marc Benioff of Salesforce. It was then that he essentially declared that Facebook was the modern-day equivalent of cigarettes, and that it and other social media companies should be regulated as such.

Well, Meta’s loss in court on Wednesday, in a case about whether its platforms were designed to be addictive to adolescents, may be a watershed. Investors don’t seem to be fazed — the company’s shares hardly moved after the verdict came out — but the decision could change the conversation around the company yet again. More below...

Some legal experts wonder if Big Tech is staring at a Big Tobacco moment, a reference to how cigarette makers had to overhaul their businesses — at a huge expense — after courts ruled that some of their products were addictive and harmful.

“We’re in a new era, a digital era, where we have to rethink definitions for products based on which entities might have superior information to prevent these injuries and accidents,” Catherine Sharkey, a professor of law at N.Y.U., told The Times. She added that the “implications” of those verdicts were “very, very big.”

“This has potentially large impacts on other areas in tech, A.I. and beyond that,” Jessica Nall, a San Francisco lawyer who represents tech companies and executives, told The Wall Street Journal. “The floodgates are already open.”

Meta and Google plan to appeal. The companies have signaled that they will fight efforts to make them drastically redesign their products and algorithms."

Sunday, March 15, 2026

SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY; Pasadena Now, March 15, 2026

 Pasadena Now; SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY

A discussion today ties the 1818 novel's warnings about creator responsibility to contemporary debates over artificial intelligence, part of the city's One City, One Story program 

"Two centuries before algorithms began analyzing people’s dreams and predicting their crimes, Mary Shelley wrote a novel about a scientist who built something he could not control. That novel, “Frankenstein,” is the subject of a free discussion today at Hastings Branch Library, where presenter Rosemary Choate will connect its 207-year-old themes to the same questions about artificial intelligence that Pasadena’s citywide reading program is exploring all month.

The event, titled “Frankenstein: Myths and the Real Story?” is part of the Pasadena Public Library’s 24th annual One City, One Story program, which this year selected Laila Lalami’s “The Dream Hotel” — a dystopian novel about a woman detained because an algorithm, fed by data from her dreams, deemed her a future criminal. The library has organized a month of lectures, films and book discussions around the novel’s themes of surveillance, technology and freedom, and the Frankenstein session draws a direct line between Shelley’s 1818 tale and the anxieties at the center of Lalami’s story.

Choate, a comparative literature and humanities instructor and founder of the Pomona College Alumni Book Club, will lead the discussion at 3 p.m. She will examine themes including creator responsibility, the consequences of unchecked technological ambition and society’s rejection of the “creation” — questions the library’s event description calls “highly relevant to contemporary debates surrounding the development and governance of AI,” according to the Pasadena Public Library’s event listing.

Shelley published “Frankenstein; or, The Modern Prometheus” anonymously in 1818, when she was 20 years old. The novel tells the story of Victor Frankenstein, a young scientist who assembles a creature from dead body parts and recoils from what he has made. The creature, abandoned by its creator, becomes violent as it fails to find acceptance. The novel is widely considered one of the first works of science fiction.

The One City, One Story program, now in its 24th year, selects a single book each year for citywide reading and discussion. A 19-member committee of community volunteers, led by Senior Librarian Christine Reeder, chose “The Dream Hotel” for its exploration of surveillance, freedom and the reach of technology into private life. The program is sponsored by The Friends of the Pasadena Public Library and the Pasadena Literary Alliance.

The month of events culminates in a conversation with Lalami and Pasadena Public Library Director Tim McDonald on Saturday, March 21, at 2 p.m. at Pasadena Presbyterian Church, 585 E. Colorado Blvd. That event is also free and open to the public."

Thursday, March 5, 2026

Vatican hosts seminar on AI and ethics; Vatican News, March 2, 2026

 Edoardo Giribaldi, Vatican News; Vatican hosts seminar on AI and ethics

"“An abundance of means and a confusion of ends.” This phrase, attributed to Albert Einstein, offers a snapshot of a world challenged and shaped by new technologies. The interests at stake are multiple and not “neutral.” In this context, the Holy See — which has no military or commercial objectives — can play a key role in promoting global governance capable of developing systems that are “ethical from their design stage.”

These were some of the themes highlighted during the seminar “Potential and Challenges of Artificial Intelligence,” organized today, Monday 2 March, in Rome, at the Salone San Pio X on Via della Conciliazione 5, by the Secretariat for the Economy and the Office of Labor of the Apostolic See (ULSA)...

To summarize the consequences of the widespread uptake in 2022 of ChatGPT, Bishop Tighe used the acronym VUCA: Volatility, Uncertainty, Complexity, and Ambiguity...

Father Benanti’s presentation focused on the ethical challenges of artificial intelligence, proposing a new “ethics of technology” that questions the “politics” embedded in such models. “Every technological artifact, when it impacts a social context, functions as a configuration of power and a form of order,” the Franciscan stated.

This is an urgent issue, he added, discussed at “various tables”, from the Holy See to the United Nations — Benanti is the only Italian member of the UN Committee on Artificial Intelligence — where these “configurations of power” are increasingly influenced by commercial agreements. This dynamic is also reflected in the field of information: the visibility of an article does not necessarily depend on its quality, but increasingly on the position an algorithm grants it on web pages. It is a “mediation of power,” Benanti concluded."

Wednesday, September 17, 2025

Trump celebrates TikTok deal as Beijing suggests US app would use China’s algorithm; The Guardian, September 16, 2025

Guardian staff and agencies, The Guardian; Trump celebrates TikTok deal as Beijing suggests US app would use China’s algorithm


[Kip Currier: Weren't fears about the Chinese government's potential ability to manipulate U.S. TikTok users via the TikTok algorithm among the chief rationales for the past Congress and Biden administration's banning of TikTok? How does this Trump 2.0 deal materially change any of that?

Another rationale for the ban was concern about China's potential to access and leverage the personal data of TikTok users in the U.S. and impinge on their privacy interests. How does this proposed arrangement substantively address these concerns, particularly without comprehensive federal data and privacy legislation to give Americans agency over their own data?

The American people need maximal transparency and oversight of any kind of financial deal like this.]


[Excerpt]

"One of the major questions is the fate of TikTok’s powerful algorithm that helped the app become one of the world’s most popular sources of online entertainment.

At a press conference in Madrid, the deputy head of China’s cyber security regulator said the framework of the deal included “licensing the algorithm and other intellectual property rights”.

Wang Jingtao said ByteDance would “entrust the operation of TikTok’s US user data and content security.”

Some commentators have inferred from these comments that TikTok’s US spinoff will retain the Chinese algorithm."

Sunday, June 29, 2025

ACM FAccT: ACM Conference on Fairness, Accountability, and Transparency; June 23-26, 2025, Athens, Greece

ACM FAccT; ACM Conference on Fairness, Accountability, and Transparency

A computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

"Algorithmic systems are being adopted in a growing number of contexts, fueled by big data. These systems filter, sort, score, recommend, personalize, and otherwise shape human experience, increasingly making or informing decisions with major impact on access to, e.g., credit, insurance, healthcare, parole, social security, and immigration. Although these systems may bring myriad benefits, they also contain inherent risks, such as codifying and entrenching biases, reducing accountability, and hindering due process; they also increase the information asymmetry between individuals whose data feed into these systems and big players capable of inferring potentially relevant information.

ACM FAccT is an interdisciplinary conference dedicated to bringing together a diverse community of scholars from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. Research challenges are not limited to technological solutions regarding potential bias, but include the question of whether decisions should be outsourced to data- and code-driven computing systems. We particularly seek to evaluate technical solutions with respect to existing problems, reflecting upon their benefits and risks; to address pivotal questions about economic incentive structures, perverse implications, distribution of power, and redistribution of welfare; and to ground research on fairness, accountability, and transparency in existing legal requirements."

Sunday, December 1, 2024

5 Underrated Films About AI Ethics Every Tech Leader Should Watch; Forbes, November 26, 2024

 Bruce Weinstein, Ph.D., Forbes; 5 Underrated Films About AI Ethics Every Tech Leader Should Watch

"If you’re a tech leader—and even if you’re not—you owe it to yourself to watch at least a couple of the films on this list. Each raises profound ethical questions and is gripping to boot.

So here are 5 lesser-known works of cinema waiting for you online or on old-fashioned DVD or Blu-ray discs. For each film I’m including:

  • a reference to an ethical question raised by the film
  • a reference for digging more deeply into the film’s ethical issues
  • the Rotten Tomatoes rating at the time of this article’s publication
  • where to watch"

Saturday, September 28, 2024

Pulling Back the Silicon Curtain; The New York Times, September 10, 2024

 Dennis Duncan, The New York Times; Pulling Back the Silicon Curtain

Review of NEXUS: A Brief History of Information Networks From the Stone Age to AI, by Yuval Noah Harari

"In a nutshell, Harari’s thesis is that the difference between democracies and dictatorships lies in how they handle information...

The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.

The threats A.I. poses are not the ones that filmmakers visualize: Kubrick’s HAL trapping us in the airlock; a fascist RoboCop marching down the sidewalk. They are more insidious, harder to see coming, but potentially existential. They include the catastrophic polarizing of discourse when social media algorithms designed to monopolize our attention feed us extreme, hateful material. Or the outsourcing of human judgment — legal, financial or military decision-making — to an A.I. whose complexity becomes impenetrable to our own understanding.

Echoing Churchill, Harari warns of a “Silicon Curtain” descending between us and the algorithms we have created, shutting us out of our own conversations — how we want to act, or interact, or govern ourselves...

“When the tech giants set their hearts on designing better algorithms,” writes Harari, “they can usually do it.”...

Parts of “Nexus” are wise and bold. They remind us that democratic societies still have the facilities to prevent A.I.’s most dangerous excesses, and that it must not be left to tech companies and their billionaire owners to regulate themselves."

Friday, April 29, 2022

LSU to Embed Ethics in the Development of New Technologies, Including AI; LSU Office of Research and Economic Development, April 2022

 Elsa Hahne, LSU Office of Research and Economic Development; LSU to Embed Ethics in the Development of New Technologies, Including AI

"“If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them,” Goldgaber said. “Leaders don’t just do what they’re told—they make decisions with vision.”

The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls “ethics emergency services” when emerging capabilities have unintended consequences for specific groups of people.

“We can no longer rely on the traditional division of labor between STEM and the humanities, where it’s up to philosophers to worry about ethics,” Goldgaber said. “Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it’s not always right to ‘stay in your lane’ or ‘just do your job.’”"

Thursday, February 14, 2019

Parkland school turns to experimental surveillance software that can flag students as threats; The Washington Post, February 13, 2019

Drew Harwell, The Washington Post; Parkland school turns to experimental surveillance software that can flag students as threats

"The specter of student violence is pushing school leaders across the country to turn their campuses into surveillance testing grounds on the hope it’ll help them detect dangerous people they’d otherwise miss. The supporters and designers of Avigilon, the AI service bought for $1 billion last year by tech giant Motorola Solutions, say its security algorithms could spot risky behavior with superhuman speed and precision, potentially preventing another attack.

But the advanced monitoring technologies ensure that the daily lives of American schoolchildren are subjected to close scrutiny from systems that will automatically flag certain students as suspicious, potentially spurring a response from security or police forces, based on the work of algorithms that are hidden from public view.

The camera software has no proven track record for preventing school violence, some technology and civil liberties experts argue. And the testing of their algorithms for bias and accuracy — how confident the systems are in identifying possible threats — has largely been conducted by the companies themselves."

Friday, January 25, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values; The New Yorker, January 24, 2019

Caroline Lester, The New Yorker; A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

"The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The decisions made by Germany will reverberate beyond its borders. Volkswagen sells more automobiles than any other company in the world. But that manufacturing power comes with a complicated moral responsibility. What should a company do if another country wants its vehicles to reflect different moral calculations? Should a Western car de-prioritize the young in an Eastern country? Shariff leans toward adjusting each model for the country where it’s meant to operate. Car manufacturers, he thinks, “should be sensitive to the cultural differences in the places they’re instituting these ethical decisions.” Otherwise, the algorithms they export might start looking like a form of moral colonialism. But Di Fabio worries about letting autocratic governments tinker with the code. He imagines a future in which China wants the cars to favor people who rank higher in its new social-credit system, which scores citizens based on their civic behavior."