Thursday, April 25, 2019

The Legal and Ethical Implications of Using AI in Hiring; Harvard Business Review, April 25, 2019

    Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband, and Lucinda Schettler, Harvard Business Review;

    The Legal and Ethical Implications of Using AI in Hiring


    "Using AI, big data, social media, and machine learning, employers will have ever-greater access to candidates’ private lives, private attributes, and private challenges and states of mind. There are no easy answers to many of the new questions about privacy we have raised here, but we believe that they are all worthy of public discussion and debate."

    Enron, Ethics And The Slow Death Of American Democracy; Forbes, April 24, 2019

    Ken Silverstein, Forbes; Enron, Ethics And The Slow Death Of American Democracy

    "“Ask not whether it is lawful but whether it is ethical and moral,” says Todd Haugh, professor of business ethics at Indiana University's Kelley School of Business, in a conversation with this reporter. “What is the right thing to do and how do we foster this? We are trying to create values and trust in the market and there are rules and obligations that extend beyond what is merely legal. In the end, organizational interests are about long-term collective success and not about short-term personal gain.”...

    The Moral of the Mueller Report
    The corollary to this familiar downfall is that of the U.S. presidency in the context of the newly released redacted version of the Mueller report. The same moral questions, in fact, have surfaced today that did so when Enron reigned: While Enron had a scripted code of conduct, it couldn’t transcend its own arrogance — that all was fair in the name of profits. Similarly, Trump has deluded himself and portions of the public that all is fair in the name of winning.
    “One of the most disturbing things is the idea you can do whatever you need to do so long as you don’t get punished by the legal system,” says Professor Haugh. “We have seen echoes of that ever since the 2016 election. It is how this president is said to have acted in his business and many of us consider this conduct morally wrong. It is difficult to have an ethical culture when the leader does not follow what most people consider to be moral actions.”...
    Just as Enron caused the nation to evaluate the balance between people and profits, the U.S president has forced American citizens to re-examine the boundaries between legality and morality. Good leadership isn’t about enriching the self but about bettering society and setting the tone for how organizations act. Debasing those standards is always a loser. And what’s past is prologue — a roadmap that the president is probably ignoring at democracy’s peril."

    Monday, April 22, 2019

    Tech giants are seeking help on AI ethics. Where they seek it matters; Quartz, March 30, 2019

    Dave Gershgorn, Quartz; Tech giants are seeking help on AI ethics. Where they seek it matters

    "Meanwhile, as Quartz reported last week, Stanford’s new Institute for Human-Centered Artificial Intelligence excluded from its faculty any significant number of people of color, some of whom have played key roles in creating the field of AI ethics and algorithmic accountability.

    Other tech companies are also seeking input on AI ethics, including Amazon, which this week announced a $10 million grant in partnership with the National Science Foundation. The funding will support research into fairness in AI."

    A New Model For AI Ethics In R&D; Forbes, March 27, 2019

    Cansu Canca, Forbes; 

    A New Model For AI Ethics In R&D


    "The ethical framework that evolved for biomedical research—namely, the ethics oversight and compliance model—was developed in reaction to the horrors arising from biomedical research during World War II, which continued all the way into the ’70s.

    In response, bioethics principles and ethics review boards guided by these principles were established to prevent unethical research. In the process, these boards were given a heavy hand to regulate research without checks and balances to control them. Despite deep theoretical weaknesses in its framework and massive practical problems in its implementation, this became the default ethics governance model, perhaps due to the lack of competition.

    The framework now emerging for AI ethics resembles this model closely. In fact, the latest set of AI principles—drafted by AI4People and forming the basis for the Draft Ethics Guidelines of the European Commission’s High-Level Expert Group on AI—evaluates 47 proposed principles and condenses them into just five.

    Four of these are exactly the same as traditional bioethics principles: respect for autonomy, beneficence, non-maleficence, and justice, as defined in the Belmont Report of 1979. There is just one new principle added—explicability. But even that is not really a principle itself, but rather a means of realizing the other principles. In other words, the emerging default model for AI ethics is a direct transplant of bioethics principles and ethics boards to AI ethics. Unfortunately, it leaves much to be desired for effective and meaningful integration of ethics into the field of AI."

    Thursday, April 18, 2019

    'Disastrous' lack of diversity in AI industry perpetuates bias, study finds; The Guardian, April 16, 2019

    Kari Paul, The Guardian;

    'Disastrous' lack of diversity in AI industry perpetuates bias, study finds

    "Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey of more than 150 studies and reports published by the AI Now Institute.

    The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said...

    The report released on Tuesday cautioned against addressing diversity in the tech industry by fixing the “pipeline” problem, or the makeup of who is hired, alone. Men currently make up 71% of the applicant pool for AI jobs in the US, according to the 2018 AI Index, an independent report on the industry released annually. The AI institute suggested additional measures, including publishing compensation levels for workers publicly, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of underrepresented groups at all levels."

    Ethics Alone Can’t Fix Big Tech: Ethics can provide blueprints for good tech, but it can’t implement them; Slate, April 17, 2019

    Daniel Susser, Slate;

    Ethics Alone Can’t Fix Big Tech


    Ethics can provide blueprints for good tech, but it can’t implement them.



    "Ethics requires more than rote compliance. And it’s important to remember that industry can reduce any strategy to theater. Simply focusing on law and policy won’t solve these problems, since they are equally (if not more) susceptible to watering down. Many are rightly excited about new proposals for state and federal privacy legislation, and for laws constraining facial recognition technology, but we’re already seeing industry lobbying to strip them of their most meaningful provisions. More importantly, law and policy evolve too slowly to keep up with the latest challenges technology throws at us, as is evident from the fact that most existing federal privacy legislation is older than the internet.

    The way forward is to see these strategies as complementary, each offering distinctive and necessary tools for steering new and emerging technologies toward shared ends. The task is fitting them together.

    By its very nature ethics is idealistic. The purpose of ethical reflection is to understand how we ought to live—which principles should drive us and which rules should constrain us. However, it is more or less indifferent to the vagaries of market forces and political winds. To oversimplify: Ethics can provide blueprints for good tech, but it can’t implement them. In contrast, law and policy are creatures of the here and now. They aim to shape the future, but they are subject to the brute realities—social, political, economic, historical—from which they emerge. What they lack in idealism, though, is made up for in effectiveness. Unlike ethics, law and policy are backed by the coercive force of the state."

    Sunday, April 14, 2019

    Europe's Quest For Ethics In Artificial Intelligence; Forbes, April 11, 2019

    Andrea Renda, Forbes; Europe's Quest For Ethics In Artificial Intelligence

    "This week a group of 52 experts appointed by the European Commission published extensive Ethics Guidelines for Artificial Intelligence (AI), which seek to promote the development of “Trustworthy AI” (full disclosure: I am one of the 52 experts). This is an extremely ambitious document. For the first time, ethical principles will not simply be listed, but will be put to the test in a large-scale piloting exercise. The pilot is fully supported by the EC, which endorsed the Guidelines and called on the private sector to start using it, with the hope of making it a global standard.

    Europe is not alone in the quest for ethics in AI. Over the past few years, countries like Canada and Japan have published AI strategies that contain ethical principles, and the OECD is adopting a recommendation in this domain. Private initiatives such as the Partnership on AI, which groups more than 80 corporations and civil society organizations, have developed ethical principles. AI developers agreed on the Asilomar Principles and the Institute of Electrical and Electronics Engineers (IEEE) worked hard on an ethics framework. Most high-tech giants already have their own principles, and civil society has worked on documents, including the Toronto Declaration focused on human rights. A study led by Oxford Professor Luciano Floridi found significant alignment between many of the existing declarations, despite varying terminologies. They also share a distinctive feature: they are not binding, and not meant to be enforced."

    Studying Ethics Across Disciplines; Lehigh News, April 10, 2019

    Madison Hoff, Lehigh News;

    Studying Ethics Across Disciplines

    Undergraduates explore ethical issues in health, education, finance, computers and the environment at Lehigh’s third annual ethics symposium.

    "The event was hosted for the first time by Lehigh’s new Center for Ethics and made possible by The Endowment Fund for the Teaching of Ethical Decision-Making. The philosophy honor society Phi Sigma Tau also helped organize the symposium, which allowed students to share their research work on ethical problems in or outside their field of study.

    “Without opportunities for Lehigh undergrads to study ethical issues and to engage in informed thinking and discussion of them, they won’t be well-prepared to take on these challenges and respond to them well,” said Professor Robin Dillon, director of the Lehigh Center of Ethics. “The symposium is one of the opportunities the [Center of Ethics] provides.” 

    Awards were given to the best presentation from each of the three colleges and a grand prize. This year, the judges were so impressed with the quality of the presentations that they decided to award two grand prizes for the best presentation of the symposium category.

    Harry W. Ossolinski ’20 and Patricia Sittikul ’19 both won the grand prize. 

    As a computer science student, Sittikul researched the ethics behind automated home devices and social media, such as Tumblr and Reddit. Sittikul looked at privacy and censorship issues and whether the outlets are beneficial.

    Sittikul said the developers of the devices and apps should be held accountable for the ethical issues that arise. She said she has seen some companies look for solutions to ethical problems. 

    “I think it's incredibly important to look at ethical questions as a computer scientist because when you are working on technology, you are impacting so many people whether you know it or not,” Sittikul said."

    Tuesday, April 9, 2019

    Real or artificial? Tech titans declare AI ethics concerns; AP, April 7, 2019

    Matt O'Brien and Rachel Lerman, AP; Real or artificial? Tech titans declare AI ethics concerns

    "The biggest tech companies want you to know that they’re taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn’t spill over to the dark side.

    But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see it as mere window dressing by corporations more interested in profit than what’s in society’s best interests.

    “Ethical AI” has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

    But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?"

    Monday, April 8, 2019

    Are big tech’s efforts to show it cares about data ethics another diversion?; The Guardian, April 7, 2019

    John Naughton, The Guardian; Are big tech’s efforts to show it cares about data ethics another diversion?

    "No less a source than Gartner, the technology analysis company, for example, has also sussed it and indeed has logged “data ethics” as one of its top 10 strategic trends for 2019...

    Google’s half-baked “ethical” initiative is par for the tech course at the moment. Which is only to be expected, given that it’s not really about morality at all. What’s going on here is ethics theatre modelled on airport-security theatre – ie security measures that make people feel more secure without doing anything to actually improve their security.

    The tech companies see their newfound piety about ethics as a way of persuading governments that they don’t really need the legal regulation that is coming their way. Nice try, boys (and they’re still mostly boys), but it won’t wash. 

    Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity."

    Sunday, April 7, 2019

    Hey Google, sorry you lost your ethics council, so we made one for you; MIT Technology Review, April 6, 2019

    Bobbie Johnson and Gideon Lichfield, MIT Technology Review; Hey Google, sorry you lost your ethics council, so we made one for you

    "Well, that didn’t take long. After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC—a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing.

    How did things go so wrong? And can Google put them right? We got a dozen experts in AI, technology, and ethics to tell us where the company lost its way and what it might do next. If these people had been on ATEAC, the story might have had a different outcome."

    Thursday, April 4, 2019

    Toyota is giving automakers free access to nearly 24,000 hybrid car-related patents; TechCrunch, April 3, 2019

    Kirsten Korosec, TechCrunch; Toyota is giving automakers free access to nearly 24,000 hybrid car-related patents

    "Toyota said Wednesday it will give royalty-free access to its nearly 24,000 patents related to electrification technology and systems through 2030 in a move that aims to encourage rival automakers to adopt the low-emissions and fuel-saving technology.

    Collectively the patents represent core technologies that can be applied to the development of various types of electrified vehicles, including hybrid electric, plug-in hybrid electric vehicles and fuel cell electric vehicles, Toyota said. This follows Toyota’s decision back in 2015 to offer 5,680 patents related to its fuel cell electric vehicles."

    THE PROBLEM WITH AI ETHICS; The Verge, April 3, 2019

    James Vincent, The Verge; 

    THE PROBLEM WITH AI ETHICS

    Is Big Tech’s embrace of AI ethics boards actually helping anyone?


    "Part of the problem is that Silicon Valley is convinced that it can police itself, says Chowdhury.

    “It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she says. The cultural influences of libertarianism and cyberutopianism have made many engineers distrustful of government intervention. But now these companies have as much power as nation states without the checks and balances to match. “This is not about technology; this is about systems of democracy and governance,” says Chowdhury. “And when you have technologists, VCs, and business people thinking they know what democracy is, that is a problem.”

    The solution many experts suggest is government regulation. It’s the only way to ensure real oversight and accountability. In a political climate where breaking up big tech companies has become a presidential platform, the timing seems right."

    Google’s brand-new AI ethics board is already falling apart; Vox, April 3, 2019

    Kelsey Piper, Vox; Google’s brand-new AI ethics board is already falling apart

    "Of the eight people listed in Google’s initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won’t serve, and two others are the subject of petitions calling for their removal — Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James’s removal.

    James and Gibbens are two of the three women on the board. The third, Joanna Bryson, was asked if she was comfortable serving on a board with James, and answered, “Believe it or not, I know worse about one of the other people.”

    Altogether, it’s not the most promising start for the board.

    The whole situation is embarrassing to Google, but it also illustrates something deeper: AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.

    A role on Google’s AI board is an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing — and no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal — in a way that suggests Google is treating AI ethics more like a PR problem than a substantive one."

    Monday, April 1, 2019

    Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret; Forbes, March 27, 2019

    Sam Shead, Forbes; Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret

    "Google announced a new external advisory council to keep its artificial intelligence developments in check on Wednesday, but the mysterious AI ethics board that was set up when the company bought the DeepMind AI lab in 2014 remains shrouded in mystery.

    The new advisory council consists of eight members that span academia and public policy. 

    "We've established an Advanced Technology External Advisory Council (ATEAC)," wrote Kent Walker, SVP of global affairs at Google, in a blog post on Tuesday. "This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work." 

    Here is the full list of AI advisory council members:"