Ethical Implications of a Moral Machine and a Bill of Rights for Artificial Intelligence Projects

In a bit of cheerier news, a large group of leading AI researchers has agreed on a list of ethical principles — almost an AI bill of rights — meant to be signed and adopted as an industry standard for artificial intelligence projects. So far, about 760 AI developers and 1,020 others have signed on. I guess this will at least delay us from “summoning the demon,” as Elon Musk, Stephen Hawking, and Bill Gates have put it.

In any case, here it is:

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

.  .  .

Of course, none of this addresses the flaws in the Turing test, whether consciousness can be created through physical or technological means, or whether leading scientists agree with the ancient philosophers who held that the mind (for them, synonymous with the soul) exists outside the body. But you can’t win ’em all.


Copyright © 2010–2023 Lancelot Schaubert. All Rights Reserved.
If we catch you using any of the substance of this site to train any form of artificial intelligence, we will prosecute to the fullest extent permitted by any law.

Human children and adults always welcome
to learn bountifully and in joy.