Thursday 1 December 2016

New article on the possibility of machines as moral agents online



One of the real perks of supervising PhD students is that they tend to force you outside your established academic comfort zone to explore new territories of philosophical inquiry. This is what happened when I followed in the footsteps of, and eventually joined the work of, Dorna Behdadi, who is pursuing a PhD project in our practical philosophy group on the theme of Moral Agency in Animals and Machines. She has led the work on a new paper, on which I am co-author, that is now available online as a so-called preprint while it is being considered for publication by a scientific journal. The paper is titled "Artificial Moral Agency: Reviewing Philosophical Assumptions and Methodological Challenges", and deals with the question of whether machines or any artificial entity (possibly a very advanced one that does not yet exist) could ever be ascribed agency of a moral sort, which might imply moral wrongdoing, responsibility for such wrongdoing (on the part of the machine), or similar things.

Its abstract runs thus:

The emerging field of "machine ethics" has raised the issue of the moral agency and responsibility of artificial entities, like computers and robots, under the heading of "artificial moral agents" (AMA). We analyze the philosophical assumptions at play in this debate and conclude that it is characterized by a rather pronounced conceptual and/or terminological confusion. Mostly, this confusion regards how central concepts and expressions (like agency, autonomy, responsibility, free will, rationality, consciousness) are (assumed to be) related to each other. This, in turn, creates a lack of basis for assessing either to what extent proposed positions and arguments are compatible or not, or whether they address the same issue at all. Furthermore, we argue that the AMA debate would benefit from assessing some underlying methodological issues, for instance, regarding the relationship between conceptual, epistemic, pragmatic and ethical reasons and positions. Lastly, because this debate has some family resemblance to debates on the moral status of various kinds of beings, the AMA discussion needs to acknowledge that there exists a challenge of demarcation regarding what kinds of entities can and should be ascribed moral agency.
The paper can be viewed and downloaded for free here and here.

***

7 comments:

  1. I have e-mailed you a copy with my comments, for what they're worth.

    Replies
    1. Thanks, Björn! Dorna and I will look at it when we get the review reports back ...

  2. Machines and morals

    Here are some thoughts ”off the cuff”, loosely related to your paper, a few days after having read it (once).

    First of all, I found most of the texts quoted in the paper quite confused, especially those that appear in the first part.

    I guess that was the point: 1) to demonstrate the current state of discourse; 2) to point out that it lacks and needs more rigor; 3) to reason that it is possible to make it more rigorous; 4) to show this by giving some examples.

    Are there other, better, examples in the field so far?

    Further aims could be to 5) present a (complete) framework for (every future contribution to) the discourse; and 6) to present (fictive) examples of how such contributions would / should be structured. This would likely also include 7) a tighter connection to (the volumes of) previous work in philosophy not dealing explicitly with (only) artificial entities (but instead mostly concentrating on humans).

    Having come that far, one could also 8) evaluate previous texts on how well they fit into your framework, or even on how clearly and how exhaustively they present their own explicit framework. You could also 9) suggest your own concrete answers to each question in your framework.

    Anyway, I would suggest restructuring the paper in a more constructive way: Begin with your contributions; your framework; your suggestions. Only then exhibit and discuss (snippets of) other texts, and show how they fall short by not adhering to your framework.

    For most texts, those that seem far from fitting into the framework, this discussion should probably be limited to isolated details (lest it get out of hand). Hopefully you can find other texts that either explicitly present something relatively close to your framework, or can be interpreted to fit more or less within it.

  3. Now what do I mean by framework, exactly? Well, it seems to me that there are several, interrelated concepts and questions here, many of which are often left unstated. My immediate instinct is to draw up a (multidimensional) matrix (no pun intended) to clarify any and all possible combinations of (potentially) relevant factors.

    For simplicity, I assume that all concepts are binary, that is, they either apply or they don’t (0 or 1) - there are no intermediate states. All relevant concepts are listed in some fixed order, yielding an index. Then, any combination of positions with regard to these concepts can be represented by a vector of length n, where n is the number of relevant concepts, yielding 2^n possible combinations. (Some pruning can probably be done, I’m sure, as some combinations are too unlikely.)
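
    For concreteness, a minimal sketch of this encoding (in Python; the four concept names are placeholders, not a settled list):

        # Minimal sketch of the binary-vector encoding (concepts are placeholders).
        from itertools import product

        CONCEPTS = ("autonomous", "conscious", "intelligent", "free_will")
        n = len(CONCEPTS)

        # A position is a tuple of 0s and 1s, one slot per concept,
        # in the fixed order given by CONCEPTS.
        def as_dict(position):
            return dict(zip(CONCEPTS, position))

        # All 2^n combinations, before pruning the unlikely ones.
        all_positions = list(product((0, 1), repeat=n))
        assert len(all_positions) == 2 ** n  # 16 for these four concepts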

    A vector could, for instance, be regarded as answering questions such as: What does it take to be a moral agent? What does it take to be morally responsible? Who (if not the agent itself) is responsible? (There are other ways of using and interpreting the matrix.)

    The first thing one should ask of any contribution to the field is: What is their vector? Many will not have filled in every slot with either a 0 or a 1. They should! Some may be able to do so, but haven’t yet done it explicitly. They should!
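
    (Purely as an illustration, unfilled slots could be encoded as None, which makes it trivial to flag what a contributor still needs to answer:)

        # Illustration only: None marks a slot the contributor has not filled in.
        position = {"autonomous": 1, "conscious": None, "intelligent": 1, "free_will": None}

        unfilled = [concept for concept, value in position.items() if value is None]
        print(unfilled)  # ['conscious', 'free_will'] - the slots still awaiting a 0 or a 1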

    The next step is to ask each contributor why some slots are filled but others are not. Is there a reason that some slots have a specific value, and how (if at all) does it relate to other slots? This line of questioning will, in effect, tease out both (a) the assumptions and (b) the reasoning behind the position of the contribution.

    One could, for instance, take as an axiom the position that no artificial (human-made) entity could ever be conscious (regardless of how it is built), and reason from there that conscious entities are (or are not!) always moral agents. Etc.

  4. Concepts: artificial, autonomous, embodied, physical, complicated, hardware, software, implementation-independent, biological, mechanical, material-independent, (classically) computable, Turing-equivalent, quantum, planning, problem-solving, deciding, anticipating, rich world-model, input, output, adjustable, self-adjustable, learning, opaque *, thinking, intelligent, conscious, free will, ability to accept praise or blame, ability to cause or feel pain, remorse or pride (qualia), action, physical, virtual, relating to humans, (moral) agency, (moral / legal) responsibility, justice: retributive, corrective……..

    (* I’m thinking here of, e.g., self-learning software that develops large and complex ”neural” networks which encode, say, accurate face recognition. It is impossible for the developers or anyone looking ”under the hood” to anticipate, interpret or understand that ”knowledge” once it is developed by the system. Another example is so-called evolutionary algorithms.)

    Questions and relationships: The list of concepts (above) is not complete, and different authors may emphasize different things. Some will focus on the details of one or more of these concepts and develop further distinctions. But I think it is a good idea to strive for, and to try to interpret, any position (vector) as the result of (i) some axioms and (ii) reasoning. Doing so allows one to categorize different positions. Examples of axioms could be: autonomous (in some specified sense) entities have (moral) agency; no artificial system could ever be conscious; a creator of a tool is always (or never!) morally responsible for its uses…….. Examples of reasoning could be: any entity that violates rules which it demonstrably has the ability to access and follow is morally culpable of neglect, regardless of implementation - thus, our dog is morally responsible for biting the neighbor’s little girl; There is no such thing as ”free will” since everything is determined - thus the lunatic who shot 15 people cannot be blamed for what he did (but we should imprison him anyway)……..
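
    (Tying this back to the matrix: axioms of this kind can be read as constraints that prune the 2^n space of positions. A minimal sketch, with two made-up axioms mirroring the examples above:)

        # Sketch: axioms as constraints pruning the 2^n space (both axioms are illustrative).
        from itertools import product

        CONCEPTS = ("artificial", "conscious", "autonomous", "moral_agent")

        def consistent(position):
            d = dict(zip(CONCEPTS, position))
            if d["artificial"] and d["conscious"]:        # axiom: no artificial entity is conscious
                return False
            if d["autonomous"] and not d["moral_agent"]:  # axiom: autonomy implies moral agency
                return False
            return True

        survivors = [p for p in product((0, 1), repeat=len(CONCEPTS)) if consistent(p)]
        print(len(survivors))  # 9 of the 16 combinations survive these two axioms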

  5. Common / accepted positions in discussions of non-artificial entities: I guess your work as a philosopher is first and foremost to develop a solid framework for discussion. Once that is done, philosophers, computer scientists, cognitive psychologists and everyone else can present and discuss their positions effectively and efficiently. I am not a moral philosopher, but I suspect that much (most?) of this work has already been done when it comes to non-artificial agents, that is, humans. I also suspect that most, if not all, of the discussion concerning artificial agency is (merely) an extension of earlier work. My hunch is that it is more like the tip of an existing iceberg, rather than a new iceberg unto itself. My feeling when reading some of the quotes in your paper was that the discussion was (a) unclear with regard to unstated assumptions, but also that it (b) seemed to run ”in parallel”, as it were, to traditional ethics, effectively neglecting to position itself firmly within an existing framework and then extending it (only) as needed to handle also artificial entities.

    Your own stance: This is not strictly needed, of course, but it would be nice to see you develop your own position within the framework you develop and propose. It would serve to illustrate the framework itself, but also to actually exhibit a coherent and hopefully practically useful stance towards the issue, from which concrete measures may be inferred.

  6. My stance: The comments I wrote in the PDF file were probably not all that helpful, since I made the mistake of mixing constructive thoughts on the article’s disposition with critiques of the various quotes from my own (unarticulated) position. Sorry ’bout that.

    For what it’s worth, I find most of the non-functional discussion uninteresting - regardless of whether it pertains to human or to artificial intelligence, consciousness etc. My own background in comp. sci. has left me highly skeptical of anything OTHER than a computational theory of mind (CTOM) and I find most people, including philosophers, acting on what is really nothing more than a sense of entitlement and disgust when they exclude features such as intelligence or consciousness from non-human entities. Most don’t even do a good job of rationalizing their position from already preconceived notions. And most don’t present, or even feel the need to present, proper reasons for assigning humans a unique (and unattainable) position.

    From my perspective, ”intelligence” would amount to something like flexible goal-attainment. ”Consciousness”, to the extent that it is even ”a thing”, is probably a by-product of certain (but not all) sufficiently complex computational systems; to the extent that you count qualia as (a necessary) part of consciousness, they may be (possibly necessary) means to effect certain (more or less crucial) actions.

    To my mind, neither intelligence nor consciousness is needed for agency. Intelligence (in a limited sense), but not consciousness, is needed for responsibility. I’m not sure about the significance of putting the word ”moral” in front of ”agency” or ”responsibility”.

    ”Free will” is conceptually hollow, regardless of whether it pertains to humans or artificial agents, and regardless of whether you assume a deterministic or a non-deterministic world-view: If actions are ”free” in the sense of being uncoupled from any or all causality, then they are more or less random and as such they cannot be ascribed to an agent, even if they emanate (in some sense) from that agent.

    I assume a deterministic world (for now). For me, that implies that not even humans, ultimately, can be held responsible for their actions. Moral responsibility may be a conceptual impossibility, but (for now) we seem to need at least legal responsibility in order to reason about and arrange our societies. I am not convinced by Dennett et al.’s brand of compatibilism, but I am open to the possibility of defining a coherent notion of ”volitional action” for which it makes (the best possible) sense to assign legal responsibility.

    Such responsibility hinges on an agent’s capability of building and adhering to a model of rules for action, as well as the capability of adapting its behavior in accordance with ”praise” and ”blame”. This does not necessarily involve corresponding feelings or states of ”shame” or ”pride”.

    When it comes to foresight (planning, ”imagination”, theory of mind…) I note that humans are capable of only limited foresight when it comes to the consequences of our actions. There seems to be no clear way of drawing the line for what constitutes a sufficient degree of foresight in order to mete out responsibility.
