An interview with Eric Schwitzgebel and Mara Garza

2015-11-13

Eric Schwitzgebel

Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. He’s well known in the philosophy community for his work exploring the intersection of psychology and philosophy and for his blog “The Splintered Mind”. He has also written popular articles on his own research, including “Cheeseburger ethics”, on whether professional ethicists are good people, and he is the author of “Perplexities of Consciousness”. He tweets at @eschwitz.

Mara Garza

Mara Garza received her undergraduate degree from the University of California, Berkeley, where she wrote a thesis on Nietzsche’s theory of the will. She then spent a year as a visiting scholar in the philosophy department at the University of Pittsburgh, and in 2013, began her graduate work at the University of California, Riverside.

Her primary research interests are in moral and legal philosophy and in German philosophy (especially Kant, Schopenhauer, and Nietzsche!). In particular, she’s interested in how a variety of issues intersect with ethics, including motivation and self-control, accounts of agency in ethics and criminal law, AI and technology, and identity and gender.

Eric and Mara stood out to us as great interview candidates when we read their article “A Defense of the Rights of Artificial Intelligences”, where they argue that “Our duties to them [AIs] would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature.”

The following interview was conducted via email.

Your core thesis is that there are some possible AIs that deserve the same moral consideration that we give to humans. How controversial do you expect this to be in the philosophical community?

Eric and Mara: The thesis, as stated, is so modest or “weak” that we expect most philosophers will accept it. Philosophers tend to have a liberal sense of what is “possible” based on their exposure to far-out thought experiments (brains in vats manipulated by genius neuroscientists to think they are reading philosophy, molecule-for-molecule twins congealing out of swamp gas by freak quantum accident…). Our aim with that thesis is to establish a baseline claim that we think will be widely (though not universally) acceptable to thoughtful readers.

Once readers accept that claim, we hope they are then led to further thought about exactly which possible AIs would deserve moral consideration and how much. We expect some of our thoughts about this will be controversial, such as that AIs might deserve more moral consideration because we have special obligations to them that arise from being their creators and designers.

You refer to works of science fiction in your defence of the psycho-social view of moral status, saying that they help illustrate certain scenarios and invite certain moral views. To what extent can reflection on sci-fi answer more detailed questions about our moral stance towards AIs, such as when they could be conscious, or what obligations we owe to them? If reflection on sci-fi can help us answer these questions, what answers do you think it favours?

Eric: I reject the idea that philosophy is necessarily conducted via expository essays. A thoughtful piece of fiction is a type of thought experiment, and if it delves into philosophical issues in a thoughtful way, then it is every bit as much a work of philosophy as is an expository essay. One advantage that extended works of fiction have over the one-paragraph thought experiments typically found in expository essays is that extended works of fiction more fully engage the imagination and the emotions. Philosophical thinking that does not adequately engage the imagination and the emotions leaves out important dimensions of our cognitive life that should inform our philosophical judgements, especially about moral issues.

I think that wide exposure to thoughtful science fiction clearly reveals that the moral status of AIs should be guided entirely by the psychological and social properties of the AIs and not by facts about their material architecture, species membership, bodily shape, or manufactured origin, except insofar as the latter facts influence their psychological and social properties. Asimov’s robots, Data from Star Trek, R2D2 and C3P0 from Star Wars, the sentient ships of Iain Banks - these are only some of the most prominent examples. The reader is invited to regard such entities as conscious, intelligent, and possessing desires, and, in light of those facts, as deserving moral consideration similar to that of human beings.

It is less clear what science fiction reveals about AI consciousness. My view is that science fiction tends to work best as exciting, plot-driven fiction when the reader is invited to assume that the AI who outwardly acts as if it is conscious is in fact really conscious - as with Data, C3P0, etc. But that issue is usually a starting point in the fiction, taken for granted, rather than something that is explored with a critical eye. Some fiction does explore epistemological questions about the boundaries of AI sentience, but such fictions are less common, and the issue is philosophically tricky. Our society hasn’t explored that issue, either in fiction or in expository philosophy, in nearly the depth that it ought to.

In your defense against the objection from existential debt, you argue that an AI’s existential debt to us does not justify our otherwise-immoral treatment of them, using a thought experiment to the effect that it would be morally wrong to have a child and then painlessly kill them at the age of 9. However, in the blog post The moral limitations of in vitro meat, Levinstein and Sandberg argue that a future where humans lead happy lives cut short (perhaps to feed some blood-thirsty alien race) would be preferable to extinction, and that therefore we ought to have ‘happy meat’ instead of phasing out animal agriculture. Do you agree? If so, what do you think this implies about our obligations to AIs?

Eric and Mara: We are somewhat reluctant to take a public stand on the issue of humanely raised meat, on which there is a large and complex existing literature that is beyond the scope of our current research. However, the case of aliens raising humans is within our scope.

We are inclined to think that if the only two options to consider are the extinction of humanity vs. humanity’s continued existence with happy lives cut painlessly short, the latter would be preferable, all else being equal. Cutting a person’s life short in such a case might still be morally execrable murder, but if the choice is between mass murder and genocide-to-extinction, we think the former is probably less bad, if there is no way to avoid acting and if the agents committing the atrocities are the same in both cases. (This last caveat is to acknowledge some doubt about whether it would make sense for you, as an agent, to commit mass murder to prevent someone else from committing genocide-to-extinction.) Maybe a good science fiction story could flesh this out in a bit more detail, to give us a bit more imaginative footing in thinking about what would really be involved on one side or the other.

We’re not sure how much follows for AIs from this. However, we’re inclined to think that there are at least some conceivable cases in which allowing mass murder of human-grade AIs would probably be less bad than allowing genocide-to-extinction. But yuck, as we write this, it feels horrible to say, somehow too calculating and cold. There might be room here for a view on which refusing to even make that kind of calculation is morally the best course.

Robin Hanson imagines an “em economy” scenario, where we make large numbers of computer emulations of humans, or “ems”, to perform various useful tasks. One of the many aspects of this scenario that invites moral inquiry is that it will sometimes be useful to create an em that has a short lifespan and will soon be terminated, perhaps against their will (for an intriguing example, see “Bits of Secrets”). On one hand, it seems prima facie wrong to cut short a happy em life. On the other hand, these ems would not be created if we were not allowed to cut their lives short. If we imagine ems that are specifically designed for this purpose, the unique skills and characteristics that they would have make their not being created arguably akin to the extinction of some human culture (albeit a culture that never had a chance to exist). The em scenario offers various disanalogies to a similar scenario with real humans: for instance, we could program the ems to have memories of long happy lives and/or not fear death (although this could make them less useful in the linked example). What are your opinions on the morality of creating and killing such ems?

Eric: This is a fascinating ethical question. It is related to a couple of other fascinating questions that we think AI ethics raises, including the ethics of creating cheerfully suicidal AI slaves and the challenge of how to conceive of “equal rights” when faced with AIs that can merge and duplicate at will (e.g., how many votes and how many social benefits should a recently fissioned entity get).

I don’t see a simple answer to these types of questions. I think that it would be a serious moral mistake to think it’s always okay to create and then kill at whim any AI whose life was overall good. Once a conscious being is created with human-like intelligence and emotions, it normally has a claim on our moral concern. It would be odious, for example, to create a human child and then kill it painlessly after eight happy years so that you can use the child care money to purchase a boat instead; similarly for an AI child, I think, if it is born into a similar psychological and social situation.

On the other hand, reflection on some science fiction examples, for example, in Linda Nagata’s Bohr Maker and David Brin’s Kiln People, inclines me to think that under some conditions it can be okay to spawn temporary duplicates of yourself who are doomed to extinction. One feature of the Nagata and Brin cases that seems relevant is that the duplicates identify with the future continuation of the being they were spawned from, and care more about its welfare than about their own welfare as separate entities. They will sacrifice themselves for its well-being; and normally (but not always) their memories will be merged back into it. I don’t think this is sufficient for the moral permissibility of making doomed spawn, since we can imagine cases where a spawn has that sort of attitude in a way that is clearly irrational or problematic (e.g., maybe it wouldn’t have that attitude, except that it was forcibly reprogrammed into the attitude against its own protests); but it’s a start.

The cheerfully suicidal slave raises a whole different range of issues. Suppose, for example, that we create a conscious sun probe who wants nothing more than to die on a scientific mission to the Sun. Suppose it’s advantageous to make a probe that is conscious, because consciousness relates in some inextricable way to its successful functioning as a probe (e.g., maybe the probe works best if it can create creative scientific theories on the fly in a conscious, self-reflective way). And suppose that, knowing that, we program it so that it gets immense pleasure from a three-day suicide mission into the Sun’s photosphere. Maybe this is terrific! We’ve created something great in the world: useful for us, intrinsically awesome, and bursting with pleasure. Or maybe we’ve done a horrible thing, creating a brainwashed slave so content with its slavery and so limited in its vision that it doesn’t value its own continued existence?

Further progress on these topics will require thinking through a variety of cases in detail. It’s the kind of exciting issue that should keep ethicists busy for a long time, if AI technology continues to progress.

You advocate an Excluded Middle Policy, whereby we should only make AIs whose moral status is clear, and avoid the creation of ‘edge-case’ AIs. We can imagine a world where the field of AI advances much more quickly than the philosophy of consciousness and morality, such that most of the AIs that we could make would be edge-cases. How likely do you think that this is to transpire?

Eric and Mara: We think that is quite possible. Eric, especially, is pessimistic at least in the medium-term about our ability to develop a good theory of consciousness, despite his thinking that consciousness is extremely important to moral status.

Probably it’s good to create lots of happy, fulfilled beings. We want to be a little cautious about that claim, given that strong versions of that type of claim invite the conclusion that people have a moral obligation to have as many happy children as they can afford, and it’s not clear that people do in fact have such an obligation. Also, it’s not entirely clear whether it would be good to create a dozen happy beings and one horribly miserable being, compared to not creating any of those beings.

But let’s say that our best, most philosophically and technologically informed judgement is that it’s 50% likely that we can create a million happy, fulfilled human-grade AIs in a simulated world, with no significant suffering, for only a small amount of money; and 50% likely that by spending that money we’d just be creating a non-conscious sim with no significant moral value. In such a case, it seems misguided to condemn someone who launched such a world just because they violated the Excluded Middle Policy.

We don’t intend that people interpret our proposed Excluded Middle Policy as exceptionless. We suggest that it’s a good policy to consider as a default, but as with most policies, it could be thoughtfully set aside in a good cause. The core idea is that if you create an entity that you are only 50% confident deserves rights, then you’re risking a substantial moral loss. If you treat it as though it deserves rights and it does not, then you might end up sacrificing the interests of some entities who really do deserve rights for something that doesn’t. Conversely, if you treat it as though it does not deserve rights and it does deserve rights, then you might end up perpetrating moral wrongs against it, for example by shutting it down at whim. If you compromise by giving it half as many rights, you might be treating it much worse than it deserves; or, alternatively, you might still end up sacrificing substantial human welfare for no good result. Better, if possible, to be clear from the outset which entities deserve rights and which do not.

Suppose that the research into and design of AI continues without any attempt to engineer the level of moral worth of AIs. Do you think that the creation of morally relevant AI would be likely in this scenario?

Eric and Mara: We’re about 50/50 on that question. But even if we were 99% confident that morally relevant AIs would not be created, the remaining 1% would be highly significant, since in that scenario we might end up committing whole holocausts without realizing it. So we think the moral issues are worth getting clear about almost regardless of one’s opinions of the probabilities.

It seems that we will have the ability to create a large population of morally valuable AIs - perhaps in a “Sim” scenario, where we put them in a simulated world and they live happy and good lives. Above, you said that “probably it’s good to create lots of happy, fulfilled beings”. Does this imply that we should be figuring out how to make morally valuable AI?

Eric and Mara: We have argued that the launcher and manager of a simulated world full of conscious AIs would literally be a god to those AIs. So this question is tantamount to asking if we should aim to become gods.

How hubristic that sounds! We aren’t sure that humanity is ready for that sort of power. But maybe. Maybe if it’s done with extreme caution, humility, and oversight, with very clear and conservative regulatory structures.

We see two risks that trade off against each other here. On the one hand - what we have emphasized - are the moral risks and benefits for the AIs: the good of creating them, of treating them well, and of giving them perhaps the power and respect that we give to human peers. But on the other hand, there’s the complementary risk - emphasized especially in the work of Nick Bostrom - that by creating AIs sophisticated enough to have moral status and then giving them rights that suit their status, we create risks to humanity that we might not be well prepared to handle.

So it’s a morass. If AI research continues to advance a lot farther, there will be huge moral and prudential risks and benefits whatever we choose. We have only dipped our toe in the waters.

If we do decide to play our hand as gods or as Dr. Frankensteins, we want to be ready to greet our creations with a “Welcome to Reality!” sign and some pleasure stimulus, rather than with slavery, torture, and death.
