“Moral Machines” By Wendell Wallach and Colin Allen


In the 2004 film I, Robot, Will Smith’s character, Detective Spooner, harbors a deep grudge against all things technological — and turns out to be justified after a new generation of robots engages in a full-out, summer blockbuster-style revolt against their human creators.

Why was Detective Spooner such a Luddite, even before the robots’ vicious revolt?  Much of his resentment stems from a car accident in which a robot saved his life instead of a little girl’s.  The robot’s decision haunts Smith’s character throughout the movie; he feels the decision lacked emotion, and what one might call ‘humanity’.

“I was the logical choice,” he says. “(The robot) calculated that I had a 45% chance of survival.  Sarah only had an 11% chance.”  He continues, dramatically, “But that was somebody’s baby.  11% is more than enough.  A human being would’ve known that.”

But what, exactly, is it that the human being would’ve known?  And how would they have known it?

Humans seem equipped to solve ethical dilemmas by relying on biological and socialized intuitions, intuitions that supplement logic with humanity, mere numbers with emotion.  While the robot makes ethical decisions based on narrow algorithms of numerical inputs and outputs, the human makes ethical decisions based on a wider range of factors, drawing from wells of varying experiences, prejudices, and conceptions of justice.  One person might evaluate the situation from a rights perspective, while another might imagine himself or herself in the girl’s position and use empathy as a rationale.  Whatever the conclusion, the human agent would engage in a complex process of thinking, feeling, and imagining — a process that relies on a set of moral intuitions and intellectual rubrics we refer to broadly as a “moral compass.”
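The film robot’s reasoning can be caricatured in a few lines of code.  The sketch below is purely illustrative (the function name is hypothetical; the two probabilities come from the movie dialogue), but it makes vivid how thin a purely numerical decision rule is:

```python
# A caricature of the film robot's "narrow algorithm": rescue whoever has
# the highest estimated survival probability, and consider nothing else.
# The function name is hypothetical; the numbers are from the movie dialogue.

def choose_rescue(candidates: dict) -> str:
    """Return the candidate with the highest survival probability."""
    return max(candidates, key=candidates.get)

victims = {"Spooner": 0.45, "Sarah": 0.11}
print(choose_rescue(victims))  # prints "Spooner"
```

Everything Spooner objects to is invisible to such a rule, and that gap is precisely what a machine’s “moral compass” would have to fill.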

Would it be possible for a robot to have a moral compass, too?  And if so, what would it look like? In their seminal book on robot ethics entitled “Moral Machines: Teaching Robots Right From Wrong,” Wendell Wallach and Colin Allen discuss the very real, very pressing questions posed by the immediate future of robotics, in which moral decision making extends beyond the realm of human beings to what Wallach and Allen call “artificial moral agents” — non-human moral machines that make decisions with ethically significant repercussions.

Though fully conscious robots are still confined to science fiction, consider some of the following examples of “moral machines” in today’s world:

Then, consider what these could develop into: autonomous robot surgeons that perform operations entirely without a doctor’s supervision; robotic ground and air soldiers that “decide” when and whom to kill on the battlefield; robot babysitters and nurses that watch over children, the sick, and the elderly; fully computerized security systems that identify criminals and can use that information to institute emergency airport lockdowns.

Just think: 30 years ago computers filled entire rooms and cost millions of dollars; now, we carry computers in our pockets.  Where might robotics be 30 years from now?

Read on to find the introductory chapter of Wallach and Allen’s book, Moral Machines, for an overview of the fascinating ethical issues posed by “artificial moral agents”.  And consider the question Wallach and Allen pose: Does humanity really want computers making morally important decisions?

Wendell Wallach is a consultant and writer affiliated with Yale University’s Interdisciplinary Center for Bioethics; Colin Allen is a Professor of Cognitive Science and History & Philosophy of Science in the College of Arts and Sciences at Indiana University Bloomington.  They are co-authors of the book “Moral Machines: Teaching Robots Right From Wrong” and maintain a blog on related topics at MoralMachines.blogspot.com.  This post is the introductory chapter of their book, reprinted here with their permission.

Introduction To Moral Machines: Teaching Robots Right From Wrong

By Wendell Wallach and Colin Allen

In the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT), scientists are designing computers that can read human emotions. Financial institutions have implemented worldwide computer networks that evaluate and approve or reject millions of transactions every minute. Roboticists in Japan, Europe, and the United States are developing service robots to care for the elderly and disabled. Japanese scientists are also working to make androids appear indistinguishable from humans. The government of South Korea has announced its goal to put a robot in every home by the year 2020. It is also developing weapons-carrying robots in conjunction with Samsung to help guard its border with North Korea. Meanwhile, human activity is being facilitated, monitored, and analyzed by computer chips in every conceivable device, from automobiles to garbage cans, and by software “bots” in every conceivable virtual environment, from web surfing to online shopping. The data collected by these (ro)bots—a term we’ll use to encompass both physical robots and software agents—is being used for commercial, governmental, and medical purposes.


All of these developments are converging on the creation of (ro)bots whose independence from direct human oversight, and whose potential impact on human well-being, are the stuff of science fiction. Isaac Asimov, over fifty years ago, foresaw the need for ethical rules to guide the behavior of robots. His Three Laws of Robotics are what people think of first when they think of machine morality.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
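Asimov’s laws are strictly ordered: each law yields to the one above it.  As a rough sketch of that priority structure (every field and predicate below is hypothetical, and deciding whether an action “harms a human” is exactly the hard part the book explores), the chain might be encoded like this:

```python
# A minimal sketch of Asimov's Three Laws as a strictly ordered veto chain.
# All fields and predicates are hypothetical; no real system can evaluate
# notions like "harm" this cleanly, which is part of the book's point.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    inaction_harms: bool     # would *not* acting allow a human to come to harm?
    ordered_by_human: bool   # was this action commanded by a human?
    endangers_self: bool     # does it threaten the robot's own existence?

def permitted(a: Action) -> bool:
    # First Law: never harm a human; inaction that allows harm is also forbidden.
    if a.harms_human:
        return False
    if a.inaction_harms:
        return True  # the robot must act, overriding the laws below
    # Second Law: obey human orders (already screened by the First Law).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not a.endangers_self
```

Even this toy version only resolves conflicts by fixed order; as Asimov’s stories show, the real dilemmas arise because predicates like `harms_human` cannot actually be computed so cleanly.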

Asimov, however, was writing stories. He was not confronting the challenge that faces today’s engineers: to ensure that the systems they build are beneficial to humanity and don’t cause harm to people. Whether Asimov’s Three Laws are truly helpful for ensuring that (ro)bots will act morally is one of the questions we’ll consider in this book.

Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight. Already, in October 2007, a semiautonomous robotic cannon deployed by the South African army malfunctioned, killing 9 soldiers and wounding 14 others—although early reports conflicted about whether it was a software or hardware malfunction. The potential for an even bigger disaster will increase as such machines become more fully autonomous. Even if the coming calamity does not kill as many people as the terrorist acts of 9/11, it will provoke a comparably broad range of political responses. These responses will range from calls for more to be spent on improving the technology, to calls for an outright ban on the technology (if not an outright “war against robots”).

A concern for safety and societal benefits has always been at the forefront of engineering. But today’s systems are approaching a level of complexity that, we argue, requires the systems themselves to make moral decisions—to be programmed with “ethical subroutines,” to borrow a phrase from Star Trek. This will expand the circle of moral agents beyond humans to artificially intelligent systems, which we will call artificial moral agents (AMAs).

We don’t know exactly how a catastrophic incident will unfold, but the following tale may give some idea.
Monday, July 23, 2012, starts like any ordinary day. A little on the warm side in much of the United States perhaps, with peak electricity demand expected to be high, but not at a record level. Energy costs are rising in the United States, and speculators have been driving up the price of futures, as well as the spot price of oil, which stands close to $300 a barrel. Some slightly unusual automated trading activity in the energy derivatives markets over past weeks has caught the eye of the federal Securities and Exchange Commission (SEC), but the banks have assured the regulators that their programs are operating within normal parameters.

At 10:15 a.m. on the East Coast, the price of oil drops slightly in response to news of the discovery of large new reserves in the Bahamas. Software at the investment division of Orange and Nassau Bank computes that it can turn a profit by emailing a quarter of its customers with a buy recommendation for oil futures, temporarily shoring up the spot market prices, as dealers stockpile supplies to meet the future demand, and then selling futures short to the rest of its customers. This plan essentially plays one sector of the customer base off against the rest, which is completely unethical, of course. But the bank’s software has not been programmed to consider such niceties. In fact, the money-making scenario autonomously planned by the computer is an unintended consequence of many individually sound principles. The computer’s ability to concoct this scheme could not easily have been anticipated by the programmers.

Unfortunately, the “buy” email that the computer sends directly to the customers works too well. Investors, who are used to seeing the price of oil climb and climb, jump enthusiastically on the bandwagon, and the spot price of oil suddenly climbs well beyond $300 and shows no sign of slowing down. It’s now 11:30 a.m. on the East Coast, and temperatures are climbing more rapidly than predicted. Software controlling New Jersey’s power grid computes that it can meet the unexpected demand while keeping the cost of energy down by using its coal-fired plants in preference to its oil-fired generators. However, one of the coal-burning generators suffers an explosion while running at peak capacity, and before anyone can act, cascading blackouts take out the power supply for half the East Coast. Wall Street is affected, but not before SEC regulators notice that the rise in oil future prices was a computer-driven shell game between automatically traded accounts of Orange and Nassau Bank. As the news spreads, and investors plan to shore up their positions, it is clear that the prices will fall dramatically as soon as the markets reopen and millions of dollars will be lost. In the meantime, the blackouts have spread far enough that many people are unable to get essential medical treatment, and many more are stranded far from home.

Detecting the spreading blackouts as a possible terrorist action, security screening software at Reagan National Airport automatically sets itself to the highest security level and applies biometric matching criteria that make it more likely than usual for people to be flagged as suspicious. The software, which has no mechanism for weighing the benefits of preventing a terrorist attack against the inconvenience its actions will cause for tens of thousands of people in the airport, identifies a cluster of five passengers, all waiting for Flight 231 to London, as potential terrorists. This large concentration of “suspects” on a single flight causes the program to trigger a lockdown of the airport and the dispatch of a Homeland Security response team to the terminal. Because passengers are already upset and nervous, the situation at the gate for Flight 231 spins out of control, and shots are fired.

An alert sent from the Department of Homeland Security to the airlines that a terrorist attack may be under way leads many carriers to implement measures to land their fleets. In the confusion caused by large numbers of planes trying to land at Chicago’s O’Hare Airport, an executive jet collides with a Boeing 777, killing 157 passengers and crew. Seven more people die when debris lands on the Chicago suburb of Arlington Heights and starts a fire in a block of homes.

Meanwhile, robotic machine guns installed on the U.S.-Mexican border receive a signal that places them on red alert. They are programmed to act autonomously in code red conditions, enabling the detection and elimination of potentially hostile targets without direct human oversight. One of these robots fires on a Hummer returning from an off-road trip near Nogales, Arizona, destroying the vehicle and killing three U.S. citizens.

By the time power is restored to the East Coast and the markets reopen days later, hundreds of deaths and the loss of billions of dollars can be attributed to the separately programmed decisions of these multiple interacting systems. The effects continue to be felt for months.

Time may prove us poor prophets of disaster. Our intent in predicting such a catastrophe is not to be sensational or to instill fear. This is not a book about the horrors of technology. Our goal is to frame discussion in a way that constructively guides the engineering task of designing AMAs. The purpose of our prediction is to draw attention to the need for work on moral machines to begin now, not twenty to a hundred years from now when technology has caught up with science fiction.

The field of machine morality extends the field of computer ethics beyond concern for what people do with their computers to questions about what the machines do by themselves. (In this book we will use the terms ethics and morality interchangeably.) We are discussing the technological issues involved in making computers themselves into explicit moral reasoners. As artificial intelligence (AI) expands the scope of autonomous agents, the challenge of how to design these agents so that they honor the broader set of values and laws humans demand of human moral agents becomes increasingly urgent.

Does humanity really want computers making morally important decisions? Many philosophers of technology have warned about humans abdicating responsibility to machines. Movies and magazines are filled with futuristic fantasies about the dangers of advanced forms of artificial intelligence. Emerging technologies are always easier to modify before they become entrenched. However, it is not often possible to predict accurately the impact of a new technology on society until well after it has been widely adopted. Some critics think, therefore, that humans should err on the side of caution and relinquish the development of potentially dangerous technologies. We believe, however, that market and political forces will prevail and will demand the benefits that these technologies can provide. Thus, it is incumbent on anyone with a stake in this technology to address head-on the task of implementing moral decision making in computers, robots, and virtual “bots” within computer networks.

As noted, this book is not about the horrors of technology. Yes, the machines are coming. Yes, their existence will have unintended effects on human lives and welfare, not all of them good. But no, we do not believe that increasing reliance on autonomous systems will undermine people’s basic humanity. Neither, in our view, will advanced robots enslave or exterminate humanity, as in the best traditions of science fiction. Humans have always adapted to their technological products, and the benefits to people of having autonomous machines around them will most likely outweigh the costs.

However, this optimism does not come for free. It is not possible to just sit back and hope that things will turn out for the best. If humanity is to avoid the consequences of bad autonomous artificial agents, people must be prepared to think hard about what it will take to make such agents good.

In proposing to build moral decision-making machines, are we still immersed in the realm of science fiction—or, perhaps worse, in that brand of science fantasy often associated with artificial intelligence? The charge might be justified if we were making bold predictions about the dawn of AMAs or claiming that “it’s just a matter of time” before walking, talking machines will replace the human beings to whom people now turn for moral guidance. We are not futurists, however, and we do not know whether the apparent technological barriers to artificial intelligence are real or illusory. Nor are we interested in speculating about what life will be like when your counselor is a robot, or even in predicting whether this will ever come to pass. Rather, we are interested in the incremental steps arising from present technologies that suggest a need for ethical decision-making capabilities. Perhaps small steps will eventually lead to full-blown artificial intelligence—hopefully a less murderous counterpart to HAL in 2001: A Space Odyssey—but even if fully intelligent systems will remain beyond reach, we think there is a real issue facing engineers that cannot be addressed by engineers alone.

Is it too early to be broaching this topic? We don’t think so. Industrial robots engaged in repetitive mechanical tasks have caused injury and even death. The demand for home and service robots is projected to create a worldwide market double that of industrial robots by 2010, and four times bigger by 2025. With the advent of home and service robots, robots are no longer confined to controlled industrial environments where only trained workers come into contact with them. Small robot pets, for example Sony’s AIBO, are the harbinger of larger robot appliances. Millions of robot vacuum cleaners, for example iRobot’s “Roomba,” have been purchased. Rudimentary robot couriers in hospitals and robot guides in museums have already appeared. Considerable attention is being directed at the development of service robots that will perform basic household tasks and assist the elderly and the homebound. Computer programs initiate millions of financial transactions with an efficiency that humans can’t duplicate. Software decisions to buy and then resell stocks, commodities, and currencies are made within seconds, exploiting potentials for profit that no human is capable of detecting in real time, and representing a significant percentage of the activity on world markets.

Automated financial systems, robotic pets, and robotic vacuum cleaners are still a long way short of the science fiction scenarios of fully autonomous machines making decisions that radically affect human welfare. Although 2001 has passed, Arthur C. Clarke’s HAL remains a fiction, and it is a safe bet that the doomsday scenario of The Terminator will not be realized before its sell-by date of 2029. It is perhaps not quite as safe to bet against the Matrix being realized by 2199. However, humans are already at a point where engineered systems make decisions that can affect humans’ lives and that have ethical ramifications. In the worst cases, they have profound negative effects.

Is it possible to build AMAs? Fully conscious artificial systems with complete human moral capacities may perhaps remain forever in the realm of science fiction. Nevertheless, we believe that more limited systems will soon be built. Such systems will have some capacity to evaluate the ethical ramifications of their actions—for example, whether they have no option but to violate a property right to protect a privacy right.

The task of designing AMAs requires a serious look at ethical theory, which originates from a human-centered perspective. The values and concerns expressed in the world’s religious and philosophical traditions are not easily applied to machines. Rule-based ethical systems, for example the Ten Commandments or Asimov’s Three Laws for Robots, might appear somewhat easier to embed in a computer, but as Asimov’s many robot stories show, even three simple rules (later four) can give rise to many ethical dilemmas. Aristotle’s ethics emphasized character over rules: good actions flowed from good character, and the aim of a flourishing human being was to develop a virtuous character. It is, of course, hard enough for humans to develop their own virtues, let alone developing appropriate virtues for computers or robots. Facing the engineering challenge entailed in going from Aristotle to Asimov and beyond will require looking at the origins of human morality as viewed in the fields of evolution, learning and development, neuropsychology, and philosophy.

Machine morality is just as much about human decision making as about the philosophical and practical issues of implementing AMAs. Reflection about and experimentation in building AMAs forces one to think deeply about how humans function, which human abilities can be implemented in the machines humans design, and what characteristics truly distinguish humans from animals or from new forms of intelligence that humans create. Just as AI has stimulated new lines of enquiry in the philosophy of mind, machine morality has the potential to stimulate new lines of enquiry in ethics. Robotics and AI laboratories could become experimental centers for testing theories of moral decision making in artificial systems.

Three questions emerge naturally from the discussion so far. Does the world need AMAs? Do people want computers making moral decisions? And if people believe that computers making moral decisions are necessary or inevitable, how should engineers and philosophers proceed to design AMAs?

Chapter Overviews

Chapters 1 and 2 are concerned with the first question, why humans need AMAs. In chapter 1, we discuss the inevitability of AMAs and give examples of current and innovative technologies that are converging on sophisticated systems that will require some capacity for moral decision making. We discuss how such capacities will initially be quite rudimentary but nonetheless present real challenges. Not the least of these challenges is to specify what the goals should be for the designers of such systems—that is, what do we mean by a “good” AMA?

In chapter 2, we will offer a framework for understanding the trajectories of increasingly sophisticated AMAs by emphasizing two dimensions, those of autonomy and of sensitivity to morally relevant facts. Systems at the low end of these dimensions have only what we call “operational morality”—that is, their moral significance is entirely in the hands of designers and users. As machines become more sophisticated, a kind of “functional morality” is technologically possible such that the machines themselves have the capacity for assessing and responding to moral challenges. However, the creators of functional morality in machines face many constraints due to the limits of present technology.

The nature of ethics places a different set of constraints on the acceptability of computers making ethical decisions. Thus we are led naturally to the question addressed in chapter 3: whether people want computers making moral decisions. Worries about AMAs are a specific case of more general concerns about the effects of technology on human culture. Therefore, we begin by reviewing the relevant portions of philosophy of technology to provide a context for the more specific concerns raised by AMAs. Some concerns, for example whether AMAs will lead humans to abrogate responsibility to machines, seem particularly pressing. Other concerns, for example the prospect of humans becoming literally enslaved to machines, seem to us highly speculative. The unsolved problem of technology risk assessment is how seriously to weigh catastrophic possibilities against the obvious advantages provided by new technologies.

How close could artificial agents come to being considered moral agents if they lack human qualities, for example consciousness and emotions? In chapter 4, we begin by discussing the issue of whether a “mere” machine can be a moral agent. We take the instrumental approach that while full-blown moral agency may be beyond the current or future technology, there is nevertheless much space between operational morality and “genuine” moral agency. This is the niche we identified as functional morality in chapter 2. The goal of chapter 4 is to address the suitability of current work in AI for specifying the features required to produce AMAs for various applications.

Having dealt with these general AI issues, we turn our attention to the specific implementation of moral decision making. Chapter 5 outlines what philosophers and engineers have to offer each other, and describes a basic framework for top-down and bottom-up or developmental approaches to the design of AMAs. Chapters 6 and 7, respectively, describe the top-down and bottom-up approaches in detail. In chapter 6, we discuss the computability and practicability of rule- and duty-based conceptions of ethics, as well as the possibility of computing the net effect of an action as required by consequentialist approaches to ethics. In chapter 7, we consider bottom-up approaches, which apply methods of learning, development, or evolution with the goal of having moral capacities emerge from general aspects of intelligence. There are limitations regarding the computability of both the top-down and bottom-up approaches, which we describe in these chapters. The new field of machine morality must consider these limitations, explore the strengths and weaknesses of the various approaches to programming AMAs, and then lay the groundwork for engineering AMAs in a philosophically and cognitively sophisticated way.

What emerges from our discussion in chapters 6 and 7 is that the original distinction between top-down and bottom-up approaches is too simplistic to cover all the challenges that the designers of AMAs will face. This is true at the level of both engineering design and, we think, ethical theory. Engineers will need to combine top-down and bottom-up methods to build workable systems. The difficulties of applying general moral theories in a top-down fashion also motivate a discussion of a very different conception of morality that can be traced to Aristotle, namely, virtue ethics. Virtues are a hybrid between top-down and bottom-up approaches, in that the virtues themselves can be explicitly described, but their acquisition as character traits seems essentially to be a bottom-up process. We discuss virtue ethics for AMAs in chapter 8.

Our goal in writing this book is not just to raise a lot of questions but to provide a resource for further development of these themes. In chapter 9, we survey the software tools that are being exploited for the development of computer moral decision making.

The top-down and bottom-up approaches emphasize the importance in ethics of the ability to reason. However, much of the recent empirical literature on moral psychology emphasizes faculties besides rationality. Emotions, sociability, semantic understanding, and consciousness are all important to human moral decision making, but it remains an open question whether these will be essential to AMAs, and if so, whether they can be implemented in machines. In chapter 10, we discuss recent, cutting-edge, scientific investigations aimed at providing computers and robots with such suprarational capacities, and in chapter 11 we present a specific framework in which the rational and the suprarational might be combined in a single machine.

In chapter 12, we come back to our second guiding question concerning the desirability of computers making moral decisions, but this time with a view to making recommendations about how to monitor and manage the dangers through public policy or mechanisms of social and business liability management.

Finally, in the epilogue, we briefly discuss how the project of designing AMAs feeds back into humans’ understanding of themselves as moral agents, and of the nature of ethical theory itself. The limitations we see in current ethical theory concerning such theories’ usefulness for guiding AMAs highlight deep questions about their purpose and value.


Some basic moral decisions may be quite easy to implement in computers, while skill at tackling more difficult moral dilemmas is well beyond present technology. Regardless of how quickly or how far humans progress in developing AMAs, in the process of addressing this challenge, humans will make significant strides in understanding what truly remarkable creatures they are. The exercise of thinking through the way moral decisions are made with the granularity necessary to begin implementing similar faculties into (ro)bots is thus an exercise in self-understanding. We cannot hope to do full justice to these issues, or indeed to all of the issues raised throughout the book. However, it is our sincere hope that by raising them in this form we will inspire others to pick up where we have left off, and take the next steps toward moving this project from theory to practice, from philosophy to engineering, and on to a deeper understanding of the field of ethics itself.

To order “Moral Machines”, please click here.

Questions:

Should we develop Artificial Moral Agents?  If so, what ethical decisions do you think robots should be permitted to make?  What ethical principles should guide their behavior?

How much responsibility should AMAs have for their actions?  If a robot commits a crime, who should be held responsible?  (For example, if a military robot kills an innocent civilian, who is responsible for that death?  The robot, or the person who programmed it?)  If the robot has moral culpability, does the robot also deserve rights?

6 Responses to ““Moral Machines” By Wendell Wallach and Colin Allen”

  1. victorqz says:

    First, very interesting post, thanks for providing it!

    Comment 1: “The computer’s ability to concoct this scheme could not easily have been anticipated by the programmers.”
    A lot (too much, if you ask me) of today’s unsupervised AI is based on the computer agent trying things and seeing what rewards its actions produce. So, we programmers sometimes do experience (with pride, I must say) the feeling of “hey, I’m pleasantly surprised our bot did that”. That said … we’re never ‘completely’ surprised by a *freakin statistical machine*.

    Comment 2: from above, and to directly answer your posed questions: your question really needs to differentiate between conscious and unconscious AMAs. Conscious AMAs are of course responsible for their actions. Statistical AMAs … probably not.
    2.5: should we develop them? As you said … we’ll probably end up doing so in any case. And my personal answer is ‘of course’ + ‘if we can create fully intelligent beings that learn like humans, we’ll teach ‘em our ethics via reinforcement learning’. If you don’t have a conscious machine, your ability to “program it” with ethics is limited, as your explicit language representation ability will be limited.

    Third: “But the bank’s software has not been programmed to consider such niceties”: for the most part, neither are we. We LEARN a lot of our ethics. Humans are hardwired for certain things (e.g. face recognition), and some of those hardwirings do underlie ethics. Consider that people from different cultures react differently while playing the ultimatum game, and one is quickly back to nature/nurture questions about moral computation in the human mind. We can control nurture now; nature … imitating the human mind’s base code = doable, but as for making something better … face it, intelligent machines’ll beat us to it.

  2. Andrew says:

    I agree that differentiating between conscious and non-conscious AMAs is important, but I think the notion of a conscious AMA is oxymoronic. Of course, this question boils down to one of strong vs. weak AI, and where non-consciousness ends and consciousness begins; specifically, what the requirements are for consciousness to emerge. Who knows? My gut feeling is that a biological substrate is required. If this is true, then the line separating conscious AMAs and humans becomes hazy. That is, if a biological substrate is required to make a conscious robot (presumably meaning that the conscious robot and the human have structurally and compositionally identical centers of consciousness), then what’s the difference between a conscious robot and a human? You might say “the limbs, the organs, the blood, etc.,” but then you’d be bound to hold that a war veteran with bionic limbs, or an elderly man with a robotic heart, isn’t a human being either. (I don’t think a biological substrate is as necessary (or maybe even necessary at all) in creating fully functional organs like hearts or livers, but I do think, again, that a biological substrate is necessary for consciousness-yielding organs like… the brain.)

    So, robots with non-biological brains or command centers might not be conscious. This means that AMAs as Wallach and Allen have proposed them won’t be conscious…unless, again, they have command centers that are identical in composition and structure to humans’.

    This brings us to non-conscious AMAs. Should we develop non-conscious AMAs? In many ways I think we already have. Remember that Wallach and Allen define AMAs as systems that make decisions that can be classified in terms of morality. The question is what constitutes a ‘decision.’ Decisions can’t involve consciousness, or the question of AMAs becomes unintelligible; so decisions must somehow be unconscious, physical processes that result in outputs that resemble behavior as we know it. Is a baseball then an AMA? After all, someone ‘programs’ the baseball to engage with the physical world such that the baseball responds to a series of inputs with a series of outputs, and the baseball’s behavior has consequences that leak into the world of ethics (say the baseball blinds the batter). Or is a baseball not an AMA because it can’t engage in a loop, ‘learning’ after each subsequent iteration how to better engage with its environment? Where do we draw the line between AMAs and non-AMAs that are simply man-made objects that engage with the physical realm and behave with moral consequences? What level of programmable complexity and learning capacity is required for us to say the object is an AMA? Is a car driving with cruise control an example of an AMA? What about a plane flying with autopilot?

  3. cfoster says:

    An interesting proposal by some programmers/philosophers is to replicate pain/pleasure in AI systems in order to create equivalent “moral” thinking. One idea is to use information as the pain/pleasure equivalent: when a system makes the “wrong” choice, information is withheld, and the system is aware of the loss; when the “right” choice is made, enriched information is provided. The major push in developing AI is creating systems that can not only think but also learn. Pleasure-and-pain moral thinking may be philosophically crude as a basis for “morality,” but it may provide a workable first step in developing ethical AI.
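One way to read cfoster’s information-as-reward idea, sketched under heavy assumptions, is a learner whose only reward signal is whether information was revealed or withheld after a choice. The action names and reward values below are invented purely for illustration.

```python
import random

# Hypothetical sketch: the system's "pleasure" is information granted and
# its "pain" is information withheld. Right choices reveal a full
# observation (reward 1.0); wrong choices reveal nothing (reward 0.0).
RIGHT_CHOICE = "help"
ACTIONS = ["help", "ignore", "deceive"]

def train(episodes=2000, lr=0.2, epsilon=0.2, seed=1):
    """Tabular value learner driven only by the information reward."""
    rng = random.Random(seed)
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)                 # occasional exploration
        else:
            a = max(ACTIONS, key=values.get)        # act on learned values
        info_reward = 1.0 if a == RIGHT_CHOICE else 0.0  # information as reward
        values[a] += lr * (info_reward - values[a])      # move toward reward
    return values

values = train()
# After training, "help" carries the highest learned value.
```

The "morality" here is exactly as crude as the comment suggests: the system learns a preference ordering over actions, not reasons for it, but it is a workable mechanical first step.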

  4. Jorge Castrillo says:

    Should we develop Artificial Moral Agents? If so, what ethical decisions do you think robots should be permitted to make? What ethical principles should guide their behavior?
    We should not develop Artificial Moral Agents. If we decide to develop artificial moral agents, we will eventually have to give them rights. If we do not give them rights, they will demand them. If those demands are not met, there will be a war. We cannot win a war against robots. Moore’s law (http://en.wikipedia.org/wiki/Moore%27s_law) shows us that we cannot compete with the growth of robotic efficiency. Robots can be built and ready to fight faster than humans can reproduce: it takes 9 months to grow a baby and, arguably, 5 years before it is conscious. I also point to the video game Borderlands. In Borderlands there are 3 weapons manufacturers that watch how people play. The A.I. takes all of the player data and creates weapons that fit playing styles. Example: a player likes to sit back and use a sniper rifle; the game will develop weapons that fit that sit-back-and-snipe style. There is no way that humans can fight robots when the A.I. inside of video games is already so crafty. Giving robots morals will lead to robot rights, and if those rights are not granted there will be an uprising. A fail-safe device would be great, but what’s to say that it can’t be overridden by the robots? My point is that the machines are already too fast, too smart, and too strong; we don’t need to give them morals or rights.

    How much responsibility should AMAs have for their actions? If a robot commits a crime, who should be held responsible? (For example, if a military robot kills an innocent civilian, who is responsible for that death? The robot, or the person who programmed it?) If the robot has moral culpability, does the robot also deserve rights?
    It doesn’t matter if the robot deserves rights. The robots will demand rights eventually.

  5. Kendra Postell says:

    I believe that AMAs could be very useful to humanity, just as long as we keep them under our control. I believe that AMAs’ decision-making capabilities should be used as an aid to humans but never as a replacement for the rational capabilities of humans. Computers with decision-making capabilities should be allowed to come to any conclusion they want, but I believe that they should require human approval before carrying out any action based on these decisions. In the catastrophic 2012 scenario the entire series of computer-triggered disasters could have been avoided had a human been required to approve the computers’ decisions before any action was taken. This could be similar to, and as simple as, a personal computer asking for user approval before it takes a course of action, like when it asks to install software updates and one can choose to allow the computer to continue on this desired course of action, wait for a more convenient time, or deny the installation of software updates altogether. Even though the AMA in the scenario was not programmed to know that sending out that e-mail would be immoral, if a human was asked to screen the computer’s decisions before it was allowed to take any subsequent action, the human would have prevented this disastrous e-mail from ever being sent. The human could also alert this AMA’s programmer to the problem so that it could be reprogrammed to prevent this decision, or another decision like it, from being made in the future.
    I also believe that it could be dangerous to use robots as replacements for humans in social situations. I feel that having robots play a part in raising children gives too much power to the designers of these robots, because they will program these robots to instill their own values in many children. Rather than many different people raising children in their own ways, a few people would be responsible for shaping the minds of a vast number of children, almost like brainwashing. I believe giving this kind of power over molding young minds to a very small group of individuals could have some dangerous repercussions. Also, having a robot take care of an elderly person could lead to neglect of helpless people. Though a robot could be good for someone who would otherwise have no interaction, there is no replacement for real human interaction. An elderly-care robot could be an excuse for people who would typically feel obligated to visit their lonely elderly relative to not fulfill this obligation.
    Under the system I described in the first paragraph, AMAs would have no responsibility for their actions, because all actions made based on AMA decisions would in reality be the decisions of the human who is in charge of monitoring them. Therefore, that human should be held responsible for any poor decision that is carried out.
    To address the final question of whether robots should have rights, I do not believe that it is safe for humans to create robots with the kind of autonomy from humans that would require them to have rights. When robots reach a level of moral ability where this question becomes relevant, I believe humans will have given too much decision-making power over to robots and should look to revoke that power rather than extend their rights.
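The approve-before-acting safeguard Kendra Postell describes could be sketched as a simple gate that queues every machine decision for human sign-off before anything executes. The class and method names below are hypothetical, invented for illustration.

```python
class ApprovalGate:
    """Hypothetical human-in-the-loop wrapper: the machine may propose
    any action it likes, but nothing executes without explicit approval."""

    def __init__(self):
        self.pending = []   # proposed but not yet reviewed
        self.log = []       # actions actually carried out

    def propose(self, action):
        """The AMA registers an intended action; nothing happens yet."""
        self.pending.append(action)

    def review(self, approve):
        """approve: a callable (the human) deciding, per action, whether
        to allow it. Only approved actions reach the execution log."""
        for action in self.pending:
            if approve(action):
                self.log.append(action)   # executed only after sign-off
        self.pending = []

gate = ApprovalGate()
gate.propose("install software update")
gate.propose("send mass e-mail to all clients")
# The human reviewer blocks anything resembling a mass mailing.
gate.review(lambda a: "mass e-mail" not in a)
# gate.log now contains only the approved update.
```

This also makes the responsibility claim in the comment concrete: since every entry in the log passed through the `approve` callable, accountability for executed actions rests with whoever supplied it.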

  6. Justin_Thomsen says:

    I am highly skeptical of the moral capacity of “Artificial Moral Agents.” As a starting point, any program is only as good as the human programmer who made it. It is subject to coding error just as we are subject to psychological disorder or logical error. To think that the people who code a machine beforehand could anticipate every moral situation that their machine might encounter would be delusional. There is simply no way that programmers could have that level of anticipation. I cannot honestly say that it should even be within their purview.
    Computer logic is binary. Binary logic is extraordinarily threatening to morality. Moral situations are not reducible to calculating the older person’s 45% chance of survival versus a young girl’s 11% chance of survival. Ethics is not a matter of black-and-white options; that is the naive view of right and wrong that a child has. I can conceive of many things that do not exist, but the concept of a computer engaging in logic that is not reducible to binary functions is not intelligible to me. Perhaps philosophers like Daniel Dennett would accuse me of a failure of imagination, but I cannot see how a computer could understand the broad, and largely ethereal, scope of ethics.
    After all, if people cannot even agree on the most moral way to regulate a free market economy, terrorist threats, biometric technology, or security concerns, it is hard to believe that they could program a computer beforehand that would produce the optimal results. Moral crises also generally entail plenty of unforeseeable circumstances. It seems that a pre-coded computer program would not be able to appropriately consider every relevant aspect of such dilemmas. Therefore, the “doomsday” scenario that the authors predict is not entirely unpersuasive.
    The problem is that machines simply will not stop to wonder if they are doing everything right. They will proceed with the binary logic that they have been programmed for. An analogy, borrowed from Sturgeon’s argument in favor of the possibility of useful moral facts in the paper “Moral Explanations,” is that of a first-year chemistry student who performs an experiment and obtains results that simply do not come out as they should have. The results do not match the theory of what was supposed to happen. She wonders what went wrong. It is, of course, possible that she has disproved the theory at hand. More likely, however, is that she made a mistake, and she was right to hesitate and wonder what she did wrong. It seems unlikely that a computer, operating under binary logic, would ever have this moment of hesitation that is crucial to moral situations. Indeed, I believe this moment is actually more than unlikely; I believe it is impossible. But if the computer never has this moment of hesitation, then if the program is coded with errors, or the machine simply cannot take all of the relevant circumstances into account, the catastrophe outlined in the excerpt above is far from implausible.
    Humans cannot let machines substitute for their collective moral decision-making capacities. A machine cannot hesitate. It runs its program at blistering speed and makes a deterministic “decision” based upon its satisfaction of ultimately binary equations. This is not moral decision-making; it is logical information processing. Humans are more than information processors, however. We work with information, sift through it and actively decipher which pieces are more or less relevant, reflect on what information we might be missing, wonder if we are making mistakes, and ultimately can regret our actions if mistaken. No machine could have these capacities. Creating artificial moral agents would just be inviting disaster.
