I2 Pub Interactive Intelligence

Ethical Optimization

Ethics, put simply, is the set of principles that determines which actions taken by individuals or groups are “right.” Ethics can exist in many forms, be it in a rulebook, philosophical ideas, or programming. One practical, and quite recent, intersection between computer science and ethics is Artificial Intelligence ethics. With respect to AI, “ethics… specifies the moral obligations and duties of an AI and its creators” (Moor). A common problem in pop culture is whether a self-driving car should prioritize the life of the driver or of a pedestrian. Expanding on the problem makes it clear that there is no clean answer: should the car prioritize the life of a child or of an elderly man, one of whom must die to save the driver? A problem such as this falls into the bucket of “duties of an AI to behave ethically.” With regard to the creators’ duties, programmers must ensure that the AI they create can be trusted to make the correct decision when placed in ethical dilemmas. While AI is not yet intelligent enough to make decisions like a human, it may one day be, and it is important to keep ethics in mind during development. Another aspect of AI ethics is how AI fits into society. Automation and AI are creating workplace displacement, with manual laborers being forced out of positions when an AI can do the work better, faster, or cheaper. While there is no “perfect” solution to the AI dilemma, a large portion of these problems can be solved by constructing implicitly ethical agents that are unable to create situations that violate the Three Laws of Robotics. AIs should not be built as human replacements, but rather as human assistants. This allows agents to be integrated into society without severely disrupting existing workflows, and it leaves room for complex, edge-case, non-utilitarian decision making when necessary, since those decisions remain with humans.

Some well-established rules for governing AI are Isaac Asimov’s Three Laws of Robotics. These rules can be considered quite conservative and “harsh” for AI and its creators, since they leave very little tolerance for deviation. They are as follows: first, a robot may not, through action or inaction, directly or indirectly allow a human being to come to harm. Second, a robot must obey human orders except when those orders would violate the first law. Finally, a robot must protect its own existence unless doing so would violate the first or second law (Asimov). The most important takeaway from the Three Laws of Robotics is their ordering: the first law overrides all others. In no way should an AI bring harm to a human through action or inaction. Immediately a problem presents itself: what if an AI is placed in a position where it acts as the executioner? Take, for example, facial recognition used as evidence in the justice system. A simple misidentification due to errors in the network could result in an innocent person being locked away. If Asimov’s three laws are to be followed, placing AI in these types of situations is dangerous; a violation is only one step away.

To solve this problem, the idea of implicit agents can be deployed. An implicitly ethical agent is one whose action space is unchangeably constrained in a way that completely avoids the possibility of an unethical decision (Siau & Wang). Take, for example, an ATM. Assuming proper function and no tampering by unethical humans, an ATM has a set algorithm that performs the correct, ethical task every time it is prompted. It cannot be swayed or coerced like a human can, and it can only dispense sums of money that are valid outputs for the inputs it receives (i.e., the system is deterministically ethical). It also cannot forcibly take money or refuse a payment once all checks for a withdrawal have passed. A more complex, standard black-box example of an implicit agent deployment would be in oncology. Image classifiers are often used to determine whether a patient has a tumor. While these agents are well trained and perform excellently, it is ultimately the doctor who delivers the diagnosis and prognosis to the patient. The agent’s “action space” is not large enough to cause issues for the patient or harm them; the final decision (and therefore the blame) is left to the doctor. The implicit ethicality of both of these agents is what allows them to follow the Three Laws of Robotics: they either run on algorithms that force them to make the right choice, or the choice is left up to humans. The latter example is more scalable, as the black-box AI can become as complex as needed while staying ethical, so long as the agent keeps its “hands off the control panel,” so to speak.
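
To make the notion of a constrained action space concrete, the following is a minimal Python sketch, not drawn from the source: the classifier stub `predict_tumor`, the `Recommendation` type, and the agent class are all hypothetical names used only for illustration. The point is that the wrapper’s only possible output is an advisory object, so the unethical branches of the action space simply do not exist.

```python
from dataclasses import dataclass


def predict_tumor(scan_pixels: list) -> float:
    """Hypothetical stand-in for a black-box image classifier.

    A real deployment would call a trained model; here we fake a
    probability so the sketch runs on its own.
    """
    return 0.87 if sum(scan_pixels) > 10 else 0.05


@dataclass(frozen=True)
class Recommendation:
    """The only output the agent can produce: advice, never an action."""
    tumor_probability: float
    note: str = "Advisory only; diagnosis and treatment remain with the physician."


class AdvisoryOncologyAgent:
    """Implicitly ethical wrapper: its action space contains exactly one
    output type (a Recommendation), so however the underlying model behaves,
    the agent cannot order tests, prescribe treatment, or act on the patient."""

    def review_scan(self, scan_pixels: list) -> Recommendation:
        return Recommendation(tumor_probability=predict_tumor(scan_pixels))


if __name__ == "__main__":
    rec = AdvisoryOncologyAgent().review_scan([0.9, 4.2, 3.1, 2.5])
    # The human doctor, not the agent, decides what to do with this number.
    print(f"Suggested tumor probability: {rec.tumor_probability:.2f} ({rec.note})")
```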

It may seem harsh to handcuff agents to prevent them from making important decisions; it is, however, a necessary step to prevent human harm due to the banality of evil. As Payman Tajalli, Doctor of Philosophy, argues, as long as AIs are programmed with anthropocentric notions of ethics, they are destined to thoughtlessly commit evil. This is because, “according to Nietzsche the fact of the impossibility of grounding values on a solid basis needs to be acknowledged and understood before a solution to the problem of ethics could be explored” (Tajalli). In essence, the fact that humans have not created a universal theory of ethics is indication enough that it would be impossible for us to embed such a framework within an agent expected to behave ethically. Moreover, since we cannot embed a sophisticated ethical framework into agents, there is the possibility of humans coming to harm because an agent follows its protocols thoughtlessly. This is the idea Hannah Arendt calls the banality of evil: “Evil is committed when pre-defined routines and processes are followed as a matter of duty, without thinking. Evil is banal and sophistication in intelligence does not ensure its prevention” (Tajalli). The solution to this, which some may call hacky, is actually quite elegant: keep agents without a moral compass away from systems that would require one. This would prevent “little Eichmanns” from running rampant, following protocols without higher-level thought and violating Asimov’s laws left and right. While this does limit the use cases of AI, it is important to recognize that there is always a tradeoff between efficiency and risk. Is placing an AI in a position usually occupied by a human making moral choices (e.g., judges, police) worth the possible fallout? That is for the societies of tomorrow to decide.

The last important piece to consider is also the most relevant today. AI has been replacing manual workers’ jobs for quite a while now, and people are rightly concerned. While the causes of this displacement are to a large extent consequences of capitalistic enterprise, people are still feeling the effects. The problem bleeds into social issues and is worth discussing, but it is not the fault of AI itself. What is important to discuss is how we expand the usage of AI as it becomes more commonplace and societally integrated. Going forward, it would be best to bucket societally integrated AI into two categories: oracles and disposables. Oracles would be AI built for the sole purpose of advising; they would assist humans with their jobs, not replace them. Not only does this kind of AI prevent job displacement, it is also implicitly ethical. Disposables would be AI allowed to interact with the world, but only to do jobs that are considered too dangerous for humans (e.g., radioactive waste cleanup, deep-sea exploration).
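
As a rough illustration of how the oracle/disposable split could be enforced by construction, here is a hedged Python sketch; the class names, the `DISPOSABLE_TASKS` whitelist, and the example tasks are all hypothetical and not taken from any particular system.

```python
from abc import ABC, abstractmethod

# Illustrative whitelist of jobs judged too dangerous for humans;
# everything else lies outside a disposable agent's action space.
DISPOSABLE_TASKS = {"radioactive_waste_cleanup", "deep_sea_survey"}


class Oracle(ABC):
    """An oracle only advises; it exposes no method that acts on the world."""

    @abstractmethod
    def advise(self, question: str) -> str:
        ...


class Disposable(ABC):
    """A disposable may act, but only on pre-approved hazardous tasks."""

    def perform(self, task: str) -> str:
        if task not in DISPOSABLE_TASKS:
            raise PermissionError(f"'{task}' is outside the permitted action space.")
        return self._execute(task)

    @abstractmethod
    def _execute(self, task: str) -> str:
        ...


class LogisticsOracle(Oracle):
    def advise(self, question: str) -> str:
        # A real oracle would consult a model; canned advice keeps the sketch runnable.
        return f"Advice on '{question}': reroute the shipment through warehouse B."


class CleanupRobot(Disposable):
    def _execute(self, task: str) -> str:
        return f"Robot dispatched for {task}."


if __name__ == "__main__":
    print(LogisticsOracle().advise("Which route minimizes delay?"))
    print(CleanupRobot().perform("radioactive_waste_cleanup"))
    # CleanupRobot().perform("replace_warehouse_staff") would raise PermissionError.
```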

It is important that those educated in the field of Artificial Intelligence and Machine Ethics champion agents that place the least duress on humans as a species. One very promising way to do this is to create implicitly ethical agents that are integrated into society only as oracles or as disposables.


References

Asimov, I. (1950). I, Robot. Garden City, N.Y.: Doubleday.

Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21. https://doi.org/10.1109/MIS.2006.80

Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI. Journal of Database Management, 31(2), 74–87. https://doi.org/10.4018/JDM.2020040105

Tajalli, P. (2021). AI ethics and the banality of evil. Ethics and Information Technology, 23(3), 447–454. https://doi.org/10.1007/s10676-021-09587-