In my small contribution to the Institute’s recent LinkedIn Live event, “LDE: Leadership Gift – Wisdom of the Year,” I referred to Rushworth M. Kidder’s work on ethical dilemmas. Rush was a brilliant leader. I had the privilege of serving as Membership Data Coordinator/Network Administrator, responsible for information technology and the membership database at the Institute for Global Ethics (IGE), then based in Camden, Maine, which Rush founded and led as president.
Every employee of IGE went through multi-day training in Rush’s ethical thinking, most comprehensively set out in his book How Good People Make Tough Choices: Resolving the Dilemmas of Ethical Living (revised edition published in 2009). We, along with the business people, teachers, first responders, and others who took the courses IGE offered, learned to think about ethics differently. A cornerstone of Rush’s thinking was that most ethical choices we face are not Right versus Wrong; they are Right versus Right dilemmas. He categorized these dilemmas into four paradigms:
- Truth versus loyalty
- Individual versus community
- Short-term versus long-term
- Justice versus mercy
There are excellent examples available online in this excerpt, including a librarian whose phone conversation is overheard by a policeman who then seeks the identity of the caller, and a new manager who inadvertently discovers that his predecessor had been taking questionable payments for off-hours work.
According to Kidder, these dilemmas are not unresolvable. In fact, he laid out three principles for considering them:
- Ends Based: Known to philosophers as “utilitarianism,” this principle is best known by the maxim “Do whatever produces the greatest good for the greatest number.”
- Rules Based: This principle is best known as the “categorical imperative.” Rules exist for a purpose: they promote order and justice and should be followed. Follow the principle that you want others to follow. “Stick to your principles and let the chips fall where they may.”
- Care Based: Putting love for others first. It is most associated with the Golden Rule: “Do unto others as you would have them do unto you.”
When you are faced with a Right versus Right dilemma, Rush said, there is a decision-making process to help resolve it. Kidder described nine steps:
- Recognize there is a moral issue
- Determine the actor (who does the problem belong to?)
- Gather the relevant facts
- Test for right vs. wrong issues
- Test for right vs. right paradigms
- Apply the resolution principles
- Investigate the “trilemma” option
- Make the decision
- Revisit and reflect on the decision
Step 7 refers to the trilemma: a third option falling somewhere between the two “horns” of a dilemma. This way of thinking frequently offers a resolution.
As I mentioned in my “LDE Leadership Gift” video, I think that some of what Rush uncovered can be applied to our thinking about Artificial Intelligence and its uses. Are there Right versus Right dilemmas emerging in the age of Generative AI? I believe there are.
- Short-term versus long-term: We can replace many workers with AI. In the short term, this saves the company huge amounts of money and pulls us ahead of many competitors. In the long term, it reduces the number of employed people who will be able to buy our products and services as well as those of other companies.
- Justice versus mercy: AI does not have emotions. It can learn empathetic language but cannot feel empathy. Applying AI decision-making tools may be able to expedite decisions but cannot provide a merciful alternative based on human emotions, eliminating the care-based thinking option for resolution. While this may help reduce favoritism and other leadership missteps, it could also produce some distressing outcomes, such as penalizing a late-paying customer who has just suffered a serious medical emergency.
- The trilemma option: At this time, AI (generative or otherwise) has not shown itself able to suggest a creative third alternative, taking the trilemma option away.
Some might argue that this is a good thing, making decisions simpler and mitigating consequences: “Don’t blame me; AI did it.” Should we be abdicating our own responsibility?
There are many ethical issues embedded in the use of AI. Training biases, intellectual property rights, lack of transparency, and other factors should be weighed before we put this emerging technology to work in fields where it can have serious human impacts. I’m not saying that we shouldn’t use AI. In fact, I’m an advocate for its uses in many areas. Where it bumps up against human ethical choices, however, I urge us all to use extreme caution. There are Right versus Right dilemmas everywhere.
Tags: Artificial Intelligence, Business Transformation, Creativity, Employee Experience, Personal Development