Artificial intelligence, on a broad definition, occurs whenever a machine learns, since learning requires intelligence. Typically the machine is given a goal and learns new and better means of achieving that goal.
The challenge lies in choosing which moral principles to program into the AI, because different ethical systems, such as utilitarianism and Kantian ethics, conflict with one another. There is no consensus on which moral system should govern the AI's decisions.
The hybrid approach combines top-down principles with bottom-up data collection. It involves asking people which moral features matter in a situation, refining those features, and then constructing hypothetical conflicts between candidates so the AI can be trained to predict human moral judgments.
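A minimal sketch of that training step, using hypothetical feature names (age, dependents, expected benefit) and synthetic "human judgment" labels purely for illustration; the podcast does not specify a model or features:

```python
# Sketch of the hybrid approach: learn to predict which of two candidates
# people judge should receive a scarce resource. All details here are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def random_candidate():
    # Morally relevant features elicited from people (hypothetical choices):
    # age, number of dependents, expected life-years gained.
    return np.array([
        rng.integers(18, 80),
        rng.integers(0, 4),
        rng.integers(1, 30),
    ], dtype=float)

def make_conflict():
    # A "conflict" pairs two candidates; the label records which one a
    # (here simulated) human judge chose.
    a, b = random_candidate(), random_candidate()
    score = lambda c: -0.02 * c[0] + 0.5 * c[1] + 0.1 * c[2]  # stand-in judgment rule
    label = int(score(a) > score(b))  # 1 means candidate A is chosen
    return np.concatenate([a, b]), label

X, y = zip(*(make_conflict() for _ in range(2000)))
model = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

# For a new conflict, the model predicts which candidate people would choose.
new_pair, _ = make_conflict()
print("Predicted choice: candidate", "A" if model.predict([new_pair])[0] else "B")
```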
The AI is given the features people say matter, such as age, dependents, or criminal record, and on that basis predicts which patient should receive a kidney. It can then confirm or challenge a doctor's decision, serving as an aid in the decision-making process rather than a replacement for it.
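A hedged sketch of that decision-support use, assuming a trained model is wrapped in a `predicted_choice` function (an assumed name; the placeholder logic inside stands in for the real model):

```python
# Hypothetical decision-support check: compare the model's predicted choice
# with the doctor's choice and flag any disagreement for a second look.
from typing import Sequence

def predicted_choice(pair: Sequence[float]) -> str:
    """Stand-in for a trained model's prediction over a candidate pair."""
    # Placeholder rule for the sketch: prefer the candidate with more dependents.
    a_dependents, b_dependents = pair[1], pair[4]
    return "A" if a_dependents >= b_dependents else "B"

def review_decision(pair: Sequence[float], doctors_choice: str) -> str:
    """Confirm or challenge the doctor's choice against the model's prediction."""
    model_choice = predicted_choice(pair)
    if model_choice == doctors_choice:
        return f"Model agrees: candidate {doctors_choice}."
    return (f"Model disagrees: it predicts most people would choose "
            f"candidate {model_choice}; worth reviewing.")

# Example: features are [age, dependents, life-years] for candidate A, then B.
print(review_decision([45, 2, 12, 60, 0, 8], doctors_choice="B"))
```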
AI can inherit human biases from data, and its decision-making process is often a 'black box,' making it difficult to understand the reasoning behind its conclusions. Additionally, local values and expertise must be considered to ensure fairness.
While AI is already used in some kidney transplant centers for medical efficiency, integrating moral considerations is still in development. Walter Sinnott-Armstrong estimates it could take about 10 years for such systems to be refined and widely adopted.
The same approach could be applied to dementia care, to hiring decisions (to ensure fairness with respect to gender and race), and even to military operations. These applications aim to bring moral considerations into a range of decision-making processes.
Can AI help us make difficult moral decisions? Walter Sinnott-Armstrong explores this idea in conversation with David Edmonds in this episode of the Philosophy Bites podcast.