Psychologists have warned that AI technology’s lack of human experience and real understanding could limit the public’s acceptance of its ethical judgments.
Artificial Moral Advisors (AMAs) are AI-based systems designed to help humans make ethical decisions by drawing on established ethical theories, principles, or guidelines.
Although prototypes are under development, AMAs are not yet in use; their promise is to provide consistently unbiased recommendations and rational ethical advice.
As AI-powered machines grow more technologically capable and move into the ethical domain, we need to understand how people think about such artificial moral advisors, according to research led by the University of Kent's School of Psychology.
Experts found that while artificial intelligence may be able to offer impartial and rational advice, people still do not fully trust it on ethical matters.