There are a lot of people claiming to be AI ethics experts, but the field is only just emerging. In the mainstream, the topic has only been around for a short while. So how can someone be an expert in it while there are still many unanswered research questions in the area?
- How can one focus on ethics without also focusing on morals, given that morals are the basis of ethics?
- Does ethics in AI intrinsically have a universal equivalence? i.e. something codified as ethical in the West may not be sufficiently compatible with the East. Which ethical attributes hold universally (for all cultures) and which only existentially (for some cultures), if we treat them as quantified statements? (See the quantification sketch after this list.)
- How can one control the abuse of AI ethics as a falsely manipulated justification, e.g. someone using AI ethics to drive political or cultural change/influence in a society or organization?
- How do you make sure that the people in control of ethics, who by their own account call themselves ethics experts, are in fact ethical? Is an AI only as ethical as the human who programmed it? Can the codification of AI ethics be programmed to mutate with the environment and the changing norms of society, thereby allowing the AI agent to question ethical and moral dilemmas for or against humans?
- If one builds a moral reasoner in Horn clauses (a minimal sketch follows this list), can such reasoning then mutate genetically for ethics, on a case-by-case basis, to condition an AI agent? Can AI agents be influenced by other AI agents, as in a multi-agent distributed system, using argumentation via game theory and reinforcement policies to reach mediation and consensus?
- Can ethics and morals be defined in a semantically equivalent language?
- If one defines Horn clauses for moral reasoning and a set of ethical rules, can such moral/ethical conundrums be further modelled as Markov decision processes, approximated by a neural network, with good enough coverage of the global state space to be reasoned over further with transfer learning? (One possible MDP framing is sketched after this list.)
- How do you resolve human bias in a so-called AI ethics expert?
- Who defines what is ethical and moral for AI? Is there an agreed gold standard of measure?
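On the universality question above, one illustrative way to state the distinction is in first-order logic. The predicates `Culture` and `Ethical`, and the norm `n`, are hypothetical placeholders for the sake of the sketch, not an agreed formalization:

```latex
% Universal claim: norm n is judged ethical in every culture c
\forall c \,\big(\mathrm{Culture}(c) \rightarrow \mathrm{Ethical}(n, c)\big)

% Existential claim: norm n is judged ethical in at least one culture c
\exists c \,\big(\mathrm{Culture}(c) \land \mathrm{Ethical}(n, c)\big)
```

The open question is which (if any) norms satisfy the universal form rather than only the existential one.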
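As a minimal sketch of what a Horn-clause moral reasoner could look like, the snippet below does propositional forward chaining; the particular atoms and rules are invented purely for illustration, and choosing them is exactly the open question:

```python
# Minimal forward-chaining reasoner over propositional Horn clauses.
# The rule base below is illustrative only, not an agreed moral theory.

def forward_chain(facts, rules):
    """Derive all consequences of `facts` under Horn `rules`.

    facts: set of atoms (strings)
    rules: list of (body, head) pairs, meaning body1 & body2 & ... -> head
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

rules = [
    (["causes_harm"], "morally_prohibited"),
    (["morally_prohibited", "no_overriding_duty"], "agent_must_refrain"),
]
facts = {"causes_harm", "no_overriding_duty"}

print(sorted(forward_chain(facts, rules)))
# ['agent_must_refrain', 'causes_harm', 'morally_prohibited', 'no_overriding_duty']
```

A "genetic mutation" of such a reasoner would amount to adding, dropping, or editing rule bodies over time; whether that is desirable is part of the question, not something the sketch settles.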
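And as one possible (by no means settled) framing of the MDP question, the Horn-clause rules could act as a constraint on admissible actions, with the usual Bellman optimality equation taken over only the permitted set; here $\mathcal{A}_{\mathrm{ethical}}(s)$ is an assumed, illustrative construct meaning the actions the rule base does not prohibit in state $s$:

```latex
V^{*}(s) \;=\; \max_{a \,\in\, \mathcal{A}_{\mathrm{ethical}}(s)}
\Big[\, R(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \,\Big]
```

A neural network would then approximate $V^{*}$ (or the corresponding policy) over states too numerous to enumerate, which is where the transfer-learning question enters.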
In general, a moral person wants to do the right thing, driven by a moral impulse toward the best intentions. Morals define our principles, while ethics tend to be more practical: a set of codified rules that define our actions and behaviors. Although the two concepts are similar, they are not interchangeable, nor are they aligned in every case. Ethics are not always moral, and a moral action can also be unethical.
AI Ethics Lab