
Understanding the Role of Principles, Standards, and Regulations in Creating Digital Ethics

Nicolas Economou, H5

The Capgemini Research Institute spoke with Nicolas to understand more about the role of principles, standards, and regulations in the ethical and trustworthy design and deployment of AI.


DEFINING ETHICAL AI

How do you define ethics in AI?

Ethics is a long-standing academic discipline that provides ways to think about our behaviors. It offers pathways to rational debate, to critical evaluations of alternatives, and to decisions that have a moral foundation. Digital Ethics is the application of such methods to the challenges that AI presents. It is also important to recognize what ethics is not: it is not a universal law that just delivers the perfect answer, nor is it a simple “check the box” compliance exercise.

Another important consideration is that there are different types of ethics. Consider professional ethics: lawyers, for example, abide by certain rules of professional conduct. That is laudable and important, but the ethics of a corporation or a society may differ in certain ways from the ethics of the legal profession. Legal ethics may be consistent with the erosion of our privacy, as long as we have legally contracted our rights away. But, at the scale of society, the erosion of privacy takes on different ethical dimensions.

We should also remember that algorithmic systems are amoral. By that, I don’t mean that they are immoral. I mean that they do not have a moral compass. Yet, they can make decisions that have pervasive moral consequences. This places a particular responsibility on corporations not just to comply with their legal obligations, but also to develop and implement Digital Ethics, i.e. an institutional perspective on how to assess and address moral problems related to data, algorithms, and the practices that surround those.

Consider, for example, personal data: there may be uses of such data that are legal but carry business, brand, or societal consequences that could cause a corporation to avoid those lawful yet ethically challenging uses.

DEFINING TRANSPARENCY IN AI AND THE CURRENT STATE OF PLAY

What role do you think transparency plays in AI?

The IEEE Global Initiative has done excellent work in promulgating principles for the ethical design and operation of AI. One of those key principles is transparency. Transparency is not only a topic of predominant focus in the international AI governance dialogue; it also has an intuitive appeal: “if I can see under the covers, I will be able to understand the system.” But I think that this predominant focus and this intuitive appeal can hide some dangers.

One of those dangers is that transparency can serve as an inadequate stand-in for what you really want to know. For example, in drug manufacturing, having transparency into the manufacturing process of a drug will not tell you if the drug is effective at treating an ailment – clinical trials do. Similarly, in car manufacturing, you will not know if the car is safe until you crash-test it. In both these examples, transparency – or transparency alone – cannot give you the answer you really want.

While transparency is very important, I worry that treating it as a panacea, as is sometimes the tendency, entails certain challenges.

Could you elaborate on concerns you have with an excessive focus on transparency?

Beyond the issue I just raised, I think there are two other concerns:

One has to do with the cost of achieving transparency at the scale of society. For example, could courts handle extensive reviews of each socio-technical system and algorithm in matters before them, if transparency were the sole instrument available to them? Surely, in some cases such examinations will be indispensable. But it is important to pause and think whether and when, to use my earlier analogy, a simple crash-test may offer a better answer than a complete review of a manufacturing process.

I also worry that an excessive focus on transparency might confine the discussion to the elites able to understand algorithms, thus deepening the digital divide. What we need are broadly understandable and accessible gauges of the fitness for purpose of AI systems, akin – if you will allow me to rely on the same analogy anew – to car crash-test ratings. Such gauges can empower citizens. Achieving those gauges requires complementary thinking to that which underpins transparency.

How do you view the current state of these ethics and transparency issues in organizations?

Companies are struggling with this, because it is such a complex challenge. In addition, there are at present no consensus standards that companies can adopt and certify adherence to. I think that, over time, corporate Digital Ethics will involve at least three elements: first, a Digital Ethics Charter published by companies; second, a set of standards that companies will be able to affirm adherence to (the IEEE is developing such standards); and third, auditing mechanisms. A good analogy in this last respect is financial audits: we trust companies to be able to produce sound financial statements, but it is auditors who attest to the extent to which such statements meet the representations companies make.

REGULATIONS AND STANDARDS IN AI

Could you tell us more about the IEEE’s principles of effectiveness, competence, accountability, and transparency, and how these relate to trustworthiness?

In my personal view, these four principles are individually necessary and collectively sufficient for determining the extent to which AI-enabled processes should be trusted. They are also globally applicable but culturally flexible, as they are all evidence-based, rather than normative. They can help provide the factual basis that corporations, compliance officers, risk officers, and general counsels’ offices need to determine whether a certain use of AI can be trusted to meet their compliance obligations and Digital Ethics.

Effectiveness

An essential component of trust in a technology is trust that it succeeds in meeting the purpose for which it is intended. What empirical evidence exists in this regard? For instance, consider privacy, which is such a hot topic these days. AI is increasingly used to identify personal information in vast corporate data repositories, in order to help comply with regimes such as the GDPR and, soon, California’s CCPA. If you are a procurement or compliance department, what evidence do you have that the AI system you are about to purchase is actually effective at finding the personal information you are supposed to protect? Saying to a regulator: “I trusted a marketing claim” won’t really cut it. Or in HR AI applications: what evidence do you have that the application is effective at avoiding bias?
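To make “empirical evidence of effectiveness” concrete, here is a minimal, hypothetical sketch in Python (not drawn from the interview): it scores a stand-in personal-information detector against a small hand-labelled sample and reports precision and recall – the kind of measurement a procurement or compliance team could request in place of a marketing claim. The detect_pii function and the labelled data are illustrative assumptions only.

from typing import Set, Tuple

def detect_pii(document: str) -> Set[str]:
    # Hypothetical stand-in for a vendor's detector: naively flags tokens containing "@".
    return {token for token in document.split() if "@" in token}

def precision_recall(predicted: Set[str], labelled: Set[str]) -> Tuple[float, float]:
    # Compare the detector's output with ground-truth labels from human review.
    true_positives = len(predicted & labelled)
    precision = true_positives / len(predicted) if predicted else 1.0
    recall = true_positives / len(labelled) if labelled else 1.0
    return precision, recall

if __name__ == "__main__":
    document = "Contact jane.doe@example.com or call 555-0100 for details"
    labelled = {"jane.doe@example.com", "555-0100"}  # ground truth from human review
    precision, recall = precision_recall(detect_pii(document), labelled)
    print(f"precision={precision:.2f} recall={recall:.2f}")
    # Low recall would mean personal data the organization must protect is being
    # missed – exactly the evidence gap the effectiveness principle points to.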

Competence

A second essential component of informed trust in a technological system, especially one that may affect us in profound ways, is confidence in the competence of the operator(s) of the technology. We trust surgeons or pilots with our lives because we know that they have met rigorous accreditation standards before being allowed to step into the operating room or cockpit. No such standards of operator competence currently exist with respect to AI. When it comes to legal and compliance settings, this is not tenable. This area is another topic of focus for our work at the IEEE Global Initiative.

Accountability

A third essential component of informed trust in a technological system is confidence that it is possible, if the need arises, to apportion responsibility among the human agents engaged, from design to deployment and operation. A model of AI creation and use that cannot hold people accountable will also lack important forms of deterrence against poorly thought-out design, casual adoption, and inappropriate use of AI.

Transparency

A final key element of informed trust is transparency. Without appropriate transparency, there is no basis for trusting that a given decision or outcome of the system (or its operators) can be explained, replicated, or, if necessary, corrected. I believe that an effective implementation of the transparency principle should ensure that the appropriate information is disclosed to the appropriate stakeholders to meet appropriate information needs. When it comes to legal and compliance functions in particular, my view is that, if duly operationalized, these four principles allow stakeholders to determine the extent to which they can trust AI to meet certain objectives, or to comply with their institutional ethics.

Do you expect any regulation regarding ethical use of AI and how do you see that regulation being enforced?

As in so many other technological domains, a combination of industry-driven endeavors and regulation will prevail. The balance between these is likely to depend on the societal context. The EU Commission has an AI regulatory agenda, as does the Council of Europe, which has also announced a certification program for AI applications in the law. At the same time, expert industry bodies, such as the IEEE, are developing AI standards. To me, what is essential is that the mechanisms be evidence-based, in particular with respect to the principles we just discussed, absent which trust cannot be achieved.

Once we have these standards, how do we make sure that organizations adhere to them? Would there be incentives for organizations to follow ethical practices in AI? If so, what kind of incentives would those be?

A combination of regulation and market-based incentives will prevail. Consider critical societal functions, such as transportation or medicine: adherence to standards is often imposed by regulations. Regulation will also be needed, in my view, in areas where the imbalance of power between ordinary citizens and corporations is too vast. In the US, for example, when we “click-to-accept” privacy agreements to access an online service, the consent we offer is not the mark of an empowered consumer, just evidence of our loss of agency. But sound standards – what I like to call “the currency of trust” – such as those being developed by IEEE, can accelerate adherence to best practices because the market will naturally gravitate towards products and services that meet trusted standards.

ACTIONABLE STEPS FOR ETHICAL AI

What actionable steps can organizations take today to build and use ethical AI?

The first step is to define a process. What does it mean to implement Digital Ethics? You need to define what you stand for as an organization – your brand values – and then create a methodology to assess the extent to which your use of AI is currently meeting (or failing to meet) those values. You should also consider the impact of AI on various stakeholders (employees, customers, shareholders, society). From such a gap and stakeholder-impact analysis, you can both assess where you stand and define where you want to be. To achieve your objectives, you must develop a methodology that incorporates ethics as a mechanism for critical thinking and decision making. In doing so, I think it is important to consider what expertise you have, and what expertise you might need to retain, for example, in the discipline of ethics, or in operationalizing principles such as those proposed by the IEEE.

There is often not a single right answer to complex ethical questions. But you should have an answer that you can stand behind and have mechanisms to show that your claims are actually a true reflection of your operations. In this respect, the IEEE has set up an Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), which aims to help companies establish concrete evidence that they meet certain standards of accountability, transparency, and so on, in their use of AI.