Context and issues

The previous article presented explainability as a major avenue for gaining user trust in AI. When explainability is absent, users are less inclined to take ownership of the technology, which can lead to mistrust.

Trust in a technology is paramount for it to be accepted by society. Outside the field of AI, GMOs are an obvious example: although scientific studies have not yet demonstrated any risk associated with their use, a large part of the population rejects them.

Bias, a source of mistrust

We have already discussed the notion of bias and the different ways in which it appears in AI systems. The discovery of biases perceived as "immoral" can undermine confidence in a product, or even in the technology itself.

The banking system is one of the most frequently cited examples of AI applications. It is, however, particularly sensitive to a loss of confidence, as the Greek crisis reminded us. AI must therefore be introduced in a way that preserves the trust built up over the past centuries.

The Apple Card case

Let's study the case of the recent Apple Card scandal. This digital credit card relies on a personal credit scoring system created by a major American bank. Many users noticed large disparities between the credit limits assigned to women and to men.

The cases identified are all the more shocking because they concern married couples who have pooled their financial assets. From a legal point of view, each member of the couple is therefore liable for the debts of the other. These examples are thus blatant cases of discrimination that cannot be justified by biases intrinsic to society.


Societal resistance as a safeguard

Given the outcry observed on social networks, the discovery of this bias strongly affected the trust that users, especially women, place in the product. The loss of confidence can extend to all credit scoring systems, since in this case it was a bank, and not Apple, that produced the biased model.

Here, the banking industry managed to limit the damage to confidence by quickly bringing in a regulator. The regulator acts as a defender of users and is tasked with investigating, a posteriori, the possible biases of the model.
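To give a concrete, deliberately simplified idea of what such an a posteriori audit can look like, here is a minimal sketch in Python. The sample data, the group labels and the disparity threshold are purely hypothetical; a real audit would also control for income, credit history and other legitimate factors before concluding that a model is biased.

```python
# Minimal sketch of an a posteriori disparity check on credit limits.
# All data and thresholds below are hypothetical and for illustration only.
from statistics import mean

# Hypothetical audit sample: (group, credit_limit_granted)
decisions = [
    ("women", 4_500), ("women", 5_000), ("women", 3_800),
    ("men",   9_000), ("men",   8_500), ("men",   7_200),
]

def average_limit(group: str) -> float:
    """Average credit limit granted to one group in the audit sample."""
    return mean(limit for g, limit in decisions if g == group)

ratio = average_limit("women") / average_limit("men")
print(f"women/men average-limit ratio: {ratio:.2f}")

# A low ratio does not prove discrimination by itself,
# but it flags the model for a deeper investigation.
DISPARITY_THRESHOLD = 0.8  # hypothetical alert level
if ratio < DISPARITY_THRESHOLD:
    print("Disparity detected: investigate the model and its training data.")
```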


AI and explainability: expectations that match the stakes

An algorithm does not need to exhibit immoral biases to lose the support of its users. Some decisions carry consequences significant enough that users cannot be satisfied with the advertised performance of the solution alone; a coherent explanation is therefore necessary. This situation is not specific to AI systems and applies to all domains that have a significant impact on users' lives.


AI and health, a blind trust?

Consider the case of medicine. Patients readily accept their doctor's expert judgment when it concerns a minor illness. However, in more complex cases, a lack of communication and explanation from the physician can be a source of concern for the patient. For example, even an operation considered simple by the medical community may be perceived very differently by the patient, depending on the practitioner's level of communication.


Explainability as a driving force for pedagogy

The use of AI systems by a health expert can amplify mistrust towards that expert. Indeed, if the doctor does not himself understand the reasons why a diagnosis was established, how can he communicate it to his patient?

At a time when anyone can play doctor by reading blogs and forums on the internet, "the machine said so" is clearly not the best way to establish the essential relationship of trust between doctor and patient.
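As an illustration, and only under strong assumptions (a purely hypothetical linear risk model with made-up coefficients), here is a minimal sketch of the kind of factor-by-factor explanation a doctor could relay to a patient instead of "the machine said so".

```python
# Minimal sketch: explaining a linear diagnosis score factor by factor.
# The model, its coefficients and the patient's data are entirely hypothetical.
import math

# Hypothetical logistic model: risk = sigmoid(intercept + sum(coef * value))
COEFFICIENTS = {"age": 0.03, "blood_pressure": 0.02, "smoker": 0.8}
INTERCEPT = -4.0

patient = {"age": 62, "blood_pressure": 145, "smoker": 1}

# Contribution of each factor to the score (before the sigmoid).
contributions = {name: COEFFICIENTS[name] * value for name, value in patient.items()}
score = INTERCEPT + sum(contributions.values())
risk = 1 / (1 + math.exp(-score))

print(f"Estimated risk: {risk:.0%}")
for name, contribution in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: contributes {contribution:+.2f} to the score")
```

Such a direct reading only works for simple models; for black-box models, post-hoc attribution methods play a similar role, which is precisely what explainability research aims to provide.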

AI and explainability: a need for knowledge to act

A desire to understand

The previous section may suggest that an AI solution does not need to be explainable if its impact is minor. In practice, users want to understand AI systems, even in smaller applications.

If explanations are not natively integrated, users will define their own heuristics based on their interactions with the system. It is the creation of these heuristics that most often leads to the discovery of biases.

Users do not seek to understand the system out of scientific curiosity, but because they want to interact actively with it. Indeed, rather than simply accepting a decision, they seek to manipulate the AI in order to maximize the gains it brings them.

A daily relationship with AI systems

This attempt to understand black boxes is even the foundation of an entire field: Search Engine Optimization (SEO). This profession aims to discover the heuristics that govern the ranking of search engine results (mainly Google's) in order to exploit them and improve the visibility of certain content.

We all apply such a process when we interact with AI systems: optimizing a LinkedIn profile, shaping the editorial line of a YouTube channel, deciding whether or not to put a picture of your cat on your Tinder profile. We always try to identify the choices that increase our visibility on these platforms. In this context, explainability would allow us to interact with them far more serenely.

Towards greater regulation?

The news often mentions the explainability of AI in high-impact domains (banking, privacy, etc.), and regulatory bodies are starting to take up the subject. The European Commission is even already working on a "GDPR for AI".

However, the notion of "trust" mentioned throughout these documents must not stop at these major themes, but spread to all sectors. It is when users are able to understand how solutions work and to interact with them that AI can bring its full potential to society.