Algorithmic consent: Why informed consent matters in an age of Artificial Intelligence

Amrita Sengupta
February 26, 2020

Some time back, I had a doctor’s appointment that I had booked via an app. On reaching the center, which was run by a well-known fitness platform, I was told that my vitals (pulse, temperature, heart rate, and blood pressure, among others) would be collected before I could meet the doctor. I was also asked to sign a printed copy of the terms and conditions, which stated that any and all information collected could be shared with third parties. When I asked about the privacy policy, I was told it was not available. There were multiple problems with this experience:

  1. The booking app did not inform me that the fitness center would mandatorily collect additional information before I could meet the doctor.
  2. I did not have the choice to share my vitals-related information. It was a necessary condition for me to meet with the doctor, even though I had reports of all the tests I needed with me.
  3. Even if I had consented to the use of this information, that consent would not really have been “informed,” given the complex language of the terms and conditions and the unavailability of the privacy policy.

This experience points to a few of the many problems with the idea of informed user/consumer consent. The ethical issues surrounding informed consent in healthcare and healthcare-related research are well known. The American Medical Association suggests:

Issues of consent and the ethical use of consumer personal data are particularly acute in the digital world, given the sheer volume of data generated by consumers’ online and offline usage patterns and the ways in which that data is ultimately used. While the use of consumer-generated data has become a commonplace business model, the way consent is currently set up can have huge consequences, as we witnessed in the Cambridge Analytica case.[2]

With the increasing use of artificial intelligence, several ethical questions are being raised around its deployment, one of them being end-user consent, as our recent report on ethics in AI outlines.[3] The executives surveyed in this research reported the collection and processing of patients’ personal data in AI algorithms without consent as one of the top two ethical issues resulting from the use of AI.

Global regulations such as the General Data Protection Regulation (2016)[4] set standards for what consent should look like:

In our report on GDPR in 2018, we found that over half (57%) of data subjects[5] said they would take action if they discovered that organizations were not doing enough to protect their personal data. Of these, 72% said they would ask organizations to provide more details on the data held about them. Nearly three-quarters (73%) intended to go so far as to request that their personal data be deleted and to revoke consent for data processing.[6]

It is evident that there are real consequences when consumer consent is not sought appropriately. Specifically, there are some critical questions that organizations need to consider while thinking about consumer consent:

  • Are the terms and conditions simple enough for a consumer to understand?
  • Is the user in a position to provide informed consent (questions of age and literacy, among others)?
  • Do consumers opt in to consent or is consent assumed unless they choose to opt out?
  • What are the options available to the user in case they only wish to provide partial consent?
  • Is the request for consent valid? For example, does a photo editing app need all the data from consumers’ contacts list? Can there be existing checks which allow for consumers to provide partial consent and still have access to use the app?

How, then, should organizations obtain consumer consent and subsequently build consumer trust?

  1. Be transparent about:
    1. The purpose behind the use of the technology
    2. What data is collected, processed, and used
    3. Measures taken to safeguard the security and privacy of personally identifiable data
    4. Any known issues, data breaches, and safe practices to follow.
  2. Empower users/customers with:
    1. Means to seek more information/explanation about the technology and data collection
    2. Control over their data
    3. Ability to seek recourse if things go wrong.
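The principles above — explicit opt-in, partial consent, and control over one’s own data, including revocation — can be sketched as a simple data model. This is an illustrative sketch only (all class and field names are hypothetical, not drawn from any real consent-management framework or legal standard):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Tracks what a user has explicitly opted in to, per data category."""
    user_id: str
    purpose: str  # why the data is collected (transparency)
    granted: set = field(default_factory=set)  # categories the user opted in to
    revoked_at: Optional[datetime] = None

    def grant(self, category: str) -> None:
        # Opt-in must be an explicit user action, never assumed by default.
        self.granted.add(category)

    def revoke_all(self) -> None:
        # The user can withdraw consent at any time (control over their data).
        self.granted.clear()
        self.revoked_at = datetime.now(timezone.utc)

    def allows(self, category: str) -> bool:
        # Partial consent: each category is checked independently, so the
        # app still works with only the categories the user agreed to share.
        return self.revoked_at is None and category in self.granted

# Example: a user consents to photo access but not to their contacts list.
record = ConsentRecord(user_id="u123", purpose="photo editing")
record.grant("photos")
print(record.allows("photos"))    # True
print(record.allows("contacts"))  # False — never assumed
record.revoke_all()
print(record.allows("photos"))    # False after revocation
```

Modeling consent per data category, rather than as a single yes/no, is what lets a photo-editing app function without access to the contacts list, as in the example raised earlier.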

It is undeniable that technology and AI systems offer organizations significant opportunities and benefits. Addressing ethical questions such as those around user consent will help organizations earn people’s trust and loyalty, which are in short supply in today’s digital economy.

If you are interested in exploring the topic further, please read the following reports from the Capgemini Research Institute:

  1. Why addressing ethical questions in AI will benefit organizations
  2. Championing Data Protection and Privacy: A Source of Competitive Advantage in the Digital Century

[2] Wired, “How Cambridge Analytica Sparked the Great Privacy Awakening,” March 2019.
[3] Capgemini Research Institute, “Why addressing ethical questions in AI will benefit organizations,” June 2019.
[5] A data subject is any person whose personal data is being collected, held or processed.
[6] Capgemini Research Institute, “Championing Data Protection and Privacy: a source of competitive advantage in the digital century,” September 2019.