Good Taimes

AI solutions require privacy, security, fairness, transparency, ‘explainability’, auditability and ethics to succeed – with the very best AI radiating the company’s purpose

With all of us relying ever more on data and algorithms in both our personal and business lives, it’s not that simple to just leave our cares behind. Consumers are much more open to products and services if they trust that their privacy is respected and their security is guaranteed. Workers will embrace support from AI sooner when its mechanisms are transparent, its training data is unbiased and it augments them in their daily work. Regulators will demand AI solutions that can be audited and explained. And all of society expects ethical AI, driven by compelling purposes for positive futures. So, it’s about doing AI good, but also doing AI for good. Such a funky perspective.


  • With data and AI at the heart of Technology Business initiatives, organizations find themselves under increasing scrutiny to not only comply with data protection regulations such as GDPR, but also to ensure proper, ethical use of data and algorithms.
  • From an executive perspective, creating a strong foundation with a strategy and code of conduct for ethical AI is vital – alongside policies that define acceptable practices for the workforce, awareness across the organization, and suitable governance.
  • AI systems need to be transparent and understandable. Explainable AI (XAI) leverages approaches and technology to achieve this, even with “black box” algorithms that have been created with deep learning and reinforcement learning.
  • Potential biases in training data need to be monitored and addressed, on top of high-standard data management practices.
  • (AI) Technology helps to build ethical AI solutions in areas such as bias detection, transparency, ‘explainability’, auditability and continuous monitoring of accuracy.
  • Beyond needing close monitoring for ethical use, AI also lends itself to addressing challenges in societal areas as diverse as climate change and CO2 reduction, digital literacy and inclusion, environmental protection, health improvement and sustainable food production.
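The bias monitoring mentioned above can start very simply, for example by tracking the gap in approval rates between demographic groups (often called demographic parity). A minimal sketch in Python; the function name, the toy data and the alert threshold are all illustrative assumptions, not a standard API:

```python
# Minimal sketch of one bias check: the gap in positive-outcome rates
# across groups. Names, data and threshold are illustrative assumptions.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in approval rates between the best- and
    worst-treated group.

    decisions: list of 0/1 model outcomes (1 = approved)
    groups:    list of group labels, one per decision
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: group "a" is approved 3 times out of 4,
# group "b" only 1 time out of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(decisions, groups)
print(f"approval-rate gap: {gap:.2f}")  # → 0.50

# A continuous-monitoring pipeline could raise an alert when the gap
# exceeds a chosen threshold (the 0.10 here is purely illustrative):
THRESHOLD = 0.10
print("bias alert" if gap > THRESHOLD else "ok")  # → bias alert
```

Dedicated toolkits (such as the open-source Fairlearn library or platforms like IBM Watson OpenScale, mentioned below) offer richer metrics along these lines, plus mitigation techniques.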


  • ZestFinance, a company that helps lenders use machine learning to deploy transparent credit risk models, developed its “ZAML Fair” tool to help reduce the disparity that affects minority applicants for credit.
  • Using an AI model control platform such as IBM’s Watson OpenScale, credit lenders can monitor risk models for performance, bias and transparency, limiting their regulatory exposure and creating fairer, more explainable outcomes for customers.
  • Similarly, insurance underwriters can use machine learning to consistently and accurately assess claims risks, ensuring fair outcomes for customers and explaining AI recommendations for regulatory and business intelligence purposes.
  • Scotiabank has set a vision for its interactive AI systems to improve outcomes for customers, society and the bank. The bank also monitors systems for unacceptable outcomes to ensure there is accountability for any mistakes, misuse, or unfair results.
  • Created by an independent expert group for the European Commission, the Ethics Guidelines for Trustworthy AI have had a positive impact on both public and private organizations, inside and outside Europe.
  • In this TechnoVision edition, our editorial Being Architects of Positive Futures suggests various, advanced ways of using data and AI for positive outcomes.


  • By addressing ethics issues upfront, organizations stand to gain additional benefits, as well as avoid the regulatory, legal and financial risks that may result from a market or public backlash against AI.
  • When consumers believe an organization’s AI interactions are ethical, more than half say they would place higher trust in it, share their positive experience, be more loyal, purchase more, and advocate for it. Organizations whose AI systems consumers perceive as interacting ethically enjoy a 44-point Net Promoter Score (NPS®) advantage.
  • Nearly two in five consumers would complain to the company and demand an explanation if they experienced an unethical interaction. In the worst case, a third would stop interacting with that company altogether.
  • Using AI for responsible, “positive” purposes not only boosts the ethical use of AI, it also provides an engaging and safe training ground for getting hands-on experience with AI in the first place.


Featured Expert

Fabian Schladitz

Expert in Artificial Intelligence, Big Data & Analytics