
Leveraging the Power of Ethical and Transparent AI for Business Transformation

Paul Cobban, DBS

Paul Cobban is the chief data and transformation officer at DBS, a multinational bank with total assets of S$551 billion. The bank has won plaudits as the “World’s Best Digital Bank.” He chairs the Future Enabled Skills workgroup of the Institute of Banking and Finance and is a member of both the Institute of International Finance’s Fintech Advisory Council and the Technology Roadmap Steering Committee of the Infocomm Media Development Authority.
The Capgemini Research Institute spoke with Paul to understand more about the role of ethical and transparent AI in driving business transformation.

AI AT DBS

DBS has been recognized as one of the world’s best digital banks. Did AI have a role to play in this transformation, and to what extent do you believe you have been able to leverage AI for business transformation?

Our transformation has been 10 years in the making. In the early phases, AI was not part of the story, but it is definitely playing a critical role now. Going back five or six years, we partnered with A*STAR, which is the government’s research and development arm in Singapore. Through the partnership, we learned how to make use of our data in non-traditional ways. They taught us how to predict when ATMs are going to fail or which one of our branches is going to have the next operational error. We then broadened those use cases and started using data to predict when our relationship managers are likely to quit, so that we can put interventions in place.
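The interview does not describe how these models are built, but a minimal sketch of such a predictive-maintenance classifier might look like the following. The features, data, and labels are synthetic illustrations invented for this sketch; none of it reflects DBS’s actual pipeline.

```python
# Hypothetical sketch: predicting which ATMs are likely to fail next.
# Features, data, and labels are synthetic illustrations only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Assumed per-ATM features: machine age, daily transaction volume,
# recent error count, and days since last service.
X = np.column_stack([
    rng.integers(1, 15, n),    # age in years
    rng.poisson(300, n),       # daily transactions
    rng.poisson(2, n),         # card-reader errors in the past week
    rng.integers(0, 365, n),   # days since last service
])
# Synthetic label ("fails within 30 days"), loosely tied to age and
# error counts so the model has a signal to learn.
logits = 0.15 * X[:, 0] + 0.4 * X[:, 2] + 0.004 * X[:, 3] - 4.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")

# Rank ATMs by predicted risk so engineers can service the riskiest first.
risk = model.predict_proba(X_te)[:, 1]
print("Highest-risk ATMs (test indices):", np.argsort(risk)[-5:][::-1])
```

The same pattern, with different features, would extend to the other use cases mentioned, such as predicting operational errors at branches or attrition among relationship managers.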

Last year, we introduced an AI chatbot to help our HR teams recruit and conduct a first round of interviews. We have seen a significant increase in productivity, mainly from augmenting people’s jobs and making them easier.

DEFINING ETHICAL AND TRANSPARENT AI

How have you arrived at a definition of ethics and transparency in AI at DBS?

The Monetary Authority of Singapore (MAS) issued a document on this called FEAT, which stands for “Fairness, Ethics, Accountability, and Transparency.” We used that as a foundation for our own internal variant, PURE, which stands for “Purposeful, Unsurprising, Respectful, and Explainable.” This was the foundation for the process we put in place to assess our data use cases. It is broader than just AI – it is about the use of data, and AI is a subset of that.

Talking about the PURE descriptors, the first idea of being purposeful implies that we should not collect data just for the sake of collecting it. Instead, we should have a very concrete purpose for doing so – with the intent of making the lives of our customers better. Unsurprising means that the way in which we use the data should not shock our customers. Respectful refers to how we should not invade people’s privacy without good reason. At the same time, we are also very mindful of the fact that there are certain use cases, such as fraud and criminal activity, where you have to take a balanced approach.

There are increasing expectations from customers that any decision that is made using an algorithm needs to be explainable, and the MAS guidelines are very clear that the explainability and accountability of a decision need to lie with a human being at some point.

We recognize this as a very nascent area, and we will need to continue to iterate as we learn.

THE BUSINESS OWNER OF THE ALGORITHM IS ACCOUNTABLE

Do you have a defined governance mechanism for tackling ethical issues in AI?

Yes, it is all based around the PURE concept. We have a process where everybody who is using data for a specific use case needs to do a self-assessment against the PURE principles. The vast majority of use cases are innocuous and do not need any formal assessment. Anything that triggers any of the PURE principles then goes to a PURE Committee, which I co-chair along with one of my colleagues from the business unit. Those use cases are presented and discussed at the PURE Committee for ratification: they are either approved, or a mitigating control is put in place to make sure that we do not trigger any of the PURE categories.
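As a rough illustration of how such a triage could be encoded, the sketch below routes self-assessed use cases either straight through or to committee review. The field names and routing logic are invented for this sketch, not DBS’s actual process.

```python
# Illustrative encoding of the PURE self-assessment triage described above.
from dataclasses import dataclass

@dataclass
class PureSelfAssessment:
    use_case: str
    lacks_concrete_purpose: bool   # P: Purposeful
    may_surprise_customers: bool   # U: Unsurprising
    may_invade_privacy: bool       # R: Respectful
    not_explainable: bool          # E: Explainable

    def triggered_principles(self) -> list[str]:
        flags = {
            "Purposeful": self.lacks_concrete_purpose,
            "Unsurprising": self.may_surprise_customers,
            "Respectful": self.may_invade_privacy,
            "Explainable": self.not_explainable,
        }
        return [name for name, hit in flags.items() if hit]

def route(a: PureSelfAssessment) -> str:
    """Innocuous use cases pass without formal assessment; anything that
    triggers a PURE principle is escalated to the PURE Committee."""
    triggered = a.triggered_principles()
    if not triggered:
        return f"{a.use_case}: approved, no formal assessment needed"
    return f"{a.use_case}: escalate to PURE Committee ({', '.join(triggered)})"

print(route(PureSelfAssessment("ATM failure prediction", False, False, False, False)))
print(route(PureSelfAssessment("credit scoring", False, False, False, True)))
```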

When issues do arise with AI, where do you think accountability and responsibility should lie?

We don’t have any issues yet, but we have plenty of questions. For example, what is surprising to you may not be surprising to me. And what is surprising to me today may not be surprising to me tomorrow, as things evolve and people get used to them. Nothing here is black and white. There is a lot of judgment at play, especially in these early days of AI. However, accountability needs to be very clear. So, we are in the process of compiling an algorithmic model inventory, which means we “inventorize” every model in the company and ensure there is an owner associated with that model – and that owner is accountable for the decisions the model makes. It is therefore important for that individual to be conversant enough with advanced analytics, depending on the model, to know how it operates.
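One way to picture such an inventory is a registry that refuses to accept a model without a named accountable owner. This is a sketch under assumed field names, not DBS’s actual system.

```python
# Hypothetical model inventory: every registered model must name an
# accountable business owner. Fields and names are illustrative only.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    model_id: str
    description: str
    owner: str       # business owner accountable for the model's decisions
    sensitive: bool  # e.g. credit decisions vs. ATM maintenance

class ModelInventory:
    def __init__(self) -> None:
        self._records: dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        # Registration fails if no owner is named, so accountability
        # is established before a model can be used.
        if not record.owner:
            raise ValueError(f"model {record.model_id} has no accountable owner")
        self._records[record.model_id] = record

    def owner_of(self, model_id: str) -> str:
        return self._records[model_id].owner

inventory = ModelInventory()
inventory.register(ModelRecord(
    "atm-failure-v2", "Predicts ATM mechanical failures",
    owner="Head of ATM Operations", sensitive=False))
print(inventory.owner_of("atm-failure-v2"))  # -> Head of ATM Operations
```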

The other thing to note here is that not all of a model’s use cases are sensitive. For example, we use algorithms to predict which one of our ATMs might have the next mechanical failure, but that is not very contentious: if the model gets it wrong, the worst that can happen is an ATM outage. However, if you are assessing people for credit, that is a different issue. You must make a judgment call around it, and that is where some of the complexities lie.

You mentioned ownership of these algorithmic models – could you tell us who the owner usually is?

It depends on the model. Typically, it is the individual who was making the decision before the algorithm existed. If I am responsible for the uptime of ATMs and I want to improve it, I will create an algorithm that helps me do so, and I will be accountable. The accountability and responsibility do not lie with the data scientist who develops the algorithm. The business owner in question needs to understand enough about the model to take on that accountability.

How do you ensure that all the relevant teams are aware of, and are responsible for, ethics and transparency issues in AI?

We have a substantive training and awareness program called DataFirst. We also have various big data and data analytics training programs, and we have trained half the company on the basics of data in the past 18 months. Through these programs, we have equipped 1,000 of our employees to become data translators. Our senior leaders have also undergone specialized data courses.

THE ROLE OF HUMANS IN ETHICAL AI

Current technology is not fully geared to deal with all issues, such as explainability and bias. So, how far can AI solve its own problems today?

In the short term, we are seeing a remarkable acceleration in tools that can address bias and explainability. But it is not simply a case of waiting for technology to solve all the problems – it comes down to human judgment to make the call. Going back to my previous example of ATMs, we found that ATMs in the west of Singapore break down more frequently than those in the east. That is not a concern, but if my credit algorithm were biased towards one gender, that would be a cause for concern. We always need that judgment overlay.
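A concrete, if simplified, version of that judgment overlay is a fairness metric whose threshold a human sets and reviews. The sketch below computes a demographic-parity gap on synthetic credit approvals; the data, groups, and the 5% threshold are all assumptions made for illustration.

```python
# Illustrative bias check: flag a credit model whose approval rates
# differ too much across groups. Data and threshold are synthetic.
import numpy as np

def approval_rate_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Demographic-parity difference: the spread in approval rates."""
    rates = [approved[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=10_000)
# Synthetic approvals with a deliberate skew against group B.
approved = np.where(group == "A",
                    rng.random(10_000) < 0.70,
                    rng.random(10_000) < 0.55)

gap = approval_rate_gap(approved, group)
print(f"Approval-rate gap: {gap:.2%}")
if gap > 0.05:  # the threshold itself is a human judgment call
    print("Flag for human review: possible disparate impact")
```

The same arithmetic applied to regional ATM failure rates would be benign; it is the use case, not the number, that determines whether a gap matters, which is why the call stays with a person.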

Do you believe that there has to be a human in the loop for all the AI systems before they make consequential decisions about people?

If you look at autonomous cars, by definition, there is no human in the loop. So it is only a matter of time before AI increasingly acts on its own. But that is when you really have to pay attention to what is going on. For example, as we have seen with algorithmic trading, it can cause massive shifts in the market.

THE NEED FOR A BALANCED APPROACH WHEN IT COMES TO REGULATION

Do you see regulations as necessary for implementing ethical AI or is self-regulation the way to go? In the latter case, can companies be trusted with building ethical AI practices in the absence of agreed standards?

This is a challenging question. We have seen how unregulated big-tech companies have, in the opinion of most people, crossed the line. However, we have also seen where data regulations have gone too far too quickly and have had negative unintended consequences. The approach MAS is taking is sensible – it involves discussing the issues across the industry, putting some guidelines in place initially, and getting feedback to see how they operate before we cement any regulation.

You also have to think about the balance between the rights of the individual and the rights of business, and where you want to play. One analogy I often use is the measles vaccination. Should you make everyone take the vaccine for the greater protection of society? By doing so, you override individual rights. These issues are difficult, and regulating too much too soon can be a problem. But, on the other hand, leaving things completely unregulated is also very dangerous. The other challenge around regulation is that, in an increasingly connected world, regulations in one part of the world differ from those in other parts. Regulators have a duty to collaborate among themselves and establish some kind of baseline approach.

ETHICAL AI – A COMPETITIVE ADVANTAGE

What would be your top suggestions for organizations across sectors that are just starting out on the journey of developing ethical AI?

First, the approach we have taken is working quite well, and we recognize that ethical AI is a competitive advantage and worth doing. Second, create a cross-functional team to do two things: conduct external research into what is relevant within the industry and beyond, and look internally to find out what is being done with data and to define how quickly you need to act. My final recommendation would be to focus on the use cases rather than just on data collection.