Ethics in AI | Interview with Rumman Chowdhury

Rumman is a quantitative social scientist by training, which means she looks at algorithms and data not only in terms of their economic value but also from an ethical point of view. “When we talk about AI, we often assume that we must adapt to the technology, but my view is that AI should help us instead.”

Rumman Chowdhury is Managing Director and Global Lead of the Responsible AI practice at Accenture. The Responsible AI practice is federated locally because ethics is based on cultural and social norms, and they differ worldwide. With Brussels being the capital of the European Union, Accenture Belgium is one of the key federations led by Ozturk Taspinar, Digital Lead BeLux, and David Bruyneel, Responsible AI Lead BeLux. 

When we talk about ethics and AI, many people think, for example, about robots taking over our jobs, or worse still, killer robots. Is that fear justifiable?

Rumman Chowdhury: Well, no. A large part of our perspective on this technology is shaped by the media, through what we read in books or see in movies, and it’s usually a story about HAL or the Terminator. But the real face of artificial intelligence looks very different. It is also much harder to understand, because it has no physical form, and as humans we want things to be tangible and visible.

Why do people love chatbots? They give us a way to communicate with AI technology. A chatbot is nothing more than an interface; the actual AI is a lot of code running in the cloud. As a concept that is hard for us to grasp, so we build stories around physical forms, while the essence of AI is precisely that it has no physical form. It sits in the cloud and can be accessed from anywhere in the world via, for example, a smartphone. One of the reasons we are so afraid of it is that it is hard to imagine how it exists.

Does that also have something to do with Big Data? The idea that a lot of data is being collected and we don’t immediately understand how it’s being used? 

People have fixed ideas about our relationship with data and technology, and about how companies use them. We often assume that data is a transaction, something very linear. Suppose you want a 10% discount on a clothing website: you give your email address, you will receive spam, but at least you get your discount. In reality that is not how it works. When people say that data is the new oil, they do so based on that linear experience, ‘I give you my email and you spam me’, but it is more like ‘I use your GPS information to see how healthy you are’. And how do you do that? I know where and when you eat lunch, whether one of the places you stop at on the way home is a gym, whether you travel by car or walk, and so on. I prefer to compare data with a periodic table: we take raw data and combine it endlessly.

There are plenty of examples. In the US you have the DNA service 23andMe, which works with the police to identify murderers. Or you can request genetically determined playlists on Spotify, which is a little bizarre. But it is all uncharted territory and there is a lot of experimentation happening. Often, people don’t think about the consequences. That’s my job: to think about the consequences.

One of the problems we have already seen is ‘bias’, or biased data. Is there growing awareness of this?

Bias is a problem, but there are two sorts of bias. The first is quantifiable bias, which is embedded in your data. This is mostly the bias we as scientists talk about; it concerns measurement. If I send out a questionnaire and, for one reason or another, the link does not work for anyone whose telephone number ends in 9, then that is systematic bias. Data scientists think about that.
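To make that idea of quantifiable, measurement-level bias concrete, here is a minimal sketch, not taken from the interview: the survey records and the phone-digit check are invented purely to illustrate how a data scientist might look for the kind of systematic gap Chowdhury describes.

```python
# Hypothetical check for systematic (measurement) bias in survey responses:
# did a broken link suppress answers from people whose phone number ends in 9?
from collections import Counter

# Each record: (last digit of the invitee's phone number, whether they responded)
invites = [
    (1, True), (9, False), (4, True), (9, False), (7, True),
    (9, False), (2, True), (3, True), (9, False), (5, True),
]

sent = Counter(digit for digit, _ in invites)
answered = Counter(digit for digit, responded in invites if responded)

for digit in sorted(sent):
    rate = answered.get(digit, 0) / sent[digit]
    print(f"last digit {digit}: invited {sent[digit]}, response rate {rate:.0%}")

# A 0% response rate for one digit while every other digit sits near 100%
# points to a flaw in the collection process, not in the people invited.
```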

But the other bias is social bias, and that is what concerns everyone else. It is a lot harder to quantify. Sometimes data gives a misleading picture. Consider, for example, that a higher proportion of black Americans end up in prison than white Americans. Racism is often part of the cause. You cannot build a system on top of that and then pretend that your data is fair and trustworthy.

You therefore have to translate between scientists and non-quantitative people, so that both perspectives are taken into consideration. Data scientists look at data and try to make sure names, addresses and so on are correct, but they don’t think about the fundamental social problems that plague the data. That is difficult to measure, but you know in advance that a system built on such biased data contains errors. How do you ensure that such a system is fair? In some cases the answer is: you should not build that system at all. If you build on top of a society or a social system that is fundamentally flawed, I don’t see how you can build an automated system on it that people can trust.

What can you do?

It is better to design your AI as a set of interventions at different points in the pipeline. And although we should perhaps not use AI to decide whether a person goes to prison or not, there are useful areas of application, for example to see whether someone could be released earlier, or whether there is a risk of that person absconding. That is already being used and is often controversial, but there are ways of using it that are not fully automated. Probably the last thing we want to do is give an automated system the power to decide over someone’s life.

One of the problems is of course that data is what it is. Scientists usually see themselves as neutral. If Google translates sentences like “she is a nurse, he is a doctor,” then it is based on how the world often works. Is it then up to a company to make changes to rectify the situation?

Let’s start by saying there are no neutral parties. If you do not take any action, you are agreeing with the status quo. Unless you think that we live in a completely fair world, you are agreeing to a form of unfairness. You cannot pretend to be good by doing nothing; you perpetuate the current situation, which is not always fair.

Google is not the only one with that problem, either. Netflix and Facebook have also made adjustments at the request of governments. “We follow the laws of the country,” they say. But certain countries also have laws that make it legal to kill someone for being homosexual. We cannot pretend that our actions have no consequences; otherwise you sidestep your responsibility as a good company.

In addition, companies like Google do want to have an impact. Their “AI for Social Good” department genuinely wants to improve the world with technology. But that cannot be limited to the corporate social responsibility department; it should be reflected in the actual values of the company. That is why I think my job is so important: I focus on the business as a whole, not just one department. If you look at where companies put their ethicists, you will see that very few are involved in the actual business. Even at companies like Google, which employ a lot of ethicists, they are placed in research or in “AI for Social Good”, because those are safe places for them. In such places, an ethicist cannot force the company to make decisions that may be less profitable in the short term. In the long term, involving ethicists in the actual business would be better for the company. That’s how I see it.

What does ethical business look like for a company? 

Ethics is based on cultural and social norms, and they differ worldwide. It is not up to us, with our Western standards, to impose an ethic on others. An example: in the West, and especially in the EU, we attach great importance to privacy, and we see it as something universal. But there are very patriarchal countries where it is used as a way to oppress. Consider, for example, countries where women and girls have limited mobility. There, free access to something like Instagram is a form of rebellion, or protest. And if those countries have strict privacy laws, then a man can use his wife’s right to privacy to ensure she can’t go online. That was one of the problems with the Absher app. Then you have privacy as a restriction, whereas we generally see privacy as protection.

Our global goal with AI is to show countries, companies and people how they can, and should, incorporate their values and priorities into the technology they build. Whatever those values are. When I talk to a client, I don’t say something like “you have to do these five things to build technology ethically.” I show them the Accenture framework, how we think about it, and I use it to make the company think about how they will do it.

Every company has core values, usually in a mission statement, and its technology must reflect them in a very direct way. If as a company you say that diversity is important to you, then you should look at each algorithm to see whether it is fair. You should do that anyway, but if you say that diversity is a priority, then you had better take action, because companies are increasingly being held accountable for what they do.
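The interview does not spell out what “looking at each algorithm to see whether it is fair” involves in practice. One common first-pass check, sketched below with invented data and group labels rather than anything from Accenture, is to compare selection rates between groups, the so-called disparate impact or “four-fifths” rule of thumb.

```python
# Hedged sketch of a first-pass fairness check: compare an algorithm's
# selection rates across two groups. The outcomes below are invented.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(group: str) -> float:
    decisions = [selected for g, selected in outcomes if g == group]
    return sum(decisions) / len(decisions)

rate_a = selection_rate("group_a")
rate_b = selection_rate("group_b")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Selection rates diverge sharply; the model and its data need review.")

# Passing such a threshold does not make a system fair, but failing it is a
# clear signal that the data or the model deserves closer scrutiny.
```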

Do you see a growing demand for rules to govern the responsible use of AI and technology? 

Different parts of the world are now on a trajectory towards the responsible use of AI, and it often follows the same path. It starts with everyone worrying about jobs and killer robots, because that is the story we already know. Then people become aware of privacy, fairness, justice, accountability, transparency and so on. That is where cultural norms come into play. For example, privacy is more important in Europe than in the US, whereas discussions about algorithmic bias are much stronger in the US. In the EU people talk less about discrimination, apart from some discussion of gender discrimination. When it comes to racial discrimination, almost all the debate takes place in the US; culturally and historically, that country is also doing more on that front. These kinds of conversations are an expression of the battles that we have fought and are still waging.

I am also curious to see how things develop in India. That country has a strict caste system, even if it has been officially abolished; there is still a cultural hierarchy. And India also wants to digitize heavily. It is moving towards a digital currency, it wants to store people’s biometric data in the Aadhaar system, and so on. A lot of technology has a levelling effect; it is intended not to be hierarchical. If I use a biometric data system, I assume that everyone has the same access to technology, but we know that is not how society works. So how do you reconcile these different structures? It will be interesting.

In your job you work on diversity in recruiting. How do you approach that? 

At Accenture, we want to achieve a 50/50 split of men and women by 2025. And the intention is that those women are not all juniors while the top remains completely male. At Accenture we believe it is both a strategic choice (the quality of our work) and our responsibility towards society, and towards AI, to use diversity to reduce bias at the source: in the algorithms. Because of this, diversity has not only a social but also a strategic dimension for us.

To achieve this, we work with AI at various points in human resources. For example, we start with sourcing, where you find candidates. If you don’t look at bias problems there, if you just take candidates as they come, then no matter how you source these people, you can’t do anything about it later, even if you say, “I’ll take 50/50”. That is often a problem, because ethicists come in at the end; they are not brought in at the first selection. Someone says at the end, “we now have these candidates, and I want to use an algorithm to see if they fit the role well, so come and help me, ethicist”. But all those decisions have already been made, and the pool is already so much smaller. If you hand me the five women you have found and then say, “make sure this works,” the problem has already been created.

We often hear that there is a problem in the pipeline. For example, there are too few women with computer engineering degrees. 

It is a network problem, not a pipeline problem. It is often about who you ask and where you set your priorities. Something like education is a signal, and often it is the wrong signal. You assume that someone with, say, a computer engineering degree has the right skills, but it is usually more an indicator of privilege. Maybe that worked in the past, when you had no information about candidates beyond that piece of paper, but by now we have so many other ways to check a person’s qualities and skills. Why would you still rely on the old-fashioned notion of diplomas? And I say this as someone who went to a good university; it’s not that I’m bitter. In certain countries you have to be very lucky, or born into the right family, to get into a good university.

In addition, it is not the case that people hired with such a degree are immediately great at their job. Most new recruits at large companies go through six months or a year of training. If you give someone from a minority a year of training, they will do just as well. It is a myth that someone with the right skills will immediately excel the moment you put them in your company. We often forget how much training you receive when you start working.

How do you approach this in practice?

We are working internally on AI for our HR department. That includes sourcing, making job recommendations more neutral, and so on. Accenture is unique in that it has a very mobile workforce: our people are sent out to projects, come back, and move on to the next one. Every day we match people with jobs and opportunities, so it is important to keep that in mind and to be aware of the trajectory you place someone on, and whether that is fair.

So we are working on various interventions around that. For example, we have an algorithm that checks whether job descriptions are neutral in tone. For us, the difficulty is also that we have to monitor diversity globally, and what is regarded as a minority group in one place may not be considered one elsewhere. In the US you might think of minorities such as Black and Hispanic people, while in Singapore they distinguish between different Asian communities. We would not, for example, think of people of Indonesian or Malaysian origin as a subcategory to take into account if we were building an algorithm for the UK, but it is important in Singapore and India. There is a lot of nuance, and we often come to the conclusion that you can’t do it with algorithmic fixes alone; we also need people. If you want to know whether someone will be a good employee, whether they have the qualities you are looking for, those things are not easy to describe or measure. Corporate culture, resilience, critical thinking, learning skills: there are no magic numbers for those. We must build frameworks for them and reflect on how we are going to use them in a way that, we hope, will not introduce prejudice.
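The interview does not describe how Accenture’s job-description check actually works. As a heavily reduced sketch of the general idea, a wordlist-based scan for gender-coded language, with a short invented wordlist, might look like this:

```python
# Hedged sketch of a tone check for job adverts: flag terms that research on
# gender-coded job language associates with a masculine or feminine register.
# The wordlists are tiny illustrative samples, not a production lexicon.
MASCULINE_CODED = {"aggressive", "dominant", "rockstar", "ninja", "fearless"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}

def tone_report(ad_text: str) -> dict:
    words = {w.strip(".,!?:;()").lower() for w in ad_text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We want a fearless rockstar developer who thrives in a collaborative team."
print(tone_report(ad))
# {'masculine_coded': ['fearless', 'rockstar'], 'feminine_coded': ['collaborative']}
```

A real check would also have to handle the regional nuance mentioned above: which wordlists, and which notion of neutrality, apply depends on the market the advert is written for.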