39. Ethical AI with Sukesh Kumar Tedla

When Sukesh Kumar Tedla first started Unbiased, he wanted to combat the surge of fake news, but soon his focus turned to another major challenge for the technology industry: ethical AI. In the last episode of the season, Henrik speaks to the founder and CEO about how transparency, regulatory measures and good data can combine to produce more trustworthy algorithms.

Links

  • Connect with Sukesh on LinkedIn

  • Visit Unbiased’s website

  • The EU White Paper on AI, which Sukesh mentions in the podcast

  • An article from Reuters on the scrapped Amazon recruitment AI, which showed bias against women

  • Google recently fired their second AI ethics researcher, according to this article from the Verge

  • Self-driving car dilemmas reveal that moral choices are not universal, according to this article in Nature

  • Listen to another episode on AI - Henrik’s interview with Daniel Langkilde, CEO of Annotell, from 2019

If you don’t have time to tune in…

HE: Hello and welcome to Let’s tech podcast. My name is Henrik Enström, and the companies behind this podcast are Codic Consulting and Codic Education. Today, I have Sukesh Kumar Tedla with me, CEO at Unbiased. Welcome, Sukesh.

SKT: Thanks for having me.

HE: Nice to have you here. So, the theme today and the idea is to discuss ethical AI. So, could you maybe tell me a bit about yourself and why you wanted to found Unbiased?

SKT: Well, I founded Unbiased back in 2018, but before that I quickly want to say something about myself. I’m originally from India, and I’ve been here in Sweden for 7 years now. I did my Master’s in telecommunications at BTH in Karlskrona. Back in 2016 I moved to Gothenburg and started working at Ericsson and Volvo Cars, but during 2017 I came across this technology called blockchain. I was really impressed by it and saw the potential in the technology, and I have been working in the blockchain space ever since. I am also the chairman of the Swedish Blockchain Association today. In addition to that, during the same time there was this large talk about fake news, misinformation, privacy issues and so forth, and blockchain technology looked to me like a solution to these kinds of problems. So, that’s when I started working with it. And, yeah, here we are, almost three years later, and I’ve been working with blockchain and with Unbiased.

HE: Are you still convinced that the blockchain can solve the problem with fake news?

SKT: Well, to a certain extent. It’s not going to be a 100 percent straightforward solution, but to a certain extent it’s good and will definitely help with the issue. But coming back to Unbiased: when we initially started Unbiased, the mission was basically to fight fake news and misinformation, but since then the mission has evolved into not only addressing the end-user issues, like fake news and misinformation, but also the technological challenges in relation to these advanced technologies, like AI and machine learning algorithms.

How can we make the whole process ethical and transparent so that anyone can just go and audit what happened with an AI model at any point in time in the future.

Because when we started our journey in this space, we came across different technologies to address this problem. One of the common technologies out there was AI and machine learning. So, when we started building our own algorithm and our own model, we started realising things and asking questions like: “Why should people trust our models? How can we prove to the audience or the end-users that our models are built in a transparent way? Can they validate everything?” But as many of you know, AI is a black box.

HE: You don't know why it comes to a certain conclusion?

SKT: Yeah, exactly. We can't really explain why an algorithm behaves in a certain way. It could be various factors in the data: what kind of data it was trained upon, what kind of practices the developers chose during the building process, and their own personal biases and their impact on the algorithm. So, that’s when we realised that there is actually a more fundamental problem that needs to be addressed first, and that's the need for ethical AI. Since 2019 we have pivoted and started working on addressing the issues of ethical AI - starting from the data level of building an AI all the way to actually bringing it into production. How can we make the whole process ethical and transparent, so that anyone can just go and audit what happened with an AI model at any point in time in the future? So, that’s what we're doing - we're building a platform which helps organisations and developers build ethical AI.

HE: Interesting. So, many different topics here. I mean, there's a lot of hype around blockchain technology and also around AI, and since we mention them a lot here, can you start by just explaining on a basic level what blockchain is?

SKT: Sure. So, blockchain is a decentralised and distributed ledger technology, which basically means, in layman's terms, that it’s like a group of computers across the globe all sharing the same database and communicating in real time. So, if the information on one of these databases or servers is changed, it's replicated across all the other servers. But the information is also validated using cryptography and advanced computation networks. So, yeah, that's basically blockchain technology.

So, once you put something on a blockchain it's recorded forever, basically, if it's a public blockchain. Because there are thousands of servers, anyone can connect to the blockchain and get the same data today. You might have heard of Bitcoin, and Bitcoin basically runs on blockchain technology. People often confuse blockchain and Bitcoin - they think they are the same, but they are actually different. Bitcoin is an end use case of blockchain technology; there could be many use cases, and many things can be built on top of blockchain technology today.
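As an editorial illustration (not from the interview), the core property Sukesh describes - that once something is recorded it cannot be silently changed - can be sketched as a minimal hash-chained ledger, where each block commits to the hash of the block before it:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "genesis")
add_block(chain, "annotation: image 17 labelled 'cat'")
add_block(chain, "annotation: image 18 labelled 'dog'")
print(is_valid(chain))          # True
chain[1]["data"] = "tampered"   # rewrite history in one block...
print(is_valid(chain))          # ...and validation fails: False
```

A real public blockchain adds consensus across thousands of nodes on top of this hash chaining, which is what makes the record practically immutable rather than merely tamper-evident.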

HE: But I guess maybe they were invented at the same time, blockchain and Bitcoin?

SKT: Yeah, I mean we have distributed networks and distributed servers and everything for a long time, but I would say Bitcoin technology, or the Bitcoin use case, brought blockchain technology into the mainstream, put the limelight on it.  

I think a majority of startups out there, you start with one idea as a founder on a very high level, but when you start working with it you will come across many other challenges that you have to address. So, you have to do a few pivots during the process.

HE: You have pivoted a bit from your original idea with using blockchain to fight fake news. Do you think that's very common for a startup, that you pivot from one idea to something that you find is more tractable later on?

SKT: Yeah, I think it's definitely common practice for many startups. So, if you're not pivoting, then you're doing something wrong I would say.

HE: Or maybe you’re very lucky?

SKT: [laughs] Yeah, I would say that's really a rare scenario, but like I think a majority of startups out there, you start with one idea as a founder on a very high level, but when you start working with it you will come across many other challenges that you have to address. So, you have to do a few pivots during the process.

HE: And in its current form, what are you trying to do with Unbiased? You say a platform, is it a little bit like a search engine, or how does it work?

SKT: Right now it's a data marketplace, where companies can go and get services in relation to the data they need to train AI algorithms. For anyone who is familiar with machine learning and AI, these algorithms need lots and lots of structured data. In order to get this structured data, you need some kind of platform where ordinary people are sitting at their PCs and working on identifying where the cat is, where the dog is, where the person is in this image, where the car is, and so on. But this can be very challenging, because right now we're using AI in many use cases - the banking sector, the automotive sector with self-driving cars, drones. So, there's a requirement for lots of data, and data is the first step in building an AI algorithm. So, we built this data marketplace which has blockchain integrated into it, so that every time a data point is generated on the platform, it's recorded on the blockchain. At the end of the day, when the customer gets a data set from our platform, they have a full audit history of what happened with this data set. In the coming months we're actually adding an additional tool set that can make the building of an AI algorithm transparent as well - not only the data part, but also the actual training of the algorithm, modelling it and putting it into production.

HE: Is it true then to say that you are focusing on the annotation of data - the actual structuring - where you want to get the correct info on things like: “Where is the car? Where is the person?”

SKT: Annotation is one of the services that we are offering right now, but we also offer a marketplace where you can trade data between organisations. So, if your organisation has some datasets that you're not using, you can sell them on the marketplace. Because there is clear proof on the blockchain, it's easy for people to trust it, and people can trade it in a decentralised fashion. We usually refer to data as “the new oil” or “the next gold” or whatever, but it's not tradeable. It's just sitting there in the database today. But now, with our solution, data can be traded, data can be trusted, and you also have accountability - what's happening with the data and so forth.

We usually refer to data as “the new oil” or “the next gold” or whatever, but it’s not tradeable. It’s just sitting there in the database today. But now, with our solution, data can be traded, data can be trusted, and you also have accountability.

HE: Is it a problem for you to make sure that whoever enters data into your marketplace actually owns it in the first place?

SKT: Well, that’s a discussion. It’s always difficult to track that information, but at the end of the day you have at least some level of transparency about who is doing what on the platform. Also, from a regulatory or compliance standpoint, they can go and verify this information in the future if there are any issues or anything that needs to be taken care of. So, everyone has the same set of information. And on the European level there are two regulations being worked on right now: one is related to ethical AI and the other is the data governance act. We might see these two regulations coming into practice in the next one to two years. These regulations basically talk about what we're trying to address right now with the transparency and accountability of data and AI algorithms.

HE: When we were talking with you initially, you brought up the idea to discuss ethical AI. How did you get interested in ethical AI in the first place, so to speak?

Amazon’s recruitment algorithm (...) was giving preferential treatment to men instead of women.

SKT: It’s quite a journey with the AI domain for me, because if we had met like four years ago, I had this perception of AI as some kind of Terminator. But when I started learning about AI technology in 2018, I really got interested in it, and as a person with a technology background, I really like the space. Once I started learning how one can build an AI algorithm and how it works - the intricacies of building an algorithm - I also saw many problems with it when it came to ethical practices. There are many examples today, for instance with Amazon's recruitment algorithm one or two years back. They identified that it was giving preferential treatment to men instead of women, but no one outside Amazon's organisation would have known if they hadn’t disclosed it.

So, these kinds of things happen and there are regulations that are being drafted right now. So, when I came across this topic of ethical AI and started researching it, I thought: “Okay, there is a topic that we really need to address and which kind of aligns with our mission of misinformation and the fake news domain, because it's a broad spectrum.” AI has its own place within that.

HE: Do you know why the EU lawmakers and policymakers first initiated these new laws or regulations on ethical AI?

SKT: Yeah, definitely. I think even in the law and the background work that has been done, the examples they refer to relate to the Amazon recruitment process. But also, one of the key items in their plan was that the dependence on AI is increasing every day, so moving forward we will have AI in self-driving cars on our streets and in our phones. Basically, we will be taking suggestions from algorithms, related to finance or on a personal level, which might have an impact on your personal life. They're trying to address these concerns beforehand from a technology standpoint, so that all the organisations and developers working in this space have some kind of common ground and common practices adopted across the board, so that whatever algorithm a developer or company is building has to follow certain guidelines.

HE: What could these guidelines be?

SKT: There are seven points in the regulation today that are mainly focused upon. One is transparency, the second is auditability of the AI development process, and then accountability - who is to be held accountable at different stages - and also the quality of the algorithm. Some algorithms are just suggestions, which might not have much impact, but there are algorithms that could be deciding your loans, or your recruitment into an organisation, or something else which has a much more personal impact on you as an individual. So, these kinds of algorithms need much stricter practices in place.

HE: Do you think yourself that AI is at such a place now that it should dictate who a company hires?

SKT: I think it's actually being used in many organisations today, and many people don't realise that, or don't even question that these algorithms have some kind of biases in them. And now with the advancements in algorithms like GPT-3, which basically generates text for you like a human, it's almost impossible to distinguish between a human and an algorithm anymore. But the developers of GPT-3 have clearly stated that the algorithm is actually biased against certain groups of people and in different domains. So, people have to be careful - they cannot just throw the words AI and algorithm into an application, they have to be careful about evaluating the impact of these algorithms as well.

HE: I guess an AI algorithm will never be better than the data that it gets, so if there is inherent bias in whoever annotated or created the data, then of course the algorithm will also be biased?

SKT: Yes, definitely.
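To make the point concrete (a hypothetical, made-up example, not Amazon's actual system or data): a model that simply learns the patterns in skewed historical hiring records will reproduce the skew exactly, which is why "the algorithm is only as good as its data."

```python
# Hypothetical historical records: (group, hired) pairs.
# The sample is skewed: group "m" was hired at a much higher
# rate than group "f" in the past.
history = (
    [("m", 1)] * 80 + [("m", 0)] * 20 +   # 80% of "m" hired
    [("f", 1)] * 10 + [("f", 0)] * 40     # 20% of "f" hired
)

def hire_rate(records, group):
    """Fraction of a group that was hired in the training data."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model that scores candidates by their group's historical
# hire rate inherits the historical bias unchanged:
print(hire_rate(history, "m"))  # 0.8
print(hire_rate(history, "f"))  # 0.2
```

Nothing in the training step is "unethical" in itself; the bias enters entirely through the data, which is exactly the problem audit trails and data-provenance tools aim to surface.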

HE: As humans, we have to admit that we are biased. Even if we try not to be, we demonstrably are biased. But do you think it's possible to actually make ethical AI?

SKT: I think so, to a certain extent - maybe 90 percent, I would say. Because when we take these extra precautions, not just on paper but actually implementing them in practice, in tools and different platforms and services, then we are actually trying to address the issues. We're trying to think of it in a much broader way than our own biases: How will this algorithm impact the end users? How will this algorithm impact different cultures or different groups of people? When we start pushing ourselves to see these different perspectives, I think we will more easily identify many challenges in these algorithms.

HE: I guess regulations will typically come after there’s a problem, or after the technology field has developed and certain problems are evident?

SKT: Yeah, definitely. I think we're always seeing the problems first in this space. For instance, there are already some regulations in some US states where, if you are using an AI algorithm to recruit people, you have to get it audited before you can use it in practice. But also now with these regulations, there will be many more requirements in the healthcare industry or the automotive industry, where algorithms might have an impact on an individual's life.

HE: I guess it has been discussed quite a lot when you have self-driving cars and there's an accident, who actually is responsible, for example. But if you go to the medical field, have you thought about any possible ethical issues there?

SKT: Yes. I think one good example lately is Covid-19 research. There were many scenarios where people were using algorithms, or trying to build algorithms, to address Covid-19. There was research done in the US, I think a couple of months back, where when they evaluated the data, it was limited to people from three states across the whole United States. And even in those three states it was really limited in terms of the demographics of the people.

HE: So, then the algorithm will be very biased or not understand the full scope?

SKT: Yeah, exactly. It's pretty obvious that the algorithms are going to be biased. But this information is not clearly available to the general public or to the companies that are going to use these algorithms moving forward. These things need to have some kind of label or tracking mechanism, so that when algorithms have this kind of direct or indirect impact, there are practices in place. If not, they shouldn’t be used in real life.

HE: So, if you look at other companies, for example Google, how do you think they should address the problem of ethical versus unethical AI? Many threads to follow there obviously with Google…

SKT: I mean, they try to address this with ethical AI, but they recently fired someone from their own ethical AI team and pretty much discredited the person who was leading it. And there was a lot of backlash, because of the reason for firing her - her name was Timnit Gebru. They fired her because she was raising concerns related to one of their own algorithms.

HE: They didn’t like that?

SKT: No. [laughing]  

HE: So, it sounds like they should step up their game then?

SKT: Yeah. [laughing]

AI cannot be monitored to the level that we want. But what we can do with these kinds of regulations is that we try to take the necessary action. (...) We can’t monitor the issue 100 percent, but we can try to limit the exposure of it.

HE: How do you think AI should be monitored? I mean, since it would affect us in so many ways in society in the future.

SKT: It cannot be monitored to the level that we want. But what we can do with these kinds of regulations is try to take the necessary action, and because there is a regulation, one has to try to follow it. So, we can’t monitor the issue 100 percent, but we can try to limit the exposure of it.

HE: Yeah. I think one interesting thing is to talk about actual problems that have happened where you can show a strong bias or unethical AI. You mentioned the recruitment at Amazon and the Covid research and data. Do you have any other examples of actual problems that have happened with machine learning algorithms?

SKT: Yeah, I think there were many examples with, for instance, self-driving cars. One good example was with Tesla in the States. There could be many reasons of course - we cannot pinpoint the exact cause - but there was this truck lying flat on the highway, and the Tesla thought it was more like a billboard or something and just went and hit the truck, because it didn't realise that it was a truck lying on the road. One explanation for that particular issue was that the algorithm was probably trained mostly on data from sensors and cameras with certain viewpoints - certain angles that they take in images from - and those images didn’t really reflect that particular scenario. Of course one cannot predict all kinds of scenarios, but it's still a risk. The person was fine, but it was a risk.

HE: Yeah. That sounds a little bit like stupid AI rather than unethical. One case that I've heard of, for example, when it comes to self-driving cars is the ethical dilemma of: “Should I save my own driver, even if it means killing three people on the street?”

SKT: Oh, yes. I read this week - it was quite interesting - about this analysis where the researchers asked this question in different geographical locations, like in Asia: How do people in Asia answer this ethical question, and how is the actual AI algorithm working or trying to address it? For instance in Asia, the younger population said that - if there is an old person, a younger person and a baby, for instance - they would probably sacrifice themselves to save the elders, because it’s in their culture. But in the United States and the Western parts of the world, they were a bit more selfish and said: “Okay, we want to save ourselves rather than the others.” So, yeah, there are cultural differences as well. The researchers were actually arguing that AI algorithms, and the way they behave, should be different in different geographies.

HE: That’s interesting.

SKT: Yeah [laughing], it’s really interesting.

HE: But I mean, it sounds like a very highly theoretical example, and I'm pretty sure that when I got my driver's license, they didn't discuss any of this: “Who should you steer towards if you had to kill one person?” It's so unlikely that it will ever occur, and you have to react directly, not give thought to where you’re steering if you are about to crash. But I guess one could argue: “Okay, the AI actually has higher capacity than any human and could make those choices”, and then maybe it becomes interesting, but I'm not sure.

SKT: It’s really hard to pinpoint more specific examples, but there are definitely a lot more of them. So, if anyone is interested in this particular topic, they can just search for the EU AI White Paper and read through many other examples that they have pointed out, as well as the reasoning for these kinds of regulations. I think that's a good start for many people.

HE: Another such area where you might think about ethical AI is the case of deepfakes, where you can nowadays create videos where it looks like President Biden said something, but actually he didn't. And I guess they did this with the Queen of England as well.

SKT: This is part of the broader spectrum that I was talking about earlier with fake news and misinformation. It's kind of fuelled by AI. Maybe 10 or 15 years back it was much more manual, spreading all this information through news and other channels, but now you just press a button and it gets spread to everyone instantly. So, you don't even have to monitor it sometimes, because the algorithms that we have today act almost like humans.

HE: Yeah, a very interesting subject to talk about. Maybe to end off here, what do you think the future for ethical AI is? Will this be one of the concerns as we continue to develop AI into the future?

SKT: Yeah, I'm really glad to see that many people, researchers, organisations and governments are talking about it and trying to address this issue. Moving forward, at least in the next one to two years, we will see a lot more development in this space. This is actually one of the hottest topics according to Gartner’s research last year, when they surveyed many organisations and business leaders in the industry. It’s a growing topic, and we will see a lot more hype in this space in the next couple of years.

HE: Thank you very much, Sukesh Kumar.

SKT: Thanks for having me.
