Welcome to the fourth week of Detect and Mitigate Ethical Risks. Your instructor for this week is, again, Renee Cummings. Renee, welcome back.

It's always a pleasure to be with you, Megan, during this part of the course.

Renee, we're talking about transparency this week. But what do we mean when we say transparency and explainability in data-driven technology specifically?

Well, when it comes to transparency, it's about access to information, and unrestricted access to information. Stakeholders want to know: what are the processes? What are the policies? What are the procedures in place when it comes to this technology? When it comes to explainability, it's about the explanation. Why is this piece of technology doing what it's doing, when it's doing it, and how it's doing it? That's really important, because what people want to know is: what is the result that I have received, and why have I received that result? Very, very critical. Let me just throw this in: explainability is critical when you're thinking of algorithmic decision-making systems in the criminal justice system, where the decisions are high-stakes. When someone receives a decision, be it a sentence or a rejection for a financial service, you've got to show, as an ethical data scientist, that you know why that decision was made, and they deserve an explanation. That's basically the difference between transparency and explainability.

Yeah, and we're certainly seeing that today in all of the issues arising around ethics and technology, especially within the criminal justice system and the inequities there. I'm going to go a little off script here, Renee, and ask you, as a criminologist: how have ethics and the ability to be transparent and to explain decisions affected what you do in your roles and in your consulting?
Well, because so much of the high-stakes decision-making in AI is happening in the criminal justice system, it has really brought me to the fore when it comes to ethics. We're seeing sentencing, parole, facial recognition, over-surveillance of communities. Many of the AI technologies being designed right now are being designed for criminal justice. What that shows us is that we've got to be conscious, cautious, and diligent when it comes to detecting and mitigating risks. Because with these technologies, many of them have been pulled back; there is now stakeholder pushback against something like facial recognition. We're seeing the big tech companies now putting moratoriums on facial recognition. What that shows us is that because they did not really measure their risk appetite, and because they did not mitigate risks such as bias and discrimination, these big investments have now been put on hold. This is why it's so critical for emerging technologies to embrace an ethical approach: it saves you a lot of money in the long run, a lot of time, a lot of energy, and your reputation, if you detect and mitigate risks right now, as you'll learn in this course.

Absolutely. Well, thank you so much for your insight and for all the work that you're doing. I know our learners are really going to be engaged and become the ethical leaders they're striving to be, able to mitigate these risks, be transparent, and explain decisions for their organizations. Thank you, Renee.