AI is extremely powerful and can be used in ways that negatively affect society. To develop and use AI systems responsibly, AI developers must consider the ethical issues inherent in AI. They must have a realistic view of their systems and their capabilities, and be aware of the different forms of bias potentially present in those systems. With this awareness, developers can avoid unintentionally creating AI systems that have a negative rather than a positive impact.

Even outside the realm of science fiction, some conversations about the ethics of AI suppose that AI will become sentient: that is, it will become self-aware, develop its own moral code, and turn against its inventors. Nick Bostrom of Oxford University says that, in the very long term, machine intelligence might eventually surpass human intelligence, and that when that happens, there may be risks to human society. Professor Stephen Hawking expressed reservations about the future of AI by saying, "The rise of powerful AI will either be the best or the worst thing to happen to humanity. We do not yet know which." Elon Musk went even further by saying, "AI is more dangerous than nuclear weapons." But there are also ethical issues concerning AI that must be addressed in the medium term, in the short term, and right now.

In developing AI systems, experts must guard against introducing bias, whether gender, racial, social, or any other form of bias. Early AI systems were prone to bias; one example was image recognition software that reflected the unconscious bias of its developers. Due to poorly selected training data, it associated scenes showing kitchens, laundry, and shops with women, and scenes showing sports, coaching, and shooting with men.

AI-powered facial recognition systems provide another example of an unintentionally biased system. In one case, a system proved far more effective at recognizing individuals with lighter skin tones than individuals with darker skin tones. Again, this was traced back to training data that was not sufficiently varied: it featured more individuals with lighter skin tones than with darker skin tones.

These types of bias can be extremely dangerous when such systems are used in real-life settings. For example, AI-powered risk assessment systems are used in courts to help predict the probability that a person will re-offend, and hence to provide guidelines for sentencing or granting parole based on the calculated risk of recidivism. There is concern that these systems may be biased against people of color. Developers of AI systems can guard against introducing bias by providing representative training data and by performing regular tests and audits to ensure the system is performing as expected.

Another ethical issue that has arisen is whether a person is entitled to know when they are speaking to a human being and when they are speaking to a bot. Many bots are now indistinguishable from human beings, especially during short conversations, so a lack of transparency can exacerbate a lack of trust in AI systems. The sudden discovery that one is not speaking to a human being but to a bot can be unsettling and can make the transaction feel unequal.

Trust is key in developing useful, successful AI systems. For developers, there are four aspects of AI that help people perceive it as trustworthy: transparency, accountability, privacy, and lack of bias.

Transparency: people should be aware when they are interacting with an AI system and understand what their expectations for the interaction should be.
Accountability: developers should create AI systems with algorithmic accountability, so that any unexpected results can be traced and undone if required.

Privacy: personal information should always be protected.

Lack of bias: developers should use representative training data to avoid bias, and perform regular audits to detect any bias creeping in (a minimal sketch of such an audit appears below).

Finally, you, as a consumer of services provided by AI, should take an informed and balanced view. Remember that every news article, every photo or video shared on social media, and every advert has been created by someone, and their motivations may be different from yours.
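To make the idea of a regular bias audit concrete, here is a minimal sketch in Python. It assumes a classifier's predictions have already been collected alongside ground-truth labels and a demographic attribute for each test example; the record fields ("group", "label", "prediction") and the disparity threshold are illustrative assumptions, not a standard auditing protocol.

```python
from collections import defaultdict

def audit_accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["prediction"] == r["label"]:
            correct[r["group"]] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical test records, e.g. a facial recognition test set
# annotated by skin tone (field names are illustrative assumptions).
records = [
    {"group": "lighter", "label": 1, "prediction": 1},
    {"group": "lighter", "label": 0, "prediction": 0},
    {"group": "darker",  "label": 1, "prediction": 0},
    {"group": "darker",  "label": 1, "prediction": 1},
]

accuracy = audit_accuracy_by_group(records)
print(accuracy)  # e.g. {'lighter': 1.0, 'darker': 0.5}

# Flag the system for review if performance differs too much across groups.
gap = max(accuracy.values()) - min(accuracy.values())
if gap > 0.05:  # illustrative threshold, not a standard
    print(f"Bias alert: accuracy gap of {gap:.2f} between groups")
```

In practice such an audit would run over a large, carefully sampled test set and track several metrics per group (for example, false positive and false negative rates, not just accuracy), but the principle is the same: measure performance for each group separately and flag any disparity for investigation.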