Welcome to week five of the collaborative forecasting course. You have learned all kinds of gameful techniques for gaming out the future: simulations powered by personal predictions, drawing out consequences with the futures wheel using both positive and shadow imagination, and even card games that can help you generate exciting ideas you never thought of before for a future that benefits all.

In this last week of the course, we're going to talk about the best way to conclude any game, which is with an epic win. An epic win, in gaming slang, usually refers to an incredible outcome: something amazing that happens that you might not have even thought was possible. It was a come-from-behind victory; it was a huge success for a strategy that had never been tried before. It's something that really surprises you and catches you off guard with just how positive the outcome could be. When it comes to shaping the future, designing the future, and anticipating future consequences, we're all looking for that epic win: an epic win that allows as many people as possible to benefit from a given future, a win where we all come together and collaborate to achieve something at a higher order or a higher scale, a victory that will affect millions or billions of lives. That's the scale of victory we're going for.

To show you, on a practical level, how an epic win might come to be, let me tell you about one of the most shocking and effective future forecasting campaigns that I've ever encountered. This campaign was called Slaughterbots. We didn't create this at the Institute for the Future; it was actually created by researchers at the University of California, Berkeley. They created it to address their growing unease with the evolution of a specific technology: drones, and particularly their possible weaponization. They created an immersive future scenario in the form of a video called Slaughterbots.
We're going to put a link to this video if you'd like to watch it. I do want to caution you that there is violence in this video; it was intentionally created to be disturbing, to make people really feel horrified by the potential for weaponized drones. So if you watch it, it might make you feel quite bad. You don't have to watch it, but if you'd like to see a really effective example of immersive forecasting, you can follow the link to the Slaughterbots scenario.

If you'd rather not watch it, I will just summarize it for you. They imagined a future where anybody can attach essentially a bomb or a gun to a drone, an unmanned aerial vehicle, send it out into the world, and use facial recognition to target specific individuals, whether they are people who have been chosen intentionally or matched by their social media activity. Perhaps you might want to target people who are members of a particular Facebook group, or who have liked the posts of a certain political figure. They have imagined a really targeted way of enacting violence remotely, from a distance, with these vehicles. It's quite distressing to look at.

Now, this is an interesting example of moving from the kind of future that keeps us up at night, that makes it hard to sleep, toward a future that gets us out of bed in the morning: the opportunity to do something good for the world, the opportunity to enact a solution. What happened after this scenario was released is that people who make drones got involved in the conversation. In particular, Chris Anderson, who's one of the leading figures in the drone technology space and one of the first leading technologists to adopt drones as his signature passion many years ago, got involved in this conversation. He watched the scenario and posted a really compelling and vulnerable blog post called 'We are the people they warned you about.' He wrote about his role in developing smart drone technology.
He said: I'm an enabler of what's described in the Slaughterbots scenario, but I have no idea what I should do differently. He started to grapple with the ethical implications of the technology he was making. He wrote about what he did right after he watched this video. He says: I opened my laptop and returned to work, which happened to be writing the code for, yes, swarming drones, the exact technology that's depicted in the video. Chris Anderson says: I paused for a moment and wryly thought, I guess this is how that future happens, and then went right back to coding. What else could I have done? Choose not to write swarming code at all because it might someday kill us? But if I stopped writing the code, someone else would just do it instead, and I would have to find something less interesting to work on. Should I write better code that is less likely to run amok? But I don't even know what that means; at this point, I'm lucky if the code works at all.

But Chris didn't just stop there. He didn't punt on the question of whether he has some kind of ethical responsibility. He reached out to one of the key researchers behind the scenario: Stuart Russell, the AI researcher at UC Berkeley who helped make the video, who closed the scenario with a call to action, and who runs the website autonomousweapons.org to raise awareness of this issue. Chris Anderson wrote to this researcher and asked: what should I, as a drone technology leader, do? Stuart had an answer.
He said the answer is: support a treaty banning lethal autonomous weapons. Cooperate in developing and complying with some kind of treaty mechanism analogous to the Chemical Weapons Convention. Support laws like know-your-customer laws, which make technology companies responsible for who they sell their technology to and how that technology is used. You can support notifications and checks for large-scale purchases that might indicate an intended malevolent use, so that you can prevent a large-scale diversion of this technology toward violence.

This was an amazing outcome: somebody who makes a technology joined in conversation with somebody who researches the ethical implications of that technology, and together they were able to publicize and advocate for an epic win, turning this incredibly dark, apocalyptic scenario into a community-scale solution in which everybody works together to achieve collective benefit. That's what an epic win future looks like: when you can turn what's keeping you up at night into something that gets you out of bed in the morning, where you can feel hope and optimism that there are concrete things you can do to solve the problems, mitigate the risks, and do something that benefits all. That's an epic win.

In the next video, I'm going to share with you some additional epic wins that have been generated out of future forecasting practices in recent months, and then I will share some techniques to help you develop your own epic win.