General AI Challenge
Challenge Update #16 - Learn more about AI Race Avoidance (Round 2)
While the jury is busy assessing Round 1 of the General AI Challenge, we are preparing for Round 2! The second round will focus on AI race avoidance in the development of general artificial intelligence, and it will launch in November 2017.
At the AI Roadmap Institute, a partner organisation of GoodAI, we have been working on visualising different scenarios that could arise from an AI race, in which developers compete to be the first to achieve general AI and might neglect safety procedures, or agreements with other stakeholders, for the sake of first-mover advantage. The next round of the Challenge will open up wider discussion of many questions, including:
How do we incentivise the winner of the AGI race to honour its original agreements and/or share AGI with others?
We understand that cooperation is important for moving forward safely. But what if other actors do not understand its importance, or refuse to cooperate? How can we guarantee a safe future when there are unknown non-cooperators?
All these points are relevant to internal team dynamics as well. We need to invent robust mechanisms for cooperation between individual team members, teams, companies, corporations, and governments. Looking at the problem across these different scales, the pain points are similar.
What level of transparency is optimal, and how do we demonstrate it?
How do we stop the first developers of AGI from becoming a target?
With regards to the AI weapons race, is a ban on autonomous weapons a good idea? What if our enemies don’t follow the ban?
We recently held a workshop (pictured below) on AI race avoidance. You can find its outcomes in our new blog post on the subject: Avoiding the Precipice: Race Avoidance in the Development of Artificial Intelligence.
This work with the AI Roadmap Institute is the first step in preparing Round 2 of our worldwide General AI Challenge.
Round 2 will aim to tackle this difficult problem via citizen science and to promote AI safety research beyond the boundaries of the AI safety community. We will ask participants to propose practical steps that can be taken to avoid the pitfalls of the AI race and to advance the development of beneficial general AI.