Challenge Update #15 - End of Gradual Learning Round

August 11, 2017

The Gradual Learning round of General AI Challenge is almost over. The first solutions are already submitted, and we are very excited to see more of them!


We have hit a milestone of 500 participants, and the number of registrations is still growing! You can register for the Challenge and submit your solution right up until the deadline, August 14, 2017 (23:59 CET). Remember that you can submit your solution to compete in two different categories:

1. Quantitative prize: for the agent that solves all of the evaluation tasks in the fewest simulation steps through gradual learning.


This prize will use objective evaluation / black-box testing (no jury, no evaluation of designs, etc.).

1st place: $15,000

2nd place: $10,000

3rd place: $5,000

Total: $30,000

2. Qualitative prize: for the idea, concept, or design that shows the best promise for scalable gradual learning (this does not have to be a working AI agent; an idea described in a white paper alone can also win the prize).

This prize will be subjective and evaluated by the jury via white-box evaluation. The jury will evaluate the top 10 submissions from the quantitative prize along with “wild-cards” (at least 10 submissions selected by the GoodAI team). We do not guarantee feedback on the submitted solutions.


1st place: $10,000

2nd place: $7,000

3rd place: $3,000

Total: $20,000

We must assume that even the winning implementations may have limitations, errors, or missing features, or that we will discover we need slightly different requirements to get closer to general AI. We expect to learn a lot from this round and will reflect these lessons in the next one.

You can send in as many solutions as you’d like, but each new one must be significantly different from the others.

Submission procedure

You are expected to send an email with the basic information we request (name, contacts, competition category, evaluation hardware, target OS, and sharing preference), along with a white paper explaining the main principles and motivations behind the agent's design in a brief, structured manner (2 pages max.; if the white paper exceeds the 2-page limit, you must include a one-page summary at the beginning).


After that, if you compete for the quantitative prize, we will ask you to upload the data to Microsoft Azure Blob Storage.


We prepared a detailed step-by-step guide through the submission process on Linux and on Windows. You can also find the detailed requirements and submission email template in this guide.

If you want to review the evaluation details and criteria, you can find them in the Specifications document.


Microsoft Azure access for participants

If you would like to train or test your agent on Microsoft Azure, you can get free access from our technological partner Microsoft Czech Republic and Slovakia. To get started, please get in touch with Tomas Prokop. Provide your team name, specify whether you'd like to use VMs with GPUs (such as the NV6), and attach a copy of your registration confirmation email.

Note that participants who have already requested Azure access must additionally request access to GPU-equipped instances if they intend to use GPUs.

Enabling access to GPU-equipped VMs can take several workdays. We recommend requesting access as soon as you realize you or your team may need it.

Evaluation timeline & results

  • Evaluation start: August 15, 2017

  • Announcement of results: September 30, 2017

Stay tuned on our website, forum and social media!

Good luck!


General AI Challenge team
