Talk: The Importance of Open-Endedness in AI and Machine Learning

Open-ended Evolution, Artificial Intelligence
Kenneth Stanley

Why should we care about open-endedness?

There is no algorithm we have built that would be worth coming back to a billion years from now to see what happened. And yet we are inside such a system, and such a system produced us.

Evolution is a seemingly open-ended process, for which we only have access to the current and past results of a single run. It is an algorithm capable of generating endless surprises over the course of billions of years.

But open-endedness also happens outside of “strict” evolution: it appears in the history of art, science, and technology. We humans are both the product of an open-ended process and open-ended ourselves.

How to achieve it?

No algorithm yet exists worth running forever. We haven’t been able to create open-endedness yet.

  • Usually there is an objective function, which means the algorithm converges or gets stuck.
  • Even more innovative ideas eventually reach diminishing returns (co-evolution, self-play, GANs, Alife worlds).
  • This is true for deep learning, optimization, evolution, etc.

So far, it seems we are still missing something.

Early insights with NEAT

What this algorithm tells us is: for a process to be open-ended, it needs to be unbounded. It has to remain possible to make more complex things.
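This idea can be sketched in code. The following is a toy stand-in for NEAT-style complexification, not the actual NEAT algorithm: a genome is just a list of connections, and an add-node mutation can always split an existing connection, so there is no upper bound on how complex the network can get.

```python
import random

def add_node_mutation(genome, next_node_id):
    """Split a random connection (a, b) into (a, new) and (new, b)."""
    a, b = random.choice(genome)
    genome.remove((a, b))
    genome.append((a, next_node_id))
    genome.append((next_node_id, b))
    return next_node_id + 1

genome = [(0, 1)]          # a single input -> output connection
next_id = 2
for _ in range(5):         # the genome can keep growing without bound
    next_id = add_node_mutation(genome, next_id)

print(len(genome))         # each mutation adds one net connection: 6
```

Real NEAT also tracks innovation numbers and connection weights; the point here is only the unbounded growth.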


An experiment about letting people “breed” pictures (with NEAT evolving CPPNs under the hood).

An important observation is that the people who obtained interesting pictures were never actively looking for that final product.
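The core idea of a CPPN can be illustrated in a few lines. This is a minimal sketch, not the evolved networks Picbreeder actually used: a picture is a function from (x, y) coordinates to a value, built by composing simple functions; evolution would change the composition, and here a hand-written one stands in for it.

```python
import math

def cppn(x, y):
    # A hand-written composition of simple functions standing in for
    # an evolved network; regularities (symmetry, repetition) emerge
    # from the function choices themselves.
    return math.sin(3.0 * x) * math.cos(3.0 * y) + math.exp(-(x * x + y * y))

# Render a tiny 8x8 "image" over [-1, 1] x [-1, 1].
size = 8
image = [[cppn(-1 + 2 * i / (size - 1), -1 + 2 * j / (size - 1))
          for i in range(size)]
         for j in range(size)]
print(len(image), len(image[0]))
```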

Novelty search

The idea is to do divergent search instead of convergent search and to remove the objective function. It sometimes appears to solve problems better than optimization does.

There is also a need for stepping-stone collectors (SSCs): a process that stores results in a growing archive. The more stepping stones you have, the more places you can get to. In a way, Picbreeder and natural evolution are both stepping-stone collectors.
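The archive mechanism can be sketched as follows. This assumes a toy 2-D behavior descriptor and a fixed novelty threshold, which are illustrative choices, not the original paper's settings: an individual's novelty is its average distance to its nearest neighbors, and sufficiently novel individuals join the archive.

```python
def novelty(behavior, others, k=3):
    """Mean Euclidean distance to the k nearest behaviors in `others`."""
    dists = sorted(sum((a - b) ** 2 for a, b in zip(behavior, o)) ** 0.5
                   for o in others)
    return sum(dists[:k]) / min(k, len(dists))

archive = [(0.0, 0.0)]                      # the stepping-stone collection
population = [(0.1, 0.0), (0.9, 0.9), (0.1, 0.1)]

threshold = 0.5
for b in population:
    others = archive + [p for p in population if p is not b]
    if novelty(b, others) > threshold:
        archive.append(b)                   # a new stepping stone is kept

print(archive)                              # only the distant point is added
```

Note that (0.9, 0.9) enters the archive while the two points clustered near the origin do not: the search is rewarded for reaching new places, not for scoring well.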

This eventually led to Quality Diversity (QD). The transition happened because there was still a need to communicate to the system what kind of results we want.

However, QD is certainly not one of those algorithms worth running for a billion years: the space of possibilities gets filled progressively, and no new possibilities arise.
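A MAP-Elites-style loop, the canonical QD algorithm, makes this limitation concrete. The domain here is an assumed toy: a solution is a number x in [0, 1], its behavior bin is int(10 * x), and its quality is x itself. The archive keeps the best solution per bin, and once every bin is filled and refined, nothing new can ever appear, because the descriptor space is bounded in advance.

```python
import random

random.seed(0)
bins = {}                          # behavior bin -> best solution so far
for _ in range(1000):
    x = random.random()            # random "mutation" for simplicity
    cell = min(int(10 * x), 9)
    if cell not in bins or x > bins[cell]:
        bins[cell] = x             # keep the elite of this cell

print(len(bins))                   # all 10 cells fill up quickly
```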

Towards achieving Earth’s creativity

On Earth, evolution is simultaneously creating new opportunities and possibilities and searching through them.

This idea led to Minimal Criterion Coevolution (MCC) algorithms. The example used is mazes and neural networks that drive robots through the mazes; these form two populations that coevolve. This is not quite like nature, where a tree is both a “problem” and a “solution”, but it is fine in a machine learning context. There is only a minimal criterion:

  • Each maze has to be solved by some solver
  • Each solver has to solve some maze

Whenever an individual of either population passes the minimal criterion, it gets to reproduce. This is inspired by nature, where the minimal criterion is reproduction.
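One coevolution step can be sketched with toy stand-ins, which are assumptions made for brevity rather than the paper's actual setup: a “maze” is just a difficulty number, a “solver” a skill number, and a solver solves a maze when its skill is at least the maze's difficulty. Anything that meets its minimal criterion reproduces with a small mutation; there is no objective function anywhere.

```python
import random

random.seed(1)

def mcc_step(mazes, solvers):
    # Minimal criterion: a maze must be solved by someone,
    # a solver must solve something.
    solved = [m for m in mazes if any(s >= m for s in solvers)]
    solving = [s for s in solvers if any(s >= m for m in mazes)]
    # Those meeting the criterion reproduce with a small mutation.
    new_mazes = solved + [m + random.uniform(0.0, 0.2) for m in solved]
    new_solvers = solving + [s + random.uniform(-0.1, 0.2) for s in solving]
    return new_mazes, new_solvers

mazes, solvers = [0.5, 2.0], [0.6, 0.4]
mazes, solvers = mcc_step(mazes, solvers)
print(len(mazes), len(solvers))    # the unsolvable maze and the failing
                                   # solver are culled; the rest reproduce
```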

MCC seems to give algorithms worth running for several years.

Recent work added a resource limitation:

A maze can only be used a finite number of times by maze solvers to satisfy the minimal criterion.

This pushes towards diversity in a much better way than enforced speciation does.
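The effect can be sketched like this (the cap of 2 uses per maze and the numeric stand-ins for mazes and solvers are illustrative assumptions): once a popular maze's slots run out, later solvers can no longer satisfy the criterion through it and must solve a different maze, which spreads the population out without any explicit speciation mechanism.

```python
CAP = 2                                   # assumed usage limit per maze
mazes = {"easy": 0.5, "hard": 1.5}        # name -> difficulty
usage = {name: 0 for name in mazes}

solvers = [0.6, 0.7, 0.8, 1.6]
passed = []
for skill in solvers:
    for name, difficulty in mazes.items():
        if skill >= difficulty and usage[name] < CAP:
            usage[name] += 1              # consume one of the maze's slots
            passed.append((skill, name))
            break

print(passed)
```

The 0.8 solver fails even though it could solve the easy maze, because that maze's slots are already used up; only solvers that reach a different maze still pass the criterion.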
