Self-training

tags
Machine learning, Knowledge distillation, Language modeling, LLM

Implications for open-ended evolution

Viewed through the lens of open-ended evolution, pure self-training (a model training only on its own outputs) cannot produce radical improvement or genuinely novel behavior, since it is fundamentally limited to the distribution the model is already capable of modeling.

Only external inputs (a harness, an environment, etc.) can shift the model's output distribution significantly. Self-training can still help refine an existing distribution, for example by pruning its tails or strengthening certain subspaces.
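A toy sketch of this limitation (my own illustration, not from the note): model a "language model" as a categorical distribution, and one round of pure self-training as sampling from it, keeping only samples the model itself rates as likely (a stand-in for confidence filtering), and refitting by counting. The `min_prob` cutoff and the starting distribution are arbitrary choices for the demo. The support can only shrink or stay the same; nothing outside the original distribution can ever appear.

```python
import random
from collections import Counter

random.seed(0)

def self_train_step(probs, n_samples=10_000, min_prob=0.1):
    """One round of 'pure' self-training on a toy categorical model:
    sample from the current distribution, keep only samples the model
    assigns at least min_prob (pruning the tails), refit by counting."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    samples = random.choices(tokens, weights=weights, k=n_samples)
    kept = [s for s in samples if probs[s] >= min_prob]
    counts = Counter(kept)
    total = sum(counts.values())
    return {t: counts[t] / total for t in counts}

dist = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
for _ in range(3):
    dist = self_train_step(dist)

# The low-probability tail ("d") gets pruned, the remaining mass
# is renormalized, and no new token can ever enter the support.
print(sorted(dist))
```

Running this shows the distribution sharpening around its existing modes: exactly the "refinement without novelty" behavior described above. Any genuinely new token would have to come from outside the sampling loop.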

