Transfer learning
- tags
  - Machine learning
Links to this note
- Continual learning
- Distillation
- Few-shot learning
- Foundation models
- Knowledge Base Index
- Notes on: Embarrassingly Simple Self-Distillation Improves Code Generation by Zhang, R., Bai, R. H., Zheng, H., Jaitly, N., Collobert, R., & Zhang, Y. (2026)
- Notes on: LoRA Learns Less and Forgets Less by Dan Biderman, Jacob Portes, Jose Javier Gonzalez Ortiz, Mansheej Paul, Philip Greengard, Connor Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, John P. Cunningham (2024)
- Notes on: Training Language Models via Neural Cellular Automata by Dan Lee, Seungwook Han, Akarsh Kumar, Pulkit Agrawal (2026)
- Synthetic training data
- Time to threshold
- Zero-shot learning
Last changed | authored by Hugo Cisneros