Self-Improving RL

Self-Improving RL – The RL team is prototyping self-improving algorithms that use LLM-generated code inside reinforcement learning feedback loops. They are also developing a heuristic to quantify and measure the self-improving power of algorithms; a minimal sketch of such a loop follows below.
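This text does not specify CLIMB-1's internals, so the following is only an illustrative Python sketch of a generic closed-loop improvement cycle: a generator proposes a candidate variant of the current algorithm, the variant is scored in an environment, and it is kept only if it improves the score. The functions `propose_variant` and `evaluate` are hypothetical stand-ins for LLM-generated code changes and an RL reward signal, respectively.

```python
import random

# Illustrative sketch only; names are hypothetical, not CLIMB-1's API.
# Outer loop: propose a variant, score it, keep strict improvements.

def evaluate(step_size: float, episodes: int = 200) -> float:
    """Toy stand-in for an RL reward signal: a noisy objective
    whose performance peaks at step_size = 0.1."""
    noise = sum(random.gauss(0, 0.01) for _ in range(episodes)) / episodes
    return -(step_size - 0.1) ** 2 + noise

def propose_variant(step_size: float) -> float:
    """Stand-in for an LLM proposing a code change; here it just
    perturbs a single hyperparameter."""
    return max(1e-4, step_size + random.gauss(0, 0.02))

def climb(generations: int = 50) -> float:
    """Closed-loop improvement: accept a candidate only if it
    scores higher than the incumbent."""
    current = 0.5
    best_score = evaluate(current)
    for _ in range(generations):
        candidate = propose_variant(current)
        score = evaluate(candidate)
        if score > best_score:
            current, best_score = candidate, score
    return current

if __name__ == "__main__":
    print(f"final step size: {climb():.3f}")
```

Accepting only strict improvements keeps the loop monotone; a real system would also need guards against a candidate gaming the evaluation itself.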

Abstract. The rapid rise of large language models trained on code represents a significant leap for the field of self-improving algorithms. In this paper we explore these capabilities through the Closed Loop Improvement Model Builder, or CLIMB-1. Simultaneously, we develop an empirical, standardized heuristic, the Power of Self Improvement (PSI) table. The table evaluates the power of self-improving algorithms and furnishes the field of AI safety with tools to discern risks among them. We conclude by showing that although no current algorithm poses an existential risk, the field's potential to leverage LLMs necessitates further development of safety regulations and protocols such as the PSI table.
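The PSI table's actual contents are not given in this text. As one hypothetical way such a heuristic could be operationalized, the sketch below encodes a rubric as weighted criteria that sum to a single score; every criterion and weight here is an illustrative placeholder, not the paper's.

```python
from dataclasses import dataclass

# Hypothetical PSI-style rubric: weighted yes/no criteria summed
# into one score. Criteria and weights are placeholders.

@dataclass(frozen=True)
class Criterion:
    name: str
    weight: int

CRITERIA = [
    Criterion("modifies its own source code", 3),
    Criterion("improvements compound across iterations", 3),
    Criterion("acquires new capabilities unprompted", 2),
    Criterion("runs without a human in the loop", 2),
]

def psi_score(observed: set[str]) -> int:
    """Sum the weights of criteria the algorithm exhibits."""
    return sum(c.weight for c in CRITERIA if c.name in observed)

if __name__ == "__main__":
    print(psi_score({"runs without a human in the loop"}))  # -> 2
```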


I2 - Fusing neuroscience and AI to study intelligent computational systems. Contact us at interintel@uw.edu.