Use Sparse Autoencoders to Discover Unknown Concepts, Not to Act on Known Concepts
Published: Jun 30, 2025
Last Updated: Jun 30, 2025
Authors: Kenny Peng, Rajiv Movva, Jon Kleinberg, Emma Pierson, Nikhil Garg
Abstract
While sparse autoencoders (SAEs) have generated significant excitement, a series of negative results has added to skepticism about their usefulness. Here, we establish a conceptual distinction that reconciles competing narratives surrounding SAEs. We argue that while SAEs may be less effective for acting on known concepts, they are powerful tools for discovering unknown concepts. This distinction cleanly separates existing negative and positive results, and suggests several classes of SAE applications. Specifically, we outline use cases for SAEs in (i) ML interpretability, explainability, fairness, auditing, and safety, and (ii) the social and health sciences.
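For readers unfamiliar with the object under discussion, a minimal sketch of a sparse autoencoder may help: an SAE maps a model activation into a higher-dimensional, nonnegative feature vector (whose sparse active entries are candidate "concepts") and decodes it back, trained to balance reconstruction error against a sparsity penalty. The sizes, weights, and penalty coefficient below are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d_model activations, an overcomplete d_sae dictionary
d_model, d_sae = 8, 32
W_enc = rng.normal(0.0, 0.1, (d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0.0, 0.1, (d_model, d_sae))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU yields nonnegative feature activations; with an L1 penalty
    # during training, most entries are driven to zero (sparsity)
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(f):
    # Reconstruct the activation as a sparse combination of dictionary columns
    return W_dec @ f + b_dec

x = rng.normal(size=d_model)        # a stand-in model activation
f = encode(x)                       # sparse feature vector
x_hat = decode(f)                   # reconstruction
# Training objective: reconstruction error plus an L1 sparsity penalty
loss = np.sum((x - x_hat) ** 2) + 0.01 * np.sum(np.abs(f))
```

On the paper's distinction: "discovering unknown concepts" corresponds to inspecting which feature directions in `W_dec` activate on which inputs, whereas "acting on known concepts" corresponds to steering by editing `f` before decoding.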