A new study by researchers at Google, Stanford University, DeepMind, and the University of North Carolina at Chapel Hill explores the novel tasks that large language models (LLMs) become able to accomplish as they grow larger and are trained on more data. The study sheds light on the relationship between the scale of LLMs and their “emergent” abilities.