[2304.15004] Are Emergent Abilities of Large Language Models a Mirage?

source link: https://arxiv.org/abs/2304.15004

Computer Science > Artificial Intelligence

[Submitted on 28 Apr 2023 (v1), last revised 22 May 2023 (this version, v2)]

Are Emergent Abilities of Large Language Models a Mirage?


Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher's choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance. We present our alternative explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities; (2) make, test and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how to choose metrics to produce never-before-seen seemingly emergent abilities in multiple vision tasks across diverse deep networks. Via all three analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.
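The metric-choice effect the abstract describes can be sketched numerically. The following is a minimal illustration, not the authors' model: it assumes a hypothetical smooth scaling curve for per-token accuracy, then compares a linear metric (the per-token accuracy itself) against a nonlinear one (exact match over a 10-token sequence, i.e. accuracy raised to the 10th power). The smooth curve looks gradual under the linear metric but appears to jump "emergently" under exact match.

```python
import math

# Hypothetical smooth scaling curve (illustrative only, not from the paper):
# per-token accuracy rises gently with log(model parameters).
def per_token_accuracy(log_params):
    return 1 - 0.9 * math.exp(-0.5 * log_params)

SEQ_LEN = 10  # exact match requires all SEQ_LEN tokens to be correct

for log_params in range(1, 9):
    p = per_token_accuracy(log_params)
    linear_metric = p            # continuous metric: changes smoothly with scale
    exact_match = p ** SEQ_LEN   # nonlinear metric: near zero, then a sharp rise
    print(f"scale=1e{log_params}  token_acc={linear_metric:.3f}  exact_match={exact_match:.3f}")
```

Under this toy curve, token accuracy improves steadily at every scale, while exact match stays near zero for small models and climbs steeply only at the largest scales, reproducing the "sharp and unpredictable" appearance the paper attributes to metric choice.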
Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2304.15004 [cs.AI]
  (or arXiv:2304.15004v2 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2304.15004

Submission history

From: Rylan Schaeffer [view email]
[v1] Fri, 28 Apr 2023 17:52:11 UTC (20,235 KB)
[v2] Mon, 22 May 2023 15:56:25 UTC (20,235 KB)
