This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/StartledWatermelon on 2025-02-18 14:18:32+00:00.


TL;DR: Uniform pre-layer norm across the model's depth considered harmful. Scale the LayerNorm output by 1/sqrt(depth) at each block.
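Writing the TL;DR as a formula (my notation, not copied from the paper), where $\ell$ is the 1-based index of a Transformer block and $L$ the total depth:

```latex
\mathrm{LNS}(\mathbf{x}) \;=\; \frac{\mathrm{LayerNorm}(\mathbf{x})}{\sqrt{\ell}},
\qquad \ell = 1, \dots, L
```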

Paper:

Abstract:

In this paper, we introduce the Curse of Depth, a concept that highlights, explains, and addresses the recent observation in modern Large Language Models (LLMs) where nearly half of the layers are less effective than expected. We first confirm the wide existence of this phenomenon across the most popular families of LLMs such as Llama, Mistral, DeepSeek, and Qwen. Our analysis, theoretically and empirically, identifies that the underlying reason for the ineffectiveness of deep layers in LLMs is the widespread usage of Pre-Layer Normalization (Pre-LN). While Pre-LN stabilizes the training of Transformer LLMs, its output variance exponentially grows with the model depth, which undesirably causes the derivative of the deep Transformer blocks to be an identity matrix, so they barely contribute to the training. To resolve this training pitfall, we propose LayerNorm Scaling, which scales the output variance of the layer normalization inversely by the square root of its depth. This simple modification mitigates the output variance explosion of deeper Transformer layers, improving their contribution. Our experimental results, spanning model sizes from 130M to 1B, demonstrate that LayerNorm Scaling significantly enhances LLM pre-training performance compared to Pre-LN. Moreover, this improvement seamlessly carries over to supervised fine-tuning. All these gains can be attributed to the fact that LayerNorm Scaling enables deeper layers to contribute more effectively during training.
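To make the mechanism concrete, here is a minimal PyTorch sketch of a Pre-LN block with LayerNorm Scaling. This is not the authors' implementation; the module layout, dimensions, and the 1-based `layer_idx` argument are my own assumptions, and causal masking is omitted for brevity.

```python
import math
import torch
import torch.nn as nn


class ScaledPreLNBlock(nn.Module):
    """Pre-LN Transformer block with LayerNorm Scaling (illustrative sketch).

    Each LayerNorm output is multiplied by 1/sqrt(layer_idx), so the residual
    stream's variance no longer blows up in deeper blocks.
    """

    def __init__(self, d_model: int, n_heads: int, layer_idx: int):
        super().__init__()
        # layer_idx is 1-based: block 1 gets scale 1.0, block L gets 1/sqrt(L).
        self.scale = 1.0 / math.sqrt(layer_idx)
        self.ln_attn = nn.LayerNorm(d_model)
        self.ln_mlp = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention sub-block: normalize, damp by 1/sqrt(layer_idx), residual add.
        h = self.ln_attn(x) * self.scale
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # MLP sub-block with the same scaling.
        h = self.ln_mlp(x) * self.scale
        return x + self.mlp(h)


# Usage sketch: deeper blocks get progressively stronger damping.
blocks = nn.ModuleList(
    [ScaledPreLNBlock(d_model=512, n_heads=8, layer_idx=i + 1) for i in range(12)]
)
```

Stacked this way, the damping grows with depth, which is exactly the variance control the abstract describes.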

Visual abstract:

Highlights:

We measure performance degradation on the Massive Multitask Language Understanding (MMLU) benchmark (Hendrycks et al., 2021) by pruning entire layers of each model, one at a time, and directly evaluating the resulting pruned models on MMLU without any fine-tuning (Figure 2). Results: (1) most LLMs utilizing Pre-LN exhibit remarkable robustness to the removal of deeper layers, whereas BERT with Post-LN shows the opposite trend; (2) the number of layers that can be pruned without significant performance degradation increases with model size.
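If you want to reproduce this kind of layer ablation, here is a hypothetical sketch (not the paper's code). It assumes a Llama-style Hugging Face checkpoint whose decoder blocks live in `model.model.layers`; `evaluate_on_mmlu` is a placeholder for whatever MMLU harness you use.

```python
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM


def drop_one_layer(model, idx):
    """Return a copy of the model with decoder block `idx` removed (no fine-tuning)."""
    pruned = copy.deepcopy(model)
    kept = [layer for i, layer in enumerate(pruned.model.layers) if i != idx]
    pruned.model.layers = nn.ModuleList(kept)  # Llama-style layout assumed
    pruned.config.num_hidden_layers = len(kept)
    return pruned


base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
for idx in range(base.config.num_hidden_layers):
    pruned = drop_one_layer(base, idx)
    # score = evaluate_on_mmlu(pruned)  # placeholder: plug in any MMLU harness here
```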

...LayerNorm Scaling effectively scales down the output variance across the layers of Pre-LN, leading to considerably lower training loss and achieving the same loss as Pre-LN using only half the tokens.
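A rough, model-agnostic way to check this kind of variance claim yourself is to hook each block and record the variance of its output on a batch; the sketch below assumes a PyTorch model whose blocks you can iterate over (module names are illustrative, not from the paper).

```python
import torch


def block_output_variances(model, blocks, inputs):
    """Record the variance of each Transformer block's output for one batch.

    `blocks` is the model's list of Transformer blocks (e.g. model.model.layers
    for a Llama-style checkpoint); the names here are illustrative.
    """
    variances, hooks = [], []
    for block in blocks:
        def record(_module, _inp, out, store=variances):
            hidden = out[0] if isinstance(out, tuple) else out  # HF blocks return tuples
            store.append(hidden.float().var().item())
        hooks.append(block.register_forward_hook(record))
    with torch.no_grad():
        model(**inputs)
    for h in hooks:
        h.remove()
    return variances  # variances[i] is the output variance of block i
```

Plotting these per-layer variances for Pre-LN versus LayerNorm Scaling is essentially what the highlighted claim is about.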

Visual Highlights:

Don't miss the difference in y-axis scale between the right panel and the other two.

The explosive divergence of DeepNorm and MixLN -- which, of course, wasn't reported in either of the original papers -- tells a cautionary tale about whether a new method can live up to expectations. The scale of these pre-training runs is still small, though.
