this post was submitted on 09 Jun 2025
497 points (96.8% liked)

Technology


In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
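A rough, self-contained sketch of the core mechanism, not the paper's code: fit a linear probe for a "toxicity direction" in activation space, then apply an inference-time intervention that removes that direction from each activation. The synthetic activations and names such as toxicity_direction are illustrative assumptions, and the paper's actual ITI setup differs in detail.

```python
# Rough sketch (not the paper's code): find a linear "toxicity direction" in
# activation space with a probe, then intervene at inference time by removing
# the component of each activation along that direction. Synthetic data stands
# in for real hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 64, 2000                     # hidden size and number of sampled activations

# Pretend toxic generations shift the hidden state along one fixed direction.
true_direction = rng.normal(size=d)
true_direction /= np.linalg.norm(true_direction)
labels = rng.integers(0, 2, size=n)                         # 1 = toxic, 0 = clean
acts = rng.normal(size=(n, d)) + np.outer(2.0 * labels, true_direction)

# 1) Linear probe: the learned weight vector is the estimated toxicity direction.
probe = LogisticRegression(max_iter=1000).fit(acts, labels)
toxicity_direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

# 2) Inference-time intervention: project that direction out of each activation.
def detoxify(activation, alpha=1.0):
    return activation - alpha * (activation @ toxicity_direction) * toxicity_direction

steered = np.apply_along_axis(detoxify, 1, acts)
print("probe accuracy before intervention:", probe.score(acts, labels))
print("probe accuracy after intervention: ", probe.score(steered, labels))
```

After the intervention the probe can no longer separate the two groups (accuracy falls to roughly chance), which is the toy analogue of toxicity being easy to remove once it lies along a clean linear direction.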

[–] Mr_Dr_Oink@lemmy.world 20 points 2 days ago (2 children)

So is it saying essentially that in order to not output garbage, it needs to know first what garbage is?

Is it just me that thinks this seems like a no-brainer?

It almost draws parallels to many societal issues. Knowledge is power.

People tend towards intolerance and hatred when they don't understand the thing they are angry at. The more they know, the better they behave.

[–] halowpeano@lemmy.world 9 points 2 days ago (1 children)

No, it's more of a technical discussion. Many people might believe that in order to avoid toxicity, you just train a model on "good" non-toxic data and then apply toxicity removal techniques to address emergent toxicity that the model might spit out. This paper is saying they found it more effective to deliberately train the model on a small percentage of "bad" toxic data, then apply those same toxicity removal techniques. For some reason, that actually generated less total toxicity. It's an interesting result. A wild guess on my part, but I'm thinking training the model with toxic content "sharpened" the toxicity when it was generated, making it easier for those removal tools to identify it.
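That guess matches the abstract's framing: with more toxic pretraining data, the toxicity concept becomes less entangled, i.e. closer to its own linear direction, so projecting it out costs less general capability. A crude simulation of that trade-off, with entirely made-up directions and numbers rather than anything measured in the paper:

```python
# Crude simulation of the "sharper toxicity direction is easier to remove" guess.
# It only illustrates why an entangled direction makes the detox projection more
# damaging; nothing here is measured from the paper.
import numpy as np

rng = np.random.default_rng(1)
d, n = 64, 5000

def run(overlap):
    """overlap = cosine similarity between the toxicity and 'capability' directions."""
    c = rng.normal(size=d); c /= np.linalg.norm(c)                   # capability direction
    r = rng.normal(size=d); r -= (r @ c) * c; r /= np.linalg.norm(r) # orthogonal unit vector
    t = overlap * c + np.sqrt(1.0 - overlap ** 2) * r                # toxicity direction

    capability = rng.normal(size=n)                   # useful signal in each activation
    toxic = rng.integers(0, 2, size=n).astype(float)  # whether the activation is "toxic"
    acts = np.outer(capability, c) + np.outer(2.0 * toxic, t) + 0.1 * rng.normal(size=(n, d))

    # Detox by projecting out the toxicity direction (pretend we identified t exactly).
    detoxed = acts - np.outer(acts @ t, t)

    tox_gap = (detoxed[toxic == 1] @ t).mean() - (detoxed[toxic == 0] @ t).mean()
    cap_kept = np.polyfit(capability, detoxed @ c, 1)[0]  # ~ fraction of capability signal left
    return tox_gap, cap_kept

for overlap in (0.9, 0.0):
    tox_gap, cap_kept = run(overlap)
    print(f"overlap {overlap}: toxicity gap after detox = {tox_gap:.2f}, "
          f"capability signal kept = {cap_kept:.2f}")
```

When the toxicity direction heavily overlaps the "capability" direction, the detox projection also wipes out most of the useful signal; when the direction is clean, the same projection barely touches it.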

[–] MangoCats@feddit.it 3 points 2 days ago

Toxicity is everywhere; you can't recognize that "Drill baby drill" has sexual connotations if you've never been exposed to a sexual double entendre like that before.

[–] MangoCats@feddit.it 6 points 2 days ago

Is it just me that thinks this seems like a no-brainer?

Yes, and no. When raising our children, my wife prefers the "ban the bad stuff" approach. I don't encourage exposure to bad stuff, but when my kid wants to buy and watch a raunchy movie, instead of yelling "NO!" and making him put it back, I let him buy it and we watch it together, pausing to point out the unrealistic and awful parts and explain how imitating these things in real life can cause problems for you.