this post was submitted on 14 Apr 2025
technology
An emergent behavior of LLMs is the ability to translate between languages. That is, we taught a model Spanish and we taught it English, and it automatically knows how to translate between them. If we taught it English and dolphin, it should be able to translate anything with shared meaning.
Is it emergent?! I've never seen this claim. Where did you read this? Or do you just mean that it can work in any language it was trained on, accepting and returning tokens in whatever language the input or the request uses?
I mean, we didn't have to teach them to translate. That was unexpected by some people, though not everyone.
https://www.asapdrew.com/p/ai-emergence-emergent-behaviors-artificial-intelligence
Yeah, that article is so full of bullshit that I don't believe its main claim. It compares LLMs to the understanding built by children, says they make "creative content", and claims LLMs do "chain of thought" without prompting. It presents the two sides as if they were equal in logical rigor, as if the mystical interpretation is on the same level as the systems explanation. Sorry, but this article leaves me entirely unconvinced that I should take it seriously. There are thousands of websites that do natural-language translation using examples from existing media (Duolingo did this for a long time, and sold the results); simply mining that data gives you the basis to build a network of translations that reads like natural language, with no mysticism.
It can translate because the languages have already been translated. No amount of scraping websites can translate human language to dolphin.
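That point can be shown with a minimal, purely statistical sketch: given even a toy parallel corpus, simple co-occurrence counting recovers word-level translations, no emergent understanding required. The corpus, the scoring rule, and the `translate` helper here are all illustrative assumptions, not how any actual LLM or translation site works:

```python
from collections import Counter, defaultdict

# Toy parallel corpus (made-up examples): aligned English/Spanish pairs,
# a miniature stand-in for the already-translated text LLMs see at scale.
pairs = [
    ("the cat sleeps", "el gato duerme"),
    ("the dog sleeps", "el perro duerme"),
    ("the cat eats", "el gato come"),
    ("the dog eats", "el perro come"),
]

# Count how often each target-language word co-occurs with each source word.
cooc = defaultdict(Counter)
tgt_count = Counter()
for en, es in pairs:
    for s in es.split():
        tgt_count[s] += 1
    for e in en.split():
        for s in es.split():
            cooc[e][s] += 1

def translate(word):
    # Prefer target words that occur disproportionately often alongside
    # this source word, which discounts common filler like "el".
    return max(cooc[word], key=lambda s: cooc[word][s] / tgt_count[s])

print(translate("cat"))     # -> gato
print(translate("sleeps"))  # -> duerme
```

Crucially, this only works because the English and Spanish sides were already aligned by humans; there is no such parallel corpus for dolphin.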
Assuming this is an emergent property of LLMs (and not the result of getting lucky with which pieces of the training data were memorized in the model weights), it has so far only been demonstrated with human languages.
Does dolphin communication share enough homology with human language in terms of the embedded representations of its utterances (clicks?)? Maybe LLMs are a useful tool for starting to probe these questions, but it seems excessively optimistic and unscientific to expect a priori that training an LLM of any type, especially a sensorily unimodal one, on non-human sounds would produce a functional translator.
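One way researchers probe whether two signal systems share representational structure is to compare the geometry of their embedding spaces rather than the vectors themselves, in the spirit of representational similarity analysis: compute pairwise similarities within each space and correlate the two signatures. A minimal sketch, where the tiny "embeddings" are made-up toy vectors standing in for learned ones:

```python
import math
from itertools import combinations

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relational_signature(vectors):
    # Pairwise similarity structure: the "shape" of the space,
    # independent of which particular axes it happens to use.
    return [cosine(vectors[i], vectors[j])
            for i, j in combinations(range(len(vectors)), 2)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical embeddings for three "concepts" in two signal systems.
human = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
# Same relational structure, expressed along different axes:
dolphin = [[0.0, 1.0], [0.1, 0.9], [1.0, 0.0]]

score = pearson(relational_signature(human), relational_signature(dolphin))
print(score)  # 1.0 here, because the toy spaces were built to match
```

A high correlation would only say the two spaces have similar internal geometry; whether dolphin clicks actually yield embeddings with any such structure is exactly the open empirical question.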
Moreover, from DeepMind's writeup on the topic: