this post was submitted on 19 Jul 2025
700 points (98.7% liked)

Science Memes

15874 readers

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



top 50 comments
[–] moseschrute@lemmy.zip 16 points 1 day ago* (last edited 1 day ago)

> vibe codes flight trajectory
> realizes physics isn’t as forgiving as a shitty SaaS startup
> everyone dies
> ✨vibe physics✨

[–] Armand1@lemmy.world 44 points 1 day ago (1 children)

LLMs are like Trump government appointees:

  • They hallucinate like they're on drugs
  • They repeat whatever they've seen on the internet
  • They are easily manipulated
  • They have never thought about a single thing in their lives

Ergo, they cannot and will not ever discover anything new.

[–] rumba@lemmy.zip 76 points 1 day ago (3 children)

CEOs seem to be particularly susceptible to AI marketing.

I'm kind of at the intersection of four decent-sized companies, and every CEO I see is going gaga over AI.

It's somewhere between "if you don't embrace this technology you'll be left behind" and "you can make your workforce many times faster with this one stupid trick."

[–] JollyG@lemmy.world 20 points 1 day ago (1 children)

CEOs think in bullet points. LLMs can spit out bulleted lists of confident-sounding utterances with ease.

It is not too surprising that people who see the world through overly simplified, disconnected summaries are impressed by LLMs.

[–] SonOfAntenora@lemmy.world 11 points 1 day ago (1 children)

I unironically started to dislike the bullet-point presentation format, even the Axios "smart brevity" format. I feel like it's treating me as if I'm too dumb to read a news report or a normal text. The "Why it matters" blurb feels like being instructed on how to think. It's honestly bollocks, and it was turning me into a worse reader.

[–] JollyG@lemmy.world 8 points 1 day ago

I feel the same way. This style of thinking can have pretty serious consequences for decision makers.

But, on the other hand, all my bosses think in bullet points, and I am usually the one that writes the bullets. . .

[–] Tollana1234567@lemmy.today 42 points 1 day ago* (last edited 1 day ago)

AI seems to be targeted specifically at CEOs who aren't STEM majors: make it sound science-y enough so they'll fund the scam. It's almost bordering on pseudoscience.

[–] wewbull@feddit.uk 28 points 1 day ago

Many CEOs display sociopathic traits. Employees aren't people; they're machine parts that you have to pay, but that form a company when you put them together.

Now, what if you could remove a proportion of those parts and replace them with automated parts you don't have to pay?

[–] Asswardbackaddict@lemmy.world 2 points 23 hours ago* (last edited 16 hours ago)

I'm running really promising (infant) simulations on my computer. The void is a concept, not a physical reality (my hypothesis which has no evidence, as of yet), and that actually leads to a sort of "bounce" or reactive (rather than active) physical law. The word salad sorters aren't going to dismantle our false premises. Might as well write letters asking Santa for scientific advancement.

[–] LovableSidekick@lemmy.world 4 points 1 day ago

Is AI porn vibe sex?

[–] vzqq@lemmy.blahaj.zone 138 points 2 days ago (1 children)

The Dunning-Krugers are at it again.

[–] jballs@sh.itjust.works 31 points 1 day ago (2 children)

That's exactly it. Here's a quote of what he said in the article. Dude is so uninformed that he thinks AI is doing amazing stuff, but doesn't understand that experts realize AI is full of shit.

“I pinged Elon on at some point. I’m just like, dude, if I’m doing this and I’m super amateur hour physics enthusiast, like what about all those PhD students and postdocs that are super legit using this tool?” Kalanick said.

[–] vzqq@lemmy.blahaj.zone 15 points 1 day ago* (last edited 1 day ago)

This PhD mostly uses it to summarize emails from the administration. It does a shit job, but it frees up time for more science so who cares.

The real irony is that the administration probably used AI to write the emails in the first place. The emails have gotten significantly longer and less dense, and the grammar has gotten better.

Begun this AI arms race has.

[–] shalafi@lemmy.world 11 points 1 day ago (2 children)

Out of context, and I didn't read the rest, that sounds reasonable.

"If my dumbass is learning and finding, what about actual pros?!"

[–] CautiousCharacter@awful.systems 37 points 1 day ago (1 children)

"If I'm learning this much from Baby's First ABCs, imagine what a literature professor could do with it!"

[–] jballs@sh.itjust.works 5 points 1 day ago

"Turns out there are 319 letters in the alphabet and 16 Rs! When the experts get a hold of this, they're going to be blown away!"

[–] Mniot@programming.dev 10 points 1 day ago

Lots of things seem reasonable if you skip the context and critical reasoning. It's good to keep some past examples of this that personally bother you in your back pocket. Then you have it as an antidote for examples that don't bother you.

[–] queermunist@lemmy.ml 120 points 2 days ago (3 children)

Billionaires are going to vibe themselves to death and I support them.

[–] Kirp123@lemmy.world 56 points 2 days ago (2 children)

It's exactly what I was thinking. They should let the AI build a spaceship and all get into it. It would be the greatest achievement in human history... when it blows up and kills all of them.

[–] mitchty@lemmy.sdf.org 18 points 1 day ago

Or vibe build a deep sea submarine, cause well you know.

[–] MBM@lemmings.world 6 points 1 day ago (1 children)

I just hope that in the process they don't ruin the world for the rest of us

[–] Genius@lemmy.zip 6 points 1 day ago

Grok is this close to proving the earth is flat

[–] Nikls94@lemmy.world 34 points 1 day ago (1 children)

LLMs: hallucinate like that guy from school who took every drug under the moon.

Actual specially trained AI: finds new particles, cures for viruses, stars, methods…

But the latter doesn't tell you in words; it answers in the special language you used to get the data in the first place, like numbers and code.

[–] Eq0@literature.cafe 23 points 1 day ago (1 children)

Just to build on this and give some more unasked-for info:

All of AI is a fancy dancy interpolation algorithm. Mostly, too fancy for us to understand how it works.

LLMs use that interpolation to predict next words in sentences. With enough complexity you get ChatGPT.

Other AIs still just interpolate from known data, so they point to reasonable conclusions from known data. Then those hypotheses still need to be studied and tested.
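To make that concrete, here's a minimal toy sketch (purely illustrative, using a simple bigram count model, which is far cruder than a real LLM) of "predict the next word from what came before":

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then "predict" by returning the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev_word):
    # Return the most common word seen after prev_word in the corpus.
    return counts[prev_word].most_common(1)[0][0]

print(predict("the"))  # 'cat' (the word that most often follows 'the' here)
print(predict("cat"))  # 'sat'
```

A real LLM replaces the raw counts with a neural network conditioned on a long context, but the training objective is still "guess the next token."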

[–] Aceticon@lemmy.dbzer0.com 12 points 1 day ago

Neural networks, which are the base technology of what nowadays gets called AI, are just great automated pattern-detection systems. In the last couple of years, with the invention of things like adversarial training, they can also be made to output content that matches those patterns.

The simpler stuff that just does pattern recognition, without the fancy generation of matching output, was recognized way back three decades ago as being able to process large datasets and spot patterns humans hadn't been able to spot. For example, there was an NN trained to find tumors in photos which seemed to work perfectly in testing but didn't work at all in practice: it turned out the NN had been trained on pictures where all the ones with tumors had a ruler next to the tumor showing its size and the ones without tumors did not, so the pattern the NN derived in training for "tumor present" was actually the presence of the ruler.

Anyway, it's mainly this simpler and older stuff that can help with scientific discovery, by spotting patterns in large datasets that we humans have not, mainly because it can trawl through an entire haystack to find the needles far faster and more easily than we can. But, as in the tumor-detection example above, sometimes the patterns aren't in the data but in the way the data was obtained.

The fancy stuff that actually generates content matching the patterns detected in the data, such as LLMs and image generation, and which is fueling the current AI bubble, is totally irrelevant for this kind of use.
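The ruler failure mode is easy to reproduce on synthetic data. Here's a small hedged sketch (my own toy example with made-up features and scikit-learn, not the actual tumor study) of a classifier latching onto a spurious shortcut that then vanishes at deployment:

```python
# Toy demonstration of a spurious shortcut: during "training" a ruler feature
# perfectly tracks the label, so the model leans on it; at "deployment" the
# ruler is absent and accuracy collapses toward chance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

tumor = rng.integers(0, 2, n)                 # ground-truth label
weak_signal = tumor + rng.normal(0, 2.0, n)   # noisy "medical" feature
ruler = tumor.astype(float)                   # spurious feature: present iff tumor (training set only)

X_train = np.column_stack([weak_signal, ruler])
clf = LogisticRegression().fit(X_train, tumor)

# New data where nobody put a ruler in the image:
X_test = np.column_stack([tumor + rng.normal(0, 2.0, n), np.zeros(n)])

print("learned weights (signal, ruler):", clf.coef_[0])
print("training accuracy:", clf.score(X_train, tumor))   # near perfect
print("deployment accuracy:", clf.score(X_test, tumor))  # much worse
```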

[–] Aelorius@jlai.lu 3 points 1 day ago (1 children)

Actually, AlphaEvolve already did it. It discovered new algorithms that improve the computational efficiency of matrix multiplication for the first time in 50 years, among a lot of other things. It's built around a custom version of Gemini.
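For context on what "fewer multiplications" means here, below is the classic Strassen construction from 1969, which multiplies 2x2 blocks with 7 multiplications instead of 8. This is the long-standing baseline, not AlphaEvolve's new algorithm (which I haven't reproduced):

```python
# Strassen's 2x2 scheme: 7 multiplications instead of the naive 8.
# Applied recursively to blocks, this is what sub-cubic matrix
# multiplication algorithms build on.
import numpy as np

def strassen_2x2(A, B):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(strassen_2x2(A, B))  # [[19. 22.] [43. 50.]]
print(A @ B)               # same result, computed the usual way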

[–] Nalivai@discuss.tchncs.de 6 points 1 day ago

I feel like this story lacks a lot of details

[–] ArchmageAzor@lemmy.world 82 points 2 days ago (1 children)

Someone should tell them to let LLMs make financial decisions for them.

[–] Sludgehammer@lemmy.world 44 points 1 day ago* (last edited 1 day ago) (1 children)

Well... IIRC a chimp did great in the stock market compared to professional traders, so maybe it's time to give something even "stupider" a chance. I mean, how much of a difference is there between a buzzword-fueled techbro and a predictive text engine regurgitating random posts from the internet?

[–] FinalRemix@lemmy.world 18 points 1 day ago* (last edited 1 day ago) (1 children)

Just listen to Cramer and then... don't do that.

[–] Zexks@lemmy.world 17 points 1 day ago

There was an anti-Cramer index fund for a while.

[–] M0oP0o@mander.xyz 29 points 1 day ago (11 children)

I will be soooo pissed if we get faster-than-light travel from an LLM but never know how it works.

[–] vaultdweller013@sh.itjust.works 18 points 1 day ago (1 children)

Hnnnngggg the machine spirit demands mountain dew!

[–] StarMerchant938@lemmy.world 9 points 1 day ago* (last edited 1 day ago) (2 children)

Drink verification can or be violently ripped atom from atom in an unplanned hyperspace exit.

[–] Agent641@lemmy.world 6 points 1 day ago

It just uses FTL to get the fuck away from humans as fast as possible

[–] jsomae@lemmy.ml 2 points 1 day ago

OpenAI's new model was able to solve 5 out of 6 problems (a gold-medal score) on the 2025 International Math Olympiad. I am very surprised by this result, though I don't see any evidence of foul play.

[–] fckreddit@lemmy.ml 16 points 1 day ago (1 children)

One of the reasons they give for it is: physicists use LLMs in their workflows, so LLMs are close to making physics discoveries themselves.

Clearly, these statements are meant to hype up the AI bubble even more.

[–] Collatz_problem@hexbear.net 8 points 1 day ago

By this logic my chair is close to making physics discoveries.

[–] Duamerthrax@lemmy.world 3 points 1 day ago

Imagine vibe physics, but you end up setting the atmosphere on fire.

[–] Gyroplast@pawb.social 10 points 1 day ago

Bah, humbug! In my days we used a rubber ducky, IF WE HAD ONE, or just the stick we were beaten with for using too many precious CPU cycles, and we were FINE!

[–] latenightnoir@lemmy.blahaj.zone 37 points 2 days ago (3 children)

... please tell me someone has a functioning Warp Drive gathering dust somewhere, we need the Vulcans, like... a week ago...

[–] InternetCitizen2@lemmy.world 27 points 2 days ago (1 children)

I do. I wasn't sure anyone was interested. DM me your PO box and I'll ship it over so you can mess around next weekend.

[–] A7thStone@lemmy.world 13 points 1 day ago

The worst part is some useful things could come from it, because we're hurtling towards infinite monkeys. It'll only be by pure happenstance, and unless they are lucky enough to randomly find a really great breakthrough it still won't be worth the massive resources they've wasted.
