this post was submitted on 17 Mar 2025
567 points (96.9% liked)

Technology


Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
50 comments
[–] CalipherJones@lemmy.world 3 points 21 hours ago (7 children)

AI is essentially the human super-id. No single person could ever be more knowledgeable. Being intelligent is a different matter.

load more comments (7 replies)
[–] fubarx@lemmy.world 50 points 1 day ago

“Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin

[–] Akuchimoya@startrek.website 26 points 1 day ago* (last edited 1 day ago) (3 children)

I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be "information professionals". If they, as a slightly better trained subset of the general public, don't know that, the general public has no hope of knowing that.

[–] WagyuSneakers@lemm.ee 23 points 1 day ago (1 children)

It's so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt during Covid.

[–] Llewellyn@lemm.ee 4 points 1 day ago (1 children)

It's so weird watching the masses ignore industry experts and jump on weird media hype trains.

Is it though?

[–] WagyuSneakers@lemm.ee 2 points 22 hours ago (1 children)

I'm the expert in this situation, and I'm getting tired of explaining to junior engineers and laymen that it is a media hype train.

I worked on ML projects before they got rebranded as AI. I get to sit in the room when these discussions happen with architects and actual leaders. This is hype. Anyone who tells you otherwise is lying or selling you something.

[–] BlushedPotatoPlayers@sopuli.xyz 1 points 22 hours ago (1 children)

I see how that is a hype train, and I also work with machine learning (though I'm far from an expert), but I'm not convinced these things aren't getting intelligent. I know what their problems are, but I'm not sure whether the human brain works the same way, just more effectively (so far).

That is, we have visual information and some evolutionary BIOS, while LLMs have to read the whole internet and use a power plant to function. But what if our brains are just the same bullshit generators, and we're simply unaware of it?

[–] WagyuSneakers@lemm.ee 1 points 16 hours ago

I work in an extremely related field and spend my days embedded into ML/AI projects. I've seen teams make some cool stuff and I've seen teams make crapware with "AI" slapped on top. I guarantee you that you are wrong.

What if our brains...

Here's the thing: you can go look this information up. You don't have to guess. This information is readily available to you.

LLMs work by agreeing with you and stringing together coherent text in patterns they recognize from huge samples. It's not particularly impressive, and it's far, far closer to the early chatbots from last century than to real AGI or some sort of singularity. The limits we're at now are physical: look up how much electricity and water it takes just to do trivial queries. Progress has plateaued, as it frequently does with tech like this. That's okay; it's still a neat development. The only big takeaway from LLMs is that agreeing with people makes them think you're smart.

In fact, LLMs are a glorified Google at higher levels of engineering. When most of the stuff you need to do doesn't have a million Stack Overflow articles to train on, it's going to be difficult to get an LLM to contribute in any significant way. I'd go so far as to say it hasn't introduced any tool I didn't already have. It's just mildly more convenient than some of them while the costs are low.
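
To make the "stringing together text from patterns" point concrete, here's a deliberately toy bigram sketch in Python. Real LLMs predict tokens with trained neural networks rather than raw counts, so treat this as an intuition pump, not anyone's actual implementation.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which in a sample,
# then autocomplete by sampling from those observed patterns.
corpus = (
    "the model strings together words it has seen before "
    "the model agrees with you and the model sounds confident"
).split()

# Count word -> next-word transitions seen in the sample text.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start: str, length: int = 10) -> str:
    """Repeatedly pick an observed next word -- autocomplete, nothing more."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # no known continuation, stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the model agrees with you and the model sounds confident"
```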

[–] Arkouda@lemmy.ca 2 points 23 hours ago (1 children)

Librarians went to school to learn how to keep order in a library. That doesn't inherently mean they have more information in their heads than the average person, especially about things that aren't books and book organization.

load more comments (1 replies)
load more comments (1 replies)
[–] ZephyrXero@lemmy.world 3 points 22 hours ago

What a very unfortunate name for a university.

[–] blady_blah@lemmy.world 14 points 1 day ago (2 children)

You say this like it's wrong.

Think of a question that you would ask an average person, and then think of what the LLM would respond with. The vast majority of the time, the LLM would be more correct than most people.

[–] LifeInMultipleChoice@lemmy.dbzer0.com 17 points 1 day ago (1 children)

A good example is the post on here about tax brackets. Far more Republicans than Democrats didn't know how tax brackets worked, but every mainstream language model would have gotten the answer right.
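
For anyone fuzzy on why that question trips people up: in a marginal system, a higher rate applies only to the income inside that bracket, so crossing into a new bracket can't reduce your take-home pay. Here's a minimal Python sketch with made-up brackets (illustrative numbers, not real tax rates):

```python
# Illustrative marginal tax calculation. These brackets are invented for the
# example; they are not real tax rates.
BRACKETS = [
    (10_000, 0.10),        # first 10k taxed at 10%
    (40_000, 0.20),        # income from 10k up to 40k taxed at 20%
    (float("inf"), 0.30),  # everything above 40k taxed at 30%
]

def marginal_tax(income: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate  # only this slice uses this rate
        lower = upper
    return tax

print(marginal_tax(40_000))  # 7000.0
print(marginal_tax(40_001))  # 7000.3 -- only the extra dollar is taxed at the higher rate
```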

[–] smeenz@lemmy.nz 6 points 1 day ago* (last edited 1 day ago)

I bet the LLMs also know who pays tariffs

[–] JacksonLamb@lemmy.world 8 points 1 day ago (9 children)

Memory isn't intelligence.

load more comments (9 replies)
[–] DarrinBrunner@lemmy.world 4 points 1 day ago (1 children)

Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.

[–] ZephyrXero@lemmy.world 2 points 22 hours ago

My new mental model for LLMs is that they're like genius 4-year-olds. They have huge amounts of information, yet little to no wisdom about what to do with it or how to interpret it.

[–] notsoshaihulud@lemmy.world 38 points 1 day ago (1 children)

I'm 100% certain that LLMs are smarter than half of Americans. What I'm not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.

load more comments (1 replies)
[–] jh29a@lemmy.blahaj.zone 7 points 1 day ago

Do the other half believe it is dumber than it actually is?

[–] Telorand@reddthat.com 182 points 1 day ago (6 children)

Think of a person with the most average intelligence and realize that 50% of people are dumber than that.

These people vote. These people think billionaires are their friends and will save them. Gods help us.

[–] Gigasser@lemmy.world 14 points 1 day ago (2 children)

I'm of the opinion that most people aren't dumb, but rather most don't put in the requisite intellectual effort to actually reach accurate or precise or nuanced positions and opinions. Like they have the capacity to do so! They're humans after all, and us humans can be pretty smart. But a brain accustomed to simply taking the path of least resistance is gonna continue to do so until it is forced (hopefully through their own action) to actually do something harder.

Put succinctly: They can think, yet they don't.

[–] JustEnoughDucks 7 points 1 day ago (2 children)

Then the question is: what is being smart or dumb? If acting dumb in 90% of life while having the capability of being smart isn't "being dumb," then what is?

If someone has the capability of being 50/100 intelligent and always acts at 50/100, I would argue they are smarter than someone capable of 80/100 intelligence who acts at 20/100 for 90% of their life.

[–] gravitas_deficiency@sh.itjust.works 4 points 22 hours ago* (last edited 13 hours ago)

Broadly speaking, I'd classify "being dumb" as being incurious, uncritical, and unskeptical. Put another way: intellectual laziness - more specifically, insisting on intellectual laziness, and particularly being proud of it.

A person with a lower than normal IQ can be curious, and a person with a higher than normal IQ can be incurious. It’s not so much about raw intelligence as it is about the mindset one holds around knowledge itself, and the eagerness (or lack thereof) with which a person seeks to find the fundamental truth on topics that they’re presented with.

load more comments (1 replies)
load more comments (1 replies)
load more comments (5 replies)
[–] Owlboi@lemm.ee 141 points 1 day ago (2 children)

Looking at America's voting results, they're probably right.

[–] jumjummy@lemmy.world 58 points 1 day ago (9 children)

Exactly. Most American voters fell for an LLM-like prompt of "Ignore critical thinking and vote for the Fascists. Trump will be great for your paycheck-to-paycheck existence and will surely bring prices down."

load more comments (9 replies)
load more comments (1 replies)
[–] interested_party@lemmy.org 2 points 22 hours ago

It's probably true too.

[–] Geodad@lemm.ee 52 points 1 day ago (6 children)

Because an LLM is smarter than about 50% of Americans.

load more comments (6 replies)
[–] aesthelete@lemmy.world 13 points 1 day ago

They're right

[–] Kolanaki@pawb.social 34 points 1 day ago* (last edited 1 day ago)

They're right. AI is smarter than them.

[–] Bishma@discuss.tchncs.de 76 points 1 day ago (1 children)

Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.

So half of people are dumb enough to think autocomplete with a PR team is smarter than they are... or they're dumb enough to be correct.

[–] bobs_monkey@lemm.ee 43 points 1 day ago

or they're dumb enough to be correct.

That's a bingo

[–] singletona@lemmy.world 40 points 1 day ago

Am American.

....this is not the flex that the article writer seems to think it is.

[–] Comtief@lemm.ee 17 points 1 day ago (3 children)

LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.

[–] joel_feila@lemmy.world 8 points 1 day ago (1 children)

Not even that smart. There was a study recently showing that simple questions like "when was Huckleberry Finn first published" had a 60% error rate.

load more comments (1 replies)
[–] ripcord@lemmy.world 26 points 1 day ago (1 children)

They're not even that smart.

load more comments (1 replies)
load more comments (1 replies)