this post was submitted on 08 Aug 2025
-53 points (20.9% liked)

Technology
top 35 comments
[–] Fuzzypyro@lemmy.world 10 points 2 days ago

No matter what they say, LLMs are not intelligent. AI is a scam. It's predictive algorithms at an incredible scale, which in the right applications can be really amazing tools, but this promise of AGI, sentience, and the claims of thoughts, feelings, emotions, hallucinations and, yes, intelligence... absolutely just a scam.

[–] JollyG@lemmy.world 54 points 3 days ago (5 children)
[–] Passerby6497@lemmy.world 18 points 3 days ago* (last edited 3 days ago)

🅱️lue🅱️e🅱️ry

[–] individual@toast.ooo 2 points 3 days ago

to be fair, that's a hard word to spell

[–] big_slap@lemmy.world 0 points 3 days ago

interesting...

I got the correct number each time.
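
The comment chain above is riffing on GPT miscounting the letter "b" in "blueberry". A plain string count settles it in one line; the usual (informal) explanation for why LLMs get this wrong is that they process subword tokens rather than individual characters, so the token split shown in the comment below is illustrative, not an actual tokenizer output:

```python
word = "blueberry"

# Direct character count: trivially correct for a program
b_count = word.count("b")
print(b_count)  # → 2

# An LLM never "sees" the letters; it sees subword tokens,
# e.g. something like ["blue", "berry"] (hypothetical split),
# which is why letter-counting questions often trip it up.
```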

[–] fuzzy_feeling@programming.dev 56 points 3 days ago

guy selling stuff, says his stuff is the best stuff.

more news at eleven.

[–] WanderingThoughts@europe.pub 28 points 3 days ago

"GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert."

Yeah, feels like. Not actually examples of thinking and doing things at that level.

"These systems, as impressive as they are, haven't been able to be really profitable," ... "There is a fear that we need to keep up the hype, or else the bubble might burst, and so it might be that it's mostly marketing."

That's the painful truth. No profit, a lot of hype and a market in a 2008 financial crisis bubble.

[–] flamingo_pinyata@sopuli.xyz 18 points 3 days ago (1 children)

Salesman gonna sell.

Altman is quite good at it, actually. Remember when he was saying how scared he was of his own AI? Or calling for increased regulation because their models are just sooo good that the government has to nerf them?

[–] Feyd@programming.dev 11 points 3 days ago (1 children)

It helps that the media propagates everything he says as if it is truth when he's obviously lying like 80% of the time.

[–] squaresinger@lemmy.world 1 points 3 days ago

He's another Musk, who's cars have been running completely driver-less from US coast to US coast since 8 years now.

[–] Deestan@lemmy.world 18 points 3 days ago (2 children)
[–] kennedy@lemmy.dbzer0.com 3 points 3 days ago
[–] davidgro@lemmy.world 1 points 2 days ago

Awesome. My only critique is that microwave ovens actually work really well in their niche. I can't say the same for LLMs.

[–] dhork@lemmy.world 14 points 3 days ago (1 children)

I know too many PhDs for that to impress me.

[–] Passerby6497@lemmy.world 4 points 3 days ago

Right? I still remember the bollocking I got from a professor in front of a class about the awful state of classroom equipment, all because the man couldn't find the PHD (push here, dummy) button to turn the computer on....

[–] FishFace@lemmy.world 7 points 3 days ago

You can tell this is marketing fluff, because GPT could already provide "PhD-level expertise" - just in a hit-and-miss fashion that you couldn't rely upon without some other form of verification. So how is this different?

[–] apfelwoiSchoppen@lemmy.world 6 points 3 days ago* (last edited 3 days ago) (1 children)

At carbon footprint levels approaching the US military coming soon!

[–] audaxdreik@pawb.social 4 points 3 days ago

Part of what makes these models so dangerous is that as they become more "powerful" or "accurate", it becomes more and more difficult for people to determine where the remaining inaccuracies lie. Anything using them as a source is then more at risk of propagating those inaccuracies, which the model may feed on further down the line, reinforcing them.

Never mind the fact that 100% accuracy is just statistically impossible, and they clearly hit the point of diminishing returns some time ago, so every additional 0.1% comes at increased cost and power consumption. And, you know, any underlying biases.

Just ridiculously unethical and dangerous.

A geriatric, senile PhD on too many painkillers, whose area of expertise was a pseudoscience like phrenology before it was rejected, maybe.

[–] Vanth@reddthat.com 3 points 3 days ago (1 children)

They have stolen more PhD-level work to dump into the training model?

[–] phdepressed@sh.itjust.works 1 points 3 days ago

No need to steal; a lot of journals made deals.