this post was submitted on 26 Jul 2023
474 points (99.4% liked)

[–] BitSound@lemmy.world 126 points 2 years ago (5 children)

This seems really short-sighted. Why would I go to How Stuff Works when I can just ask the LLM myself?

Maybe there's just no possible business model for them anymore with the advent of LLMs, but at least if they focused on the "actually written by humans!" angle there'd be some hook to draw people in.

[–] chaogomu@kbin.social 85 points 2 years ago (28 children)

The thing is, the LLM doesn't actually know anything, and lies about it.

So if you go to How Stuff Works now, you get bullshit instead of real information. You'll also get nonsense that looks like language at first glance but is gibberish pretending to be an article, because sometimes the language model changes topics midway through and doesn't correct itself. It can't correct itself; it doesn't actually know what it's saying.

See, these language models are pre-trained; that's the P in ChatGPT. They just regurgitate the training data, put together in ways that sort of look like more of the same training data.

There are some hard coded filters and responses, but other than that, nope, just a spew of garbage out from the random garbage in.

And yet, all sorts of people think this shit is ready to take over writing duties for everyone, saving money and winning court cases.

[–] sugar_in_your_tea@sh.itjust.works 16 points 2 years ago (3 children)

Yeah, this is why I can't really take anyone seriously when they say it'll take over the world. It's certainly cool, but it's always going to be limited in usefulness.

Some areas I can see it being really useful are:

  • generating believable text - scams, placeholder text, and general structure
  • distilling existing information - especially if it can actually cite sources, but even then I'd take it with a grain of salt
  • trolling people/deep fakes

That's about it.

[–] ech@lemm.ee 2 points 2 years ago* (last edited 2 years ago)

generating believable text - scams, placeholder text, and general structure

LLM-generated scams are going to be such a problem. Quality isn't even an issue there, since scammers specifically go for people with poor awareness of these scams, and having a bot that responds with reasonable dialogue will make it that much easier for people to buy into it.

[–] PipedLinkBot@feddit.rocks 14 points 2 years ago

Here is an alternative Piped link(s): https://piped.video/watch?v=oqSYljRYDEM

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[–] bane_killgrind@kbin.social 9 points 2 years ago (1 children)

Absolutely. Creating new documentation will always be a human sport.

[–] AlmightySnoo@lemmy.world 21 points 2 years ago* (last edited 2 years ago)

It's a combination of three things:

1- most people still google things;

2- the more content you have the more organic traffic you're likely to attract from Google;

3- displaying ads on your website makes you money.

Websites full of LLM-generated content are just the natural continuation of MFAs (Made For AdSense). Back in the 2006~2008 period there were lots of tools on sale that promised to automatically create websites for you and fill them with randomized content optimized for AdSense.

[–] BanjoShepard@lemmy.world 16 points 2 years ago (2 children)

This reminds me of the short story "The Great Automatic Grammatizator" by Roald Dahl. In the story a machine is invented that can write great stories, but its creators go around buying the naming rights of authors so people will actually buy their books.

[–] mrbubblesort@kbin.social 6 points 2 years ago

Correct me if I'm wrong, but isn't AI-generated content not copyrightable? So nothing is stopping someone from taking all their content, rebranding it as "How Stuff Really Works" or something, and stealing their business and ad revenue.

[–] LoafyLemon@kbin.social 5 points 2 years ago (3 children)

An LLM cannot create new concepts; it can only create a mishmash of things it has been fed.

[–] roguetrick@kbin.social 9 points 2 years ago (1 children)

Isn't that exactly how howstuffworks operates though?

[–] Yendor@sh.itjust.works 8 points 2 years ago (1 children)

Humans aren’t much different. 99.9% of what we create is just a remix of existing parts/ideas. It’s why people spend 12-20 years pre-training on all the existing knowledge in the field they’re going to work in.

[–] Arbiter@lemmy.world 4 points 2 years ago

Just like Hollywood!

[–] circuitfarmer@lemmy.sdf.org 86 points 2 years ago (4 children)

This is going to happen for a while. Execs who actually have no clue have now been sold on the idea that AI lets them keep making money without paying labor.

It will fail when the execs eventually take the time to learn what AI is and isn't capable of.

Who am I kidding? It'll continue indefinitely because there are few consequences for clueless executives.

[–] vezrien@lemmy.world 16 points 2 years ago (2 children)

Execs won't take the time to learn that; they will only learn it by losing market share to the competition.

[–] SheeEttin@lemmy.world 21 points 2 years ago

By that time they'll already be at the next company.

[–] evatronic@lemm.ee 3 points 2 years ago

"That was two golden parachutes ago, what do I care?"

[–] Justas@sh.itjust.works 5 points 2 years ago

Businesses should automate the executives instead of labor.

[–] worfamerryman@beehaw.org 4 points 2 years ago (1 children)

I see a possibility where these sites eventually become terrible and new people can come in and make content written by humans.

[–] And009@reddthat.com 4 points 2 years ago

Even content merely edited by humans would be better than that.

[–] kherge@beehaw.org 4 points 2 years ago

What will probably happen is that people catch on that the content all reads alike and wonder why they shouldn’t just ask ChatGPT directly. Traffic to these sites die down, they panic, and start hiring writers.

[–] lemann@lemmy.one 23 points 2 years ago (1 children)

Someone should create a blocklist for all these new AI-driven websites.

For me personally, the primary appeal of websites is that there are human authors behind the content... otherwise I'd just ask an 'AI' myself.

[–] GiantBasil@beehaw.org 6 points 2 years ago

It would be great to have a list of sites so I'd know whose links I can just immediately ignore.

[–] KiloPapa@lemmy.world 22 points 2 years ago (1 children)

Considering most articles on the internet that don’t come from legitimate newspapers sound like they’re written by a 6-year-old who gets paid by the word, how much worse could it get?

[–] Gradually_Adjusting@lemmy.world 11 points 2 years ago

Never ask that

[–] kerneltux@lemmy.world 18 points 2 years ago (2 children)

I've read articles that were clearly created using ChatGPT: there was no extrapolation to add context or details to illustrate their points, and parts read like they were pulled straight from a Wikipedia page. The tone felt more robotic than pieces they published 6~8 months ago.

ChatGPT can be useful when it's part of a larger writing process, but I have a feeling that sites that create prompts and paste the output as their articles will slowly die off because the quality isn't there.

[–] pingveno@lemmy.ml 7 points 2 years ago

We're probing the limits of generative AI right now. I expect a snapback of sorts as people find what does and does not work.

[–] Ser_Salty@feddit.de 6 points 2 years ago

I was checking something on a Fandom "wiki" the other day and I swear to god the summary for a bunch of episodes for several shows was either written or rewritten by AI. You can tell because it uses a bunch of nonsense synonyms, like replacing the name Ray with Beam.

[–] Tygr@lemmy.world 16 points 2 years ago (1 children)

How about, instead of all the tracking-cookie permission popups, we force these sites to display a message that the content is AI-generated?

[–] Niello@kbin.social 6 points 2 years ago

Why not both?

[–] Hagels_Bagels@lemmygrad.ml 13 points 2 years ago (5 children)

Great. Now people are going to read a bunch of BS generated by a language model and confidently spread around "hallucinations" as facts.

[–] DidacticDumbass@lemmy.one 13 points 2 years ago (4 children)

Bizarre. Not even keep a few editors for... the editing??

I wonder how this will affect the Stuff You Should Know podcast.

[–] HughJanus@lemmy.ml 12 points 2 years ago* (last edited 2 years ago)

Holy shit. Haven't heard of How Stuff Works since like 2002...

[–] Infinity187@lemm.ee 11 points 2 years ago (5 children)

I wonder how Josh and Chuck from SYSK feel about this.

[–] Yewb@kbin.social 11 points 2 years ago (1 children)

Creating a market for real human content? Sounds tasty

[–] vrighter@discuss.tchncs.de 9 points 2 years ago (2 children)

Used to be one of my favourite sites when I was younger. Haven't visited that site in ages. Holy crap, has it gone to complete shit. Like way worse than I thought possible

[–] altima_neo@lemmy.zip 8 points 2 years ago

This seems like a really dumb idea.

[–] worfamerryman@beehaw.org 7 points 2 years ago (1 children)

How long until we can get a browser extension that lets us know when we're on a site written by AI?

I don’t mean AI detection, but instead, sites that announce they are laying off editors in favor of AI.

[–] FaceDeer@kbin.social 2 points 2 years ago

If there was such a thing then sites wouldn't announce they're laying off editors in favor of AI.

[–] waterplants@lemm.ee 6 points 2 years ago* (last edited 2 years ago) (3 children)

People really don't understand the current state of LLMs. It's like the generated pictures: "It's a really good picture of what a dog would look like, but it's not actually a dog." Like a police sketch, with a touch of randomness so you don't always get the same picture.

I'm guessing they will try to solve this issue with some cheap human labour to review what is being generated. These verifiers will probably not be experts on all the subjects the LLM will be spitting out; more of a "That does kind of look like a dog, APPROVED."

Let's say I'm wrong, and LLMs can make an article as good as any human's. The content would be so saturated (even a Tumblr user could now make as much content, and as good, as one of these companies) that I would expect companies to be joining in on all the strikes 😆

Funny world we are all going into.

Happy new beginnings.

[–] HawlSera@lemm.ee 3 points 2 years ago

ChatGPT became far less useful to me when I realized it will actively lie to you. It was too good to be true, it turned out. These people will figure it out eventually: ChatGPT is not an AI, it's a goddamn "Chinese Room" (it's a thing in philosophy, look it up).

[–] ProIsh@lemmy.world 2 points 2 years ago

This is fine. Just let us know so we know what shows to avoid.
