this post was submitted on 01 Jun 2025
103 points (96.4% liked)

No Stupid Questions

We have all seen AI-based searches available on the web, like Copilot, Perplexity, DuckAssist, etc., which scour the web for information, present it in summarized form, and also cite sources in support of the summary.

But how do they know which sources are legitimate and which are simply BS? Do they exercise judgement while crawling, or do they have some kind of filter list around the "trustworthiness" of various web sources?

top 50 comments
[–] Nachtnebel@lemmy.dbzer0.com 66 points 6 days ago
[–] Kolanaki@pawb.social 18 points 6 days ago* (last edited 6 days ago)

They don't. That's why the summaries are almost always wrong, or at least irrelevant. Like telling you to use glue on your pizza for a superior cheese pull when you're looking for a pizza recipe. The source is technically legit, but it's talking about creating a visual effect for commercials, not something you wanna eat.

[–] some_guy@lemmy.sdf.org 10 points 6 days ago

They can’t. That’s why there’s glue on pizza.

[–] spooky2092@lemmy.blahaj.zone 12 points 6 days ago

Very easily, that's why you never see things like "use glue to keep the cheese on your pizza" or "Marlon Brando is a human man and will not be in heat because that's for animals"

[–] Pyr_Pressure@lemmy.ca 10 points 6 days ago (1 children)

Most of the time if I read the AI summary from Google it's wrong. Very few times has it actually been helpful.

[–] Melvin_Ferd@lemmy.world -1 points 6 days ago* (last edited 6 days ago) (3 children)
[–] Pyr_Pressure@lemmy.ca 8 points 6 days ago* (last edited 6 days ago) (2 children)

Pretty much anything tech support: it gives you options which no longer exist, because the solution it's suggesting is from a slightly older Windows/Android version and the UI changed, so the option is no longer where it thinks it is.

Also, asking if particular wildlife is in a particular location. I tried asking it if polar bears were in a location I'm going to visit and it said yes, but a quick search through its sources confirmed that was false and the nearest polar bears are hundreds of miles away.

[–] Case@lemmynsfw.com 3 points 6 days ago

If an amateur mycologist picks and eats the wrong mushroom that an LLM said was fine to eat, is the LLM liable for the death legally and/or financially?

I mean, I know better than to pick random mushrooms and eat them, but I don't really care for mushrooms - though some have some delightful effects when metabolized, lol. The only ones of THOSE I tried, I knew who grew them, and saw the "operation," and reviewed his sources before trying one.

Call me paranoid, but I'm not blindly trusting a high school dropout to properly identify mushrooms when professionals make mistakes, to the point where any mycologist will tell you: DON'T TRUST PICS OR THE INTERNET.

It can be too difficult to tell from those sources, and I doubt the LLM and the human asking questions have the right wavelength of discussion to not produce misleading, if not entirely fabricated, results.

[–] IdontplaytheTrombone@lemmy.world 2 points 6 days ago (1 children)

I asked if 178 bpm was a healthy exercise heart rate, and it told me that 178 bpm was a healthy RESTING (meaning not exercising; just sitting or lying down) heart rate. It proceeded to go on about that for two more sentences. This was a few months ago.

[–] Melvin_Ferd@lemmy.world -3 points 6 days ago (1 children)

I regularly ask it these questions, and it has yet to be too far off from what I'd find from people on any forum.

Here is me asking it today

A heart rate of 178 BPM (beats per minute) can be healthy depending on the context:

✅ Healthy in Certain Situations:

If you're exercising intensely, such as during cardio workouts, running, or high-intensity interval training (HIIT), 178 BPM can be normal and expected, especially if:

You're younger (e.g., teens or 20s)

You're fit and accustomed to high heart rate workouts

General formula for max heart rate:

220 - your age = estimated maximum heart rate

So for a 25-year-old: 220 - 25 = 195 BPM max. 178 BPM would be about 91% of max, which is high, but acceptable during vigorous effort.


⚠️ Not Healthy at Rest:

If your heart rate is 178 BPM while resting, sitting, or sleeping, that's too high and could be a sign of:

Tachycardia (abnormally fast heart rate)

Anxiety or panic attack

Dehydration

Fever

Heart condition or arrhythmia

Stimulant or drug effects (e.g., caffeine, medications)


📌 Summary:

Situation                 | 178 BPM
During intense exercise   | ✅ Normal
At rest or light activity | ❌ Needs medical attention

If you're unsure or it feels abnormal, it's always safest to consult a doctor.

[–] DragonTypeWyvern@midwest.social 0 points 6 days ago (1 children)

I wish you a very happy resting heart rate of 178 bpm.

[–] Iunnrais@lemm.ee -1 points 6 days ago (1 children)

But the AI said that was not a good resting heart rate, and only okay during exercise if you're young, which is not wrong?

[–] DragonTypeWyvern@midwest.social 0 points 6 days ago (1 children)

Because there's only one AI and all prompts are only ever generated once.

[–] Iunnrais@lemm.ee 1 points 6 days ago

No, but you were replying to someone who gave a single specific response that was not bad.

[–] vaderaj@lemmy.world 1 points 6 days ago* (last edited 6 days ago)

I use DuckDuckGo as my preferred search engine, but while starting at my new job I used Google for a bit (before setting up Firefox; yes, LibreWolf needed extra permissions and I couldn't be bothered).

Search prompt: word highlight shortcut. Gemini suggested Ctrl+Shift+H, but it is Ctrl+Alt+H. Every now and then I feel like I need to try AI products because I work in the data domain, and it's always a good idea to confirm whether something is as bad as you think it is.

[–] KingThrillgore@lemmy.ml 8 points 6 days ago

I don't think they do

[–] drmoose@lemmy.world 7 points 6 days ago* (last edited 6 days ago) (1 children)

Real answer: there are many existing tools and databases for domain authority.

So they most likely scrape that data from Google, Ahrefs, and other tools, as well as implementing their own domain-authority algorithms. It's really not that difficult given sufficient resources.

These new AI companies basically have a blank check, so reimplementing existing technologies is really not that expensive or difficult.
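
As a rough sketch of how a domain-authority filter could sit in front of an answer engine; the `DOMAIN_AUTHORITY` table, its scores, and the cutoff below are invented for illustration, not any vendor's real data or pipeline:

```python
from urllib.parse import urlparse

# Hypothetical authority scores in [0, 100], e.g. licensed or
# scraped from an SEO tool such as Ahrefs or Moz (made-up values).
DOMAIN_AUTHORITY = {
    "wikipedia.org": 93,
    "nih.gov": 90,
    "myrandomblog.example": 12,
}

def authority(url: str, default: float = 30.0) -> float:
    """Look up the authority score of a URL's registrable domain."""
    host = urlparse(url).hostname or ""
    # Naive subdomain stripping: "en.wikipedia.org" -> "wikipedia.org".
    domain = ".".join(host.split(".")[-2:])
    return DOMAIN_AUTHORITY.get(domain, default)

def rank_sources(urls: list[str], min_score: float = 25.0) -> list[str]:
    """Drop low-authority URLs and sort the rest, best first."""
    kept = [u for u in urls if authority(u) >= min_score]
    return sorted(kept, key=authority, reverse=True)

print(rank_sources([
    "https://en.wikipedia.org/wiki/Cheese",
    "https://myrandomblog.example/cheese-is-blue",
]))  # the low-authority blog is filtered out before summarization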

[–] ThirdConsul@lemmy.ml 6 points 6 days ago (2 children)

So scraping "popular websites", plus "someone said this is a good source for topic X", plus Wikipedia? And summarizing over them all? That sounds like a very bad idea, because it's very fragile to poisoning?

[–] Pyr_Pressure@lemmy.ca 3 points 6 days ago

Ya I can see AI resulting in many deaths if people start trusting it for things like "is this mushroom edible"?

[–] drmoose@lemmy.world 1 points 6 days ago* (last edited 6 days ago) (1 children)

Isn't that how all ranking works everywhere? How else can it rank sources?

[–] ThirdConsul@lemmy.ml 1 points 6 days ago* (last edited 6 days ago)

My point is "summarizing over all of those" and "poisoning".

Source of category 1 says cheese is made from XYZ and is yellow

Source from category 2 confirms 1 in different words and adds that it has holes

Source from category 3 confirms 2 and adds that it's also blue, not only yellow

Source 4 talks about blue cheese only

Poisoning would mean that in the summary cheese is yellow with blue holes.
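
A toy version of that poisoning, assuming a naive summarizer that simply unions attributes across retrieved sources (the data is made up to mirror the cheese example above):

```python
# Naive "summarizer": union every claimed attribute across sources,
# with no source weighting and no contradiction handling.
sources = [
    {"made_from": "XYZ", "color": "yellow"},                 # source 1
    {"made_from": "XYZ", "color": "yellow", "holes": True},  # source 2
    {"color": "blue"},                                       # source 3: blue cheese only
]

summary: dict[str, set] = {}
for claims in sources:
    for attribute, value in claims.items():
        summary.setdefault(attribute, set()).add(value)

print(summary)
# e.g. {'made_from': {'XYZ'}, 'color': {'yellow', 'blue'}, 'holes': {True}}
# -> the merged "cheese" is yellow AND blue with holes: a poisoned summary.
```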

[–] Psythik@lemm.ee 1 points 6 days ago* (last edited 6 days ago)

That's why I like Perplexity; I can just check the sources it used for accuracy. Unfortunately they have a garbage privacy policy, but I use a private DNS with good tracking filters so I'm only mildly concerned.

[–] edgemaster72@lemmy.world 180 points 1 week ago (2 children)

That's the neat part, they don't

[–] toy_boat_toy_boat@lemmy.world 59 points 1 week ago (1 children)

you're absolutely right. they actually don't know anything. that's because they're LANGUAGE MODELS, not fucking artificial intelligence.

that said, there is some control over the 'weights' given to certain 'tokens' which can provide engineers with a way to 'prefer' some sources over others.
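
As a toy illustration of that knob, here's per-token biasing over a made-up three-word vocabulary; real APIs expose something similar (e.g. a logit-bias map), but every number below is invented:

```python
import math

# Raw scores (logits) a model might assign to its next token.
logits = {"glue": 2.1, "mozzarella": 1.9, "cheddar": 1.5}

# Hypothetical per-token bias an engineer might apply; strongly
# negative values effectively ban a token from being sampled.
bias = {"glue": -100.0}

def sample_probs(logits: dict, bias: dict) -> dict:
    """Softmax over biased logits -> sampling probabilities."""
    adjusted = {t: s + bias.get(t, 0.0) for t, s in logits.items()}
    z = sum(math.exp(v) for v in adjusted.values())
    return {t: math.exp(v) / z for t, v in adjusted.items()}

print(sample_probs(logits, bias))
# "glue" ends up with ~0 probability; the other tokens share the mass.
```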

[–] tarknassus@lemmy.world 18 points 1 week ago (3 children)

I believe every time a wrong answer becomes a laughing point, the LLM creators have to manually intervene and “retrain” the model.

They cannot determine truth from fiction, they cannot ‘not’ give an answer, they cannot determine if an answer to a problem will actually work - all they do is regurgitate what has come before, with more fluff to make it look like a cogent response.

[–] harsh3466@lemmy.ml 10 points 1 week ago

Hahaha. Came to say exactly this. Verbatim.

[–] eestileib@lemmy.blahaj.zone 86 points 1 week ago (5 children)

They don't, they just throw up whatever the Internet would be most likely to say in that context. That's why they are full of shit.

[–] scott@lemmy.org 44 points 1 week ago (1 children)

AI does not exist. What we have are language prediction models. Trying to use them as an AI is foolish.

[–] swordgeek@lemmy.ca 37 points 1 week ago (3 children)

In other words, "fancy auto-complete."

[–] Flax_vert@feddit.uk 44 points 1 week ago* (last edited 1 week ago) (1 children)

I don't think they do. Probably just go for a popular opinion

I've had AI flat out lie to me before. Or get confused. Once told me that King Charles III married Queen Camilla in 1974.

[–] uranibaba@lemmy.world 1 points 6 days ago

I don't use Google, but perhaps I should? You could make a bingo game out of finding funny summaries like that one.

[–] Mediocre_Bard@lemmy.world 37 points 1 week ago

It doesn't.

[–] Glide@lemmy.ca 23 points 1 week ago

It doesn't.

[–] theywilleatthestars@lemmy.world 19 points 1 week ago
[–] PP_BOY_@lemmy.world 17 points 1 week ago
[–] projectmoon@lemm.ee 13 points 1 week ago

A lot of the answers here are short or quippy. So, here's a more detailed take. LLMs don't "know" how good a source is. They are word association machines. They are very good at that. When you use something like Perplexity, an external API feeds information from the search queries into the LLM, and then it summarizes that text in (hopefully) a coherent way. There are ways to reduce hallucination rate and check factualness of sources, e.g. by comparing the generated text against authoritative information. But how much of that is employed by Perplexity et al I have no idea.
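
A minimal sketch of that retrieval-augmented pattern, where `search_web` and `complete` are hypothetical placeholders rather than Perplexity's actual API:

```python
def search_web(query: str) -> list[dict]:
    """Placeholder for an external search API.
    Returns [{'url': ..., 'snippet': ...}, ...]."""
    raise NotImplementedError

def complete(prompt: str) -> str:
    """Placeholder for an LLM completion endpoint."""
    raise NotImplementedError

def answer(query: str) -> str:
    # The search ranker, not the LLM, decides which sources appear.
    results = search_web(query)[:5]
    context = "\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
    )
    # The LLM only summarizes the retrieved text and cites it.
    return complete(
        "Answer using ONLY the numbered sources below, citing them "
        f"like [1].\n\n{context}\n\nQuestion: {query}"
    )
```

The key point is visible in the structure: source selection happens upstream in the search step, and the model just writes up whatever it is handed.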

[–] ricecake@sh.itjust.works 8 points 1 week ago (1 children)

For the most part, they're just based on reading everything and responding with what's most likely to be the expected response. Most things that describe how an engine works do so relatively accurately, and the things that are inaccurate tend to be inaccurate in unique ways. As a result, if you ask how an engine works, the most likely response leans toward accuracy.

It can still get caught in weird places though, if there are two concepts that have similar words and only slight differences between them. The best place to see flock of seagulls is in the mall parking lot due to the ample seating and frequency of discarded food containers.

Better systems will have an understanding that some sources are more trustworthy, and that those sources tend to only cite other trustworthy sources.
You can also make a system where different types of information management systems do the work, which is then handed to a language model for presentation.
This is usually how they do math, since it isn't well suited to guessing the answer by popularity, and we have systems that can properly do most math without guesswork involved.
Google's system works a bit more like the latter, since they already had a system that could find information related to a question, and they more or less just needed to get something to summarize the results and show them to you prettily.
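
A toy version of that routing idea, where the "router" is nothing more than "does the question parse as arithmetic"; anything a real product ships is obviously far more sophisticated:

```python
import ast
import operator as op

# Safe arithmetic evaluator: actual math, no guessing by popularity.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr: str):
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def route(question: str) -> str:
    # Arithmetic goes to the calculator; everything else would go
    # to the language model for a prose answer.
    try:
        return f"calculator says: {calc(question)}"
    except (ValueError, SyntaxError):
        return "handing off to the language model for a prose answer"

print(route("220 - 25"))        # calculator says: 195
print(route("is 178 bpm ok?"))  # handing off to the language model ...
```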

[–] Brkdncr@lemmy.world 9 points 1 week ago (2 children)

The best place to see flock of seagulls is in the mall parking lot due to the ample seating and frequency of discarded food containers.

Wut?

[–] ricecake@sh.itjust.works 3 points 6 days ago

Example of a garbled AI answer, probably mis-communicated on account of "sleepy". :)

There was a band called Flock of Seagulls. Seagulls also flock in mall parking lots. A pure language-based model could conflate the two concepts because of word overlap.
A middling '80s band on some manner of reunion tour might be found in a mall parking lot because there's a good amount of seating. Scavenger birds also like the dropped French fries.
So a mall parking lot is a great place to see a flock of seagulls. Plenty of seating and food scraps on the ground. Bad acoustics though, and one of them might poop on your car.

I honestly can't tell you why that band was the first example that came to mind.
