this post was submitted on 18 Aug 2023
-17 points (35.6% liked)

Technology

all 30 comments
[–] halfempty@kbin.social 49 points 2 years ago (3 children)

Far-right media considers "fact-based" to be a "liberal bias."

[–] luthis@lemmy.nz 9 points 2 years ago (2 children)

It actually is biased though. UpperEchelon did a video exposing this. I swing to the left myself, but I would prefer if the LLMs were objective.

[–] theywilleatthestars@lemmy.world 31 points 2 years ago (2 children)

Political objectivity is impossible.

[–] luthis@lemmy.nz 7 points 2 years ago (2 children)

I would argue that asking a machine to list known information is not impossible.

Here's a very clear example where ChatGPT refused to answer a question about Biden but happily answered the exact same question about Trump.

https://youtu.be/_Klkr6PtYzI?t=520

And before anyone starts, NO! I'm not a supporter of the oompaloompa king.

[–] Sekoia@lemmy.blahaj.zone 6 points 2 years ago

Mhm, but with the way LLMs work, it's not possible to actually remove bias since it's baked into the training data. Any adjustment towards "neutral" would be biased by what the adjuster considers neutral.
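The "baked into the training data" point can be shown with a deliberately tiny sketch. This is not an LLM, just a toy word-count classifier over an invented, skewed corpus (all documents, labels, and topic words here are made up for illustration): because one topic only ever appears with one label during training, the model reproduces that skew on a neutral input.

```python
# Toy sketch (not a real LLM): a word-frequency "model" trained on a
# hand-made, deliberately skewed corpus. All data here is invented.
from collections import Counter

# Hypothetical training data: every document mentioning "tariffs"
# happens to carry a "negative" label, purely by sampling accident.
corpus = [
    ("tariffs hurt consumers", "negative"),
    ("tariffs raise prices", "negative"),
    ("parks improve cities", "positive"),
    ("parks are pleasant", "positive"),
]

# Count how often each word appears under each label.
counts = {"positive": Counter(), "negative": Counter()}
for text, label in corpus:
    counts[label].update(text.split())

def predict(text):
    # Score each label by how often its training words match the input.
    scores = {
        label: sum(c[word] for word in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

# A neutral sentence about tariffs still comes out "negative",
# because the skew was baked in at training time.
print(predict("the tariffs policy"))  # -> negative
```

Any attempt to "correct" this after the fact means someone choosing which counts to adjust, which is the adjuster's own notion of neutral, exactly the problem described above, just at a vastly larger scale in a real model.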

[–] PipedLinkBot@feddit.rocks 1 points 2 years ago

Here is an alternative Piped link(s): https://piped.video/_Klkr6PtYzI?t=520

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[–] GigglyBobble@kbin.social 0 points 2 years ago* (last edited 2 years ago) (1 children)

Only if emotions are involved. Of course, it's not possible as long as we train our AI on flawed, human-generated data.

[–] theywilleatthestars@lemmy.world 3 points 2 years ago (1 children)

LLMs require human-generated data; that's how they work.

[–] GigglyBobble@kbin.social 0 points 2 years ago (1 children)

That's how we make them work today. It is possible to stay politically neutral in language, though, and therefore your generalized statement is incorrect.

[–] Renacles@discuss.tchncs.de 3 points 2 years ago (1 children)

What does politically neutral even mean though? Because it's definitely not centrism.

How would you train AI without biased data anyways? All data carries some bias from whoever generated it.

[–] GigglyBobble@kbin.social 1 points 2 years ago (1 children)

> What does politically neutral even mean though?

A conversation being void of any kind of politics is politically neutral. And that's most conversations I have.

[–] Renacles@discuss.tchncs.de 2 points 2 years ago (1 children)

Most people don't agree on what subjects are and aren't political.

[–] PostmodernPythia@lemmy.world 1 points 2 years ago

That’s because people mean very different things by “political.” I use the definition of “a subject related to the (usually human) division of power”, because it makes it rhetorically harder for people to depoliticize their pet causes, so we can actually look at what’s happening.

[–] Phanatik@kbin.social 5 points 2 years ago

It will never be objective if its dataset is something like the internet. It will always be prone to bias; that's the double-edged sword of LLMs: they need vast quantities of data, and the only place to get that is the internet, which is full of biased opinions.

[–] Eggyhead@kbin.social 5 points 2 years ago (1 children)

LLMs are only as “fact based” as the data they source. An LLM in the time of Galileo would have told you the earth is flat.

[–] Yorick@feddit.ch 7 points 2 years ago (1 children)

Weekly reminder that scholars since at least 500 BC knew the earth was a sphere, so Galileo's LLM would tell him that, and could even state its size with reasonable accuracy.

[–] Eggyhead@kbin.social 1 points 2 years ago

> Galileo’s LLM would tell him that

Assuming the sources used for training were the correct ones.

[–] mishimaenjoyer@kbin.social 0 points 2 years ago

Today's fact is tomorrow's hate crime to say out loud. Beware!

[–] FarceMultiplier@lemmy.ca 24 points 2 years ago

Reality has a liberal bias.

[–] jflorez@sh.itjust.works 20 points 2 years ago

Reality has a liberal bias

[–] Blamemeta@lemm.ee 11 points 2 years ago

No shit, Sherlock.

[–] redchlorophyll@lemmy.world 11 points 2 years ago

Big surprise! Call the press!

No wonder, given the ever more ‘curated’ dataset it’s trained on.