this post was submitted on 14 Mar 2025
1050 points (99.2% liked)

(page 2) 50 comments
[–] MonkderVierte@lemmy.ml 3 points 4 hours ago (3 children)

This is legal, even in the US?

[–] JackbyDev@programming.dev 12 points 6 hours ago (1 children)

I didn't even know this was a feature. My understanding has always been that Echo devices work as follows.

  1. Store a constant small buffer of the past few seconds of audio
  2. Locally listen for the wake word (typically "Alexa") using onboard hardware. (This is why you cannot use arbitrary wake words.)
  3. Upon hearing the wake word, send the buffer from step 1 along with any fresh audio to the cloud to process what was said.
  4. Act on what was said. (Turn lights on or off, play Spotify, etc.)

Unless they made some that were able to do step 3 entirely locally, I don't see this as a big deal. They still have to do step 4 remotely.

Also, while they may be "always recording", they don't transmit everything. The buffer is only there so that if you say "Alexaturnthelightsoff" really fast, it has a better chance of catching the full sentence.

I'm not trying to defend Amazon, and I don't necessarily think this is great news or anything, but it doesn't seem like too big of a deal unless they made a lot of devices that could parse all speech locally and I didn't know about it.
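Roughly, I picture it like this toy loop (completely hypothetical; none of these functions are real Amazon firmware, they're just placeholders for the steps above):

```python
# Hypothetical sketch of the wake-word flow described above. The function
# bodies are placeholders; only the overall shape of the loop is the point.
import time
from collections import deque

BUFFER_SECONDS = 3    # step 1: keep only the last few seconds locally
CHUNK_SECONDS = 0.5

ring_buffer = deque(maxlen=int(BUFFER_SECONDS / CHUNK_SECONDS))

def record_chunk():
    """Placeholder: capture ~0.5 s of microphone audio."""
    time.sleep(CHUNK_SECONDS)
    return b"\x00" * 8000

def heard_wake_word(chunk):
    """Placeholder: on-device wake-word detector (step 2)."""
    return False

def stream_to_cloud(audio):
    """Placeholder: upload buffered + fresh audio for processing (step 3)."""
    pass

while True:
    chunk = record_chunk()
    ring_buffer.append(chunk)        # step 1: rolling buffer, old audio falls off
    if heard_wake_word(chunk):       # step 2: only the wake word is matched locally
        stream_to_cloud(b"".join(ring_buffer))  # step 3: buffer + what follows goes up
        # step 4 (acting on the parsed command) happens on Amazon's side
```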

[–] Wispy2891@lemmy.world 7 points 6 hours ago

It was a non-advertised feature, available only in the US and only in English.

[–] Doctor_Satan@lemmy.world 10 points 6 hours ago

If you traveled back in time and told J. Edgar Hoover that in the future the American public would voluntarily wiretap themselves, he would cream his frilly pink panties.

[–] muh_shroom@lemmy.ca 29 points 8 hours ago (2 children)

I can’t believe people are still voluntarily wiretapping themselves in 2025.

[–] PeterisBacon@lemm.ee 81 points 10 hours ago (2 children)

I have always told people to avoid Amazon.

They have doorbells to watch who comes to your house and when.

Indoor and outdoor security cameras to monitor when you go outside, for how long, and why.

They acquired Roomba, which not only maps out your house but also has little cameras on board, giving them another angle to monitor you in more personal areas of your house that indoor cameras might not see.

They have the Alexa products, meant to record you at all times for their own purposes.

Why do you think Amazon Prime subscriptions come with free cloud storage, free video streaming, and free music? They are categorizing you in the most efficient and accurate way possible.

Boycott anything Amazon touches

[–] daggermoon@lemmy.world 8 points 7 hours ago

They backed out of the Roomba deal. Now iRobot is going down the shitter.

[–] swampdownloader@lemmy.dbzer0.com 33 points 10 hours ago (2 children)

I agree with your sentiment and despise Amazon, but they do not own Roomba; the deal fell through.

[–] KeefChief13@lemmy.world 15 points 8 hours ago

Christ, finally a win

[–] SaharaMaleikuhm@feddit.org 7 points 6 hours ago

No way! The microphones you put all over your house are listening to you? What a shocker!
If you bought these, this is on you. Trash them now.

[–] impudentmortal@lemmy.world 10 points 8 hours ago (1 children)

How disheartening. I knew going in that there would be privacy issues, but I figured it was a fair trade for the service. I also figure my phone is always listening anyway.

As someone with limited mobility, my echo has been really nice to control my smart devices like lights and TV with just my voice.

Are there good alternatives or should I just accept things as they are?

[–] Hexarei@programming.dev 7 points 7 hours ago (1 children)

There aren't any immediate drop-in replacements that won't require some work, but there is Home Assistant Voice. It just requires that you also have a Home Assistant server set up, which is the more labor-intensive part. It's not hard, just a lot to learn.

[–] 52fighters@lemmy.sdf.org 74 points 11 hours ago (4 children)

People are saying don't get an Echo, but this is the tip of the iceberg. My coworkers' cell phones are eavesdropping. My neighbors' doorbells record every time I leave the house. Almost every new vehicle mines us for data. We can avoid some of the problem, but we cannot avoid it all. We need a bigger, more aggressive solution if we are going to have a solution at all.

[–] qevlarr@lemmy.world 18 points 7 hours ago* (last edited 7 hours ago)

How about regulation? Let's start by saying that data about me belongs to me, not to whoever collected it, which is how it currently works.

[–] SaharaMaleikuhm@feddit.org 1 points 6 hours ago

My clunky old bike ain't listening to shit, bro. Neither is my Android phone running a custom ROM.

[–] ArchmageAzor@lemmy.world 165 points 14 hours ago (2 children)

Publicly, that is. They have no doubt been doing it in secret since they launched it.

[–] SpaceNoodle@lemmy.world 87 points 14 hours ago (1 children)

Off-device processing has been the default from day one. The only thing changing is the removal of local processing on certain devices, likely because the new backing AI model will no longer be able to run on that hardware.

[–] 4am@lemm.ee 41 points 12 hours ago (1 children)

With on-device processing, they don’t need to send audio. They can just send the text, which is infinitely smaller and easier to encrypt as “telemetry”. They’ve probably got logs of conversations in every Alexa household.

[–] b1t@lemm.ee 38 points 12 hours ago (5 children)

This has always blown my mind: watching people willingly allow Big Brother-esque devices into their homes for very, very minor conveniences, like turning on some gimmicky multi-colored light bulbs. Now they're literally using home "security" cameras that store everything on some random cloud server. I'll truly never understand.

[–] deranger@sh.itjust.works 10 points 12 hours ago* (last edited 11 hours ago) (7 children)

Why has no security researcher published evidence of these devices with microphones uploading random conversations? Nobody working on the inside has ever leaked anything regarding this potentially massive breach of privacy? A perfectly secret conspiracy by everyone involved?

We know more about top secret NSA programs than we do about this proposed Alexa spy mechanism. None of the people working on this at Amazon have wanted to leak anything?

I’m not saying it’s not possible, but it seems extremely improbable to me that everyone’s microphones are listening to their conversations, they’re being uploaded somewhere to serve them better ads, and absolutely nobody has leaked anything or found any evidence.

[–] hungprocess@lemmy.sdf.org 13 points 11 hours ago (6 children)

Nobody working on the inside has ever leaked anything regarding this potentially massive breach of privacy? A perfectly secret conspiracy by everyone involved?

I hate to be the bearer of bad news, but...

[–] tal@lemmy.today 18 points 13 hours ago* (last edited 6 hours ago)

If you look at the article, it was only ever possible to do local processing with certain devices and only in English. I assume that those are the ones with enough compute capacity to do local processing, which probably made them cost more, and that the hardware probably isn't capable of running whatever models Amazon's running remotely.

I think that there's a broader problem than Amazon and voice recognition for people who want self-hosted stuff. That is, throwing loads of parallel hardware at something isn't cheap. It's worse if you stick it on every device. Companies (even aside from not wanting someone to pirate their model running on the device) are going to have a hard time selling devices with big, costly, power-hungry parallel compute processors.

What they can take advantage of is that for a lot of tasks, the compute demand is only intermittent. So if you buy a parallel compute card, the cost can be spread over many users.

I have a fancy GPU that I got to run LLM stuff that ran about $1000. Say I'm doing AI image generation with it 3% of the time. It'd be possible to do that compute on a shared system off in the Internet, and my share of the hardware cost would only be about $30. That's a heckofa big improvement.
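Back-of-the-envelope, with my rough numbers (the price and the 3% utilization are just guesses):

```python
gpu_cost = 1000        # rough price of the GPU, USD
my_utilization = 0.03  # fraction of the time I'd actually be using it

users_per_card = 1 / my_utilization   # ~33 people could share one card
my_share = gpu_cost * my_utilization  # ~$30 worth of hardware for my usage

print(f"{users_per_card:.0f} users per card, about ${my_share:.0f} each")
```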

And the situation that they're dealing with is even larger, since there might be multiple devices in a household that want to do parallel-compute-requiring tasks. So now you're talking about maybe $1k in hardware for each of them, not to mention the supporting hardware like a beefy power supply.

This isn't specific to Amazon. Like, this is true of all devices that want to take advantage of heavyweight parallel compute.

I think that one thing that it might be worth considering for the self-hosted world is the creation of a hardened network parallel compute node that exposes its services over the network. So, in a scenario like that, you would have one (well, or more, but could just have one) device that provides generic parallel compute services. Then your smaller, weaker, lower-power devices (phones, Alexa-type speakers, whatever) make use of it over your network, using a generic API.

There are some issues that come with this. It needs to be hardened, can't leak information from one device to another. Some tasks require storing a lot of state (like, AI image generation requires uploading a large model, and you want to cache that; if you have, say, two parallel compute cards/servers, you want to use them intelligently, keep the model loaded on one of them insofar as is reasonable, to avoid needing to reload it). Some devices are very latency-sensitive (like voice recognition) and some, like image generation, are amenable to batch use, so some kind of priority system is probably warranted. So there are some technical problems to solve.

But otherwise, the only real option for heavy parallel compute is going to be sending your data out to the cloud. And even if you don't care about the privacy implications or the possibility of a company going under, as I saw some home automation person once point out, you don't want your light switches to stop working just because your Internet connection is out.

Having per-household self-hosted parallel compute on one node is still probably more-costly than sharing parallel compute among users. But it's cheaper than putting parallel compute on every device.

Linux has some highly-isolated computing environments like seccomp that might be appropriate for implementing the compute portion of such a server, though I don't know whether it's too-restrictive to permit running parallel compute tasks.

In such a scenario, you'd have a "household parallel compute server", in much the way that one might have a "household music player" hooked up to a house-wide speaker system running something like mpd or a "household media server" providing storage of media, or suchlike.
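For what it's worth, here's a very rough sketch of the shape such a node might take. Everything in it (the port, the task names, the fake "model loading") is invented for illustration; a real version would need the hardening, isolation, and actual accelerator plumbing discussed above.

```python
# Hypothetical "household parallel compute server": one box on the LAN exposes
# a generic job API; small devices POST work to it instead of to the cloud.
import json
import queue
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Priority queue: lower number = more urgent (voice beats batch image work).
jobs = queue.PriorityQueue()
results = {}            # a real server would expose a way to fetch these
job_counter = 0
counter_lock = threading.Lock()

# Cache of "loaded models" so a big model isn't reloaded for every request.
model_cache = {}

def load_model(name):
    """Stand-in for loading a large model onto the accelerator."""
    if name not in model_cache:
        model_cache[name] = f"<{name} weights loaded>"
    return model_cache[name]

def worker():
    """Drains the queue in priority order and does the (fake) compute."""
    while True:
        priority, job_id, task, payload = jobs.get()
        model = load_model(task)
        results[job_id] = f"{model} processed {payload!r} (prio {priority})"
        jobs.task_done()

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        global job_counter
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        with counter_lock:
            job_counter += 1
            job_id = job_counter
        # Latency-sensitive tasks get priority 0, batch tasks get 10.
        priority = 0 if body.get("task") == "speech_to_text" else 10
        jobs.put((priority, job_id, body.get("task", "unknown"), body.get("data", "")))
        self.send_response(202)
        self.end_headers()
        self.wfile.write(json.dumps({"job_id": job_id}).encode())

if __name__ == "__main__":
    threading.Thread(target=worker, daemon=True).start()
    ThreadingHTTPServer(("0.0.0.0", 8800), Handler).serve_forever()
```

The queue and the model cache are the parts that map onto the "priority system" and "keep the model loaded" points above; the hard parts (isolation, auth, actually driving the GPU) are deliberately left out.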

[–] blackberry@midwest.social 30 points 11 hours ago (1 children)

Be aware: everything you say around Amazon, Apple, Alphabet, Meta, and any other corporate trash product is being sold, trained on, and sent to your local alphabet agency. It's been this way for a while, but this is a nice reminder to know when to speak and when to listen.

[–] 13igTyme@lemmy.world 9 points 8 hours ago

Everyone literally carries a personal recording device.

[–] fubarx@lemmy.world 19 points 11 hours ago (1 children)

So... if you own an inexpensive Alexa device, it just doesn't have the horsepower to process your requests on-device. Your basic $35 device is just a microphone and a wifi streamer (ok, it also handles buttons and fun LED light effects). The Alexa device SDK can run on a $5 ESP-32. That's how little it needs to work on-site.

Everything you say is sent to the cloud, where it is NLP-processed, parsed, and turned into command intents, which are matched against the devices and services you've installed. It matches the phrase against 'slots' and returns results, which are then turned into voice and played back on the speaker.
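As a toy illustration of that intent/slot matching (the phrases and names here are made up; the real Alexa models are far more sophisticated):

```python
import re

# A couple of made-up "intents" with slot patterns: roughly how a voice
# assistant maps a transcribed utterance to an action plus parameters.
INTENTS = [
    ("TurnOnDevice", re.compile(r"turn on (?:the )?(?P<device>.+)")),
    ("PlayMusic",    re.compile(r"play (?P<artist>.+?) on (?P<service>.+)")),
]

def match_intent(utterance):
    text = utterance.lower().strip()
    for name, pattern in INTENTS:
        m = pattern.fullmatch(text)
        if m:
            return name, m.groupdict()   # intent plus filled slots
    return None, {}

print(match_intent("Turn on the kitchen lights"))
# ('TurnOnDevice', {'device': 'kitchen lights'})
print(match_intent("Play Daft Punk on Spotify"))
# ('PlayMusic', {'artist': 'daft punk', 'service': 'spotify'})
```

Real systems use trained NLU models rather than regexes, but the intent-plus-slots shape of the output is the same idea.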

With the new LLM-based Alexa+ services, it's all on the cloud. Very little of the processing can happen on-device. If you want to use the service, don't be surprised the voice commands end up on the cloud. In most cases, it already was.

If you don't like it, look into Home Assistant. But last I checked, to keep everything local and not too laggy, you'll need a super beefy (expensive) local home server. Otherwise, it's shipping your audio bits out to the cloud as well. There's no free lunch.

[–] MintyFresh@lemmy.world 31 points 12 hours ago (3 children)

Easy fix: don't buy this garbage in the first place. It's terrible for the environment, terrible for your privacy, and of dubious value to begin with.

If every man is an onion, one of my deeper layers is curmudgeon. So take that into account when I say fuck all portable speakers. I'm so tired of hearing everyone's shitty noise. Just fucking everywhere. It takes one person feeling entitled to blast the shittiest music available to ruin the day of everyone in a 500-yard radius. If this is you, I hope you stub your toe on every coffee table, hit your head on every doorjamb, and miss every bus.

[–] DirkMcCallahan@lemmy.world 47 points 14 hours ago (4 children)

Today: "...they will be deleted after Alexa processes your requests."

Some point in the not-so-distant future: "We are reaching out to let you know that your voice recordings will no longer be deleted. As we continue to expand Alexa's capabilities, we have decided to no longer support this feature."

[–] ch00f@lemmy.world 18 points 12 hours ago

“We lied and paid a $3M fine.”

[–] HawlSera@lemm.ee 1 points 6 hours ago (1 children)

Maybe I misread the actual text, but it sounds like the exact opposite, that it's going to auto-delete what you say.

[–] Cokes@feddit.org 2 points 5 hours ago

They delete the recording from your device... after it has been sent to Amazon, to be stored and used without limit.

[–] doingthestuff@lemy.lol 7 points 10 hours ago (2 children)

I honestly have no idea why anyone who cares even 1% about their privacy would have ever bought one of these abominations in the first place. If I ever receive one as a gift I will burn it with fire.

[–] Jinzul@lemmy.ca 3 points 7 hours ago* (last edited 7 hours ago)

I have the things so that I can understand how to protect myself from them. I have a similar thing going on with AI video right now. Hate it but watch the growth to understand it.

[–] trashboat@midwest.social 4 points 9 hours ago

Better yet, crack it open and find a way to load alternative firmware onto it

[–] Clinicallydepressedpoochie@lemmy.world 6 points 10 hours ago* (last edited 10 hours ago)

Amazon employee with no piss breaks listening in on my Echo:

"How many fucking cats does this guy have? Just choose one name and call it that!"

Edit: "I don't know, Jeff, sell him a fucking Dr. Seuss book or something, the guy's mental."

[–] yesman@lemmy.world 20 points 14 hours ago

It's always been this way for the cheap speakers. They've no processing power on-board and need the cloud just to tell you the time.

[–] missandry351@lemmings.world 6 points 10 hours ago (1 children)

What happens if I buy one and start playing porn on my computer?
