this post was submitted on 01 Jul 2025
2113 points (98.4% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

[–] PeriodicallyPedantic@lemmy.ca 3 points 3 days ago (1 children)

He isn't talking about running locally; he's talking about what it takes for the AI providers to provide the AI.

To say "it takes more energy during training" entirely depends on the load put on the inference servers, and the size of the inference server farm.

[–] Jakeroxs@sh.itjust.works 3 points 3 days ago (1 children)

There's no functional difference aside from usage and scale, which is my point.

I find it interesting that the only actual energy calculations I see from researchers cover training and everything that goes along with training, rather than the energy used per request after training.

People then conflate training energy costs with normal usage costs without data to back it up. I don't have that data either, but I do have what I can measure and see on my side.

[–] PeriodicallyPedantic@lemmy.ca 2 points 2 days ago

I'm not sure that's true. If you look up things like "tokens per kWh" or "tokens per second per watt", you'll find people measuring their power usage while running specific models on specific hardware. These figures are mostly for consumer hardware, since it's people looking to run their own AI servers who post them, but they set an upper bound.
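
For example, here's a back-of-the-envelope conversion of the kind of numbers people post (the throughput and power draw below are assumed figures, not real measurements):

```python
# Convert a posted tokens/sec figure at a given power draw into
# tokens-per-kWh and energy per response. All inputs are assumed.

tokens_per_second = 40.0    # assumed local throughput for some model
power_draw_watts = 350.0    # assumed GPU + system draw under load

# tokens per kWh = tokens per hour divided by kW drawn
tokens_per_kwh = tokens_per_second * 3600 / (power_draw_watts / 1000)

# energy for a 1,000-token response, in watt-hours
wh_per_1k_tokens = 1000 / tokens_per_kwh * 1000

print(f"~{tokens_per_kwh:,.0f} tokens per kWh")
print(f"~{wh_per_1k_tokens:.2f} Wh per 1,000-token response")
```

Data-centre hardware batching many requests at once should do better per token than a single consumer GPU, which is why a measurement like this works as an upper bound.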

The AI providers are tight-lipped about how much energy they use for inference and how many tokens they complete per hour.

You can also infer a bit by looking up the power draw of a 4090, then looking at the tokens-per-second performance someone is getting from a particular model on that card (people love posting their tokens-per-second numbers every time a new model comes out), and extrapolating from there.
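
A rough sketch of that extrapolation (the 450 W is the 4090's published TDP; the tokens/sec and reply length are made-up stand-ins for whatever benchmark post you find):

```python
# Extrapolate per-reply energy from a GPU's power draw and a posted
# tokens/sec benchmark. Only the 450 W TDP is a real spec; the rest is assumed.

gpu_power_watts = 450.0        # RTX 4090 TDP; real draw during inference varies
posted_tokens_per_sec = 60.0   # assumed figure from someone's benchmark post
reply_length_tokens = 500      # assumed typical chat reply

seconds_per_reply = reply_length_tokens / posted_tokens_per_sec
wh_per_reply = gpu_power_watts * seconds_per_reply / 3600

print(f"~{seconds_per_reply:.1f} s per reply, ~{wh_per_reply:.2f} Wh per reply")
```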