this post was submitted on 23 Jul 2025

chapotraphouse


But they conveniently leave out that it costs money to do anything with AI. It's more like "open to anyone with a credit card." The vast majority of people don't have computers powerful enough to run generative AI models locally, and even for those who do, server farms with a billion GPUs will always produce better results.

This means that people have to rely on corporate platforms where you buy tokens to get pulls at the various AI slop slot machines, hoping you get something decent. The mechanics more closely resemble a gacha game than any kind of artistic process.

By contrast, learning how to draw, animate or make 3D models costs nothing. There are free tutorials and tools everywhere, and you can also just pirate commercial ones if you want.

[–] doublepepperoni@hexbear.net 2 points 4 days ago* (last edited 4 days ago) (2 children)

Really? I might look into it, then. I once tried generating random pictures with Bing (?) since Microsoft handed out some free points, and while it was fun to mess around with, I hated how much it felt like playing some freemium mobile game. I also just hate this sort of subscription/microtransaction-based cloud-computing SaaS bullshit in general.

I might actually enjoy generating stuff on my own hardware

[–] sudo_halt@lemmygrad.ml 4 points 4 days ago* (last edited 4 days ago)

To actually use it properly, install ComfyUI and get in deep. Actually using SD to generate what you want consistently is a form of art in its own right; it's kind of like learning Krita.

Generating random bullshit à la ChatGPT is easy. Getting anatomy, object placement and logical consistency right in the image, while also overcoming model biases or introducing new ones, is some complicated shit. For example, there are models that can generate depth layers, models that can dictate object placement, and since SD output is smol, you need to wire it into an upscaler model, etc.

SDWebUI is much simpler, but also much more limited. Same with Android apps that run SD locally; really, nothing matches the level of control that ComfyUI has.
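To make the "wiring nodes together" idea concrete: ComfyUI graphs can also be driven programmatically through its local HTTP API. This is a minimal sketch, assuming a default ComfyUI install listening on port 8188; the checkpoint filename is a placeholder for whatever model you actually have, and the node/parameter names are ComfyUI's standard built-in nodes.

```python
import json
import urllib.request

# Minimal text-to-image graph in ComfyUI's API format. Node ids are
# arbitrary strings; a link like ["4", 0] means "output 0 of node 4".
# "your-model.safetensors" is a placeholder for a file in
# ComfyUI/models/checkpoints.
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your-model.safetensors"}},
    "6": {"class_type": "CLIPTextEncode",       # positive prompt
          "inputs": {"text": "a lighthouse at dusk, oil painting",
                     "clip": ["4", 1]}},
    "7": {"class_type": "CLIPTextEncode",       # negative prompt
          "inputs": {"text": "blurry, low quality",
                     "clip": ["4", 1]}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "positive": ["6", 0],
                     "negative": ["7", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "out"}},
}

def queue_prompt(wf, host="127.0.0.1:8188"):
    """POST the graph to a locally running ComfyUI instance."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps({"prompt": wf}).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

# queue_prompt(workflow)  # uncomment with ComfyUI running locally
```

The upscaler or depth-model stages mentioned above would just be more nodes spliced into this same graph, which is exactly why the node-based approach scales to complicated pipelines.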

All you need is any recent Nvidia GPU with 4GB of VRAM, though less VRAM has been reported to work. For AMD, since they apparently hate GPGPU, only their top cards support ROCm. As for Apple and Intel, I do not know.
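A rough sanity check on why 4GB is roughly enough: at fp16 (2 bytes per parameter), the weights alone for an SD 1.5-class UNet fit comfortably, with headroom left for the VAE, text encoder and activations at 512x512. The parameter counts below are approximate and for illustration only.

```python
# Back-of-the-envelope VRAM estimate for model weights at a given precision.
def weights_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory taken by model weights alone, in GiB (fp16 by default)."""
    return n_params * bytes_per_param / 2**30

sd15_unet = weights_gib(860e6)   # SD 1.5 UNet, ~860M params -> ~1.6 GiB
sdxl_unet = weights_gib(2.6e9)   # SDXL UNet, ~2.6B params -> ~4.8 GiB
```

The same arithmetic shows why SDXL-class models are a much tighter squeeze on a 4GB card than SD 1.5.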

If you are stuck with AMD and can't use ROCm, just use KoboldCPP. It will be significantly shittier in every way, but at least it runs.

[–] Le_Wokisme@hexbear.net 2 points 4 days ago

never looked into it beyond "lol minions in hieronymus bosch" but i see people with 10+ images of the same subject, so i figure there are ways to get consistent-ish results out of these things that the free offerings aren't configured for