this post was submitted on 06 Aug 2025
593 points (98.5% liked)
Comic Strips
18656 readers
1326 users here now
Comic Strips is a community for those who love comic stories.
The rules are simple:
- The post can be a single image, an image gallery, or a link to a specific comic hosted on another site (the author's website, for instance).
- The comic must be a complete story.
- If it is an external link, it must be to a specific story, not to the root of the site.
- You may post comics from others or your own.
- If you are posting a comic of your own, a maximum of one per week is allowed (I know, your comics are great, but this rule helps avoid spam).
- The comic can be in any language, but if it's not in English, OP must include an English translation in the post's 'body' field (note: you don't need to select a specific language when posting a comic).
- Politeness.
- AI-generated comics aren't allowed.
- Adult content is not allowed. This community aims to be fun for people of all ages.
Web of links
- !linuxmemes@lemmy.world: "I use Arch btw"
- !memes@lemmy.world: memes (you don't say!)
founded 2 years ago
And how does it differ from AI?
It differs from AI in that it's completely unintelligent and doesn't try or pretend to be intelligent or creative in any way. It leaves all the intelligence and creativity up to the user. It involves no "training" on large quantities of scraped data, and it won't do anything it isn't explicitly told to do. The exact placement and pose of every stick figure, the precise layout and size of the individual frames, and the exact content of every chunk of text are all explicitly and precisely specified by the user of codecomic (in a source code file). Also, a given source code file will only ever produce exactly the same webcomic, whereas with generative AI the exact same input generally produces a bunch of candidate images from which the user must select the "best."
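To make that concrete, here's a rough sketch of the idea in Python (this is not codecomic's actual syntax, just an illustration): every coordinate, frame size, and piece of text is stated explicitly in the source, and the same source always produces identical output.

```python
# Illustrative only: a hypothetical, codecomic-style panel where the author
# spells out every position and every word, and the output is deterministic.

def stick_figure(x, y):
    """Return SVG for a stick figure whose head is centred at (x, y)."""
    return (
        f'<circle cx="{x}" cy="{y}" r="10" fill="none" stroke="black"/>'                   # head
        f'<line x1="{x}" y1="{y + 10}" x2="{x}" y2="{y + 50}" stroke="black"/>'            # body
        f'<line x1="{x - 15}" y1="{y + 25}" x2="{x + 15}" y2="{y + 25}" stroke="black"/>'  # arms
        f'<line x1="{x}" y1="{y + 50}" x2="{x - 12}" y2="{y + 75}" stroke="black"/>'       # left leg
        f'<line x1="{x}" y1="{y + 50}" x2="{x + 12}" y2="{y + 75}" stroke="black"/>'       # right leg
    )

def panel(width, height, figures, caption):
    """One frame: exact size, exact figure positions, exact caption text."""
    body = "".join(stick_figure(x, y) for x, y in figures)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<rect width="{width}" height="{height}" fill="white" stroke="black"/>'
        f'{body}'
        f'<text x="10" y="{height - 10}" font-size="14">{caption}</text>'
        '</svg>'
    )

# The comic's "source code": nothing here is inferred or generated.
print(panel(300, 200, figures=[(80, 40), (220, 40)], caption="And how does it differ from AI?"))
```

Run it twice and you get the same SVG both times; change one coordinate and exactly that one thing changes.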
Something like Stable Diffusion, by contrast, does rely on lots of training data, and the user's only input into the content of the "generated" output is to throw a word salad of keywords at it and tell it to "discern roughly something that fits these keywords." The user doesn't specify the exact location of anything in the resulting image, and the user has no control over what exact text appears in it (typically AI can't even render coherent text). At best the user can "influence" the output by tuning the keywords and hoping, fingers crossed, that the Stable Diffusion model does roughly what the user has in mind.
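For contrast, here's roughly what that workflow looks like with the Hugging Face diffusers library (the model id and prompt are just illustrative): the prompt only loosely steers the result, and each seed yields a different candidate image.

```python
# Prompt-based generation: the user supplies keywords, not layout or text.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "two stick figures arguing about AI, black and white comic panel"

# Same prompt, different seeds -> different images; the user picks the "best".
for seed in (0, 1, 2):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"candidate_{seed}.png")
```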
Something tells me you didn't even read what's there.