this post was submitted on 29 Jul 2025
293 points (86.4% liked)

Asklemmy

A loosely moderated place to ask open-ended questions


In my opinion, AI just feels like the logical next step for capitalist exploitation and destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art on a scale that hasn't been possible before. They then use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because companies replace them with an AI that was trained on their original art. For these reasons and some others, it just feels wrong to me to use AI in such a manner, when this community should be about inclusion and kindness. Wouldn't it be much cooler if we commissioned an actual artist for the banner, or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!

[–] Cowbee@lemmy.ml 8 points 6 days ago* (last edited 5 days ago) (1 children)

Right now, anti-AI rhetoric takes the same unprincipled line the Luddites pushed forward in attacking machinery. They correctly identified a technology linked to their proletarianization, and thus a huge source of their new misery, but the technology was not at fault. Capitalism was.

What generative AI is doing is making art less artisanal. Independent artists are under attack and are being proletarianized. However, that does not mean AI itself is bad. Copyright, for example, is bad as well, yet artists depend on it. The same reaction against AI was once had against the camera for making things like portraits and still lifes more accessible, but nowadays we would not think of photography as anything more than another tool.

The real problems with AI are its massive energy consumption, its over-application in areas where it actively harms production and usefulness, and its application under capitalism, where artists are punished while corporations flourish.

In this case, there's no profit to be had. People do not need to hire artists to make a banner for a niche online community. Hell, this could have been made using green energy. These are not the kinds of uses that make AI harmful in capitalist society.

Correct analysis of how technologies are used, how they can serve our interests versus the interests of capital, and correct identification of legitimate versus illegitimate use cases are where we can succeed and learn from the mistakes of our predecessors. Correctly identifying something linked to deteriorating conditions while misanalyzing how the two are related leads to incorrect conclusions, as when the Luddites initially attacked machinery rather than organizing against the capitalists.

Hand-created art as a medium of human expression will not go away; AI can't replace that. What AI can do is make it easier to create images that don't need to serve that expressive purpose, like niche online forum banners, or to convey a concept visually. Not all images need to be created in artisanal fashion, just as we don't need to hand-draw scenes from real life when a photo would do. Neither photos nor AI can replace art. Not to mention that there is an art to photography as well: each human use of any given medium to express the human experience can be artisanal.

[–] patatas@sh.itjust.works 15 points 5 days ago* (last edited 5 days ago) (2 children)

The Luddites weren't simply "attacking machinery", though; they were attacking the specific machinery owned by the specific people who were exploiting them and changing those relations of production.

And due to the scale of these projects and the amount of existing work they require in their construction, there are no non-exploitative GenAI systems.

[–] Cowbee@lemmy.ml 11 points 5 days ago (1 children)

Yes, I'm aware that the Luddites weren't stupid or purely anti-tech. However, labor movements became far more successful once they stopped attacking machinery and organized directly against capital.

GenAI exists. We can download models and run them locally, and use green energy. We can either let capitalists have full control, or we can try to see if we can use these tools to our advantage too. We don't have the luxury of just letting the ruling class have all of the tools.

[–] patatas@sh.itjust.works -2 points 5 days ago (1 children)

These systems are premised on the idea that human thought and creativity are matters of calculation. This is a deeply anti-human notion.

https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything

[–] Cowbee@lemmy.ml 6 points 5 days ago (1 children)

Human thought is what allows us to change our environment. Just as our environment shapes us, and creates our thoughts, so too do we then reshape our environment, which then reshapes us. This endless spiral is the human experience. Art plays a beautiful part in that expression.

I'm a Marxist-Leninist. That means I am a materialist, not an idealist. Ideas are not beamed into people's heads, they aren't the primary mover. Matter is. I'm a dialectical materialist, a framework and worldview first really brought about by Karl Marx. Communism is a deeply human ideology. As Marx loved to quote, "nothing human is alien to me."

I don't appreciate your evaluation of me, or of my viewpoint. Fundamentally, it is capitalism that is the issue at hand, not whatever technology is caught up in it. Opposing the technology wholesale, rather than the system that uses it in the most nefarious ways, is a strategic error. We must use the tools we can, in the ways we need to. AI has use cases; it is also certainly overused and over-applied. Rejecting it entirely on idealist principles alone is wrong, and cedes the tools to the ruling class to use in its own favor, as it sees fit.

[–] patatas@sh.itjust.works 1 points 5 days ago (1 children)

Matter being the primary mover does not mean that ideas and ideals don't have consequences. What is the reason we want the redistribution of material wealth? To simply make evenly sized piles of things? No, it's because we understand something about the human experience and human dignity. Why would Marx write down his thoughts, if not to try to change the world?

[–] Cowbee@lemmy.ml 4 points 5 days ago (1 children)

I never for one second suggested that thoughts had no purpose or utility, or that we shouldn't want to change the world. This is, again, another time you've misinterpreted me.

[–] patatas@sh.itjust.works 0 points 5 days ago (1 children)

All I am saying is that, baked into the design and function of these material GenAI systems, is a model of human thought and creativity that justifies subjugation and exploitation.

Ali Alkhatib wrote a really nice (short) essay that, while it's not saying exactly what I'm saying, outlines ways to approach a definition of AI that allows the kind of critique that I think both of us can appreciate: https://ali-alkhatib.com/blog/defining-ai

[–] Cowbee@lemmy.ml 5 points 5 days ago (1 children)

I'm saying capitalism is the issue, not the tool. Art should be liberated from the profit motive, like all labor. Art has meaning for people because it's a deeply human expression, but not all images are "art" in the traditional sense. If I want to make a video game and I need a wood texture, I can make it by hand, have AI generate it, or take a picture. The end result is similar in all cases, even though the effort expended is vastly different. This lowers the barrier for me to participate in game-making and makes it more time-effective, while being potentially unnoticeable on the user end.

If I just put some prompts into GenAI, though, and post the output devoid of context, it isn't going to be seen as art at all. Just as a randomly snapped photograph isn't art, while photos with intention in message and form are. The fact that meaning can be taken from something is a dialogue between creator and viewer, and AI cannot replace that.

AI has use cases. Opposing it in any and all circumstances, based on a metaphysical conception of intrinsic value in something produced artisanally versus mass-produced, is the wrong way to look at it. AI cannot replicate the aspects of what we consider art in the traditional sense, and not every image needs to be artisanal. What makes the utility of a stock image any different from an AI-generated image of the same concept, assuming equivalent quality?

The bottom line is that art needs to be liberated from capitalism, and technology should not be opposed wholesale because of its use under capitalism.

[–] patatas@sh.itjust.works 0 points 5 days ago (1 children)

At the risk of misinterpreting you, it seems like you're arguing against the labour theory of value

[–] Cowbee@lemmy.ml 4 points 5 days ago (1 children)
[–] patatas@sh.itjust.works 0 points 5 days ago (1 children)

Are you not implying that human effort is not valuable?

[–] Cowbee@lemmy.ml 4 points 5 days ago* (last edited 4 days ago) (1 children)

Not intrinsically. A mud pie that someone spent 10 hours making does not have 10 hours of value. For equivalent use-values, value is regulated around the average time it takes to reproduce them across all of industry. A chair that someone spent 15 hours making and a mass-produced chair that someone spent 5 hours making, if they are equivalent use-values, would each be worth 10 hours (if this were the standard in the economy and both firms produced equal quantities).
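To make that averaging concrete, here's the arithmetic as a sketch, assuming (as above) that the two firms produce equal quantities q of interchangeable chairs:

```latex
% Quantity-weighted average labor time; equal output q per firm is the
% simplifying assumption from the comment above, not a general rule.
\text{value per chair} = \frac{15\,\mathrm{h} \cdot q + 5\,\mathrm{h} \cdot q}{2q} = 10\,\mathrm{h}
```

If the artisanal firm produced far fewer chairs than the factory, the weighted average would sit closer to 5 hours.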

Labor-power and raw materials are the source of all new value proper. Use-value, on the other hand, is distinct. AI art fundamentally cannot take the place of human art, because human art's use-value derives from what the artist is saying and how they choose to say it; the process becomes the use. Something like a texture in a video game, however, is only useful inasmuch as the end user sees and experiences it as a texture; the manner in which it was produced does not matter, whether it was painted pixel by pixel, derived from a photo, or AI-generated.

I'm not trying to play the "I read theory" card as some thought-terminating clichΓ©, but I have actually read Capital, Volume 1, and am about a third of the way through Volume 2. What you are describing is closer to the LTV of Smith or Ricardo, not that of Marx.

[–] patatas@sh.itjust.works 0 points 5 days ago (1 children)

I appreciate you describing the LTV distinctions between the thinkers, thank you, sincerely!

I think the problem I have with AI - and it sounds like you agree at least partially - is that it positions human creative work, and human labour in general, as only a means to an end, rather than also as an end in itself.

(I think this holds true even with something like a video game texture, which I would argue is indeed part of a greater whole of creative expression and should not be so readily discounted.)

This makes AI something more along the lines of what Ursula Franklin called a 'prescriptive technology', as opposed to a 'holistic technology'.

In other words, the way a technology defines how we work implies a kind of political relation: if humans are equivalent to machines, then what is the appropriate way to treat workers?

Is it impossible that there are technologies that are capitalist through and through?

[–] Cowbee@lemmy.ml 4 points 5 days ago (1 children)

Tools are different in different modes of production. A hammer is capital in the hands of a capitalist whose workers use it to drive nails, but is just a tool in the hands of a yeoman who uses it to fix up their homestead. My driving point is that art and AI images have intrinsically different use-values, and thus AI cannot take the place of art. It can pretty much occupy a similar space as stock images, but it cannot take the place of what we appreciate art for.

Humans will never be equivalent to machines, but products of labor and products of machinery can be equal. However, what makes art "useful" is not something a machine can replicate, a machine is not a human and cannot represent a human expression of the human experience. A human can use AI as a part of their art, but simply prompting art and churning something out has as little artistic value as a napkin doodle by an untrained artist.

[–] patatas@sh.itjust.works 0 points 5 days ago (1 children)

The products of artisanal labour and factory labour might indeed be equivalent in terms of the end product's use-value, but they are not equivalent as far as the worker is concerned. The loss of autonomy, the loss of opportunity for thought, problem-solving, learning, and growth: these are part of the problem with capitalist social relations.

I'm trying to say that AI has this social relation baked in, because its entire purpose is to have the user cede autonomy to the system.

[–] Cowbee@lemmy.ml 3 points 5 days ago (1 children)

I'm sorry, but that doesn't make any sense. AI is not intrinsically capitalist, and it isn't about ceding autonomy. AI is trained on a mass of inputs and spits out an output based on prompting. It isn't intrinsically capital; it's just a tool that can do some things and can't do others. I think the way you view capitalism is fundamentally different from the way Marxists view capitalism, and that is the crux of the miscommunication here.

[–] patatas@sh.itjust.works 1 points 4 days ago* (last edited 4 days ago) (1 children)

Literally the only thing AI does is cause its users to cede autonomy. Its only function is to act as a (poor) facsimile of human cognitive processing and its resultant output (edit: perhaps it's more accurate to say its function is to replace decision-making). This isn't a hammer; it's a political artifact, as Ali Alkhatib's essay 'Defining AI' describes.

[–] Cowbee@lemmy.ml 2 points 4 days ago (1 children)

AI is, quite literally, a tool that approximates an output based on its training and prompting. It isn't a political artifact or anything metaphysical.

[–] patatas@sh.itjust.works 1 points 4 days ago (1 children)

AI is a process, a way of producing, it is not a tool. The assumptions baked into that process are political in nature.

[–] Cowbee@lemmy.ml 1 points 4 days ago (1 children)

I really don't follow. Something like DeepSeek is quite literally a program trained on inputs that spits out an output depending on prompts. It isn't inherently political, in that its relation to production depends on the social role it plays. Again, a hammer isn't always capital; it is only when that's the role it plays.

[–] patatas@sh.itjust.works 1 points 4 days ago* (last edited 4 days ago) (1 children)

And that social role is, at least in part, to advance the idea that communication and cognition can be replicated by statistically analyzing an enormous amount of input text, while ignoring the human and social context and conditions that actual communication takes place in. How can that not be considered political?

[–] Cowbee@lemmy.ml 1 points 4 days ago (1 children)

The social role of a tool depends on its relation to the overarching mode of production, it isn't a static thing intrinsic to a tool. AI doesn't care about advancing any ideas, it's just a thing that exists, and its use is up to how humans use it. This seems to be all very idealist and not materialist of you.

[–] patatas@sh.itjust.works 1 points 4 days ago* (last edited 4 days ago) (1 children)

If I made a tool which literally said to you, out loud in a tinny computerised voice, "cognitive effort isn't worth it, why are you bothering to try", would it be fair to say it was putting forward the idea that cognitive effort isn't worth it and why bother trying?

If so, what's the difference when that statement is implied by the functioning of the AI system?

[–] Cowbee@lemmy.ml 1 points 4 days ago (1 children)

The existence of AI itself does not imply anything. It's a tool. The social function of AI is determined by the mode of production.

[–] patatas@sh.itjust.works 0 points 4 days ago (1 children)

Want to know how I know that it does?

Because the result is the same over and over and over and over and over again. Every single time!

https://restofworld.org/2025/colombia-meta-ai-education/

[–] Cowbee@lemmy.ml 1 points 4 days ago (1 children)

The AI is not suggesting anything by virtue of being itself. The social consequences of a given tool depend on the way society is structured, based on the mode of production.

[–] patatas@sh.itjust.works 1 points 4 days ago (1 children)

I dunno what to tell you other than that I have been consistently pointing out that AI is a process, not a tool.

If the result of that process is the same wherever it's introduced, then your model of the world has to be able to account for that.

[–] Cowbee@lemmy.ml 2 points 4 days ago

You're ascribing metaphysical messages to objects, which I reject the notion of. AI is just a program, a type of one. The social interpretations of its use depend on the mode of production of society.

I reject metaphysics and idealism in general outright.

[–] FauxLiving@lemmy.world -3 points 5 days ago (2 children)

> And due to the scale of these projects and the amount of existing work they require in their construction, there are no non-exploitative GenAI systems.

That hasn't been true for years now.

AI training techniques have rapidly improved to the point where they allow people to train completely new diffusion models from scratch with a few thousand images on consumer hardware.

In addition, thanks to these training advancements, some commercial providers have trained larger models using artwork specifically licensed for training generative models. Adobe Firefly, for example.

It isn't the case, and hasn't been for years, that you can simply say that any generative work is built on """stolen""" work.

Unless you know what model the person used, it's just ignorance to accuse them of using "exploitative" generative AI.
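For concreteness, here is a minimal sketch of what from-scratch training on consumer hardware can look like, using the Hugging Face diffusers library. The folder path, image size, and hyperparameters are illustrative assumptions, not a recipe any particular model used:

```python
# Hypothetical minimal DDPM trained from scratch on a small licensed dataset.
# Assumes: pip install torch torchvision diffusers pillow
# and a flat folder ./licensed_images of a few thousand images (path is a placeholder).
from pathlib import Path

import torch
import torch.nn.functional as F
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
from diffusers import DDPMScheduler, UNet2DModel

device = "cuda" if torch.cuda.is_available() else "cpu"

class FolderOfImages(Dataset):
    """Yields normalized 64x64 image tensors from a flat folder."""
    def __init__(self, root):
        self.paths = sorted(p for p in Path(root).iterdir()
                            if p.suffix.lower() in {".png", ".jpg", ".jpeg"})
        self.tf = transforms.Compose([
            transforms.Resize((64, 64)),
            transforms.ToTensor(),
            transforms.Normalize([0.5], [0.5]),  # scale pixels to [-1, 1]
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        return self.tf(Image.open(self.paths[i]).convert("RGB"))

# A deliberately small UNet so it fits on a single consumer GPU.
model = UNet2DModel(
    sample_size=64, in_channels=3, out_channels=3,
    block_out_channels=(64, 128, 256),
    down_block_types=("DownBlock2D", "DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D", "UpBlock2D"),
).to(device)

scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loader = DataLoader(FolderOfImages("./licensed_images"), batch_size=16, shuffle=True)

for epoch in range(100):  # epoch count and batch size are guesses, not tuned values
    for clean in loader:
        clean = clean.to(device)
        noise = torch.randn_like(clean)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (clean.size(0),), device=device)
        noisy = scheduler.add_noise(clean, noise, t)  # forward diffusion step
        pred = model(noisy, t).sample                 # model predicts the added noise
        loss = F.mse_loss(pred, noise)                # standard DDPM objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Sampling from the result would go through diffusers' DDPMPipeline with this UNet and scheduler; whether such a small model could produce a given banner image is, of course, exactly what's contested below.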

[–] patatas@sh.itjust.works 8 points 5 days ago* (last edited 5 days ago)

Can you provide a few real-life examples of images made with a model trained on just "a few thousand images on consumer hardware", along with stats on how many images were used, where those images came from, and the computing hardware and power expended (including in the making of the training program)? Because I flat out do not believe that one of those was capable of producing the banner image in question.

[–] ClamDrinker@lemmy.world 1 points 5 days ago

You are probably confusing fine-tuning with training. You can fine-tune an existing model to produce output more in line with sample images, essentially embedding a default "style" into everything it produces afterwards (e.g., LoRAs). That can be done with such a small set of images, but it still requires the full model, which was likely trained on billions of images.
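To illustrate the scale difference being pointed at here, a hand-rolled sketch of the LoRA idea: the pretrained weights stay frozen and only a tiny low-rank update is trained. The class, rank, and layer sizes are illustrative, not any particular library's API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained Linear layer with a trainable
    low-rank update: output = W x + (B A) x, where A and B are tiny."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # frozen: the base model's training is reused as-is
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: starts as a no-op

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

# A hypothetical 768->768 projection holds 589,824 frozen weights; its LoRA
# update trains only 8 * (768 + 768) = 12,288 parameters, about 2% as many.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
```

Fine-tuning in this style only nudges the frozen model's behavior, which is why a few thousand sample images can suffice; it never replaces the base model those frozen weights came from.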