Every six months, the tone of these "why won't you use my hallucinating slop generator" posts gets more and more shrill.
I think his point, that you basically give a slop generator a fitness function in the form of tests, compilation scripts, and static analysis thresholds, was pretty good. I never really thought of forcing the slop generator to generate slop randomly until it passes the tests. That's a pretty interesting idea. Wasteful for sure, but I can see it saving someone a lot of time.
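For instance, a minimal sketch of that loop (assuming Python and pytest; `ask_model` is a hypothetical stand-in for whatever LLM call you'd use):

```python
import pathlib
import subprocess

def ask_model(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return "def add(a, b):\n    return a + b\n"

def passes_tests(project: pathlib.Path) -> bool:
    # The test suite's exit code is the fitness signal.
    return subprocess.run(["pytest", "-q"], cwd=project).returncode == 0

def generate_until_green(prompt: str, project: pathlib.Path, budget: int = 10) -> bool:
    # Keep writing candidates until one passes, or the budget runs out.
    target = project / "solution.py"
    for attempt in range(budget):
        target.write_text(ask_model(prompt, attempt))
        if passes_tests(project):
            return True
    return False
```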
I have to chuckle at this because it's practically the same way that you sometimes have to manage junior engineers.
It really shows how barely "good enough" is killing off all the junior engineers. And once I die, who's going to replace me?
This is absolutely the crisis of aging hitting the software engineering labor pool hard. There are other industries where 60% or more of the trained people are retiring in 5 years. Software is now on the fast track to get there as well.
This is a great point. I think what is most jarring to me is the speed at which this is happening. I may be wrong, but it felt like in those other industries it took at least a couple of decades, while tech is doing it in a matter of months.
Nah. It's two different phenomena with the same end point. Those other industries lost young entrants because of the rise of the college pursuit. And yes, that took decades. But for software, we're still at least 20 years out before we have a retirement crisis.
Although we already had one back in 2000, when not enough working-age people knew COBOL.
Anyway, it's a historical process. It's just one we've seen over and over and never learn the lesson from.
I'd much rather the slop generator wastes its time doing these repetitive and boring tasks so I can spend my time doing something more interesting.
To me, this is sort of a code smell. I'm not going to say that every single bit of work that I have done is unique and engaging, but I think that if a lot of code being written is boring and repetitive, it's probably not engineered correctly.
It's easy for me to be flippant and say this and you'd be totally right to point that out. I just felt like getting it out of my head.
If most of the code you write is meaningful code that's novel and interesting, then you are incredibly privileged. The majority of code I've seen in the industry is boring, and a lot of it is just boilerplate.
This is possible but I doubt it. It's your usual CRUD web application with some business logic and some async workers.
So then you do write a bunch of boilerplate such as HTTP endpoints, database queries, and so on.
Not really. It's Django and Django Rest Framework, so there really isn't a lot of boilerplate. That's all hidden behind the framework.
I'd argue that most of the code is conceptually boilerplate, even when you have a framework to paper over it. There's really nothing exciting about declaring an HTTP endpoint that's going to slurp some JSON, massage it a bit, and shove it in your db. It's a boring, repetitive task, and I'm happy to let a tool do it for me.
What I'm trying to say is that for Django, especially Django Rest Framework, you don't even declare endpoints.
DRF has a `ModelViewSet` where you just create a class, inherit from it, and set the `model` to point to your Django ORM model, and that's it. `ModelViewSet` already has all the implementation code for handling `POST`, `PUT`, `PATCH`, and `DELETE`. There is no boilerplate.
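For example, a minimal sketch of what that looks like (the `UserProfile` model is illustrative; note that current DRF versions take a `queryset` and `serializer_class` rather than a bare `model` attribute):

```python
from rest_framework import serializers, viewsets
from myapp.models import UserProfile  # hypothetical model

class UserProfileSerializer(serializers.ModelSerializer):
    class Meta:
        model = UserProfile
        fields = ["id", "username", "email"]

class UserProfileViewSet(viewsets.ModelViewSet):
    # ModelViewSet supplies list/retrieve/create/update/partial_update/destroy,
    # i.e. GET, POST, PUT, PATCH, and DELETE, out of the box.
    queryset = UserProfile.objects.all()
    serializer_class = UserProfileSerializer
```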
There isn't anything that an LLM would add to this process.
I've used Django before and I disagree. 🤷
Around which parts of Django? Because Django has generic class-based views that do exactly the same thing, where all you do is set the model attribute. Then the generic view class you inherited from has the implementation. Especially if you use a `ModelForm`.
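For instance, a sketch with a generic `CreateView` (model and field names are illustrative):

```python
from django.views.generic.edit import CreateView
from myapp.models import UserProfile  # hypothetical model

class UserProfileCreateView(CreateView):
    # CreateView builds the ModelForm and handles GET/POST for you;
    # you only point it at the model.
    model = UserProfile
    fields = ["username", "email"]
    success_url = "/profiles/"
```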
Here's what a typical Django endpoint might look like for handling a JSON payload with some user information and storing it in the db:
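First, a model along these lines (an illustrative sketch; the `UserProfile` name and its fields are assumptions):

```python
# models.py (illustrative reconstruction; field names are assumptions)
from django.db import models

class UserProfile(models.Model):
    username = models.CharField(max_length=150, unique=True)
    email = models.EmailField(unique=True)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        verbose_name = "user profile"
        verbose_name_plural = "user profiles"
        ordering = ["-created_at"]
```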
then you'll probably need to add some serializers:
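For example (again illustrative, with the hand-written `validate_*` methods discussed in the replies below):

```python
# serializers.py (illustrative)
from rest_framework import serializers
from .models import UserProfile

class UserProfileSerializer(serializers.ModelSerializer):
    class Meta:
        model = UserProfile
        fields = ["id", "username", "email", "created_at"]

    def validate_email(self, value):
        if not value:
            raise serializers.ValidationError("Email is required.")
        return value

    def validate_username(self, value):
        if len(value) < 3:
            raise serializers.ValidationError("Username must be at least 3 characters.")
        return value
```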
then you'll have to add some views:
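Perhaps a hand-rolled `APIView` like this (illustrative):

```python
# views.py (illustrative hand-rolled view)
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView
from .serializers import UserProfileSerializer

class UserProfileCreateView(APIView):
    def post(self, request):
        # Deserialize the JSON payload, validate it, and store it.
        serializer = UserProfileSerializer(data=request.data)
        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```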
next you have to define URL patterns:
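Something like (illustrative):

```python
# urls.py (illustrative)
from django.urls import path
from .views import UserProfileCreateView

urlpatterns = [
    path("api/users/", UserProfileCreateView.as_view()),
]
```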
This is all just a bunch of boilerplate. And with LLMs, you can just give it a sample of the JSON payload you want, and this stuff just happens.
Going through it as I read:

`validate_email` does not need to exist, since model fields have `blank=False` by default, meaning they must have a value. This should be picked up by `ModelSerializer` already, since it is using the model.

`validate_username` doesn't do anything that couldn't be accomplished by using `MinLengthValidator`, and in addition it should not exist in `serializers`; it belongs directly in the declaration of the field in `models`.
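In other words, something like this (an illustrative sketch):

```python
# Validation expressed on the model field itself
from django.core.validators import MinLengthValidator
from django.db import models

class UserProfile(models.Model):
    username = models.CharField(
        max_length=150,
        unique=True,
        validators=[MinLengthValidator(3)],  # replaces validate_username
    )
    email = models.EmailField(unique=True)  # blank=False is the default
```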
On the subject of your view code:
That whole code can be thrown out and replaced with:
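A sketch along those lines (reusing the serializer above):

```python
from rest_framework import viewsets
from .models import UserProfile
from .serializers import UserProfileSerializer

class UserProfileViewSet(viewsets.ModelViewSet):
    # All of POST/PUT/PATCH/DELETE handling is inherited.
    queryset = UserProfile.objects.all()
    serializer_class = UserProfileSerializer
```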
So, honestly, I don't know what to think of your example of "boilerplate", beyond the fact that you don't quite grasp Django and Django Rest Framework well enough to understand how to implement things properly without repeating yourself. I also think some of your code, like `verbose_name` and `verbose_name_plural`, is not an example of "boilerplate". I would also argue that `ordering` is something that should be implemented via `OrderingFilter`, so that the API client can pick different orderings as part of the GET request.

I think a full example could be reduced to something like:
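An illustrative sketch of such a reduced example (names carried over from above):

```python
from django.core.validators import MinLengthValidator
from django.db import models
from rest_framework import routers, serializers, viewsets

class UserProfile(models.Model):
    username = models.CharField(
        max_length=150,
        unique=True,
        validators=[MinLengthValidator(3)],
    )
    email = models.EmailField(unique=True)
    created_at = models.DateTimeField(auto_now_add=True)

class UserProfileSerializer(serializers.ModelSerializer):
    class Meta:
        model = UserProfile
        fields = ["id", "username", "email", "created_at"]
        read_only_fields = ["created_at"]

class UserProfileViewSet(viewsets.ModelViewSet):
    queryset = UserProfile.objects.all()
    serializer_class = UserProfileSerializer

# urlpatterns would then just include router.urls
router = routers.DefaultRouter()
router.register("users", UserProfileViewSet)
```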
So really, the only "boilerplate" here is deciding how many of the fields from `UserProfile` you want to expose via the REST API and how many are read only. That's not something an LLM can decide; that's a policy decision.

So really, I wonder what it is that you are expecting an LLM to do? Because this small example doesn't have much to it, and it does a LOT just by leveraging DRF and leaving you to define your business logic and objects. If you can't be bothered to even think about your models and what data you are storing, I don't know what to say to that, beyond the fact that you just don't want to code even the easiest things yourself.
The point is that the LLM can produce 90% of the code here, and then you might need to tweak a couple of lines. Even with your changes, there's still very obviously a non-trivial amount of boilerplate, and you just can't bring yourself to acknowledge this fact.
Where?
Where is the boilerplate?
That whole code is boilerplate that I originally generated using an LLM from the following query: "write a django endpoint for handling a json payload with user information and storing it in the db". After you made it concise, it's still over 40 lines of code.

Three thoughts.
Firstly:
write a django endpoint for handling a json payload with user information and storing it in the db
And yet this LLM failed to consider just using the built-in `contrib.auth.User`, which already stores this information. What about extending the Auth user model?

Secondly:
This means that any of this supposed "boilerplate" (you are not even using the right term; more on that in a second) is just AI slop that doesn't even do a half-decent job of what you have been arguing for, namely getting rid of "boilerplate" that is inconvenient. It is the LLM itself that is creating all this supposed boilerplate!
Thirdly:
I don't understand what you think "boilerplate" is. Because 40 lines of code where you define a model, the fields that are serialized through the REST API, and the REST API implementation is not boilerplate. Boilerplate is defined as "sections of code that are repeated in multiple places with little to no variation" (Wikipedia). Where is the "repeated in multiple places" in this example? If we took this example further and extended it to other `model`s, where would the duplication be? The fact that you inherit from `ModelViewSet`? That you inherit from `ModelSerializer`? There is no boilerplate here! If anything, the boilerplate is inside `ModelSerializer` and `ModelViewSet`, but that's object-oriented design! You're inheriting from those classes so you don't have to do it yourself!

It's boilerplate because it's just lines of code spelling out a repetitive pattern that you end up having to mindlessly write over and over. The fact that you continue to refuse to acknowledge this blows my mind, to be honest.
And again, I ask you: where is the code that you mindlessly write over and over? I have even gone to the trouble of trying to see the problem from your perspective, by looking at `ModelViewSet` and `ModelSerializer`, where if you squint really hard you could maybe make a case that it is repetitive. But in code that is object-oriented, you can't just say "oh, inheriting from some big class that does 98% of the actual implementation is boilerplate", because literally all you are doing is inheriting from `ModelViewSet` and setting three whole fields that are specific to your model. Are three lines boilerplate, when they determine how the entire class behaves and whether it works or doesn't? I would argue not.

I'm sorry, I should not assume that this sort of code does not require a significant cognitive effort to write, for some people.
Ah, here we are again. Now you passive-aggressively say that I'm just stupid. So now, who is doing the "low effort trolling" that you claim anyone who disagrees with you does?
Incredible.
So which is it: is this code that's meaningful and interesting to write, requiring cognitive effort from a human, or is it boilerplate?
It's neither boilerplate, nor is it interesting code. So I'm unsure what your point is, or why it is being asked as an either-or type of question where I have to pick one. I would appreciate you explaining it further.
As an aside, I had to spend time taking something that you got out of an LLM to get it to the point where it's small and boring.
I suppose if you want to spend all your mental energy fighting with an LLM and telling it "no, that's not quite right, why did you make more work for yourself when there was a much easier way to do it", that is certainly one way to spend precious mental energy. It does seem to be a common pattern that many people have already shared, where they spend lots of time fixing what the LLM generated, and many report that it sucks all the enjoyment out of the creative process.
At least when I have to do the "no, that's not quite right" with a junior engineer, I am helping someone grow in their career and sharing what I have learned over 20+ years in my craft, and that I am giving back to the next generation, as repayment to the generation that taught me.
LLMs are dead labor. They destroy the future of young engineers and replace it with a parody that makes similar mistakes and has to be corrected, just like a junior engineer, but there is no life in it. Just a simulation of one. It destroys joy.
It ultimately doesn't matter whether this type of code falls into your definition of boilerplate. As you admit, it's not interesting code that anybody wants to write. It's not intellectually engaging, it's not enjoyable to regurgitate time and again, and yet it needs to be written.
You didn't actually bother reading the article in the submission did you?
That's certainly your opinion, and it's quite obvious that there is absolutely nothing I could say to change it.
Absolutely. Coders should be spending their time developing new and faster algorithms, things that AI cannot do, not figuring out the boilerplate of a dropdown menu in whatever framework. Heck, we don't even need frameworks with AI.
It's more that the iterative slop generation is pretty energy intensive when you scale it up like this: tons of tokens in memory, multiple iterations of producing slop, running tests to tell it's slop, and starting over again automatically. I'd love the time savings as well. I'm just saying we should keep the waste aspect in mind, as it's bound to catch up with us.
I don't really find the waste argument terribly convincing myself. The amount of waste depends on how many tries it needs to get the answer, and how much previous work it can reuse. The quality of output has already improved dramatically, and there's no reason to expect that this will not continue to get better over time. Meanwhile, there's every reason to expect that iterative loop will continue to be optimized as well.
In a broader sense, we waste power all the time on all kinds of things. Think of all the ads, crypto, or consumerism in general. There's nothing uniquely wasteful about LLMs, and at least they can be put towards producing something of value, unlike many things our society wastes energy on.
I do think there's something uniquely wasteful about floating point arithmetic, which is why we need specialized processors for it, and there is something uniquely wasteful about crypto and LLMs, both in terms of electricity and in terms of waste heat. I agree that generative AI for solving problems is definitely better than crypto, and it's better than using generative AI to produce creative works, do advertising and marketing, etc.
But it's not without its externalities, and putting it in an unmonitored iterative loop at scale requires us to at least consider the costs.
Eventually we'll most likely see specialized chips for this, and there are already analog chips being produced for neural networks, which are a far better fit. There are selection pressures to improve this tech even under capitalism, since companies running models end up paying for the power usage. And then we have open source models, with people optimizing them to run locally. Personally, I find it mind-blowing that we can already run local models on a laptop that perform roughly as well as models that required a whole data centre just a year ago. It's hard to say when the low-hanging fruit will all be picked and improvements start to plateau, but so far it's been really impressive to watch.
Yeah, there is something to be said for changing the hardware. Producing the models is still expensive even if running the models is becoming more efficient. But DeepSeek shows us even production is becoming more efficient.
What's impressive to me is how useful the concept of the stochastic parrot is turning out to be. It doesn't seem to make a lot of sense, at first or even second glance, that choosing the most probable next word in a sentence, based on the statistical distribution of word usage across a training set, would actually be all that useful.
I've used it for coding before, and it's obvious that these things are most useful for reproducing code tutorials or code examples, and not at all for reasoning. But there's a lot of code examples and tutorials out there that I haven't read yet and never will read. The ability of a stochastic parrot to reproduce that code using human language as its control input is impressive.
I've been amazed by this idea ever since I learned about Markov chains, and arguably LLMs aren't fundamentally different in nature. It's simply a huge token space encoded in a multidimensional matrix, but the fundamental idea is the same. It's really interesting how you start getting emergent properties when you scale something conceptually simple up. It might say something about the nature of our own cognition as well.
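To make the analogy concrete, here's a toy Markov-chain generator (purely illustrative; an LLM's token space and conditioning context are vastly larger, but the sampling loop is conceptually similar):

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    # Record, for every word, which words ever follow it.
    chain = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 10) -> str:
    # Repeatedly sample a next word from the observed distribution.
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

chain = train("the cat sat on the mat and the cat ran after the dog")
print(generate(chain, "the"))
```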
You mentioned Markov chains: for a layman with regard to mathematics (one who would need to brush up on basic calculus), would you know any good books (I was thinking textbooks?) or resources to better understand the maths, with a view to gaining a better understanding of LLMs/GenAI later down the line?
A few books that are fairly accessible depending on your math level.
Basic Math for AI is written for people with no prior AI or advanced math knowledge. It aims to demystify the essential mathematics needed for AI, and gives a broad beginner-friendly introduction.
https://www.goodreads.com/book/show/214340546-basic-math-for-ai
Mathematics for Machine Learning is a bit more academic than the first book, and it covers linear algebra, vector calculus, probability, and optimization, which are the pillars of LLM math.
https://www.goodreads.com/book/show/50419441-mathematics-for-machine-learning
Naked Statistics: Stripping the Dread from the Data is phenomenal for building an intuitive understanding of probability and statistics, which are often the most intimidating subjects for beginners.
https://www.goodreads.com/book/show/17986418-naked-statistics
Thank you for taking the time for that reply and reading list; very much appreciated!
no prob
I edited my original comment to leave out part of why I posted it, but there's no point in self-censorship now that the thread has devolved into tone policing and strawmanning; it's just further evidence of how privileged Westerners, and those who benefit from Western hegemony, are.
AI under capitalism is coming for all our jobs through automation, as multiple learned Marxists predicted. But instead of taking the scientific socialist approach of understanding why (i.e., that it's capitalism, not the technology in itself, and that we should use the technology to progress socialism for the betterment of humanity), there are bad-faith, reactionary Nietzschean takes, and so much self-importance about why allowance should be made for their hypocritical sensibilities and why they should be considered part of the vanguard despite willful ignorance, when I suspect their class consciousness on the world stage is woefully inadequate for those who call themselves Marxists.
Well done for continuing to post despite the spiteful ignorance.
(I personally asked about the maths because AI is coming for my job too, and I want to better understand the technology; for me, this feels like a more serious path to doing that. Marxism is a science, and we should not be burying our heads in the sand. Thanks again. Also, the dialectical dispatches blog is an interesting read too.)
I very much agree with all that. Interestingly, we're seeing exactly the same behavior from liberals when it comes to topics like the war in Ukraine. In both cases, people just want to ignore the basic facts of the situation in favor of a comforting narrative. Yet it's completely self-defeating behavior, because material reality can't simply be wished away. We have to engage with the world the way it is to make sound plans and decisions about the future. I understand why people feel threatened by this tech, but pretending that it doesn't work or that you can boycott it out of existence is not rational.
Many degrowth proposals call for some aggregate reduction of energy use or material throughput. The issue with these proposals is that they conflict with the need to give the entire planet public housing, public transit, reliable electricity, modern water and sewage services, etc., which cannot be achieved by "shrinking material throughput". Modeling from Princeton University (which may be outdated) suggests that zeroing out emissions by 2050 will require 80 to 120 million heat pumps, up to a fivefold increase in electricity transmission capacity, 250 large nuclear reactors (or 3,800 small modular ones), and the development of a new carbon capture and sequestration industry from scratch. Degrowth policies, while not intending to result in ecological austerity, effectively do so through their fiscal commitment to budgetary constraints, which inevitably require government cuts.
The point of the above paragraph is to give an analogy to the controversy over "AI wastefulness". Relying on manual labor for software development could actually lead to more wastefulness long term, and to a failure to resolve the climate crisis in time. Even though AI requires a lot of power, creating solutions faster (especially in green industries, plus the emissions humans avoid, such as commuting to work) could have a better and faster impact on reducing emissions.
While an interesting point, it relies on a specific assumption: that LLMs are useful in solving the problems you're talking about. Unfortunately, as we've seen from nearly all other advances in human productivity, we just take the surplus labor and apply it to completely wasteful projects: AdTech, MarTech, video games, media distribution, subscription services, ecommerce, DRM.
I could go on. This is what we choose to spend our surplus labor on. So AI time savings just aren't going to save us. AI would have to fundamentally change the way we solve certain problems, not improve the productivity of the billions of people who are already wasting most of their careers working on things that make the problems you're talking about worse, not better.
Yes, neural network techniques are useful in scientific applications and can fundamentally change how we solve problems. But what LLMs do is help a mid-level programmer get more productive at building AdTech, MarTech, video games, media distribution, subscription services, ecommerce, DRM, and every other waste of human productivity relative to the problems you're raising; LLMs are not useful for protein folding, physics simulations, materials analysis, and all the other critical applications of AI that we actually need.
I was just implying that after the revolution, when we live in a socialist society, we would use AI for productive means and not for these wasteful projects. Your list above is mostly projects in a capitalist society that serve the ruling class's interests.
The only real potential benefit of AI in a capitalist society, besides potentially using it to make tools, services, and content for workers and communist parties to fight back against the system, is the proletarianization, and hopefully radicalization (toward socialism), of labor aristocrats as the deepening contradictions of capitalism lead to more unrest amongst the working class.
Yeah, so, I am not saying AI is too wasteful to exist. I am saying it's too wasteful to be used for worker productivity in the current global capitalist system. Workers in China, Vietnam, India, Pakistan, Bulgaria, Nigeria, and Venezuela are going to use LLMs first and foremost to remain competitive in the race to the bottom for their wages in the global capitalist system, because that's where we are at and will be for as long as it takes.
This is exactly what I'm talking about. Lots of privileged first-worlders have no idea what this means for the rest of the globe. It's not going to liberate the third world; instead it will lay a much heavier burden of productivity on it, as wealth extraction from the meager white-collar sector is kicked into overdrive and the data centers for all this are built in the periphery, or even in poor places at home.
I mean, look at where Colossus is being built. While we as workers might be able to use AI reasonably and responsibly, our "open source!" equivalent has no more meaning and impact than paper versus plastic straws when set against the ruling class running giant data centers and energy-intensive slop machines.
The effect of weakening labor aristocracy is secondary.
First, I am only meaning to provide my perspective, even if it turns out to be imperfect or not 100% accurate to reality, and I am willing to be corrected and to learn from others. I'm not just some first-worlder set in their ways.
I think you misunderstood some of the points I made, just as I interpreted freagle's point about AI wastefulness in a broader sense than it was meant. I don't believe AI as it exists now in the global capitalist system will liberate the third world. My guess is that the ruling class's use of AI will do more damage to the working class's interests than the working class will be able to counter through its own use of it. We have no control over AI's existence; it's a reality we have to live with. The only way AI would improve the working class's lives across the entire planet would be for a socialist system to become dominant across the world and for the global capitalist hegemony to be overthrown.
I'm not denying that the ruling class's use of AI is a much greater detriment to us than any gains we get from the weakening of the labor aristocracy. However, I believe that as those people start losing their jobs, communist parties will need to start reaching out to them, educating them, and bringing them to our cause so we can develop the numbers and power to overthrow the ruling class, which I believe is of the utmost importance. I honestly don't believe most labor aristocrats, especially in the West, will be radicalized until they become proletarianized and their material conditions greatly worsen.
That is exactly my point, friend; I wasn't trying to argue that the technology itself is inherently evil. It's that you can't expect people to embrace and understand this as a useful tool for the communist cause when it doesn't even have a clear use case there besides coding projects. I understand it's the reality we live with; that's why I'm not suggesting "OPEN REVOLT AGAINST AI!" My point is that people are going to keep feeling disgust for it and pushing back against it, because it's going to do more harm than good even with a handful of clear use cases.
Trying to sell the "goods" of this technology when it's doing so much harm already comes off as tasteless.
Agreed.