This is ridiculous. Have people seen the recent AI code review from Audacity?? This whole AI bubble needs to burst already.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; this includes bots using AI responses and summaries. To ask if your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
Got me curious, spill the tea sister!
To sum up, it's the tail as old as time (~2023): an LLM being entirely useless for a task that other tooling could do perfectly.
Did AI tell you to use "tail" there?
while true; do curl "http://copilot/?query=what+is+the+time"; sleep 10; done
Bet the AI can’t see through this.
As a heavy daily AI user... Copilot is hands down one of the worst, if not the worst, in existence.
This will not end well for them.
Copilot is literally ChatGPT
It can't be literally chatgpt and use different letters. One of them has an O.
I asked chatgpt (because this shitpost needs more AI), and it said:
So while their names are different, the core AI technology is the same! It's like having two different brands (say, Ford and Tesla) both using electric motors—the motor is the same technology, just in different vehicles.
But comparing suicidal cars to a Ford is total baloney, so ChatGPT is full of shit.
It's been an honour to waste your time and 72L of water to get the AI to shitpost.
because it uses ChatGPT for its backend
Fine, do whatever you want to your shit company, but stop forcing me to use Copilot in everything. This is worse than the failed Clippy.
Yuuuup, this is my company too. They're monitoring our GH Copilot/Cursor usage and they're going to apply it to our performance reviews.
Malicious compliance time: full-on vibe coding, just accept all changes. Who cares about optimisation, readability, or documentation? You're using AI, anything goes.
In the list of things nobody cares about, you forgot "actually does what's asked". Use these tools for a very short while and be amazed at how bad they are at things that are extremely well known and documented.
They must really want their workforce to be less efficient while dramatically lowering quality and security across the board.
except programmers are gonna continue with what they were already doing, at most putting a script on copilot to get the metrics
don't forget that if you don't turn in the project in time you're fired, the issues always get thrown at the coder, it's never the company's fault
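The "script on copilot to get the metrics" gag above can be sketched out. A satirical, hypothetical example only: the endpoint URL, query string, and the idea that a usage dashboard counts raw requests are all assumptions, not anything Microsoft actually exposes.

```shell
# Satirical metrics-padding sketch: ping a Copilot-style endpoint on a
# timer so the usage dashboard lights up. COPILOT_URL is a placeholder
# (a made-up host), NOT a real API endpoint.
COPILOT_URL="${COPILOT_URL:-http://copilot.example/api}"

# Bounded to 3 iterations here so the sketch terminates; the joke
# version would use `while true`.
for i in $(seq 1 3); do
  # -s silences progress output; --max-time keeps a dead or fake
  # endpoint from hanging the loop; || true ignores curl failures.
  curl -s --max-time 2 "${COPILOT_URL}?query=what+is+the+time" || true
  echo "ping $i sent"
  sleep 1
done
```

In practice the curl calls to the placeholder host simply fail and are ignored; the point is the cadence, not the response.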
They are banking on the AI eventually being smart enough to replace the workers that fed it.
Hackers are about to have a golden era
The only hope for Microsoft is if Xbox takes over all of Microsoft and transforms the whole company
How very corporate of them: people don't want to do something? Screw finding out why, let's make it mandatory and poof, problem solved!
Windows is already garbage when humans are coding it.
Apparently no longer optional for their customers either, based on how hard they are pushing it in Office 365, sorry Microsoft 365, no sorry Microsoft 365 Copilot.
The latest change of dumping you into a Copilot chat immediately on login and hiding all the actually useful stuff is just desperation incarnate.
The process to log in to the online portal of Outlook is so bad it's crossed into comical territory. So much friction, only to shunt you to a full screen ~~clippy~~ copilot page.
I'd be curious to know what the usage statistics are for that page. Like, what could a person possibly accomplish there?
I feel like this is going to cause so many problems in the near future. They’re not ready for it and they don’t even know.
Corporate monopoly with overpriced products doing corporate shit
It's called dog-slopping
Dog-fooding, but instead of food, it's a dog eating its own vomit.
At your next job interview ask them if they are results driven or methodology driven. "If I were to take twice as long to do something by using a poorly designed tool will I be rewarded or punished?"
Being judged by a fancy magic 8 ball, the future keeps getting better and better.
Same at my company. The frustrating part is they want us to use coding assistance, which is fine, but I really don't code that much. I spend most of my time talking to other teams and vendors, reading docs, filing tickets, and trying to assign tasks to Jr devs. For AI to help me with that I need to either type all of my thoughts into the LLM which isn't efficient at all or I need it to integrate with systems I'm not allowed to integrate with because there are SLOs that need to be maintained (i.e. can't hammer the API and make others experience worse).
So it's pretty much the same as it's always been. Instead of making a gallon of lemonade out of one lemon I need to use this "new lemonade machine" to start a multinational lemonade business.
Ditto.
But I manage a team of embedded developers. On a specialised commercially restricted embedded platform.
AI does not know a thing about our tech. The stuff it does know is either a violation of the vendor's contractual covenants or made-up bullshit. And our vendor's documentation is supplemented by cumulative decades of knowledge.
Yet still “you gotta use AI”.
The key highlight being: you don't need more than a gallon of lemonade. For once I wish big corps listened to their engineers and domain experts over Wall Street-loving execs.
Why would they do that? If they're making better quarterly results by listening to Wall St, that's what the system tells them to do.
I had an interesting conversation today with an acquaintance. He has sent his resumé to dozens of companies now. Most of them, but not all, corporate blobs.
He wondered for a while just why the hell no one is even reaching out (he's definitely qualified for most of the positions). He then came to the idea to ask a particular commercial Artificial Stupidity software to parse it. Most of those companies use that software, or at least that's what the vendor says on its website. Turns out, that PoS software gets it all wrong. As in: everything. Positions and companies get mixed up, dates aren't correctly registered, the job descriptions it claims to have understood only remotely match what he wrote. Read: things even the most junior programmer with two weeks of experience would get right.
And it is getting used pretty much by every big firm out there.
Oh and BTW: There is ONE correct answer to the phrase 'using AI is no longer optional' : Fuck you.
That's not AI. That's just ATS. And it's been shit for years. Definitely, definitely, make sure your resume is ATS compatible. Use the scanners.
Any scanner recommendations?
Jobscan.co, the free version.
I’m gonna be looking for a new job soon and I’ve been reading stuff like this more & more. Makes me really scared. I guess reaching out to recruiters directly via LinkedIn is more important than ever. I also hope the AI software hasn’t made its way down to small/medium-sized companies yet, since those are the ones I’d rather work for anyways
"Have any of you realized how much money we spent on this?!"
“But the results are objectively much worse than if I just did it myself, sir!”
"No one cares about the quality of your work, only the quantity!"
Someone has to generate the bugs we pay you to fix.
You have 10 minutes to clear your desk and get out. Not a team player!
American employers don't even give you this anymore. You are escorted away by security and someone else empties your shit into a box and hands it to you in the lobby. They are very afraid of sabotage.
It's to use the employees to train AI to replace them, and they know it.
Nah, it's just part of the MLM scheme that is "AI". It's useful because they said it would be useful. It's worth the investment because it cost a lot of money. Once you realize that all these companies care about is revenue and "growth", it all clicks. It doesn't have to work or be profitable; it just needs to look good to investors.
They will even go as far as firing loads of workers and saying publicly that they "replaced them with AI" while in reality those workers were just doing something that the company was willing to sacrifice. They just replaced something with nothing to make it look like their magic AI can actually do things.
Cory Doctorow put it better than I ever could: https://pluralistic.net/2025/05/07/rah-rah-rasputin/
The whole post is good, but I will just quote this section.
The "boy genius" story is an example of Silicon Valley's storied "reality distortion field," pioneered by Steve Jobs. Like Jobs, Zuck is a Texas marksman, who fires a shotgun into the side of a barn and then draws a target around the holes. Jobs is remembered for his successes, and forgiven his (many, many) flops, and so is Zuck. The fact that pivot to video was well understood to have been a catastrophic scam didn't stop people from believing Zuck when he announced "metaverse."
Zuck lost more than $70b on metaverse, but, being a boy genius Texas marksman, he is still able to inspire confidence from credulous investors. Zuck's AI initiatives generated huge interest in Meta's stock, with investors betting that Zuck would find ways to keep Meta's growth going, despite the fact that AI has the worst unit economics of any tech venture in living memory. AI is a business that gets more expensive as time goes on, and where the market's willingness to pay goes down over time. This makes the old dotcom economics of "losing money on every sale, but making it up in volume" look positively rosy.