dinklesplein

joined 4 years ago
[–] dinklesplein@hexbear.net 11 points 3 weeks ago

missed those too, my first thought was 'damn are we really this late to the dsa dating discourse?'

[–] dinklesplein@hexbear.net 8 points 1 month ago (1 children)

source: an ML guy i know, so this could be entirely unsubstantiated, but apparently the main environmental burden of LLM infrastructure comes from training new models, not serving inference from already-deployed models.

[–] dinklesplein@hexbear.net 18 points 1 month ago

very normal thing to order

[–] dinklesplein@hexbear.net 11 points 1 month ago* (last edited 1 month ago)

i disagree and i think this website tends to fall into jeune-ecoleism whether it be about drones or hypersonic missiles vs surface assets.

for the role you mention specifically, using drones as an air superiority fighter has a couple of major problems. first is input lag: there is a significant delay between operator input and the drone reacting because of latency. second, it's possible to disrupt the link between drone and controller. both can be solved with autonomous control systems, but those will never be as smart or as flexible as a human in the pilot's seat. in terms of anti-drone airspace denial, all the regular a2/ad countermeasures work against large drones. for small observation drones, large SAMs aren't worth it and have trouble picking them up on radar, but various gun systems can do the job perfectly well. if you want a drone swarm, you need small drones, and that means small payloads and short range, which makes them counterable by gun systems. electronic warfare, hacking and jamming can be extremely effective against drones; something like 90% of ukrainian drone attacks are unsuccessful.

in ukraine specifically you have to bear in mind that drones make their own propaganda. every drone records its flight, and most are commercial models directly designed to upload and share that footage, so it's only a couple of clicks from the battlefield to some gore video on twitter. every other weapon needs someone to explicitly add a camera somewhere, record the footage, transfer it to a computer, and then upload it. that's another massive filter right there.

so we see a much larger share of drone action in the war than of any other weapon(s) system, distorting our view.

weaponised drones distort it further: an unsuccessful attack has no significant cost, you just don't upload it. and for FPV drones, where you can't see the aftermath either way, you can still upload the footage and claim a success regardless of whether you destroyed your target. this further inflates their importance, because people will think they do much more damage than they actually do.

it may well be the case that drones are the future of air superiority but they are not the near future of air superiority and that's a very relevant distinction for most DoDs.

[–] dinklesplein@hexbear.net 7 points 1 month ago

i think the left has been reflexively anti-LLM because, until recently, they haven't been worth the squeeze; they're emblematic of the continued ignorance of the climate crisis; the worst techbros you know are all-in on them; they're antithetical to the intentionalist view of art; and they're a convenient excuse to cut labour costs. those are all, imo, perfectly good reasons to dislike LLMs, but i think people are a bit blind to how accurate they've become and still have a view of their tendency to hallucinate based on GPT-3.5 and the memes about google's search summary being confidently incorrect. the latter is representative of the absolute cheapest, bottom-of-the-barrel LLM google has (gemini-flash) and not SOTA, which admittedly google has not really pushed.

[–] dinklesplein@hexbear.net 6 points 1 month ago* (last edited 1 month ago)

i personally think 'fascism' is a rhetorically useful concept, but not a particularly analytical one for the current wave of reactionary ideology. if it were up to me i would largely keep fascism contained to its historical moment, mostly because someone doesn't need to be a fascist to be a genocidal freak, as you said with the churchill example.

or, less radically, the term has been a bit overextended in its usage - america is a reactionary empire, but it wasn't fascist in character for a while, and this qualitative shift does have even worse outcomes than the previous paradigm, even if it was probably going to be that stage's result regardless.

[–] dinklesplein@hexbear.net 21 points 1 month ago

thinking of some tweet i saw maybe two weeks ago about how genshin's worldbuilding is distinctly chinese because everything is extremely bureaucratic and un-feudal

[–] dinklesplein@hexbear.net 24 points 1 month ago (1 children)

good catch actually, idk what's censored but i should have added CSA. my bad.

[–] dinklesplein@hexbear.net 30 points 1 month ago

i think what stands out the worst personally is just gaiman's recurring disposition of being all 'uwu sowwy for sexually assaulting you can we talk 🥺'

[–] dinklesplein@hexbear.net 6 points 3 months ago

is it over if i was somehow expecting to see destiny on this list

[–] dinklesplein@hexbear.net 5 points 4 months ago

yeah, i mean if it makes you anxious then it clearly is worse for you! i don't want to come off as minimising your struggles, just that examination methods should probably be more flexible in general.

[–] dinklesplein@hexbear.net 28 points 4 months ago (4 children)

i'm not sure i agree that oral exams are inherently bad; i just think they need to be conducted with the instructor in a spirit of charity, recognising that students can't remember every little detail. evidently that wasn't the case with you, but the typical written exam format isn't very good for neurodivergent students either, in a very different way. i'd always do awfully in exams by my standards, so obviously i'd be more inclined to think that format is worse than oral.

 

"This whole saga makes me genuinely embarrassed to follow this stuff. I just want a sub that has an informed opinion on AI, this is worse than crypto bro bullshit."

"Every hype-man who posts vague tweets and hype posts should be ridiculed. Every clown who posts screenshots of said tweets on this sub should be ridiculed."

""AGI is coming out next year" - this sub for the last 4 years"

some self aware highlights.

 

Musk is suing OpenAI. Musk's legal team's argument, in two premises, is:

  1. OpenAI's 'contract', as stated in its founding agreement, was to make any AGI system for the benefit of humanity.
  2. GPT-4 is an AGI system; ∴ OpenAI, by licensing GPT-4 exclusively to Microsoft, has breached this agreement by making the first AGI system beholden to corporate interests. Musk's team also alleges that OpenAI is effectively a Microsoft subsidiary at the moment.

OpenAI deserves the lawsuits, but alleging that GPT-4 as a base model is anywhere close to AGI is probably not the angle, to put it lightly.

Some other arguments:

OpenAI has committed promissory estoppel by moving away from the open-source non-profit model Musk initially invested in.

(This means that OpenAI broke a promise Musk relied on, making it enforceable by law.)

OpenAI has committed a breach of fiduciary duty by using Musk's funding on for-profit projects against the initial understanding of that funding's usage - letting Microsoft on OpenAI's Board of Directors and not open-sourcing GPT-4 are their examples of this.

(OpenAI had a legal responsibility to act in the best interests of their clients, which they failed to uphold.)

OpenAI has engaged in unfair business practices by convincing Musk they would commit to the 'Founding Agreement'

(I think this is self explanatory)

DAMAGES

Musk wants:

A. Court to order OpenAI to follow their 'Founding Agreement' which means cutting the Microsoft connection and open sourcing.

B. A judicial ruling that GPT-4, and any follow-up models derived from it, constitute AGI.

C. Return of all money Musk invested into OpenAI that was spent on 'for-profit' projects.

D. General damages to be determined by court.

Personally, I think banging hard on the 'GPT-4 is AGI' angle is a really mistaken line of argument and a huge weak spot in their case. OpenAI can probably be sued for a lot of things, as we're seeing with the NYT, so it's not like this is their only angle of approach. I want to see them get sued in court, purely in a vindictive sense, but I don't think this is how you do it.

 

bts stans hiring a truck calling for the sacking of the zionist ceo of their label, truly iconic
