It will first and foremost be used for evil. Autonomous kill bots. Mass surveillance. Shit like that.
Everything else it might be used for is PR meant to make you accept the first part.
Just like the AI we have now, except it's only used where it makes sense.
Or maybe AGI turns out to be harder than some people thought. That might simultaneously be the prospect and the reason for the bubble to burst. That hypothetical future looks similar to today, minus some burnt money, plus a slightly more "intelligent" version of ChatGPT that can do some tasks and fails at others. It would continue to affect some jobs like call center agents, artists and web designers, but we'd still need a lot of human labor.
Or maybe AGI turns out to be harder than some people thought.
Yes. It seems very unlikely to arise from current LLMs. AGI hypers keep expecting signs of independent reasoning to emerge, and it keeps not happening.
I'd be surprised if current-day LLMs reach AGI. It's more a welcome side effect that they give factual answers more often than not. They don't have a proper state of mind, they can't learn from interacting with the world while running, and they substitute a weird variant for a real thought process and reasoning, all of which has to happen within the context window. Once it comes to a household robot learning to operate the toaster and microwave, I believe that won't scale any more. It'd be complicated to do that learning out-of-band in a datacenter, or to fetch the required movements and information from a database. I guess we can cheat a bit to achieve similar things, but I'd question whether that's really AGI, suitable for any arbitrary task in the world. So I'd expect several major breakthroughs before we can think of AGI.
it might actually look intelligent