Agreed. AI is rolling already; we can't stop it now. All we can do is make sure this technology benefits everyone, not just corporations.
Seriously, the average person has two FAR more immediate problems than not being able to create their own AI:
- Losing their livelihood to an AI.
- Losing their life because an AI has been improperly placed in a decision-making position because it was sold as having more capabilities than it actually has.
The first could be solved by sweeping and permanent economic reforms, but those reforms are very far away. The second is also going to need legal restrictions on which jobs an AI can do, and restrictions on the claims an AI company can make when marketing its product. Possibly a whole freaking government agency designated for certifying AI.
Right now, it's in our best interest that AI production is slowed down and/or prevented from being deployed to certain areas until the law has had a chance to catch up. Copyright restrictions and privacy laws are going to be the most effective way to do this, because they will force companies to go back and retrain on public-domain material, and prevent them from using AI to wholesale replace certain jobs.
As for the average person who has the computer hardware and time to train an AI (bear in mind Google Bard and OpenAI use human contractors to correct misinformation in the answers as well as scanning), there is a ton of public domain writing out there.
The endgame, though, is to stop scenario 1 and scenario 2, and the best way to do that is any way that forces the people who are making AI to sit down and think about where they can use the AI. Because the problem is not the speed of AI development, but the speed of corporate greed. And the problem is not that the average person LACKS access to AI, but that the rich have TOO much access to AI and TOO many horrible plans about how to use it before all the bugs have been worked out.
Furthermore, if they're using people's creativity to make a product, it's just WRONG not to have their permission or not to credit them.
Losing their life because an AI has been improperly placed in a decision-making position because it was sold as having more capabilities than it actually has.
I would tend to agree with you on this one, although we don't need bad copyright legislation to deal with it, since laws can deal with it more directly. I would personally put in place an organization that requires rigorous proof that AI in those roles is significantly safer than a human, like the FDA does for medication.
As for the average person who has the computer hardware and time to train an AI (bear in mind Google Bard and OpenAI use human contractors to correct misinformation in the answers as well as scanning), there is a ton of public domain writing out there.
Corporations would love if regular people were only allowed to train their AIs on things that are 75 years out of date. Creative interpretations of copyright law aren't going to stop billion- and trillion-dollar companies from licensing things to train AI on, either by paying a tiny percentage of their war chests or just ignoring the law altogether the way Meta always does, and getting a customary slap on the wrist. What will end up happening is that Meta, Alphabet, Microsoft, Elon Musk and his companies, government organizations, etc. will all have access to AIs that know current, useful, and relevant things, and the rest of us will not, or we'll have to pay monthly for the privilege of access to a limited version of that knowledge, further enriching those groups.
Furthermore, if they're using people's creativity to make a product, it's just WRONG not to have their permission or not to credit them.
Let's talk about Stable Diffusion for a moment. Stable Diffusion models can be compressed down to about 2 gigabytes and still produce art. Stable Diffusion was trained on 5 billion images and fine-tuned on a subset of 600 million images, which means the average image contributes about 2 GB / 600 M, or a little over three bytes, to the final model. With the exception of a few mostly public-domain images that appeared in the dataset hundreds of times, Stable Diffusion learned broad concepts from large numbers of images, similarly to how a human artist learns art concepts. If people need permission to learn a tiny bit of information from each image (3 bytes of information isn't copyrightable, btw), then artists should have to get permission for every single image they put on their mood boards or use for inspiration, because they're taking orders of magnitude more than three bytes of information from each image they use for inspiration on a given work.
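For anyone who wants to sanity-check that ratio, here's the back-of-envelope arithmetic from the paragraph above as a tiny Python sketch (the ~2 GB model size and ~600 M image count are the approximate figures stated in the post, not exact values):

```python
# Rough figures from the discussion above (approximations, not exact specs).
model_size_bytes = 2 * 10**9    # ~2 GB compressed Stable Diffusion model
training_images = 600 * 10**6   # ~600 M images in the fine-tuning subset

# Average contribution of a single training image to the final model weights.
bytes_per_image = model_size_bytes / training_images
print(f"~{bytes_per_image:.2f} bytes per image")  # ~3.33 bytes per image
```

Of course, "bytes per image" is an average over the whole dataset, not a claim that any specific image's content is stored; that's exactly the point the post is making.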
Many things in life are a privilege for these groups. AI is no different.
I'm not sure what you're getting at with this. It will only be a privilege for these groups if we choose to artificially make it that way. And why would you want to do that?
Do you want to give AI exclusively to the rich? If so, why?
I think he was just stating a fact.
For something to be a fact, it needs to actually be true. AI is currently accessible to everyone.
I've been thinking along the same lines. My concern has been that dictatorships would violate Western copyright and would thus go further than the West, and especially Europeans, who are heading toward very strict laws. It's a nightmare scenario.
And your concern about the rich makes sense to me, too.
You have not clearly defined the danger. You just said "AI is here". Well, lawyers are here too, and they have the law on their side. AI will also threaten their business model, so they will probably have no mercy and will work full time on the subject.
Wealthy and powerful corporations fear the law above anything else. A single parliament can shut down their activity better than anyone else on the planet.
Maybe you're talking from the point of view of a corrupt country like the USA, but the EU parliament, which BTW doesn't host any GAFAM, is totally ready to strike hard at businesses founded on AI.
See, people don't want to lose their jobs to a robot, and they will fight for it. This poses a major threat to AI: people destroying data centers. They will do it. Their interests will converge with those of the people who care about global warming. Don't take AI as something inevitable. An AI has a high dependency on resources, generates unemployment and pollution, and delivers questionable value.
An AI requires:
- Energy
- Water
- High-tech hardware
- Network
- Security
- Stability
- Investment
It's like a nuclear power plant but more fragile. If an activist group takes down a data center hosting an AI, who will blame them? The jury will take turns high-fiving them.
I don't think the EU is so lawless as to allow blatant property destruction, and if it is, I can't imagine such a lack of rule of law will do much for the EU's future economic prosperity.
I'm probably just a dumb hick American though.
Wow, you have this all planned out, don't you?
If that's what Europe is like, they'll build their data centers somewhere else. Like the corrupt USA. Again, you'll be taking away your access to AI, not theirs.