
The EU vs AI

We keep seeing the massive tech companies complain about the EU actually attempting to maintain some form of control over AI development. I think we can all agree that the EU's steps are not ideal, but the fact that they are being taken at all is what matters.

Meta has suggested that its new LLaMa models will not be available in the EU, and now Apple says its new AI-based features will not be available there either. I'm sure ol' Musky has weighed in somewhere too, but I won't insult my own intelligence by searching for it.

On the surface, these are fair complaints. But if you dig a little deeper (and I mean do more than scratch the surface and subscribe to clickbait) you may be able to see where the EU is coming from. The following are the cornerstones of the EU's AI Act:

Unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI).

Most of the text addresses high-risk AI systems, which are regulated.

A smaller section handles limited risk AI systems, subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI (chatbots and deepfakes).

Minimal risk is unregulated (including the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters – at least in 2021; this is changing with generative AI).

None of these things actually sound that bad. Yes, there are some hoops to jump through, but they exist for a reason.

What the big tech companies want, the thing they have always wanted, is your data. Absolutely all of it. And AI has given them the biggest excuse in their history to make their largest grab for it yet: desktop integrations that stream everything you do on your computer to their AI algorithms, building out your profile and making your data more profitable to sell. These things sound horrifying to me, and if they don't horrify you, you may wish to dig deeper into the subject.

However, I know most people won't dig deeper. Many have already become reliant upon AI. I've certainly used it to enhance my workflow, leaning on it for scripting and for generating documentation for the more boring parts of my work. And the EU certainly needs to be able to make use of this innovation in the future.

But allowing AI innovation to continue at its current pace with no controls is going to be disastrous. AI is already contributing to job losses, something which (as per takeaway 4 of this summary document) the EU AI Act is attempting to restrict.

It will be interesting to see what happens to those people when management realises how much of their job can be replaced by the very AI they're so desperate to see unrestricted. Or what happens the next time there's a major security breach and their streamed desktop data is compromised on top of everything else.

So maybe my takeaway from this will be singular:

"Those clamouring for unrestricted AI have no right to cry when they're fired because of it." - DNR