Artificial Intelligence: Intellectual property and the climate

by Per

We know it under many names: Claude, ChatGPT, Gemini, the Clanker, or just simply LLMs. It is something that has permeated every facet of modern life.

From customer support chatbots that are only marginally more helpful than their non-intelligent predecessors, to the millions and millions of AI slop videos you can encounter on your feed. It has definitely stirred the pot, but how positive or negative should we actually be about this not-so-novel technology?

What is an AI?

As anyone who has done academic writing knows, it is smart to first give some context for what you are actually discussing. So, as a person who has had a whole one course about AI and whose brain is structured like an LLM, I will explain what an artificial intelligence actually is.

Simply put, an artificial intelligence is something that can ‘think’ and ‘evolve’. This covers all AIs, but the kind that we interact with is called a Large Language Model, or LLM. These types of AIs are essentially just a statistics equation: based on the input and the data points it has seen, the model guesses what the next option is. For example, if you ask it for the solfège notation of a scale, it will start off with Do, based on that Re, then Mi, until you have the whole thing. By having a massive dataset and tweaking the parameters, you slowly converge to the actual solution. Very much a high-level version of winging it.
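To make that concrete, here is a toy sketch in Python of the “guess the next thing from counts” idea, using the solfège scale as made-up training data. Real LLMs use neural networks with billions of parameters rather than a simple count table, but the core loop of predicting the next token from what came before is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": the solfège scale, seen a couple of times.
corpus = "do re mi fa sol la ti do re mi fa sol la ti".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Guess the statistically most likely next word."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# Starting from "do", keep predicting to reproduce the scale.
note = "do"
scale = [note]
for _ in range(6):
    note = predict_next(note)
    scale.append(note)
print(scale)  # ['do', 're', 'mi', 'fa', 'sol', 'la', 'ti']
```

Swap the bigram counts for a gigantic neural network trained on most of the internet, and you have the basic shape of an LLM.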

Because it is good enough but not perfect, it tends to give out extremely wrong information, or to ‘hallucinate’ as it is otherwise called. The reason it has become so popular is that it could be commercialised.

This, combined with the fact that it can handle natural language (i.e. the stuff that you and I communicate in), has made LLMs a massive thing in the last few years.

Now that we have scoped out what an AI is, we should talk about the elephant in the room: how good or bad is AI actually?

The good, the bad and the ethically dubious.

For those who actually follow things pertaining to AI, you may know that a large amount of water is used to cool data centres and that they take up a shitload of electricity to run. According to the UN, AI will use about five billion cubic metres of water per year by 2027. To give some scope, that is four to six times what Denmark uses annually. This is really bad, because water is already a very scarce resource, with only about one percent of all water on earth being fit for human consumption.

Aside from this, the power usage of AI, together with the use of rare elements, really doesn’t make a good case for the widespread usage of LLMs. Besides the resource impact, large language models are also in a bad situation on the intellectual property side. To actually have an effective LLM, you need to train it first.

If you have a specific task you want to train it for, like suggesting food combinations, you throw a bunch of food combinations at it. This is called training data. The problem with training data is that almost all of it is scraped from the internet, and this data is acquired without permission from the original authors. That is a massive breach of intellectual property rights. Sadly, because individual works can not really be traced back in the model, and there are no laws to keep this in check, an artist, writer, or other kind of creator can not do much.

So we have covered resource usage and the fact that it violates intellectual property rights. Does the pain train stop there?

Sadly not, dear reader. It also has an effect on our cognitive capacities, although not necessarily in the form of the AI brainrot videos you see on social media. It is more in the form of taking away reasoning and critical thinking. Let’s say you want to figure out what to eat for dinner, but you have no inspiration. Some of you may fall back on a basic dish you can always eat, some of you might have your own method of figuring it out, but I have also encountered the class of people who, without missing a beat, ask an LLM for suggestions. Is this necessarily bad?

No, you need to find inspiration somewhere. Even though you found it via ChatGPT instead of a recipe blog, you still looked. The problem occurs when you decide to do it for everything in life. There are sadly people who blindly follow LLMs when they suggest something, sometimes even with fatal consequences, like the man in Belgium who took his own life after asking an LLM what he could do to decrease his carbon footprint.

The sadness just keeps going. Does this mean that AI is a hellish beast that only poisons our brains, our lives and the earth? Not really. Having a thing that thinks somewhat like a person, with processing speeds that surpass ours significantly, is an amazing thing. There are many examples where the usage of AI is a positive thing.

For example, in Japan a system was trained on pictures of bread to see if it could classify them; it was not the best at that. It turned out, however, that it was able to recognise various types of cancer cells with a very high accuracy rate. This is but one example, and there are many more fun cases of AI implementation.

My take on AI

To me, the popularisation of AI is less a crisis on its own and more a symptom of the general overconsumption that is currently happening in the modern world. If you look at it, the total water usage of AI per prompt is about the same as is required to produce 10 milligrams of beef. The flagrant abuse of intellectual property rights, combined with the cognitive effects, are things that are not necessarily new.

Brainrot did not become a thing just because AI is a thing; it became worse because the production rate went up. For this reason, I still think you should mostly avoid it like the plague, but don’t feel like you have to abstain completely, as there are better ways to make an impact on this world.

Dear reader, thank you for reading my article. I know it is a massive word soup, but I still wrote it all myself. If you ever want to have a discussion about personal responsibility in modern life, you can always come yap with me.

Alright, that is enough yapping. Cheers, k bye,

Per ‘Napoleon’

p.s. ChatGPT pronounced the French way sounds like saying ‘Cat I farted’ in French, and if you decide to anglicise your pronunciation of Chat, you will say ‘Vagina, I farted’.


