Link to Part 2
Note: The substantive content of this series has not been generated or assisted by AI. Some AI art and text have been generated in the process of building this and future posts. This has been done purely for illustrative purposes and does not invoke the names of real artists except for the purposes of direct comparison. Where text or art is AI generated, it will be noted. If any artist would like to have the comparative work that uses their name taken down, please contact me and I will do so.
AI, as it currently stands, is a shallow and broken system, riddled with theft, legal issues and abuse, and represents a further paradigm of busywork-generating grift mills that will favour low effort garbage at the expense of the wellbeing of real people. It is already causing material harm to a lot of people, and the benefits it provides can’t be realised for the majority because of the endless grinding machine of corporate profiteering that’s running on overdrive in an attempt to stop its own collapse.
HI I’M HENRY NEILSEN AND TODAY WE’RE GONNA TALK ABOUT THE INTERNAL SCREAMING I TRY TO HOLD IN WHENEVER I GO ON THE INTERNET THESE DAYS.
Introduction
I was going to put some snooty and high-minded discussion of the luddites, and the way they weren’t actually anti-tech, but skilled workers who protested the automation of their jobs, but I decided on the above meltdown because a) it’s funnier, b) I don’t want to pretend I’m a luddite scholar, and c) I think this is going to be long enough as it is without that kind of pretentious bullshit. Nevertheless, do your own research on the luddites, and decide in your own time whether or not this series of articles can be considered my lobbed shoe into the textile machine that is AI.
Since the introduction of MidJourney, DALL-E, ChatGPT, and whatever other weirdly named tools get released between now and when this article goes live, I’ve had a distinct unease about this new wave of consumer-friendly AI tools. I know enough to make myself dangerous when it comes to the way these tools work, and when I saw my friends starting to post cyberpunk or glossified images of themselves on Instagram, that unease clamped its iron grip around my mind and hasn’t let go for several months now.
Since then, I’ve done quite a lot of reading, I’ve talked to a lot of people, and I’ve seen the fallout from the things that AI is doing and what it promises to do. I have gone onto the forums of AI enthusiasts and detractors alike. I have lurked, I have watched, I have observed, and this is where I’ve landed:
The current crop of AI tools is flawed in ideation, inception, implementation, intention, and results. The output is shallow, and of just enough use for the get-rich-quick scumbags of the world to deliver swathes of spam into spaces filled with people already working at the brink of poverty. Where it’s not already actively harmful, it represents the vanguard of a seismic shift in society that, if not carefully managed, will be extremely bad for humanity as a whole.
Yeah. That’s where I stand, and that’s the opinion I’m going to be writing from. If you’re looking for “fair and balanced”, go watch a white person walk a tightrope.
This set of articles is going to provide sources as it goes on, but I’m not going to give even scant eyeballs to the creeps who are hungrily eyeing the new technology, circling like sharks, ready to tear at any opportunity to rip people off. I want to talk about what this tech is, and what it means for honest to god, hard working people.
But let’s back up first.
Let’s talk about what AI “is”
The important thing to note here is that “AI” is a marketing buzzword. Once venture capital firms realised that Sam Bankman-Fried (among others) had pulled the wool over their eyes, and that crypto was basically just “the internet, but stupider”, they poured billions of dollars into AI startups, and the first of them are bearing weird, misshapen fruit with too many fingers now. The similarities to the crypto boom don’t end there. In the same way that cryptography and deflationary monetary systems existed prior to the crypto booms, neural networks and natural language processing have existed for a long time prior to the advent of these new, shiny websites.
Perceptrons date back to the late 1950s, and multi-layer perceptrons were being actively researched through the 1970s and 80s, but they were slower than other heuristic methods of the time. Hell, NLP (Natural Language Processing, the field ChatGPT and its ilk belong to) has been in the pipeline since the 1950s. This is not new technology. The techniques have been around for ages. I used genetic algorithms, another type of machine learning, back at uni in 2015, and that was just old tech being implemented in a way that an undergrad in architecture could follow. The core mechanics of this technology are old. Ancient, in computer science terms, and the reason they can be leveraged to the degree they are today is, on the whole, simply because we have a shedload more computers than we did in the past.
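To show just how old and how simple the core idea is, here’s a minimal sketch of a single perceptron learning the logical AND function. This is my own illustrative toy, not anything lifted from these products; the names and data are made up, but the learning rule is the classic 1950s-era one:

```python
# A single perceptron: the decades-old building block of neural networks.
# It learns the logical AND function by nudging its weights on every mistake.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum crosses zero.
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Adjust each weight in proportion to the error it contributed to.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Training data: the truth table for AND.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_gate])  # [0, 0, 0, 1]
```

That’s the whole trick: weighted sums, a threshold, and a nudge whenever it’s wrong. Everything since has been variations on this, scaled up to absurdity.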
The two main kinds of AI in large-scale public use are, broadly speaking, GANs and NLP models. Image generators like MidJourney grew out of GAN-style techniques (newer versions lean on diffusion models, but the adversarial setup is the clearest one to explain), while ChatGPT is NLP.
A GAN is a Generative Adversarial Network. I’m not going to go into a heap of detail, but basically it’s two neural networks that point at each other. One of these produces the images while the other “checks its work”. More specifically, it guesses whether or not the image has been produced by a human. So, Machine A either produces an image, or it draws an image produced by a human from a box. It sends this image to Machine B, which looks at it and makes a decision. Did a fleshy human produce this art, or did Machine A do it? It sends a guess out, and if it’s correct, then Machine A’s learning model is reinforced (Good robot, you tricked your sibling into thinking you did a human art!). If it’s incorrect, then the learning model gets adjusted, so it can realise that no, this picture is not human art (Bad robot, you just made that up! Draw another picture and make it more real).
Repeat this several hundred thousand times, add some other stuff like convolutional layers, Gaussian denoisers, something that can read prompt inputs, and a whole ass dataset of carefully labelled and tagged human data (who labels and tags it, I wonder? Oh, we’ll get there) and you’ve got your very own picture making machine. Ta-da!
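To make the “good robot / bad robot” loop concrete, here’s a deliberately silly sketch where an “image” is just a single number. Real GANs use deep networks, gradients, and actual images; every name and number below is illustrative. But the adversarial back-and-forth is the same shape: the faker chases the checker, the checker adjusts to the faker, and round it goes:

```python
import random

# Toy "GAN" where an image is one number. Real human "art" clusters
# around 10.0; the generator starts out producing numbers near 0.0.
random.seed(0)

gen_mean = 0.0          # Machine A's single parameter
disc_threshold = 5.0    # Machine B calls anything above this "human"

for step in range(10_000):
    real = random.gauss(10.0, 1.0)          # an image drawn "from the box"
    fake = random.gauss(gen_mean, 1.0)      # Machine A's forgery

    # Machine B adjusts: nudge its threshold toward the midpoint of
    # everything it has seen, real and fake alike.
    disc_threshold += 0.01 * ((real + fake) / 2 - disc_threshold)

    # Machine A is only adjusted when its fake fails to fool Machine B
    # ("bad robot, make it more real"): move toward the threshold.
    if fake <= disc_threshold:
        gen_mean += 0.01 * (disc_threshold - gen_mean)

# After enough rounds the generator's output should land near the real
# distribution's mean (10.0) -- it has learned to imitate "human art".
print(round(gen_mean, 1))
```

Note what the generator ends up with: not an understanding of art, just a parameter tuned until its output is statistically indistinguishable from the box of human samples.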
It’s not magic, and you need to understand that it’s definitely not “thinking” in the sense that we understand it. It’s a reinforcement model, and it does what it does by calculating a FUCKLOAD of maths.
Seriously, even the simplest non-GAN, toy-model neural network has 784 inputs, one per pixel, each individually weighted and tweaked to let it guess what number has been written on a 28x28 pixel grid. For models the size of the current stable of image generators, we’re talking billions of parameters, all tuned in a black box. All of its inputs are converted to numbers, and the art, both human and generated, is reduced to a complex, interlocking vector space that can be bent and folded based on input queries. By definition, anything produced by the system was placed into the system. Keep that in mind.
ChatGPT is a bit different. As already stated, it’s an NLP model, and I’m less familiar with how these actually work. In a similar sense, though, ChatGPT’s training relies on enormous amounts of text data, which has to come from somewhere. At least part of that somewhere is known as “The Pile”, an enormous repository of books and literature, blog posts, Facebook comments, open source papers, and whole heaps of other stuff. It is over 800GB in size, which means it’s bigger than the hard disk of the laptop I’m using to write this.
ChatGPT scours through this text, and it works out a context for the phrases it’s looking at. From there, it figures out what’s statistically likely to come next.
Don’t get me wrong, it’s fucking clever, and the results, at first blush, are fairly convincing. But did you notice that little “statistically likely” there? That’s a bit of a worry, isn’t it? No? Don’t worry, we’ll get to that, too.
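If you want a feel for what “statistically likely to come next” means at its most stripped-down, here’s a toy bigram model. The corpus is made up, and real language models are unimaginably bigger and use far more context than one previous word, but the underlying bet is the same: count what tends to follow what, then pick the favourite:

```python
from collections import Counter, defaultdict

# A bigram model: the crudest possible "predict the next word" machine.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for every word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' -- it follows "the" most often
print(most_likely_next("sat"))  # 'on'
```

Notice there’s no concept of truth anywhere in there. “Cat” comes after “the” because it did so in the data, not because the machine knows anything about cats.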
I think it’s worth mentioning that the sheer amount of data these systems require is an indicator of how inefficient they are, and the methods do not engender anything approaching “intelligence”. Genetic algorithms are hideously slow, and they aren’t guaranteed to converge on a result, let alone the right result. NLP programs are faster, but they aren’t trained on truth, only on what’s likely based on the data they’ve been fed. Sure, sometimes that leads to the correct answer, but that’s by coincidence rather than design. More often, and more worryingly, chatbots will just make shit up when the answer isn’t stored natively in their innards. Similarly, a GAN can only generate things that are statistically likely to fool another computer into thinking they were made by a human, and it doesn’t understand that it’s making anything other than noise that lights up a boolean switch somewhere.
So, we’ve got a basic grounding of what these machines are now. They’re not intelligent, they’re not trained to tell the truth, they are by definition regurgitative, and they’re relying on the subjective inputs of millions and millions of snippets of frail humanity to be able to produce anything at all. So I suppose, the next question is:
What are people using it for?
(Aka “what are the next parts of this probably never-ending series gonna be about?”)
I’m not going to pretend there aren’t legitimate use cases for AI, as there are for any new technology. I know that some people with disabilities have found ChatGPT useful for communication. As stated, I’ve used genetic solvers to find solutions to complex problems. There are, without doubt, legitimate reasons for these things to exist.
That’s not why I’m here. I’m here because of what people are using it for, which includes but is not limited to:
Finding an artist they like and loading that person’s name into an AI prompt for free versions of their art.
Auto-generating AI “stories” and spamming the submission boxes of online magazines until they’re forced to shut down.
Loading pay-per-view sites like Medium with endless spam in an attempt to get an iota of cash.
Inputting homework questions and copy-pasting the results. Voila, education!
Asking ChatGPT to diagnose disease because they don’t trust the medical community.
Practically worshipping the AI because they think that it’s one step away from Skynet or some other Artificial General Intelligence (AGI).
Claiming that “women are now unnecessary” because they can generate sexual imagery without having to, you know, pay a sex worker, or even talk to a real human being.
Asking the AI ethical questions, then getting real mad when it turns out that it’s been trained to not say racial slurs (again, treating the machine as though it’s smarter or more sentient than it actually is).
Artificially transplanting the face of a person they find attractive into porn scenes to create artificial porn of that person. Optional: send that porn to the person in question.
This, from what I can tell, is just a smattering of what individuals are doing. Like any new technology with sufficient steam behind it, companies are also jumping on board. Sold as a “productivity tool”, AI is being implemented as a way of reducing workload. This, however, is not translating into fewer worries and lightened stress for a workforce coming out of a pandemic and suffering stagnating wages in a time of record profits and high inflation. No no, not that. It is, instead, already resulting in layoffs as companies cut staffing costs. So in a world where the fully employed are still being forced to live in tents due to high housing costs, entire sections of the workforce are being wiped out by chatbots, which, let’s be perfectly clear, still aren’t very good.
Join me next time, when we’re going to talk about how AI writing and art is grindingly mediocre, and why that’s not likely to change with the current technology available to us.
https://ko-fi.com/henryneilsen - Make a donation!
https://linktr.ee/henryneilsen - All my other stuff!