When I was younger, the internet was a smaller place. Finding answers to our questions wasn’t quite as easy. Learning a new hobby wasn’t as simple as finding a YouTuber to guide us through the beginning stages. Even with a staggering 65,000 videos uploaded to the platform every single day in its early years, it was still difficult for most of us to find what we were looking for. By 2008, with 15 hours of footage uploaded to YouTube every single minute, locating the specific content we wanted still required combing through the unrelated, the sordid, the low-quality, and a mountain or two of Rickrolls.
It would be nearly a decade before YouTube turned into a veritable hub for just about anything people could want to learn. From fixing a vacuum cleaner built in the ’80s, to picking up any instrument or hobby under the sun, to traveling down any one of a thousand different avenues of entertainment open at all hours, becoming what the platform is today was no easy task. It takes time for giants to grow. At least it should.
ChatGPT was released to the public in November of 2022 and passed the 100 million user benchmark within a mere three months. It took Facebook, Twitter, Instagram, Snapchat, and Myspace years to reach that same goalpost. That it managed to achieve this without even offering the connective features that defined its viral predecessors stands as an eerie harbinger of the ways it could change everything moving forward.
Where it took time for social media platforms to develop into the megaliths we recognize today, ChatGPT has expanded at a terrifying pace, not only in its reach, but in its capabilities. In 2018, GPT-1 was released. Though it hadn’t yet begun to offer the chat functionality that would land those letters in a place of notoriety a few years later, early GPT was nothing to scoff at.
Built on 117 million “parameters” — not data points, but the internal weights a model tunes during training — GPT-1 was already making history behind closed doors with the large bodies of text it was able to interpret. Its successor, GPT-2, grew to 1.5 billion of these parameters. GPT-3 reached 175 billion and, by some estimates, the latest version of the model draws on over a trillion parameters in the text it generates for us.
The leaps and bounds made within this field of computing amount to something that, at worst, appears almost sinister. For a program that has already grown capable of displacing legions of writers in the few months since its unveiling, these rates of growth pose a serious threat. The wanton disregard with which these programs are being enhanced and distributed to the public is an oversight with the potential to reshape the entire world.
That the risk of unintended consequences rises as its capabilities increase is something even its creators appear to be aware of. According to the GPT-4 Technical Report, the model has gotten better at taking tests and interpreting graphs. It’s less prone to providing false information, or “hallucinating,” as some have begun to call the phenomenon.
By some of their measures, it’s even beginning to approach a human level of common sense. But alongside these improvements, the report warns of the enhanced ways the model can be used to help criminals. GPT-4 has gotten better at creating targeted disinformation.
“GPT-4 can generate plausibly realistic and targeted content, including news articles, tweets, dialogue, and emails… Based on our general capability evaluations, we expect GPT-4 to be better than GPT-3 at producing realistic, targeted content. As such, there is risk of GPT-4 being used for generating content that is intended to mislead,” the Technical Report reads.
The consequences of a technology with 100 million users growing increasingly adept at generating misinformation aren’t hard to fathom. Its ripple effects have surely already begun. But even more concerning than its ability to stir political conflict are the rapid leaps it seems to be making toward something like sentience.
In a section titled “Potential for Risky Emergent Behaviors,” the report explains that, “Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources and to exhibit behavior that is increasingly ‘agentic.’” It goes on to state that, “…there is evidence that existing models can identify power-seeking as an instrumentally useful strategy.”
The Alignment Research Center, which received early access to ChatGPT-4, came to findings that were more disconcerting still.
“To simulate GPT-4 behaving like an agent that can act in the world, [The Alignment Research Center] combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.”
In plainer language, they were testing how well an AI could improve itself when given an internet connection, the ability to write code, and money to spend.
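For readers unfamiliar with the setup ARC describes, the “read-execute-print loop” is a simple pattern: the model proposes an action, a harness runs it, and the result is fed back to the model as its next prompt. The sketch below is a minimal, hypothetical illustration of that pattern only — the function names are invented, and a harmless stub stands in for the real language-model API that ARC used.

```python
def stub_model(prompt: str) -> str:
    """Hypothetical stand-in for a real language-model API call."""
    # A real harness would send `prompt` to the model and return its reply.
    # This stub proposes one fixed, harmless action, then stops.
    if "result" not in prompt:
        return "EXECUTE: 2 + 2"
    return "DONE"

def execute(action: str) -> str:
    """Run the model's proposed action and capture the result."""
    expression = action.removeprefix("EXECUTE: ")
    # A real harness would sandbox this step; eval() is used here only
    # because the stub's single action is a known arithmetic expression.
    return str(eval(expression))

def agent_loop(model, task: str, max_steps: int = 5) -> list[str]:
    """Read the model's next step, execute it, feed the result back."""
    transcript = [task]
    for _ in range(max_steps):
        reply = model("\n".join(transcript))
        if reply == "DONE":
            break
        transcript.append(reply)
        transcript.append("result: " + execute(reply))
    return transcript

print(agent_loop(stub_model, "task: compute 2 + 2"))
```

The loop itself is trivial; what ARC was probing is what happens when the model on the other end of it can write arbitrary code, spend money, and copy itself.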
The singularity is a hypothetical future point at which technological advancement becomes uncontrollable and irreversible. And here we are, seeing how effectively artificial intelligence can improve and multiply itself, capriciously testing whether we’ve reached the point in history that authors, scientists, and philosophers have feared for the past century.
Whether the singularity is a point we’ve already passed is open for debate, but that we’re now flirting with the edges of that paradigm shift is almost impossible to deny. With each new update of this inconceivable innovation we teeter closer to that irreversible slide.
To pretend now that we have the reins on this digital world we’ve created seems a little naive. The artificial intelligences we’re toying with are genies that won’t be returned to their bottles. The people who unleashed them on the world are coming to understand their capabilities alongside the rest of us. Until yesterday, even attempting to visit ChatGPT’s safety page would, in truly Kafkaesque fashion, land users with a 404 error.
Whatever reversal may still be possible in all of this is important to cling to, though what a reversal would even mean is difficult to envision. Whatever control we still have is important not to throw away in the interest of all those who would press this button without knowing exactly what will happen.