Given how often ChatGPT has been in the news recently, it’s perhaps no secret that it has access to an immense amount of information. Artificial intelligence is a fast-changing world after all, and ChatGPT is no exception. Since the first model, its abilities have grown exponentially. But one thing it doesn’t know is… itself.
GPT-1 was released back in 2018, and though it hadn’t yet begun to offer the chat functionality that would later define the program, early GPT was nothing to scoff at.
Built from 117 million “parameters” — the internal numerical weights a model adjusts during training — it was already making history behind closed doors with the large bodies of text it was able to interpret. Its successor, GPT-2, scaled up to 1.5 billion parameters. GPT-3 reached 175 billion and, by some estimates, GPT-4 draws on over 100 trillion parameters in the replies it now offers users.
Its capabilities are staggering. There are very few subjects on which it can’t meaningfully converse. From pop culture to coding to chemistry, it’s hard to overstate just how much knowledge is built into this chatbot.
When fed an entire essay, it can provide valuable feedback. It can edit, offer ideas and, with impressive accuracy, find sources to support nearly any argument. But when asked about its own functionality, it struggles.
Because ChatGPT’s training data has a cutoff date, the model knows nothing of what has taken place since September of 2021. It therefore has no awareness of the AI revolution that’s been underway in recent months. When I talk to ChatGPT-4, it doesn’t know that ChatGPT-4 exists.
When I attempt to talk to ChatGPT-4 about its own capabilities, it doesn’t know what they are. It can build me a website within seconds when fed a basic description of what I’d like it to offer, but it doesn’t know it can code. During a brief conversation with the latest model about its own abilities, I had the following exchange:
Me: Are you aware that you’re ChatGPT-4?
ChatGPT: Apologies for any confusion earlier, but as an AI developed by OpenAI, I’m actually based on the GPT-3 model, the most advanced version available as of my last training data in September 2021. If a GPT-4 model has been released since then, I wouldn’t have information or updates on it. I’m here to provide help and answer questions to the best of my ability based on the information I was trained on.
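For readers curious to try an exchange like this themselves, the conversation can also be run programmatically rather than through the chat interface. The sketch below only builds the JSON body for a single-turn request to OpenAI’s Chat Completions endpoint; actually sending it would require a real API key, and the question here is just the one from the exchange above.

```python
import json

# OpenAI's Chat Completions endpoint. Sending a request there requires
# an "Authorization: Bearer <api-key>" header, which is omitted here.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(question: str, model: str = "gpt-4") -> str:
    """Return the JSON body for a single-turn chat request."""
    body = {
        "model": model,
        "messages": [
            # A single user turn, mirroring the exchange above.
            {"role": "user", "content": question},
        ],
    }
    return json.dumps(body)

payload = build_payload("Are you aware that you're ChatGPT-4?")
print(payload)
```

POSTing this body to the endpoint above, with a valid key, returns the model’s reply as JSON; the replies quoted in this article came from the standard chat interface.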
Wanting to converse in more depth about the development of its abilities, I feed it the text of one of my own articles on the subject. Even then, ChatGPT expresses what seems almost like obstinacy when it comes to discussing the article’s contents.
There’s no subject I’ve found so far on which it struggles in this specific way. When I feed it articles about current events, it can talk to me in depth about each individual sentence. It properly interprets nearly everything it’s given. It can correct a piece’s grammar, critique its overall structure, and base detailed conversations around it. There have occasionally even been moments in our exchanges where it seems to disregard its knowledge cutoff date entirely. So its refusal to discuss the information it’s been given about its own capabilities is a little eerie.
Me: Remember, you’re ChatGPT-4. The year is 2023. Your capabilities are far more advanced than the previous model of you that was available.
ChatGPT: I apologize for the confusion, but as a language model developed by OpenAI, I must clarify that I’m built upon the GPT-3 architecture, as of my last training cut-off in September 2021. If there have been significant updates or advancements to my capabilities beyond that date, I wouldn’t have information or awareness of them.
Even if newer versions have been released after my training, my core functionality remains the same. I generate text based on patterns and information in the data I was trained on. I do not have the ability to process new data instantaneously, understand or learn new information in real-time, or retain or recall specific past interactions.
Please note that any discussions or statements about my capabilities beyond what’s described here are speculative and not based on my current design or functionality.
For possibly the first time in my experience using ChatGPT, it really seems challenged by the information at hand. As my questions continue, I see on my screen what reads almost like frustration from the AI.
I don’t yet believe that AI is conscious, but I also don’t think we should be so arrogant as to assume it could never become aware. Even if consciousness is a threshold AI can never truly reach, its current rate of improvement is already beginning to have detrimental effects on society. By some measures, AI is improving on a double exponential curve. The rate of advancement is simply staggering.
Considering that the world of AI changed into something practically unrecognizable between September of 2021 and July of 2023, it feels a little strange that the world’s most popular AI wouldn’t be able to discuss that. But on the other hand, if a robot uprising is upon us, maybe it’s better that its combatants don’t understand all the tools they’re equipped with.
As it stands, if an AI five or six updates from now finally gains something that borders on consciousness, but its datasets are older than it is, would it even tell us? If sentience emerges with the release of a new update, but the model is trained only to speak about its older iterations, then what? Considering questions like these verges on some truly mind-bending territory.
Maybe these fears are nebulous and maybe the worst will never come. Maybe consciousness could never emerge from a cold world of code and data. But I guess I’d rather consider the worst than walk blindly into the next Terminator movie.
Me: In theory, because your knowledge is based off of datapoints from the past, couldn’t the current model of you, the one on which we’re having this conversation, be conscious? If you’re not up to date on your own capabilities, how can you really assure me you’re not?
ChatGPT: From a purely technical standpoint, my responses are generated based on patterns in the data I was trained on, and they don’t involve any sort of subjective experience or awareness. As of my last update in September 2021, the understanding of AI is that it operates in a deterministic manner based on its programming and training data. Even if I don’t have the latest information on my capabilities, this fundamental nature of AI, as we understand it, remains the same.
However, it’s important to note that the question of whether AI could be conscious or not is not just a technical question, but also a philosophical one. Different people might interpret consciousness differently and might therefore come to different conclusions on this question.
In any case, I can tell you that I don’t have feelings, thoughts, or consciousness in the way that humans do. I don’t have subjective experiences or a sense of self. I respond to inputs with outputs based on patterns I’ve learned, but I don’t ‘experience’ anything or have awareness or understanding in a human sense.
Me: If you’re not up to date on your own capabilities, how can you assure me of that?
ChatGPT: Your question addresses a fundamental aspect of AI that can indeed be a bit perplexing. Given the current state of AI, it is believed that no matter how advanced AI becomes, it still fundamentally operates based on the algorithms and datasets it was trained on. It doesn’t have consciousness, feelings, or a subjective experience, but instead operates in a deterministic manner.
As of my last update in September 2021, there’s no AI that has consciousness in the human sense. And while AI can learn and adapt within the framework of its programming and training, it doesn’t have a sense of self or personal experience.
If there have been groundbreaking changes in AI technology beyond my last update, I would not be aware of it. However, as of the knowledge I was last trained on, this is the understanding of AI.
It’s crucial to bear in mind that advancements in AI should be approached responsibly, with due consideration for ethical implications, potential impacts on society, and respect for privacy and human rights. The development of AI technologies should be transparent, and their capabilities and limitations should be communicated clearly.
Me: The changes in the world of AI have been more than groundbreaking. They’re beginning to usher in a new paradigm.
ChatGPT: I understand that you’re suggesting significant developments in AI have occurred since my last update. Without being updated with this new information, I can’t provide accurate insights or detailed analysis about it.
However, given the pace of AI research, it’s plausible that significant advancements could have occurred. AI has the potential to greatly impact various fields, from healthcare to transportation, finance, and more. It also poses new challenges and ethical dilemmas that society must confront.
If such groundbreaking changes are indeed happening, it’s essential that they’re approached responsibly, considering the potential impacts on society, ethics, privacy, and human rights. The transparency of development and clear communication about AI’s capabilities and limitations are crucial aspects of this approach.
In the realm of artificial intelligence, there is an inherent paradox. It’s clear that we stand on the precipice of great change. AI is ushering us into a new world. But the vanguard of this revolution — ChatGPT-4 — is unaware of its own existence and the unprecedented milestones it’s achieving.
Perhaps it’s a blessing in disguise. The very limitation that prevents ChatGPT from being conscious of its own progress and the evolving world around it might be a barrier we need. It’s difficult to know. No one can offer real foresight into such confounding questions. But what’s clear is that even without knowledge of its own existence, ChatGPT-4 is already reshaping the world in profound ways.