4IR: Is GPT AI the next chapter in our evolution?
The future is being written, one artificially generated chat at a time
With the recent, seemingly rapid developments in artificial intelligence (AI), particularly in response-generating tools built on large language models, one innovation that has captured the world’s attention is the Generative Pre-trained Transformer (GPT) chat app, exemplified by ChatGPT and similar applications such as Bing Chat, Bard, ChatSonic and Jasper AI.
These GPT apps are generally built on advanced, trained language models such as those developed by OpenAI, the American AI company whose latest GPT-4 model powers many of them. By generating (mostly) coherent and contextually relevant responses to human queries, usually posed in a user-friendly chat box, this technology has opened new possibilities for human-machine interaction.
The basic GPT apps are free for everyone to use, while paid options can offer faster processing, prioritised access and newer language model versions, such as GPT-4.
GPT marks a paradigm shift in the evolution of human-computer communication. By conversing with users in a natural and human-like manner, it holds the potential to revolutionise various domains. From customer service and support systems to personal assistants, GPT tech can enhance efficiency, streamline interactions and provide more personalised experiences.
It can handle complex queries and generate written responses that are sometimes very hard to distinguish from human-written text. In addition, it can adapt to diverse conversational styles, thereby transforming the way we engage with machines.
As a tool trained on natural language, GPT AI is particularly good at translating to and from many languages. Not all translations are accurate, though, so its translated text, like everything it generates, requires thorough editing by a knowledgeable human before it can be used. More generally, AI tools such as GPT apps cannot be trusted to always generate accurate, useful responses, so careful human review of every response is crucial.
How GPT works
Simply put, the AI underpinning the generative capabilities of these tools uses statistical modelling to predict (usually, but not always, accurately) which word is most likely to follow another in a language on which it has been trained. From that training it has also absorbed the grammatical rules of natural language, so it can basically do two things: find information relevant to a query and generate natural, understandable and often useful text about the information it finds.
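To make that idea concrete, here is a toy sketch in Python of next-word prediction from simple word-pair statistics. It is nothing like the scale or sophistication of a real GPT model (which uses neural networks, not raw counts), and the tiny corpus and function names are our own illustrative inventions:

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus; a real model trains on billions of words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows another (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" always follows "sat" in this corpus
print(predict_next("on"))   # "the" always follows "on"
```

A real GPT model does essentially this, but over whole sequences of context rather than a single preceding word, with probabilities learned by a neural network instead of counted directly.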
But don’t take our word for it; by way of example, let’s ask ChatGPT to define itself.
Question posed: How does ChatGPT work?
ChatGPT, including the version you’re speaking with, which is based on the GPT-4 architecture, is a machine learning model developed by OpenAI. It’s a type of artificial intelligence known as a transformer language model, specifically a large-scale unsupervised language model.
Here’s a high-level overview of how it works:
- Training: ChatGPT is trained on a large corpus of text data from the internet. However, it’s important to note that it doesn’t know specifics about which documents were in its training set, nor does it have access to any proprietary databases, classified information, or personal data unless that information has been shared with it during the conversation. It was trained on a diverse range of internet text, but it cannot access any source of information, database, or internet content directly.
- Model architecture: GPT-4, like its predecessors, is based on the Transformer model architecture, which uses a mechanism called “attention” to understand the context and generate responses. The model has layers, and each layer consists of a self-attention mechanism and a feed-forward neural network.
- Learning: During the training process, it learns to predict the next word in a sentence. It’s fed with tons of sentences and for each sentence, it tries to guess the next word. It continues this process, adjusting its internal parameters to reduce the difference between its predictions and the actual words that follow. This way, it learns grammar, facts about the world, reasoning abilities and some biases from the data it was trained on.
- Generation: When you provide an input to the model (like a question or a statement), it generates a response word by word. It takes your input, processes it and tries to predict what should logically come next, based on what it learned during training.
- Fine-tuning: While the initial training process is unsupervised, the model is usually fine-tuned on a narrower dataset with human reviewers following guidelines provided by OpenAI. This helps to ensure the model’s responses are safe, useful, and aligned with human values.
It’s important to note that while ChatGPT can generate information that seems knowledgeable and coherent, it doesn’t actually understand the text in the way humans do. It doesn’t have beliefs, opinions, or consciousness. It generates responses based on patterns it learned during training.
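The “attention” mechanism ChatGPT mentions can be sketched, very loosely, in a few lines of Python. This is an illustrative single-query version with hand-picked two-dimensional vectors; real transformers use learned matrices, many attention heads and much higher dimensions:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Scores each key against the query, converts the scores to weights
    with softmax, and returns the weights plus the weighted mix of
    the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    mixed = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return weights, mixed

# Three token positions; the first and third keys resemble the query most.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, mixed = attention([1.0, 0.0], keys, values)
print(weights)
```

The point of the mechanism is that the weights tell the model which earlier positions in the context matter most for the current prediction, which is what lets it keep track of context across a sentence.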