Transformer or terminator?
You decide.

Wednesday, 28 June 2023



4IR: Is GPT AI the next chapter in our evolution?

The future is being written, one artificially generated chat at a time

With the recent, rapid developments in artificial intelligence (AI), particularly in response-generative tools built on large language models, the innovations that have captured the world’s attention are Generative Pre-Trained Transformer (GPT) chat apps such as ChatGPT and similar applications, including Bing Chat, Bard, ChatSonic and Jasper AI.

These GPT apps are generally based on advanced, trained language models such as those developed by OpenAI, the American AI company whose latest GPT-4 model powers many of these apps. With the ability to generate (mostly) coherent and contextually relevant responses to human queries, usually posed in a user-friendly chat box, these apps have opened new possibilities for human-machine interaction.

The basic GPT apps are free for everyone to use, while paid-for options can allow faster processing speed, prioritised access and the use of newer language model versions, such as GPT-4.

Enhanced communication

GPT marks a paradigm shift in the evolution of human-computer communication. By conversing with users in a natural and human-like manner, it holds the potential to revolutionise various domains. From customer service and support systems to personal assistants, GPT tech can enhance efficiency, streamline interactions and provide more personalised experiences.

It can handle complex queries and generate written responses that are sometimes very hard to distinguish from human-written text. In addition, it can adapt to diverse conversational styles, thereby transforming the way we engage with machines.

As a tool trained on natural language, GPT AI is particularly good at translating to and from many languages. Not all translations are accurate, though, so its translated text, like everything it generates, requires editing by a knowledgeable human before it can be used. It is important to note that AI tools such as GPT apps cannot be trusted to always generate accurate, useful responses under all circumstances, so thorough, knowledge-based human editing must be applied to all responses.

How GPT works

Simply put, the AI underpinning the generative capabilities of these tools uses statistical modelling to predict (usually, but not always, accurately) which word most likely follows another in a language it has been trained on. It has also absorbed the grammatical rules of natural language from its training regime, and can thus do two basic things: find information relevant to a query and generate natural, understandable and often useful text about the information it finds.
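As a loose illustration of that statistical idea, the sketch below builds a toy bigram model: it counts which word follows which in a tiny hypothetical corpus and predicts the most frequent successor. Real GPT models use deep neural networks over billions of words, not raw counts, so this is only a conceptual analogy.

```python
from collections import Counter, defaultdict

# Toy corpus -- a real model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Chaining such predictions word by word is, in a very crude sense, how generated text grows, except that GPT models condition on the whole preceding context rather than a single word.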

But don’t take our word for it. By way of example, let’s ask ChatGPT to define itself.

Question posed: How does ChatGPT work?

Unedited response:

ChatGPT, including the version you’re speaking with, which is based on the GPT-4 architecture, is a machine learning model developed by OpenAI. It’s a type of artificial intelligence known as a transformer language model, specifically a large-scale unsupervised language model.

Here’s a high-level overview of how it works:

  1. Training: ChatGPT is trained on a large corpus of text data from the internet. However, it’s important to note that it doesn’t know specifics about which documents were in its training set or have access to any proprietary databases, classified information, or personal data unless that information has been shared with it during our conversation. It was trained with a diverse range of internet text, but it doesn’t know specifics about which documents were in its training set or have the ability to access any source of information, database, or internet content directly.
  2. Model architecture: GPT-4, like its predecessors, is based on the Transformer model architecture, which uses a mechanism called “attention” to understand the context and generate responses. The model has layers, and each layer consists of a self-attention mechanism and a feed-forward neural network.
  3. Learning: During the training process, it learns to predict the next word in a sentence. It’s fed with tons of sentences and for each sentence, it tries to guess the next word. It continues this process, adjusting its internal parameters to reduce the difference between its predictions and the actual words that follow. This way, it learns grammar, facts about the world, reasoning abilities and some biases from the data it was trained on.
  4. Generation: When you provide an input to the model (like a question or a statement), it generates a response word by word. It takes your input, processes it and tries to predict what should logically come next, based on what it learned during training.
  5. Fine-tuning: While the initial training process is unsupervised, the model is usually fine-tuned on a narrower dataset with human reviewers following guidelines provided by OpenAI. This helps to ensure the model’s responses are safe, useful, and aligned with human values.

It’s important to note that while ChatGPT can generate information that seems knowledgeable and coherent, it doesn’t actually understand the text in the way humans do. It doesn’t have beliefs, opinions, or consciousness. It generates responses based on patterns it learned during training.
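The “attention” mechanism mentioned in the response above can be illustrated in a deliberately simplified form: score a query vector against key vectors, turn the scores into weights, and take a weighted sum of value vectors. The numbers below are hand-picked toy vectors; real transformers learn the query, key and value projections across many layers and heads.

```python
import math

def softmax(scores):
    """Normalise scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """One scaled dot-product attention step: weight each value
    by how well its key matches the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Three toy token representations (hypothetical 2-d vectors).
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
query  = [1.0, 0.0]   # "attends" mostly to keys that point its way

output = attention(query, keys, values)
print(output)
```

The output leans towards the values whose keys align with the query, which is the sense in which attention lets the model weigh different parts of the context when predicting the next word.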

Access to information

Currently, GPT AI apps are not always directly connected to the live internet. Google’s Bard is an exception, as it is. ChatGPT (using GPT-4) can also now connect to the live internet through certain plugins that browse for answers, one of which is OpenAI’s first-party web-browsing plugin, which enables ChatGPT to pull data from the web.

The regular, generally used ChatGPT app is not connected to the live internet, though. Its generated responses are thus only valid for what it was trained on, up to the end of 2021. How well internet-connected AI tools such as Google’s Bard perform in comparison to ChatGPT remains to be tested thoroughly in the real world, but these apps will most likely become major competitors to first-mover ChatGPT.

But even though they are based on what was available on the internet up to end-2021, GPT models such as the fourth-generation iteration nevertheless have access to a vast language model and an uncanny ability to understand and generate text. GPT AI can therefore serve as a valuable tool for accessing existing information and generating text responses from it.

It can assist users in many ways, such as retrieving data from diverse sources, summarising articles and offering insights on a wide range of topics. This accessibility to information has the potential to bridge the knowledge gap and democratise access to education, making learning more inclusive and empowering individuals worldwide.

Software development is one profession that can use GPT effectively to assist with debugging (correcting) code in seconds, usually saving a huge amount of programming time. GPT cannot (yet) write complex software, but it is a great starting point for simple code and can offer knowledgeable programmers useful prompts and suggestions when they get stuck.
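As a hypothetical illustration of the kind of quick fix meant here, consider a classic off-by-one bug of the sort a GPT assistant can typically spot and correct in a single exchange (the function and data below are invented for the example):

```python
# Buggy version a programmer might paste into a GPT chat:
#
#   def average(numbers):
#       return sum(numbers) / (len(numbers) - 1)   # off-by-one divisor
#
# Corrected version, as an assistant would typically suggest,
# with the empty-list edge case handled explicitly:
def average(numbers):
    if not numbers:
        raise ValueError("average() requires at least one number")
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # 4.0
```

A human programmer must still verify the suggested fix, in line with the editing caveat above, but catching such slips is exactly where these tools save time.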

Schools, universities and all other educational institutions are being irrevocably changed by these kinds of generative tools. The very availability of generative text tools impacts education as we know it, possibly not always in a good way, but like all of us, these institutions will have to find a way of working effectively within this new world.

Cognitive assistance

GPT AI’s almost “cognitive” capabilities extend beyond mere information retrieval. As the models continue to learn, they could be utilised in various fields requiring analytical thinking and problem-solving skills. In healthcare, for instance, they could assist doctors in diagnosing patients, suggest treatment options and stay updated with the latest research. Similarly, in research and development, they could aid scientists in data analysis, hypothesis generation and experimental design. By acting as cognitive assistants, GPT tools could amplify human intelligence and augment our problem-solving capabilities.

Ethical considerations

While GPT AI’s potential is exciting, it also raises ethical concerns. As an AI language model, GPT models learn from vast amounts of data, including online content. This raises the risk of biases, misinformation and potentially harmful responses. Efforts must be made to address these concerns, ensuring the development of responsible AI systems. Transparency, accountability and ongoing research are crucial to mitigating biases and ensuring that AI models such as GPT-4 are trustworthy and unbiased.

The future of collaboration

As GPT tech continues to advance, we can envision a future in which humans and AI work together seamlessly. GPT AI’s ability to seemingly comprehend context, provide suggestions and generate responses can foster collaborative efforts in fields such as content creation, software development and even design. With AI as a collaborative partner, humans can leverage its capabilities to augment their own expertise, leading to increased productivity and innovation.


The emergence of generative pre-trained transformer artificial intelligence technology represents a significant step forward in human-machine interaction. Its potential to enhance communication, facilitate access to information and act as an assistant or tool holds immense promise for the future. However, it is crucial to approach its development and deployment ethically, addressing biases and ensuring responsible use.

By embracing GPT capabilities we can and should harness its potential to create a future where humans and AI collaborate synergistically, drive innovation and address complex challenges.

The advent of generative AI trained on large language models could, over time, become a more impactful driving force of advancement than the internet itself. This means we currently have little sight of what the future holds in this regard. But what we do know is that our entire society will most likely be transformed in a fundamental way.

Visit and tune in to the next UJ Cloudebate™ to learn more.

What is GPT?


Prof Habib Noorbhai (Host)

Habib Noorbhai, PhD (UCT), is a Professor (Health & Sports Science) and Director of the Biomedical Engineering and Healthcare Technology (BEAHT) Research Centre at the Faculty of Health Sciences, University of Johannesburg (UJ). In line with the 4IR mandate, he is involved in inter-disciplinary work across the sports science, healthcare technology and innovation spaces, and has a particular interest in how health sciences students can advance their learning using methods and tools from extended realities (XR).

Prof. Noorbhai holds an NRF Y-rating and continues to conduct research within the cricket sciences. He has published over 50 papers in local and international journals (research articles, conference proceedings and book chapters). He also serves as a reviewer for local and international health and sports science journals, as well as an Associate Editor for the BMJ Sport and Exercise Medicine journal.

In 2015, he was nominated among the Mail & Guardian’s Top 200 Young South Africans. He also won the Mr South Africa title in 2017, using the title to spearhead change within health, sport and education. He was later inducted as a Golden Key International Honorary Member at UCT in 2018 for his contributions to academia, leadership and service. In 2021, he was awarded the Distinguished Vice-Chancellor’s Award for Teacher Excellence at UJ.

Twitter: @Prof_HN

Prof Margaret Blackie

Prof Margaret Blackie holds a PhD in education from Stellenbosch University and a PhD in chemistry from the University of Cape Town. Whilst teaching organic chemistry at Stellenbosch, she juggled research in medicinal chemistry and in tertiary STEM education. In June 2022, she joined the Centre for Higher Education Research, Teaching and Learning at Rhodes University and now focuses on education research. She was the recipient of the South African Education Medal in 2020 and was awarded the Stellenbosch University Teaching Award in the Distinguished category in the same year. Her research interest is in knowledge building in tertiary STEM education.

Dr. Herman Myburgh

Dr. Herman Myburgh serves as the Head of the Metaverse Research Unit at the Institute for Intelligent Systems. His research primarily encompasses the use of 4IR technologies, specifically extended reality and augmented intelligence, with a particular emphasis on the healthcare and tertiary education sectors. Driven by a steadfast belief that African challenges necessitate uniquely African solutions, Herman pioneers innovative, enduring and occasionally unorthodox solutions to pressing regional issues.

Prof Danny Oppenheimer

Danny Oppenheimer is a professor at Carnegie Mellon, jointly appointed in Psychology and Decision Sciences, who studies judgment, decision making, metacognition, learning and causal reasoning, and applies his findings to a diverse array of domains, such as charitable giving, consumer behavior, education, electoral outcomes, and how to trick students into buying him ice cream. He is the author of over 50 peer-reviewed articles and books, including “Democracy Despite Itself: Why a System That Shouldn’t Work at All Works So Well” and “Psychology: The Comic Book Introduction”. He has won awards for research, teaching and humor, the latter of which is particularly inexplicable given his penchant for truly terrible puns.