Cloudebate™
Transformer or terminator?
You decide.
Wednesday, 28 June 2023
4IR: Is GPT AI the next chapter in our evolution?
The future is being written, one artificially generated chat at a time
With the recent, seemingly rapid developments in artificial intelligence (AI), particularly in response-generating tools built on large language models, one class of innovation has captured the world’s attention: Generative Pre-trained Transformer (GPT) chat apps such as ChatGPT, along with similar applications including Bing Chat, Bard, ChatSonic and Jasper AI.
These GPT apps are generally built on advanced, trained language models such as those developed by OpenAI, the American AI company whose latest GPT-4 model powers many of them. By generating (mostly) coherent and contextually relevant responses to human queries, usually posed in a user-friendly chat box, this technology has opened new possibilities for human-machine interaction.
The basic GPT apps are free for everyone to use, while paid options offer faster processing, prioritised access and the use of newer language-model versions, such as GPT-4.
Enhanced communication
GPT marks a paradigm shift in the evolution of human-computer communication. By conversing with users in a natural and human-like manner, it holds the potential to revolutionise various domains. From customer service and support systems to personal assistants, GPT tech can enhance efficiency, streamline interactions and provide more personalised experiences.
It can handle complex queries and generate written responses that are sometimes very hard to distinguish from human-written text. In addition, it can adapt to diverse conversational styles, thereby transforming the way we engage with machines.
As a tool trained on natural language, GPT AI is particularly good at translating to and from many languages. Not all translations are accurate, though, so translated text, like everything these tools generate, requires thorough, knowledge-based editing by a real human before it can be used. It is important to note that AI tools such as GPT apps cannot be trusted to always generate accurate, useful responses under all circumstances.
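For example, a developer could request a draft translation programmatically. The sketch below assumes access to OpenAI’s Python library (as its chat interface stood in mid-2023) and an API key; the key, model name and prompt wording here are placeholders for illustration, not a recommendation, and the same human-review caveat applies to whatever it returns.

```python
import openai  # pip install openai (chat interface as of mid-2023)

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

def translate(text, target_language="French"):
    """Ask a GPT model for a draft translation (human review still required)."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative; any available chat model works
        messages=[
            {"role": "system", "content": "You are a careful translator."},
            {"role": "user", "content": f"Translate into {target_language}: {text}"},
        ],
    )
    return response.choices[0].message.content

print(translate("The future is being written, one chat at a time."))
```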
How GPT works
Simply put, the AI underpinning the generative capabilities of these tools uses statistical modelling to predict (usually, but not always, accurately) which word is most likely to follow another in a language it has been trained on. It has also absorbed the rules of natural-language grammar from its training regime, and can thus basically do two things: find information relevant to a query, and generate natural, understandable and often useful text about the information it finds.
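To make the idea concrete, here is a toy Python sketch of next-word prediction using simple bigram counts. Real GPT models replace these counts with deep neural networks trained on vast corpora, and the corpus, function names and sentences below are entirely illustrative; but the core mechanic, predicting and emitting one likely word at a time, is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative corpus, pre-split into words
corpus = (
    "the model predicts the next word . "
    "the model generates text word by word . "
    "the next word is chosen by probability ."
).split()

# Count how often each word follows each other word
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word ('.' if unseen)."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else "."

def generate(start, length=8):
    """Generate text word by word, sampling successors by frequency."""
    out = [start]
    for _ in range(length):
        counts = successors[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(predict_next("the"))  # e.g. 'model'
print(generate("the"))      # e.g. 'the next word is chosen by probability .'
```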
But don’t take our word for it: by way of example, let’s ask ChatGPT to define itself.
Question posed: How does ChatGPT work?
Unedited response:
ChatGPT, including the version you’re speaking with, which is based on the GPT-4 architecture, is a machine learning model developed by OpenAI. It’s a type of artificial intelligence known as a transformer language model, specifically a large-scale unsupervised language model.
Here’s a high-level overview of how it works:
- Training: ChatGPT is trained on a large corpus of text data from the internet. However, it’s important to note that it doesn’t know specifics about which documents were in its training set or have access to any proprietary databases, classified information, or personal data unless that information has been shared with it during our conversation. It was trained with a diverse range of internet text, but it doesn’t know specifics about which documents were in its training set or have the ability to access any source of information, database, or internet content directly.
- Model architecture: GPT-4, like its predecessors, is based on the Transformer model architecture, which uses a mechanism called “attention” to understand the context and generate responses. The model has layers, and each layer consists of a self-attention mechanism and a feed-forward neural network.
- Learning: During the training process, it learns to predict the next word in a sentence. It’s fed with tons of sentences and for each sentence, it tries to guess the next word. It continues this process, adjusting its internal parameters to reduce the difference between its predictions and the actual words that follow. This way, it learns grammar, facts about the world, reasoning abilities and some biases from the data it was trained on.
- Generation: When you provide an input to the model (like a question or a statement), it generates a response word by word. It takes your input, processes it and tries to predict what should logically come next, based on what it learned during training.
- Fine-tuning: While the initial training process is unsupervised, the model is usually fine-tuned on a narrower dataset with human reviewers following guidelines provided by OpenAI. This helps to ensure the model’s responses are safe, useful, and aligned with human values.
It’s important to note that while ChatGPT can generate information that seems knowledgeable and coherent, it doesn’t actually understand the text in the way humans do. It doesn’t have beliefs, opinions, or consciousness. It generates responses based on patterns it learned during training.
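The “attention” mechanism the response mentions can be sketched in a few lines. Below is a minimal, illustrative NumPy version of scaled dot-product self-attention, the core operation inside each Transformer layer; the dimensions and random weights are toy values and do not resemble OpenAI’s actual models.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence.

    X: (seq_len, d_model) token representations
    Wq, Wk, Wv: learned projection matrices (d_model, d_k)
    Each output position is a weighted mix of all value vectors, with
    weights derived from query-key similarity -- this is how the model
    'attends' to context when predicting the next word.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # context-aware representations

# Toy dimensions (real models use thousands of dimensions and many
# attention heads stacked across dozens of layers)
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Stacking many such layers, each followed by a feed-forward network as the response describes, and training the whole stack to predict the next word is what gives GPT models their fluency.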

PANELISTS

Prof Habib Noorbhai (Host)
Habib Noorbhai, Ph.D. (UCT), is a Professor (Health & Sports Science) and Director of the Biomedical Engineering and Healthcare Technology (BEAHT) Research Centre at the Faculty of Health Sciences, University of Johannesburg (UJ). In line with the 4IR mandate, he is involved in inter-disciplinary work across the sports science, healthcare technology and innovation spaces, and has a particular interest in how health sciences students can advance their learning through extended reality (XR) methods and tools.
Prof. Noorbhai holds an NRF Y-rating and continues to conduct research within the cricket sciences. He has published over 50 papers in local and international journals (research articles, conference proceedings and book chapters). He also serves as a reviewer for local and international health and sports science journals, as well as an Associate Editor for the BMJ Sport and Exercise Medicine journal.
In 2015, he was named among the Mail & Guardian’s top 200 Young South Africans. He also won the Mr South Africa title in 2017, using the title to spearhead change within health, sport and education. He was later inducted as a Golden Key International Honorary Member at UCT in 2018 for his contributions to academia, leadership and service. In 2021, he was awarded the Distinguished Vice-Chancellor’s Award for Teacher Excellence at UJ.
Twitter: @Prof_HN

Prof Margaret Blackie
Prof Margaret Blackie holds a PhD in education from Stellenbosch University and a PhD in chemistry from the University of Cape Town. Whilst teaching organic chemistry at Stellenbosch, she juggled research in medicinal chemistry and in tertiary STEM education. In June 2022, she joined the Centre for Higher Education Research, Teaching and Learning at Rhodes University, where she now focuses on education research. She was the recipient of the South African Education Medal in 2020 and was awarded the Stellenbosch University Teaching Award in the Distinguished category in the same year. Her research interest is in knowledge building in tertiary STEM education.

Dr. Herman Myburgh
Dr. Herman Myburgh serves as the Head of the Metaverse Research Unit at the Institute for Intelligent Systems. His research primarily encompasses the use of 4IR technologies, specifically extended reality and augmented intelligence, with a particular emphasis on the healthcare and tertiary education sectors. Driven by a steadfast belief that African challenges necessitate uniquely African solutions, Herman pioneers innovative, enduring and occasionally unorthodox solutions to pressing regional issues.

Prof Danny Oppenheimer
Danny Oppenheimer is a professor at Carnegie Mellon, jointly appointed in Psychology and Decision Sciences, who studies judgment, decision making, metacognition, learning and causal reasoning, and applies his findings to a diverse array of domains, such as charitable giving, consumer behavior, education, electoral outcomes, and how to trick students into buying him ice cream. He is the author of over 50 peer-reviewed articles and books, including “Democracy Despite Itself: Why a System That Shouldn’t Work at All Works So Well” and “Psychology: The Comic Book Introduction”. He has won awards for research, teaching, and humor, the latter of which is particularly inexplicable given his penchant for truly terrible puns.