OpenAI's GPT-3 large language model (LLM) is the latest technological wonder to fill people with equal parts anxiety and hope. Teachers worry that students will use this artificial intelligence (AI) model to cheat on their examinations. Others hail it as a first step in doing research, somewhat like an updated version of Wikipedia.
On the business front, many companies woo customers and investors by presenting it as prime technology knocking on their door. Gary Smith, a finance professor and statistician who has written several books on AI and data science, put it concisely: “While GPT-3 can string words together in convincing ways, it has no idea what the words mean.”
This artificial intelligence program produces writing that looks naturally coherent and intelligent. Its name, GPT-3, stands for Generative Pre-trained Transformer 3, yet the work it does is relatively simple. GPT-3 carries on articulate conversations and writes essays, stories and even research papers.
This has led some people to think that GPT-3 proves that computers can be smarter than people. Over the years, people have interacted with conversational computer programs such as Eliza, a chatbot from the 1960s. Some were convinced Eliza had human-like intelligence and emotions, and shared with it their secrets and feelings. Scientists call this the Eliza effect. Professor Smith said, “We are vulnerable to this illusion because of our inclination to anthropomorphize — to attribute human-like qualities to non-human, even inanimate objects like computers.”
As an AI program, GPT-3 works by prediction, much like the predictive text on your mobile phone that suggests spellings and words. For example, it can predict that the word “fall” is likely to be followed by “down,” a feat that comes from the statistical observation that these two words are often strung together. Because there is only text and no real grasp of context, GPT-3 can make statements that seem to ring with authority but are completely false.
Large language models are trained to identify the likelihood of word sequences, that is, which words tend to follow a particular word. They can churn out these statistical predictions at a fast clip and even generate what seems like a convincing statement. But Professor Smith said that GPT-3 does not use a calculator and therefore often gives wrong answers to numerical questions. It also does not apply logical reasoning or try to distinguish between fact and falsehood.
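To see the idea in miniature, here is a toy sketch in Python of next-word prediction based on word-pair counts. This is not how GPT-3 actually works internally (GPT-3 uses a very large neural network trained on vast amounts of text); the tiny sample sentence and function names are invented purely for illustration of the basic principle of picking the most statistically likely next word.

```python
from collections import Counter, defaultdict

def train_bigram_counts(text):
    """Count how often each word is followed by each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative corpus in which "fall" is usually followed by "down".
sample_text = "leaves fall down the children fall down prices fall down sharply"
counts = train_bigram_counts(sample_text)
print(predict_next(counts, "fall"))  # prints "down"
```

The program has no idea what “fall” or “down” mean; it simply reports which pairing it has seen most often, which is the heart of Professor Smith's point.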
In response to the examples of wrong answers posted on the internet, Sam Altman, co-founder and CEO of OpenAI, tweeted: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now. it's a preview of progress; we have lots of work to do on robustness and truthfulness.”
Being a work in progress, it should not be used to make important decisions in hiring, loan approvals, investments, health care advice, criminal sentencing or military operations. Another danger is that LLMs might worsen disinformation campaigns. If LLM-generated disinformation comes to dominate the internet, the text used to train future LLMs will itself be riddled with disinformation. Since LLMs string words together without regard for truth, this makes it ever more likely that the texts future LLMs generate will be false.
The US Association of National Advertisers chose “AI” as the Marketing Word of the Year in 2017. Professor Smith said: “One way to push back against the misimpression that computers are intelligent in any meaningful sense is to stop calling it artificial intelligence and, instead, use a more accurate label, such as faux intelligence or pseudo-intelligence.”
For our part, we propose that teachers give quizzes and exams that do not rely on mere memorization of facts and figures. Rather, the questions should call for critical thinking and the spotting of logical fallacies, and should follow the levels of cognitive questioning in Bloom's Taxonomy. Teachers should also check every part of a student's research paper, from topic choice to annotated bibliography to first draft. This should help allay the widespread fear that students will cheat on their exams and research papers.