
Despite the speed at which ChatGPT produces answers, the answers themselves have underlying problems that make ChatGPT impractical for students. The main one is that ChatGPT does not rely on the internet but on its language model. Unlike other AI-enabled assistants such as Siri or Alexa, which pull information from the internet, ChatGPT (GPT-4) depends on its training as a language model. GPT is trained on data only up to 2021, and its answers are generated by repeatedly selecting the "token" that most likely comes next according to that training. In simple terms, the answers are derived from a series of statistical guesses, and this leads to wrong answers, even if they seem true.
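To make the "series of guesses" idea concrete, here is a minimal sketch of greedy next-token selection. Everything in it is invented for illustration: the scores, vocabulary, and function names are hypothetical, and real models compute probabilities with learned neural networks rather than hard-coded rules.

```python
import math

def next_token_scores(context):
    # Hypothetical scores conditioned on the prompt; a real language model
    # computes these from billions of learned parameters.
    if context.endswith("The capital of Australia is"):
        return {"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.5}
    return {"<unk>": 0.0}

def softmax(scores):
    # Convert raw scores into a probability distribution over tokens.
    total = sum(math.exp(s) for s in scores.values())
    return {tok: math.exp(s) / total for tok, s in scores.items()}

def greedy_next_token(context):
    # Pick the single most probable token: a "best guess" based on
    # statistical patterns, not a looked-up fact.
    probs = softmax(next_token_scores(context))
    return max(probs, key=probs.get)

print(greedy_next_token("The capital of Australia is"))  # -> "Sydney" (wrong!)
```

In this toy example, "Sydney" wins because it co-occurs with "capital of Australia" more often in the imagined training data, even though the correct answer is Canberra; this is exactly the kind of plausible-but-wrong output described here.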

OpenAI, the developer of the assistant, already knows this and addresses it by warning users that the assistant may write answers that sound plausible but are actually wrong. This limitation of the assistant, hallucinating and blurring fact with fiction, is dangerous to students seeking correct information (e.g., medical advice or history). As mentioned earlier, the model is only trained on information up to 2021, which makes it difficult for students trying to learn new information, especially recent research. Alongside this, ChatGPT is also, by its nature, trained on user interactions, which means the assistant may learn information that users have deliberately coaxed into it for various reasons.
