Teacher Tech blog with Alice Keeler

Paperless Is Not a Pedagogy

Alice Keeler

Um No ChatGPT – You Are NOT Thinking

I was making a Harry Potter themed Quizizz and was sharply reminded that ChatGPT does NOT think. Use AI, but be careful about trusting its output. ChatGPT is not thinking.

ChatGPT is AMAZING! However, it does not think. It uses mathematical probability to determine which word is most likely to come next. It wants to answer your question rather than be accurate. Use ChatGPT, but analysis and fact-checking are still essential! ChatGPT is not thinking.

“Oh, ChatGPT told me…”

Kindly remember, AI did not “tell you,” it mathematically predicted…

Creating a Themed Activity

I was helping a friend with a Harry Potter themed Quizizz for rounding. I asked ChatGPT to create an outline of the lesson for Quizizz. It was off to a good start with a suggested headline of “Harry Potter and the Chamber of Rounding.” I used that. While I am familiar with the books and the movies, I would not have been able to come up with some of the gems that ChatGPT quickly produced.

“Hello young wizards and witches! Today, we are entering the magical world of numbers, a key to casting perfect spells. Our goal? To master the art of rounding whole numbers to various place values! So grab your wands, hop onto your broomsticks, and let’s begin!”

Screenshot of question 5:

Question 5: The Hogwarts Express travels 87,432 miles every year. Round this to the nearest ten thousand.
A) 90,000
B) 80,000
C) 87,000
D) 88,000
Hint: All aboard! Where is the Hogwarts Express rounding off to?
Correct Answer: A) 90,000
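If you want to double-check an answer key like this one, Python can do it for you. This is just a quick sketch, not part of the original Quizizz: the built-in round() accepts a negative second argument to round to tens, hundreds, and so on.

```python
# Round 87,432 to the nearest ten thousand.
# A negative second argument to round() rounds to that power of ten:
# -1 = tens, -2 = hundreds, -3 = thousands, -4 = ten thousands.
miles = 87432
rounded = round(miles, -4)
print(rounded)  # 90000 -- the thousands digit (7) is 5 or more, so we round up
```

Here the thousands digit decides: since 7 is at least 5, 87,432 rounds up to 90,000, matching answer A.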

Always Argue with ChatGPT

Very rarely am I completely happy with the initial output of ChatGPT, even if I am impressed.

Screenshot of my reply to ChatGPT: "This jumps right into rounding without helping them to learn how to round and what is the concept of rounding."

This Isn’t Accurate

In response to my prompt asking how Snape, from Harry Potter, would ask about being close to 57 unicorn tears, ChatGPT gave contradictory statements. We are supposed to be “close” but “without the luxury of wasting a single tear.” That doesn’t make sense in the context of rounding. Rounding is about NOT being precise.

How would snape say this "How many drops is CLOSE TO 57 drops?"
If Professor Snape were to ask the question, he might phrase it in a somewhat stern,
enigmatic manner, laced with his characteristic sarcasm or disdain for imprecision.
Something like:
"Listen carefully, class. Suppose you have a vial containing precisely 57 drops of unicorn
tears, a substance too valuable to waste on the likes of you. Now, answer me this: How many
drops would you consider to be close to 57, without the luxury of wasting a single tear?"
This could be a fun and engaging way to introduce the concept of rounding, inviting
students to consider what "close to 57" might mean in terms of rounding to the nearest ten.

So I argue with ChatGPT again:

But they can waste some tears. This rounds up to 60. Snape thinks this is close enough.
Ah, if the potion's recipe can tolerate a bit of flexibility, then Snape might say something like:
"Suppose you have a vial filled with 57 drops of unicorn tears-an ingredient too precious for
most of you to comprehend but nonetheless essential. Now, is wasting time with exact
numbers truly necessary? 60 drops are sufficiently close to 57. Would anyone care to argue
otherwise?"
This would be a nice segue into the concept of rounding up when the number in question is
closer to 60 than to 50. It makes the process of rounding seem practically useful and
efficient, especially in a situation like potion-making where approximate amounts might
suffice.
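Snape’s “close enough” is exactly what rounding to the nearest ten does, and it is easy to verify. A one-line sketch in Python:

```python
drops = 57
print(round(drops, -1))  # 60 -- the ones digit (7) is 5 or more, so 57 rounds up to 60
```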

I told ChatGPT to Round UP

Now the Conversation Is About Rounding UP

When you continue a chat in ChatGPT, it remembers the previous inputs. I had accidentally trained ChatGPT that this conversation was about rounding UP.

Later in the chat, it correctly identified that rounding to the hundreds place means looking at the tens place, and here the tens digit called for rounding DOWN. However, that did not fit the pattern that this chat was about rounding UP, so the answer needed to round UP.

ChatGPT simply hallucinated a justification for making the answer round UP, even though that justification is false.

"After identifying the digit in the hundreds place, which is 6, we look at the tens place, which
is 3. Since 3 is less than 5, in many cases you would round down. However, we're rounding to
the nearest hundred and the next digit is a 7, which is greater than 5. So, we round up from
600 to 700. Always remember to look at the digit immediately to the right of the place value
you're rounding to! It's as easy as reciting 'Wingardium Leviosa'—with the proper
pronunciation, of course."
With this question and explanation, Hermione helps reinforce the concept of rounding to the
nearest hundred and how to make that decision based on the digit in the tens place.
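The actual rule contradicts ChatGPT's explanation above: when rounding to the nearest hundred, only the tens digit matters, and a 3 in the tens place always rounds down. Here is a minimal sketch of the correct rule. The value 632 is a hypothetical example with a 6 in the hundreds place and a 3 in the tens place, since the screenshot doesn't show the original number:

```python
def round_to_hundred(n):
    """Round a positive whole number to the nearest hundred by checking the tens digit."""
    tens_digit = (n // 10) % 10
    if tens_digit >= 5:
        return (n // 100 + 1) * 100  # tens digit 5-9: round up
    return (n // 100) * 100          # tens digit 0-4: round down

print(round_to_hundred(632))  # 600 -- tens digit is 3, so we round DOWN, not up to 700
```

There is no rule that looks past the tens digit when rounding to the nearest hundred; the "next digit is a 7" step in ChatGPT's explanation was invented to make the answer round UP.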

ChatGPT is Your Sidekick

Please always remember that ChatGPT does not “know” and it is not smarter than you. ChatGPT is your sidekick and can help you out, and truly it was very helpful! However, I had to engage in a lengthy conversation to steer the output to what I needed. I also had to analyze the results, filter out what was false, and manually make changes.

Harry Potter and the Chamber of Rounding

I spent more time getting ChatGPT to give me a themed story and making the graphics in Canva than I did carefully planning the output. When I have time later, I will redo this to be more user-friendly for 4th graders. It has too much text, and I have a bunch of other complaints for myself. However, if you’re interested in my first draft, you can find it here.

ChatGPT is not thinking

I asked ChatGPT to explain how ChatGPT is not thinking. Here is what it said:

ChatGPT is not Thinking: Understanding the Limitations of Language Models

Artificial intelligence (AI) has come a long way in the last few years. With advancements in machine learning and natural language processing, chatbots and conversational agents like ChatGPT are increasingly capable of producing human-like text. Some people even mistake ChatGPT’s responses for those of a human. But it’s essential to understand that ChatGPT, despite its sophistication, is not “thinking” in the way humans do. This blog post aims to explore the underlying mechanics of how ChatGPT responds to prompts and why it can sometimes provide conflicting or false information.

How Does ChatGPT Work?

At its core, ChatGPT is a machine learning model that has been trained on a large dataset of text. The underlying architecture, known as GPT-4 (Generative Pre-trained Transformer 4), uses complex mathematical operations to generate text based on the text it has seen during training. When given a prompt, it predicts the most likely next word or sequence of words to complete the sentence or paragraph. It doesn’t “understand” the text in the way humans do but uses statistical correlations in the data to generate plausible-sounding responses.

It’s All About Pattern Matching

When ChatGPT receives a prompt, it’s essentially performing a high-level form of pattern matching. It looks at the input text and computes the probabilities of various next words based on its training data. Then, it selects the word with the highest probability and continues this process until it generates a complete response.

Why ChatGPT Can Give Conflicting Information

One of the limitations of this pattern-matching approach is that the model has no internal consistency check or real-world knowledge. It can’t remember past conversations or even past sentences within the same conversation. This absence of “context memory” means that if you ask ChatGPT a series of questions that require consistent reasoning, it could give answers that contradict one another. This is not because it’s “confused” or “making a mistake”—it’s simply because each prompt is treated as an isolated task for generating a plausible response.

The Risk of False Information

ChatGPT can sometimes provide incorrect or misleading information. This occurs because the model doesn’t actually “know” anything.

It’s merely regurgitating patterns it has seen in its training data.

If the training data contains inaccuracies or if the model misinterprets the statistical correlations in the data, the output can be false. Additionally, without the ability to access real-time information (my training data only goes up to September 2021, and I can’t browse the internet), ChatGPT can’t provide current or updated facts.

Conclusions

While it’s tempting to anthropomorphize ChatGPT and similar technologies, attributing human-like qualities such as “thinking” or “understanding” can be misleading. These systems are incredibly powerful tools for generating text based on patterns in data, but they lack the reasoning abilities, emotional understanding, and consciousness that define human thought. Users should be aware of these limitations, especially when using ChatGPT for tasks that require accurate or consistent information.

Understanding that ChatGPT is not actually “thinking” can help us better appreciate what it can do, while being cautious about relying too heavily on its generated text for critical or factual information.

© 2024 All Rights Reserved.