The ChatGPT Thread Wiki

This is a wiki of threads related to ChatGPT and Japanese learning.

Testing GPT-4.0 accuracy rate on Japanese language

Practicing Japanese with ChatGPT

ChatGPT is an INCREDIBLY powerful study tool which will revolutionise learning in longterm (with example)

Have you played with ChatGPT for Japanese learning?

Using ChatGPT to practice Japanese?

Using ChatGPT to create mnemonic devices

Using ChatGPT to make studying 1000x easier

GPT for vocabulary and grammar explanations

8 Likes

But why?

6 Likes

While each user has a different application, the threads kind of turn into the same thing. So I thought maybe having a central place to see what and why people are discussing ChatGPT might help people (like me specifically) avoid repeating replies already made in other threads, because I keep getting caught in the same loop.

9 Likes

Every time I use ChatGPT for something Japanese-related, I find errors. But my use is always to streamline things where the time saved is more than the time spent finding/fixing issues.

I’ll be among the first in line to note the deficiencies in using ChatGPT and other similar large language model technologies. Not just for Japanese, but for anything.

Yet, as a means of streamlining, there’s a lot of potential.

It’d be nice to see the various ways people are utilizing ChatGPT with Japanese and finding success, keeping in mind the flaws that may come with it.

9 Likes

I read somewhere recently a post by a university lecturer about how he set his undergrad class an assignment to get ChatGPT to write an essay for them, and then critically review said essay. To his surprise, every single member of the class found that ChatGPT was just flat-out making things up that were verifiably wrong - he was only expecting maybe half the class to find that.

Unfortunately, I can’t remember where I read that, and my search fu isn’t turning anything up. (Might have even been a post on Facebook consisting of a series of screenshots from Twitter.)

16 Likes

I assumed it was a roundabout way to let all the thread creators know what had already been posted without posting in their threads each time a new one was made.

6 Likes

That was my initial reason for making the thread, but when I put the links to the other posts in a separate thread I realized everyone was posting different applications, yet users generally respond in the same way. So I thought maybe making a wiki would help point out that each thread is not just about ChatGPT as a whole, and maybe sharpen comments to more appropriately address each OP’s use case.

EDIT: Maybe it can work both ways for posts and comments.

3 Likes

For anyone who's not familiar with this aspect of ChatGPT, here's a chat I had based on a question a relative asked ChatGPT.

5 Likes

This is completely wrong, isn’t it? I haven’t looked into how it works myself, but it has been described to me as a predictive text generator (or whatever it’s called). That means it guesses/predicts what the next word/sentence should be, and it actually has zero idea whether the information it gives is accurate or not. Actually, it probably has zero idea it is giving information at all. It is just writing text that it predicts fits the prompt you gave it.

I haven’t tested it myself. I know it is a far stronger/more useful chat bot than the ones you get on company websites, but I’ve been forced to use those enough that I wasn’t interested in starting, and then I learned that it doesn’t actually know anything, just predicts what would sound right.

But I’ll admit I can be a bit curmudgeonly about these things.

15 Likes

Considering it includes the qualifier “to the best of my knowledge”, is it wrong? Because what it knows is how to sound natural, so to the best of its knowledge… :slightly_smiling_face:

4 Likes

That feels like a technicality though. Sure to the best of the bot’s knowledge it provides accurate information, but when its knowledge is zero and its idea of what is accurate is zero, the statement might be correct but it is inaccurate in the impression it gives. It suggests the bot knows things, which it does not.

8 Likes

The problem with this is that it quickly becomes a profound philosophical question known as the Chinese room. It boils down to how you define “understanding”. That’s not really a technical argument at all, basically. Any AI, no matter how sophisticated, will always be a long series of simple rules and algorithms, so you’ll always be able to say that it just uses complex algos to “guess the next word/sentence”:

Searle’s thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally “understand” Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position “strong AI” and the latter “weak AI”.

If the machine can speak coherently about a topic, answer questions and expand on it, how can you say that it doesn’t understand it?

If this very comment was written by an AI instead of some flesh creature, would it matter? Does consciousness play a role? What’s consciousness anyway? And how do you know that I’m conscious? What’s understanding? What’s intuition? How do I know that I truly understand something? I don’t understand how my own brain works, but if that’s the case how can I be sure that my own cognitive process is reliable enough to self-validate? Maybe I’m wrong but it’s at such a fundamental level that I’m not even able to see that I’m wrong.

We all have blind spots on our retinas that are always in our peripheral field of view, yet we don’t see them because our brain compensates for them passively. Could there be other situations where our brains “fill in the blanks” and create knowledge that isn’t there, or prevent us from understanding things that are in plain sight?

Maybe understanding/consciousness spontaneously arises when a process that “guesses/predicts what the next word/sentence should be” becomes complex enough?

7 Likes

Eeeh, what? Not entirely sure how you got to that from my comment, but I wasn’t questioning understanding or not.

The problem I was trying to highlight was the fact that it does not know fact from fiction, and that it in fact makes things up. Meaning that while it can have a conversation with you, you can’t trust a thing it says to be correct, because it will assert fact and fiction with the same conviction.

Just like a human. (Which to me, makes ChatGPT utterly useless, because I’ll get as much fact and fiction while doing general web searches, but there I can pick sources that I know I can trust.)

But for language learning purposes, we can’t take anything it says as fact. It could say て-form is a way to make loanwords in Japanese or give the correct definition, and it would say both with the same conviction.

That is what I was getting at. Not whether it can be considered conscious, or have an understanding. I took one philosophy course in high school and realized I did not find that kind of philosophizing interesting. I don’t care if it is conscious or not, I care about what it can do/how it works, and whether it is useful. If someone wants to debate whether it truly understands something or not, more power to them. :woman_shrugging:


If this changes and ChatGPT becomes able to differentiate how accurate what it says is, adding “maybe”, “perhaps”, or “possibly” and using them correctly, or if some other improvement comes along, then I’d revisit my position on ChatGPT. But as it is now, my stance is as above.

6 Likes

I was mainly responding to this part of your original comment:

Presenting things this way is literally the “weak AI” side in the Chinese room thought experiment. It’s a philosophical argument, not a technical one.

Note that more practically I agree with you that ChatGPT shouldn’t be trusted for language study right now because it lies too much and too convincingly.

3 Likes

How is it philosophical when I am talking about its current technological limits? Considering it presents fact and fiction with the same conviction, and there seem to be no algorithms (or not good enough ones) to stop this from happening… why is that suddenly philosophy rather than technical?

I guess it could be both. But asserting that what I was talking about is philosophy and not technical is incorrect in my opinion.

1 Like

Not getting into the philosophy, but in a real technical sense these LLMs are literally generating their output one word at a time, as I understand it – you feed in the prompt, and it produces a word. Then you feed in “prompt + word 1” and it outputs word 2; feed in “prompt + word 1 + word 2” to get word 3; and on and on. This is a remarkably restricted way of doing things, since it means there’s no internal persistent memory as the sentence is generated, and that the AI is not considering the end of the sentence when it picks word 1. It’s easy to imagine AI designs that do work in a much more holistic manner and which do have persistent memory, and intuitively you might think that would be necessary to produce good natural language output (given that humans seem to work that way). The real surprise IMHO is that such a simple one-word-at-a-time model works so well.
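
(To make that loop concrete, here’s a rough Python sketch of the process; the “model” is just a made-up lookup table standing in for the neural network, and all the names are invented for illustration. Real LLMs work on sub-word tokens and sample from a probability distribution over a huge vocabulary, but the control flow is the same: the whole text so far goes in, exactly one token comes out.)

```python
# Toy stand-in for the network: the *entire* text so far is the input,
# and exactly one next token is the output.
TOY_MODEL = {
    "Once": " upon",
    "Once upon": " a",
    "Once upon a": " time",
    "Once upon a time": "<end>",
}

def next_token(context: str) -> str:
    """Hypothetical model call: map the whole context to a single next token."""
    return TOY_MODEL.get(context, "<end>")

def generate(prompt: str, max_tokens: int = 20) -> str:
    text = prompt
    for _ in range(max_tokens):
        token = next_token(text)   # feed in prompt + everything generated so far
        if token == "<end>":       # a stop token ends the generation
            break
        text += token              # append the single new token and loop again
    return text

print(generate("Once"))  # -> "Once upon a time"
```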

On “understanding” – I’m more or less in the “if it quacks like a duck it is a duck” school of ‘understanding’; but I’m happy to say that a system that will blithely hallucinate completely non-existent scholarly papers when you ask it for an argument with references has not in fact truly understood either the mountain of text it was trained on or the question it was asked.

9 Likes

It is incorrect in your opinion? So correctness is to some extent subjective?

Interesting, I wonder where this line of thinking could lead us…

I was merely asserting that you seemed to have a different opinion on what I said, so I added “in my opinion” to reflect this. Excuse my courtesy of allowing that just because I see things differently doesn’t mean you are wrong. Please get off my back now, sir. :slight_smile:

2 Likes

What you describe here seems more like a Markov chain generator; neural nets parse thousands and thousands of tokens as their input, and the models themselves embed a massive amount of knowledge.

I mean what you say may be true in a very simplified sense, but I think it’s just too simplistic to be useful as a mental model of how modern AIs function. Or to put it another way, if you’re willing to simplify neural nets to that extent, you could probably simplify the human brain to be the same thing.

In particular I want to point out that it’s simply impossible to generate English that way. Like, try it: write the start of a sentence, say 3 words, then hide the first one and give it to a different person to add the 4th word. Then hide the 2nd word and give it to another person to add one more word, etc… You’ll almost certainly never end up with a coherent, or even grammatically correct, sentence even though it’ll be generated by humans. Yet ChatGPT expresses itself in almost flawless English.
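
(For contrast, here’s a rough Python sketch of the kind of two-word-window generator that experiment simulates; the word table is made up purely for illustration. With only the last two words visible it can pick something locally plausible, but it has no memory of what the sentence was about, which is the failure mode described above. ChatGPT’s input window is thousands of tokens, not two, so this is not how it works.)

```python
import random

# Two-word-window generator: each new word is picked from only the
# previous two words; everything earlier is invisible to it.
BIGRAM_TABLE = {
    ("I", "went"): ["to", "back", "away"],
    ("went", "to"): ["the", "school", "sleep"],
    ("to", "the"): ["store", "station", "moon"],
    ("the", "store"): ["yesterday", "again", "because"],
}

def continue_sentence(first: str, second: str, length: int = 8) -> str:
    words = [first, second]
    for _ in range(length):
        options = BIGRAM_TABLE.get((words[-2], words[-1]))
        if not options:                       # nothing learned for this window
            break
        words.append(random.choice(options))  # chosen with no memory of the rest
    return " ".join(words)

print(continue_sentence("I", "went"))  # e.g. "I went to the moon" or "I went to the store because"
```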

That’s why I think we can’t escape this notion of consciousness/understanding. I think what we all “feel” is that true intelligence is to be able to critically evaluate our own output and our own thought process recursively, identify unknown unknowns and contradictions, and it feels like AI currently lacks this “higher brain”. But where is the limitation exactly? Is it just because it lacks processing power or is there some actual “module” that’s missing? And what would that be?

We live in interesting times.

2 Likes

No, LLMs really are word-at-a-time (strictly, token at a time). They take the whole prompt-plus-output-so-far as their input, yes, but the output is always just a single “next word”. Here’s Stephen Wolfram:

And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word. (More precisely, as I’ll explain, it’s adding a “token”, which could be just a part of a word, which is why it can sometimes “make up new words”.)

I understand your scepticism – as I say, it’s very surprising that this works.

This is true, but that’s because your input window is only 2 words. ChatGPT has an input window thousands of tokens long. If you asked humans to generate a sentence with that much prior context available, they would be able to produce grammatical output too, even under a word-at-a-time constraint.

10 Likes