The ChatGPT Thread Wiki

Think of a language like French or Russian, where words agree grammatically with each other and the form of a determiner or an adjective necessarily constrains the words that follow. Writing these languages with no clear idea of where you’re going is extremely hard and often results in extremely unnatural output, or incoherent language that “jumps around” from one concept to another just to keep the sentences grammatical. In fact, when I write in these languages myself, I often find myself backtracking because I realize I wrote myself into a corner, for instance by not using the right gender. ChatGPT can write these languages quasi-perfectly and naturally.

Stephen Wolfram oversimplifies things when he says that ChatGPT works one word at a time. He hides this complexity here (emphasis mine):

So let’s say we’ve got the text “*The best thing about AI is its ability to*”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense “match in meaning”.

That’s pretty hand-wavey. Isn’t “matching things in meaning” what I’m doing while I write this text?

Later in the same article Wolfram says:

If you had a big enough neural net then, yes, you might be able to do whatever humans can readily do. But you wouldn’t capture what the natural world in general can do—or that the tools that we’ve fashioned from the natural world can do. And it’s the use of those tools—both practical and conceptual—that have allowed us in recent centuries to transcend the boundaries of what’s accessible to “pure unaided human thought”, and capture for human purposes more of what’s out there in the physical and computational universe.

Basically, if I interpret this correctly, he’s saying that the main limitation of neural nets is that they can’t create or use external tools. AIs can’t (currently) build other AIs, for instance, or modify themselves.

And a few paragraphs before that:

(For ChatGPT as it currently is, the situation is actually much more extreme, because the neural net used to generate each token of output is a pure “feed-forward” network, without loops, and therefore has no ability to do any kind of computation with nontrivial “control flow”.)

I think in the end that’s the part we’re all talking about: this “feed-forward” network that cannot really reflect on what it’s outputting. I just feel that explaining this by saying that “ChatGPT only generates one word at a time” is needlessly reductive and doesn’t explain the current limitations of the model well.
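For what it’s worth, the one-token-at-a-time control flow itself is easy to sketch. Here’s a toy in Python; the bigram table and the greedy `next_token` function are stand-ins I made up for illustration (a real model replaces them with a neural net scoring every token in its vocabulary), but the loop shape is the point: each step is a single forward pass over the text so far, the chosen token is appended, and nothing already emitted is ever revised.

```python
# Toy sketch of autoregressive (one-token-at-a-time) generation.
# The "model" here is a made-up bigram table, not a neural net; what
# matters is the control flow: pick a token from the context so far,
# append it, repeat. Earlier output is never revisited.

BIGRAMS = {
    "the":   {"best": 0.6, "cat": 0.4},
    "best":  {"thing": 1.0},
    "thing": {"about": 1.0},
    "about": {"AI": 1.0},
    "AI":    {"is": 1.0},
}

def next_token(context):
    """Pick the most likely continuation of the last token (greedy decoding)."""
    candidates = BIGRAMS.get(context[-1], {})
    if not candidates:
        return None  # no known continuation: stop generating
    return max(candidates, key=candidates.get)

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok is None:
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the"))  # → "the best thing about AI is"
```

Sampling from the distribution instead of always taking the max is what gives ChatGPT its variety, but the one-token loop is the same.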


You can say “I don’t see how one-word-at-a-time output would work” at great length if you like, but it doesn’t change the fact that that really is how ChatGPT works…


I think it’s worth mentioning that GPT-4 (the paid version) is supposed to be significantly more accurate than 3.5, and that matches what I’ve experienced since subscribing recently; has anyone else tried it? If I’m not wrong, GPT-4 blunders about 82% less than 3.5 (which doesn’t translate directly into +80% accuracy, but still). I agree with @pm215 's opinion about the one-word-at-a-time limitations, but in the end what matters most is how accurate the information is, right? And the producers predict they’ll lower the “hallucination” rate, as they call the AI’s blunders, to under 10% for 5.0. That would be interesting, maybe a starting point.

Anyway, I wanted to try inputting @ChristopherFritz 's question (posed, I suppose, to 3.5) into 4.0.
This is its answer:

As an AI trained by OpenAI, I don’t have real-time access to the internet or specific databases, and my training data only goes up until September 2021. Up until that point, Honda was known to use brake calipers from different manufacturers, including Nissin and Akebono, among others. However, I couldn’t find any reference to “Farnsworth” as a brake caliper manufacturer.
For identifying calipers, you could try the following general steps, keeping in mind that these methods may vary based on the manufacturer and the specific model of the car:
1. Visual Inspection: You might be able to see the manufacturer’s name or logo on the caliper itself. This is often the case with performance brake calipers, but it’s less common on standard parts.
2. Part Number Lookup: You can usually find a part number on the caliper. Once you have that, you can look up the part number to identify the manufacturer. This might require reaching out to Honda or a parts distributor for assistance.
3. Vehicle Documentation: Your vehicle’s documentation (like the owner’s manual, a service manual, or the original window sticker) might have information about the manufacturer of the brake components.
Keep in mind that the specifics can vary based on the make, model, and year of your Honda. It’s also possible that the calipers have been replaced at some point, in which case they may not be from the original manufacturer.
For the most accurate information, I would recommend reaching out to a Honda dealer or a trusted auto mechanic with your vehicle’s details.

Here I’m using it without plugin support, and that can make a big difference, because plugins give GPT access to up-to-date scientific papers, video-to-text transcription and analysis across platforms (YouTube included), chat-with-URL/PDF/anything functions, etc.

PS: this doesn’t make it perfect; it makes it more accurate. Or more convincing. I’m stupid, after all.


So, it seems they’ve invented a press secretary.


Fortunately, ChatGPT fails the Turing test right at step one: if you ask it about the one thing it knows to be true, whether it’s an AI or a human, it refuses to lie.

Unless it’s a human who’s trying to trick us into thinking it’s an AI…


Or an AI trying to trick us into thinking it’s a human…


Well, no, that’s how it would pass the Turing test.


Maybe it’s me who’s unhelpfully splitting hairs, I won’t derail this discussion further.

In practical terms it remains that ChatGPT is too much of a mythomaniac at the moment to be blindly trusted for language studies (or anything else for that matter). I think there’s huge potential for the future though.


Derail away. This thread is just a wiki for ChatGPT threads lol. We can discuss anything here.


Can anyone check if GPT-4 got this right? I asked:

Can the word 味 (あじ), which means “taste” or “flavor”, also be used in a figurative sense like in Italian? For example, “taste in fashion”.

and got


In general, the word “味” (あじ, aji) in Japanese is used to indicate the “taste” or “flavor” of something in a literal sense. However, it can also be used in a more figurative or extended sense to describe the essence or quality of something.
For example, “人生の味” (jinsei no aji) can be translated as “the taste of life”, which is a figurative way of describing the experience or essence of life.
However, to express the concept of “taste in fashion” in Japanese, you would more commonly use the word “センス” (sensu), which is derived from the English “sense” and refers to a person’s sense of taste or style.
So, for “taste in fashion”, you might say “ファッションのセンス” (fasshon no sensu).
I hope this explanation helps you better understand the use of “味” in Japanese. If you have any other questions, please don’t hesitate to ask!

edit: I’ll be playing with it for some time and will ask more experienced JP learners than me to check whether the AI’s answer makes sense; if there’s a better thread to post this in, please suggest it.


yeah, it’s right


Someone may find it interesting (source).
Nice to see that Italian comes straight after English.



Heh. For random reasons, I asked Chatty to generate a list of modern-day Western names, ten male and ten female, and it gave me this:



Jackson Anderson
Liam Carter
Ethan Davis
Mason Edwards
Noah Foster
Benjamin Grayson
William Harrison
James Ingram
Alexander Jensen
Samuel Kelly


Olivia Adams
Emma Bennett
Ava Collins
Sophia Diaz
Mia Evans
Isabella Fisher
Charlotte Graham
Amelia Hayes
Harper Jackson
Evelyn King

I happened to notice that both lists have surnames that each start with successive letters of the alphabet, save that the male names omit B while the female names omit I, so I asked it why it was following that pattern, but it replied that there was no pattern, it was random, even doubling down when I questioned the logic. My friend, the odds of that happening by random chance are one in 26^20, which is about 1.99×10^28, or roughly 19.9 octillion.
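A quick back-of-the-envelope check of those odds in Python (assuming, for illustration, that each of the 20 surname initials were drawn independently and uniformly from the 26 letters, so that any one specific sequence has probability 1/26^20):

```python
# Probability that 20 independently, uniformly random surname initials
# land on one specific sequence of letters: 1 in 26**20.
odds = 26 ** 20
print(f"1 in {odds:.4e}")  # on the order of 1.99e28, i.e. ~19.9 octillion
```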


If someone shares an experience with an LLM, I’d suggest pointing out which version of the model was used (and possibly the prompt as well), since the hallucination rate varies wildly, and this may feed the common belief that it’s currently totally useless, which is not always the case :smile:

So you’re saying there’s a chance?


Not the Pratchett-like one in a million, though :frowning: . So not 100%.


This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.