Testing GPT-4’s accuracy on Japanese

That was a while back. I don’t have the text, nor is it guaranteed to behave the same way now, since DeepL is being continuously improved.

1 Like

Something weird just happened again. I’m at work and I’m using GPT-4 to brainstorm how to improve the organization and structure of my workload. I just tell it in detail how I organize my work, omitting sensitive information, and then correct the bullshit it suggests. It’s actually useful: while its answers are about 80% redundant and it occasionally blunders, it came up with some nice suggestions a couple of times. Its suggestions are also completely bound by the information I provided, though, and it cannot come up with novel creative solutions, which is not a surprise at all.

Anyway, it started outputting its answer to one of my questions. It stopped midway through and said “I apologize for the interruption, I will continue where I left off” hahahah! Why is this happening?

1 Like

Hmmm, what do you mean by novel creative solutions? AFAIK, that’s where ChatGPT shines in general, because it creates novel text based on learned inputs.

1 Like

I’m aware and I agree, but here’s what I think: it’s one thing to produce text, and another to produce text that presents a solution directly applicable to a concrete problem. It actually did that a couple of times, as I mentioned, and I was surprised to realize the suggestions were valid, but I see two main obstacles. First, my description of a complex problem is going to be imperfect and omit many important details involuntarily (but also for the sake of time), and the more complex the interaction, the more the technical limitations come into play. Second, its output will inevitably be logically constrained by the information it was trained on, which is why I believe the biggest utility of AI today, other than automating very low-complexity tasks, is bias recognition and brainstorming in general. It’s great to go through its suggestions and think “that’s wrong… that’s also wrong… that’s obvious… that’s true… oh wait, since that is wrong, that other thing may be true”, if you get what I mean.

1 Like

Is this part not coincidental? If it gets enough input with valid solutions, it would be able to provide valid or at least somewhat plausible solutions. GPT-3 was able to do that as well, according to one of my colleagues.

This part is definitely true. As a brainstorming tool ChatGPT is pretty decent :slight_smile: . It definitely is able to give one some ideas.

2 Likes

Another specific thing I found it useful for: I recently took over from someone else at my job, and that guy left a mess; everything was totally disorganized. He also handed over tons of random know-how, since, organization aside, he had been doing the job for many years. I wrote down a lot, really a lot, of stuff in a notepad and asked GPT to organize it in a tutorial/walkthrough fashion, and it actually produced a very nice document. I put some effort into making it understand exactly what writing style I wanted, but the content was surprisingly coherent with the original. I wonder how the hell it understands how to put things in the right order of importance.

1 Like

It doesn’t :smiley: . Yours was just a fairly regular use case - assembling a coherent doc from notes.

I’m just saying I’m impressed with the result, because it efficiently reordered parts of those notes in a logical way, even though the order wasn’t obvious from context.

Also, forgive me for using the banned word “understands”; I’m fully aware it doesn’t.

1 Like

Hahaha, it’s not banned. It just seemed from your phrasing as if you changed your mind on LLMs.

1 Like

I’m in fact constantly changing my opinion on things; only fools are sure of their own beliefs. Being certain is exactly as foolish as the information one possesses is imperfect.

2 Likes

I think there is a vast logical space between those two extremes. One can change their mind without being immediately impressionable.

2 Likes

I agree, though I think it’d be better not to frame it as impressionability, which can easily come from emotion-led bias (e.g. what receives the most appreciation in social interactions is often what most successfully leverages the biases underlying misbeliefs). In the end, if you’re logical enough and careful about emotions overtaking rationality (not that the two are mutually exclusive, but we know humans are not good at statistical thinking), anywhere on the certainty-doubt continuum may be a valid position, with the constant exception of complete certainty.
Does this make sense?

1 Like

A quick comparison of Japanese-to-English translation by ChatGPT-4, Bing, Bard and DeepL.
Quite impressed by Bard; it was a complete disaster at launch, but it looks like the latest versions are getting closer and closer to GPT-4.

Btw, ChatGPT just launched custom instructions.

It’s basically exposing the system role from the API (the system-level instruction that guides the model’s behavior throughout the conversation). I wonder what’s the best way to tune this system instruction for a Japanese learner. :thinking:
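For anyone curious what that maps to on the API side, here’s a minimal sketch using the system role. The instruction text is just something I threw together for a Japanese learner, and the setup assumes the current openai Python package with an API key in the environment:

```python
# Minimal sketch: custom instructions roughly correspond to the "system"
# message sent at the start of every conversation. The instruction text
# below is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_instruction = (
    "You are a tutor for an intermediate Japanese learner. "
    "Answer in English, but give every example in Japanese with "
    "a reading and a literal gloss. When correcting a sentence, "
    "name the grammar point that was misused and show the correct pattern."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": "「食べれる」と「食べられる」はどう違いますか？"},
    ],
)
print(response.choices[0].message.content)
```

Presumably the custom instructions feature does something like this under the hood, just persisted across chats instead of being re-sent by hand.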

2 Likes

Interesting, I wonder how they’re training it.

As for custom instructions for GPT-4: definitely handy, and I’m looking forward to trying it, but in the end I guess it’s the same as manually prepending your instructions to every conversation.

1 Like

Does anyone know if this is true or made up?
(Yes, I’m doing my reviews haha)

It’s ChatGPT. Take a random guess.

The true etymology is that it comes from the idiom 皮肉骨髄 - literally “skin, flesh, bones, marrow” - but figuratively it refers to the stages of understanding a matter, from skin-deep to getting right to the marrow of it. This led to 皮肉 gaining the meaning of “having only a superficial understanding”, which by some etymological magic turned into “irony”, but I’m not entirely clear on how that step came about.

(One Redditor pointed out that, coincidentally enough, the English word “sarcasm” has a similar origin - it comes from the Greek word “sarkazein” meaning “tear flesh”.)

5 Likes

Interesting, thanks for the explanation.

Well, if you check the previous posts here, I shared a bunch of GPT grammar explanations, and based on user feedback it was right roughly 85% of the time - 8-9 times out of ten…
But I see that this is not exactly the same as asking specific grammar questions in context, and I hadn’t primed it beforehand.

“Sarco-” is also the prefix used in biology and medicine for things referring to muscle (e.g. sarcomere, sarcoplasm).

I don’t think it makes sense to ask ChatGPT things one can Google on their own, where the answers are far more likely to be correct.

3 Likes

Did you forget this is a thread made to evaluate GPT accuracy haha

If you’re having fun, that’s all that matters :slight_smile:

1 Like