Testing GPT-4's accuracy on the Japanese language

Having an interest in machine learning, I looked through quite a few posts in the relevant subreddit a while back, and a lot of people in the industry were saying a master's is highly recommended, and that people usually move into the position internally within a single company rather than applying for an ML engineer role and getting hired directly from outside right out of college. Are you looking more into research and development or application? The former will require a lot more math, I imagine, but I can't really give much more info on either of them because I just don't know.

It can change, as you saw me nudge it during training. Biases are just optional, in the sense that you can choose not to include them and operate only with weights. The cases where you would not want biases are a bit complicated, but they exist. Every standard network in use today has weights, as far as I know. There are weightless networks as well, iirc, but they require a ton of memory and I don't really know anything about them lol.
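
To make the "optional" part concrete, here's a minimal PyTorch sketch (my own toy example, just to illustrate): the bias is literally a flag you can switch off on a layer.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 4)  # one input sample with 4 features

# Standard layer: output = x @ W.T + b (weights AND a learnable bias vector)
with_bias = nn.Linear(in_features=4, out_features=3, bias=True)

# Same layer with the bias switched off: output = x @ W.T only
without_bias = nn.Linear(in_features=4, out_features=3, bias=False)

print(with_bias.bias)         # a learnable tensor with 3 entries
print(without_bias.bias)      # None, the parameter simply doesn't exist
print(with_bias(x).shape)     # torch.Size([1, 3])
print(without_bias(x).shape)  # torch.Size([1, 3]) as well
```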

4 Likes

Hmmm, I think it would have value in the sense that you can kinda feel out what you're getting yourself into. ML is a deceptively iterative and creative process, from my small experience with it. I kinda had this image of people doing some crazy math and hard wiring, then training a network and boom, it's smart. In reality, it's a lot of experimentation, and that surprised me. At higher levels the nature may change, though. Ilya Sutskever said that a lot of what he does is more about trying to understand why the model develops the way it does and how it's trying to learn, because when you understand the why and how, it's a lot easier to know what you can do to help it out, or so I hear.

Point is, I think if you're going to devote the time, it would be helpful to see more of what it actually is in the first place. So yeah, maybe a bit of programming or CS would be good right now. You're on the user side of things at the moment, so taking a step over to the developer side and seeing how you like the change of scenery might be a good idea. Python is definitely the easiest language I've had to learn (between that, Java, JS, and C++), and it happens to be good for ML (I use PyTorch, personally), so honestly getting started with a little Python might not be a bad idea. Even general concepts like lists/arrays will be very helpful when you get into tensors and their shapes/dimensions, since they're basically just lists lol.
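
For example, here's a tiny PyTorch sketch (my own, just to show the idea) of how a nested list maps onto a tensor and its shape:

```python
import torch

# A plain nested Python list: 2 "rows" of 3 numbers each
nested = [[1, 2, 3],
          [4, 5, 6]]

t = torch.tensor(nested)  # the same data as a tensor
print(t.shape)   # torch.Size([2, 3]): the shape just records the nesting lengths
print(t.dim())   # 2, because the list is nested two levels deep
print(t[1][2])   # tensor(6), indexing works just like the list
```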

Error in line 2:
Less x2 < is not a recognized action

You got your first compiler error! Try changing it so that you’re ordering me to do something rather than just evaluating which one is bigger. After all, you never told me to do anything :). Try using words like “swap” and “3rd number” or “5th number”

4 Likes

Nice, your opinion on things is always welcome! (on everything, also feel free to express your interpretation of the meaning of life if you want)

I’m reading the other messages as soon as work allows me to!

1 Like

I mean, yeah.

That’s more or less what we learned in CS.

Except some of the stuff was only surface level, while other parts went very deep.

We did very little hardware stuff, like even hardware theory wasn’t that important at that level.

I couldn’t tell you from experience, but from what I’ve heard, it does sound like it, yes.

I wouldn’t say there’s that many complex concepts until you get into your topic of interest.

The problem with self-studying is knowing what to study, like even which materials to use and stuff, so good luck on that.

Like which subjects? I guess graphing, modeling, and even programming 3D stuff, but…

Well, you do need them to understand calculus, but what do you need to understand calculus for, practically speaking?

We had calculus classes, but it's not like other engineering fields where you actually apply it all the time. There's certainly an overlap between math and CS; it just depends how much you want them to overlap.

Yeah, that’s pretty spot-on. It’s also not that complex, imo. :rofl: At least back then, and at the level we were at.

That does sound more like you’re interested in academics, which is fair.

I was mostly interested in the logic and problem-solving, which is why I ended up not pursuing it as a career, cuz I just liked the puzzles. :sweat_smile:

But all the theory stuff is pretty interesting.

I agree with Iinchou. :+1:

3 Likes

Out of curiosity, did you mention to ChatGPT that you’re interested in machine learning and/or AI? What was the exact prompt?

Calculus maybe not, but getting deeper than surface-level into 3D graphics and related algorithms requires a boatload of math, especially when it comes to vector and matrix operations. Which are not easy to follow, I think :sweat_smile:
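
For instance, here's a toy sketch (NumPy, my own example) of the kind of matrix-times-vector operation everything in 3D graphics is built on: rotating a point.

```python
import numpy as np

theta = np.radians(90)  # rotate by 90 degrees

# A 2D rotation matrix; 3D graphics chains lots of these (plus translations, projections, ...)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([1.0, 0.0])  # a point sitting on the x-axis
print(R @ p)              # approximately [0, 1]: the point ends up on the y-axis
```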

From experience, understanding the basics of calculus doesn't require trigonometry, but applying it in real life does: the moment one has to calculate surfaces in 2D (and possibly 3D?) space. But that was in high school, I don't remember much of it, and I've never needed calculus since.
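
One example of what I mean (the standard area-of-a-circle integral, just to illustrate): setting it up is plain calculus, but actually evaluating it goes straight through a trig substitution.

```latex
% Area of a circle of radius r: the integral is basic calculus,
% but evaluating it needs the substitution x = r*sin(theta).
A = \int_{-r}^{r} 2\sqrt{r^{2} - x^{2}}\,\mathrm{d}x
  = \int_{-\pi/2}^{\pi/2} 2r^{2}\cos^{2}\theta \,\mathrm{d}\theta
  = \pi r^{2}
```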

3 Likes

Ideally it's research. I've dreamed about doing scientific research since I was 5 or 6 years old, and at any moment of my life I would have dropped everything to pursue that dream. It just unfortunately never happened, for reasons I couldn't control.
So if you ask me, as ridiculous as it may sound coming from someone who hasn't even started, I'd like to do research.

I’ve heard that he’s brilliant

So my original guess of "we don't truly know what happens inside an AI model" was right? :joy:

The main concern about AI development today is that interpretability is being outpaced by capability, and the gap is expected to increase exponentially, or so I hear.

I forgot to mention, but I learnt the basics of Python when I was 16. I stopped studying over summer vacation and never got back to it; I've forgotten literally everything about it except that I loved it!

I will follow your advice and do math + CS basics at the same time, but I will do a math-only month first, because I feel like it will need some structure and a good start.

Oh right :nerd_face: is the "3rd number" relative to its starting position or to the beginning of the scale (1)? In the first case I suppose I have to specify direction.

1 Like

Now I’m curious about how much math

But the math list (beyond the mere basics needed to approach CS) does contain the mathematics proper to a university CS curriculum, right? There is no math in Phase 2; I suppose that's because everything was put into Phase 1 and Phase 2 only has the remaining subjects?

I'm aware, but I suppose it's not an insurmountable obstacle, with all the MOOCs out there.

I don’t know :sob:

I want them to overlap to the extent necessary to reach my aforementioned goals.

I don't remember exactly. I prompted it to provide something like "a study plan that, assuming zero pre-existing knowledge of mathematics, probability, statistics and logic, lists all the topics required to study and effectively comprehend CS, LLMs and AI in general", but I didn't mention machine learning. I asked it to "ensure that to study a concept in the list I do not need knowledge that is acquired by studying another concept that comes later in the list" and to "ensure that there are no superfluous concepts (it never removed anything, and I guess that's a difficult request to satisfy) and that no important concepts are missing from any part of the list".
I iterated these prompts and cross-checked phases 1&2, 2&3 and 1&3. But I'd like to know whether anything I want to study is missing from phase 3, so I'll probably test this with something like "given my general interest in the fields of machine learning, artificial intelligence and large language models, is there any additional concept that I may want to study, for any reason, that is not present in phase 3?"

1 Like

Can you ask it why?

1 Like

One thing to remember about ChatGPT is that it might not fully understand the conditions in your request, because in the end it's just a language model. Given the conditions above, I would be totally lost, and it would take me a lot of thinking to compile such a list, so it might've just generated a plausible-looking CS curriculum. For instance, to understand LLMs and AI (a branch of machine learning, in the end), you don't need Java, most of C++, and definitely neither web development nor network architectures.

1 Like

I did, and the answer was quite convincing actually… I did this 4 days ago and have accidentally deleted the chat since then, but this afternoon I'm going to do it again, and with your assistance we can improve the process.

1 Like

I was thinking about it just now: perhaps a prompt in the form of a list of commands/requirements would work better?
Like:

Create a study plan according to the following rules:

  1. Zero knowledge bla bla
  2. Goal is X
  3. Propaedeutic ordering bla bla
  4. Not lacking anything
  5. Anything

And perhaps there are more specific checks I can run to make sure it meets my requirements, like cross-checking individual elements of phase 3 against the rest. I expect that to give results, because it's a more specific question.

I mean, you can try different things, but you need to be able to verify that the output it gives actually makes sense and isn't just plausible-sounding.

2 Likes

There's a very interesting feature of the Prompt Perfect plug-in. Normally you use it by prompting "perfect + " and it already gives nice results (even though I believe they aren't the most efficient possible). With that concern in mind I asked it to change its result slightly, and realized that you can communicate directly with the plug-in and give it directions to improve the resulting prompt. I suggest trying that; it's enough to ask for changes to the resulting prompt right after using the plug-in.

I don't think it's ridiculous to want to do something, really. I'm more the kind of person to judge people by their effort in these sorts of cases. I've gotten small glimpses of what truly determined humans are capable of in my journey studying Japanese, and after seeing that sort of potential I stopped thinking that there are ridiculous dreams. Just people who didn't want it enough.

He is a very intelligent man in the deep learning world.

It depends on what your definition is and what specifically we're talking about. For example, for convnets it's very easy to check the activation maps of the final layers, or see which parts of the network activate most strongly on certain patterns. We can also look at all of their weights. We can check every kernel (think of it like a filter) and see what it's outputting and how it's changing the images.
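
Just to show what I mean by "checking the kernels and activation maps", here's a rough sketch (using torchvision's ResNet-18 purely as a stand-in convnet; the layer names come from that model):

```python
import torch
import torchvision.models as models

# A stand-in convnet; any CNN works the same way
model = models.resnet18(weights=None)
model.eval()

# 1) The kernels ("filters") are just weight tensors we can inspect directly
print(model.conv1.weight.shape)  # torch.Size([64, 3, 7, 7]): 64 filters over 3 input channels

# 2) Activation maps: capture what a layer outputs for a given image with a forward hook
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer4.register_forward_hook(save_activation("layer4"))

dummy_image = torch.randn(1, 3, 224, 224)  # stand-in for a real image
with torch.no_grad():
    model(dummy_image)

print(activations["layer4"].shape)  # torch.Size([1, 512, 7, 7]): 512 feature maps we can visualize
```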

The tricky thing is that we don't really know exactly why it extracts certain features, or even what kind of features it will extract, before we start training. We have an idea, but we can't predict what patterns the network will learn. We, as humans, tend to want to imbue things with knowledge like they're our children, but ML is very different. For CNNs we just define a cost function that basically tells our model how wrong it was, and give it a certain structure. We don't explicitly teach it "to recognize Tim's face, make sure the person has brown hair of this exact color." We let the model figure out for itself which patterns it needs to look for to be right most often. The problem is that the patterns it finds come in the form of all the weights and biases of the network, and those aren't interpretable on their own.
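
A minimal sketch of what that looks like in practice (a toy PyTorch setup I made up, not any real face classifier): all we hand the model is a structure, a cost function, and examples, never the patterns themselves.

```python
import torch
import torch.nn as nn

# A toy image classifier: we only give it a structure...
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # 2 classes, e.g. "Tim" vs "not Tim"
)

# ...and a cost function that only says "this is how wrong you were"
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(4, 3, 32, 32)   # fake batch of 4 tiny images
labels = torch.tensor([0, 1, 1, 0])  # fake ground-truth labels

logits = model(images)
loss = loss_fn(logits, labels)  # nowhere do we say "look for brown hair"
loss.backward()                 # compute how each weight/bias contributed to the error
optimizer.step()                # nudge the weights/biases to be a little less wrong
```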

I'm not sure about LLMs, but I imagine it's a similar case. Clearly we can understand them on some level, especially since Ilya says that's what he spends his time doing, but it seems like there is a lot of stuff we don't 100% get.

Well, not just interpretability; the main issue is alignment. There have been theoretical ideas discussed for proving alignment without needing to fully understand how the models work. If we could ensure alignment, interpretability wouldn't be something to be concerned about in and of itself. It's just that people tend to equate the two, since that makes the most sense. There are a whole bunch of issues that come even with an aligned superintelligent AI, though. That's a deep rabbit hole, and I gotta get to my studying lol.

Relative to starting position. I need to know which numbers to swap, after all. Telling me their position would help with that.

4 Likes

How does the "perfect +" impact the results? Is it just that the model adheres more closely to your prompt?

1 Like

Are you role-playing as a computer? :eyes:

3 Likes

beep boop bop >:)

4 Likes

I believe this is where the problem lies, and I imagine that understanding this is why many experts are pushing to slow down capability growth, so that interpretability has more time to catch up.

According to some, that is exactly where the risk of human extinction lies :joy:

Yeah, and not only that, because first you need to make sure that the machine wants what you want, and then you have to make sure that you want the right thing…

I see this differently, and I don't know if I'm wrong because I'm missing some fundamental technical concept, but logically, you cannot be sure you're not being tricked into believing it is aligned without perfect interpretability. Your thoughts?

Assuming I don't have to specify "do nothing", then I'd make it "swap to #2; swap to #3; swap to #4; swap to #5" (# = position).

Hard mode - come up with a set of instructions that works regardless of which particular numbers are in the list :slight_smile:
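
(Spoiler, in case it helps: the general recipe is basically bubble sort, i.e. keep swapping out-of-order neighbours until a full pass needs no swaps. A rough Python sketch of the same instructions:)

```python
def sort_by_swaps(numbers):
    """Bubble sort: the swap instructions spelled out so they work for any list."""
    n = len(numbers)
    swapped = True
    while swapped:            # keep making passes until a full pass needs no swaps
        swapped = False
        for i in range(n - 1):
            if numbers[i] > numbers[i + 1]:
                # "swap the number at position i with the one at position i + 1"
                numbers[i], numbers[i + 1] = numbers[i + 1], numbers[i]
                swapped = True
    return numbers

print(sort_by_swaps([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```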

2 Likes