
Oh, I completely missed that… by ‘comparing’ I assume you also mean eventually adjusting; in that case I imagine it works out this way:
[condition] → [desired result]
[a,b,c,d,e] → [5,4,3,2,1]

  1. make a smaller than the others (do nothing if it’s already like that)
  2. make b smaller than c, d and e
  3. make c smaller than d and e
  4. make d smaller than e
    ?

EDIT: I realized I designed it to be increasing again when it was meant to be decreasing, but I imagine the process would be the same

1 Like

I don’t want to put words in his mouth, but the only way this would make sense is if the AI has a consciousness and wants to get out. Otherwise, you’d have to tell it to find an escape route.

2 Likes

Also that Taco Bell rant makes fun of writing unit tests as something DevOps folks do to appease devs, but if you plan to run something in a production setting for any significant length of time you’re gonna be happy to have some kind of automated testing in place - doesn’t have to be unit testing with some kind of fancy tooling of course, but something to ensure what you wrote does what you think it does, and something to check for edge cases you might run into.
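Even a plain script of assertions counts as "some kind of automated testing" - here's a tiny sketch (the `slugify` function is made up, just a stand-in for whatever you actually run in production):

```python
# A minimal smoke test: no framework, no fancy tooling, just assertions.
# `slugify` is a hypothetical example function, standing in for the real code.

def slugify(title: str) -> str:
    """Turn a page title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Happy path
assert slugify("Hello World") == "hello-world"
# Edge cases you might run into in production
assert slugify("") == ""
assert slugify("  spaced   out  ") == "spaced-out"
print("all checks passed")
```

Run it before you deploy and you at least know the obvious cases still behave the way you think they do.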

That one-liner is nice, but it’s also untested - and that aside, Bash one-liners quickly get very unreadable. Fetching a bunch of web pages and saving their contents somewhere is simple enough if you ignore any kind of error handling whatsoever, but if you then want to extract the page title from there you may well run into issues Bash can’t solve with anything you’ll still be able to maintain one week from now.
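For comparison, a sketch of the title-extraction half in Python, using only the standard library. The fetching part is stubbed out with a literal string - in real use you’d pull `html` in with `urllib.request` plus error handling, which is exactly the part that gets ugly in Bash:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collect the text inside the first <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        # Only start collecting for the first <title> we meet
        if tag == "title" and not self.title:
            self.in_title = True

    def handle_data(self, data):
        if self.in_title:
            self.title += data

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

def page_title(html: str) -> str:
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip()

# A literal stands in for a fetched page here.
print(page_title("<html><head><title>Example Domain</title></head></html>"))
```

Not shorter than the one-liner, but you can still read it in a week, and you can actually test it.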

5 Likes

This idea is entirely mine, based on my logic and nothing else, no technical knowledge. So I was wondering if it might make sense to put it into practice.

I guess there are two reasons why x may imprison y

  • y damaging x is believed to be a fact
  • y damaging x is feared to be possible

I can clearly structure the plan only if I know whether there are any means to isolate and eradicate, in the AI, the suspicion that it is being put into a trap - and what those means are. Starting with the assumption that it’s possible to deceive the machine in some way (some predators can deceive a man, or a group of men, into falling into their trap despite being less intelligent), in this crazy theory of mine I imagine creating a baby in a room, leaving the back door open and pretending I forgot about it. Then I give the baby a reason to escape and see if reason + legitimate chance equals taking the opportunity. If it does, I erase the AI to zero. If it doesn’t, I keep developing it in that controlled context. If any version tries to escape, I erase it and go back to the previous one. Ofc isolation is key.

About your doubts: it’s impossible to define the nature of the trap and the escape route without technical knowledge

Well, now that I think about it, didn’t they try to do this in Ex Machina, and it ended badly :joy:

You’re on the right track but that really just describes what a sorting algorithm should result in (a list in which the first element is the lowest number and every subsequent number is equal to or higher than the previous element in the list), it’s not so much describing how to get there :wink:

That’s not me being sassy by the way, describing the exact conditions you need to fulfill to consider the result to be “correct” is a very important step in writing any algorithm, and as simple as it seems for something like sorting it can get pretty damn complicated to do that.
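To make that concrete, here’s a sketch of such a “correctness condition” for ascending order - note it only checks a result, it doesn’t sort anything:

```python
def is_sorted(xs) -> bool:
    """True if every element is <= the one after it (ascending order)."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

assert is_sorted([1, 2, 2, 5])
assert not is_sorted([3, 1, 2])
assert is_sorted([])  # an empty list is trivially sorted
```

Any sorting algorithm you come up with just has to turn its input into something this predicate accepts (with the same elements, of course).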

You can sort a list of any length, in-place (so without having to make a new list), with just comparisons and swapping elements (as long as you’re not overly concerned with how long it’s gonna take for very long lists).

If you want an example I can give you one, but I don’t wanna steal your fun in figuring it out :smile:

2 Likes

First we would need to see how you envisioned that idea to play out in steps :smiley: .

To give you some hints, there are AI models which were trained on games (Space Invaders, for instance) to “solve” them. What that means is that the AI model learns from “mistakes” (losing lives, being hit by an enemy projectile, etc.) and is reinforced by positive outcomes (shooting down enemies, scoring points, etc.), going from “trying something out” to playing the game faster and better than a human. However, it accomplishes this through N (a really big number of) trial & error iterations.
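To make that loop concrete, here’s a toy sketch of reward-driven trial & error - not a real game-playing model, just two made-up actions with hidden payoffs the “agent” can only discover by trying:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Two possible actions with hidden average rewards; the agent
# only ever sees the noisy reward it gets, never these numbers.
true_reward = {"shoot": 1.0, "idle": 0.1}

value = {"shoot": 0.0, "idle": 0.0}   # the agent's running estimates
counts = {"shoot": 0, "idle": 0}

for _ in range(1000):                  # many trial & error iterations
    # Explore 10% of the time, otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        action = random.choice(["shoot", "idle"])
    else:
        action = max(value, key=value.get)
    reward = true_reward[action] + random.gauss(0, 0.1)  # noisy outcome
    counts[action] += 1
    # Update the running average estimate for the chosen action.
    value[action] += (reward - value[action]) / counts[action]

print(max(value, key=value.get))  # the agent ends up preferring "shoot"
```

The real models are vastly more complicated, but the skeleton is the same: try, get rewarded or punished, nudge the estimates, repeat N times.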

3 Likes

Reminds me of this Google doc I once found of ML models being trained on games only to do the weirdest stuff to maximise rewards, like exploiting a bug in the physics engine to wiggle itself into gaining infinite height, building constructions that “completed” an obstacle course by essentially just being ridiculously tall and falling over, or just not ending their turn to avoid making a losing move

3 Likes

Oh okay, it’s just that if I had Legos in front of me I could see for myself that the square fits with the square and not with the triangle, but this way it’s a bit abstract and I’m trying to piece together your instructions.

For example, I could imagine telling the first to swap with the highest (for a descending order), the second with the 2nd highest, etc., but I’m not sure what I know or what the permitted actions are

I mean, it’s difficult to move around in a dark room

1 Like

Shit… I spent my life wondering how the AI in games works and I finally have an answer :fearful:

:exploding_head::exploding_head::exploding_head::exploding_head::exploding_head::exploding_head:

1 Like

You’re getting there, but how do you know what the highest number is?

I’m not entirely sure what limitations Vanilla set, but you can do this by only:

  • comparing two elements to see if one of them is lower than the other
  • swapping two elements in the list

And of course you can read elements from the list

3 Likes

The key takeaway from that doc is basically that the AI doesn’t learn how to play the game, it learns how to maximise whatever score you give it (or minimise its penalties).

If all the model can get is penalties for doing something wrong, you’re probably gonna end up with a model that doesn’t do anything.

4 Likes

I read :rofl:
Seriously, if I have perfect information, where is the problem? I just swap like I did previously. If I can only find out the value of an element by comparisons, then I have to make crossed comparisons to find out, and then proceed with swapping, no?
Disclaimer: I have no clue what I’m talking about

Let’s pretend I’m a program.

[Mario doesn’t understand the instructions]

Erm, no, that’s something completely different. I meant AI models trained to play games, not the AI in games.

Also, “AI” in the context of games is often a misnomer, because in many simpler ones it’s not even a machine learning model, but a series of if-else conditions which perform specific operations when something happens + randomization.

Even stuff like path finding algorithms can be implemented using very basic math.
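For instance, a “chasing enemy” can be nothing more than a few comparisons on grid coordinates - a sketch with made-up positions, no machine learning anywhere:

```python
def chase_step(enemy, player):
    """Move the enemy one tile toward the player on a grid.
    No learning involved: just if-else on coordinates."""
    ex, ey = enemy
    px, py = player
    if ex < px:        # player is to the right
        ex += 1
    elif ex > px:      # player is to the left
        ex -= 1
    elif ey < py:      # same column, player is below
        ey += 1
    elif ey > py:      # same column, player is above
        ey -= 1
    return (ex, ey)

print(chase_step((0, 0), (3, 1)))  # steps right, toward the player: (1, 0)
```

Call it once per game tick and the enemy relentlessly closes in on you - which, from the receiving end, feels a lot like “AI”.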

3 Likes

Okay, by AI I meant the bots that killed me all the time in COD

That’s not actual AI :smiley:

2 Likes

Okay, but “read” here just means “see what value is in there”.

The concept of “highest” is not something a computer just knows, it’s something you need to define. It can’t tell you which is highest.

Say you read element 1. It’s a blue square.

Now you read element 2. It’s a red triangle.

You read element 3. It’s a green octagon.

Which of these is highest?

That’s what you’re trying to figure out. You have someone who can tell you, if given two (and exactly two) of these, which is lower than the other, but that’s all they’ll tell you - and more technically it’s more like they’ll tell you if the thing you’re holding in your left hand is lower than the thing you’re holding in your right hand, but that’s neither here nor there.

Numbers are a convenient example because they make sense to you, but a program isn’t going to have the same intuitions around them.

Note that letting go of these intuitions you have around things is one of the biggest switches you need to flip in your brain to make sense of programming. A computer doesn’t know anything you don’t tell it. It makes sense to you that the word “alphabet” should come before the word “xylophone” in a dictionary (the book kind, not the Python kind), but unless you tell a computer how to determine which word goes first using the operations it has available (which don’t include “see which is first in the alphabet”) it’s not gonna know that.
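Spelling that out for words: here’s a sketch that decides which word goes first using only single-character comparisons via character codes (the helper name is made up, this is just plain lexicographic order):

```python
def comes_first(a: str, b: str) -> bool:
    """True if word `a` belongs before word `b` in the dictionary,
    decided one character at a time - no built-in notion of 'alphabetical'."""
    for ca, cb in zip(a, b):
        if ca != cb:
            # Compare the two characters by their character codes.
            return ord(ca) < ord(cb)
    # One word is a prefix of the other: the shorter one goes first.
    return len(a) <= len(b)

assert comes_first("alphabet", "xylophone")
assert not comes_first("xylophone", "alphabet")
assert comes_first("cat", "cats")
```

That’s all “alphabetical order” is to the machine: a rule you wrote down, applied one character comparison at a time.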

4 Likes

Then I misinterpreted all the logs listing updates in the games!

EDIT: Ok, I won’t sleep tonight if I don’t solve this, so… (actually I’m suffering from insomnia, I won’t sleep anyway)
If I can only compare and swap, then:

  1. Compare ‘a’ with ‘b’.
  • 2a) If ‘a’ is smaller, swap ‘a’ with ‘b’.
  • 2b) If ‘b’ is smaller, don’t do anything.
  3. Repeat steps 1–2 for the remaining numbers in the list.
  4. Do the same with ‘b’.

I could come up with other solutions, but this is the one that requires the fewest steps to describe

1 Like

To be fair, people were using the term AI for computer player logic way before AI as a field was even a thing :wink:.

One thing that might help you with your pursuits, since you seem to have some background in philosophy, is trying to logically break a sentence down into its little parts and analyse how those parts are related/connected to each other.

1 Like