Oh, I scored it like I was marking a test. First part: one correct observation, bad example, no rendaku, one mark. Second part: basically correct, three marks. Third part: not relevant, zero marks. Total score: four.
I don’t understand what this means; could you explain further? I’m not sure I’ve ever approached a 1-10 vote with this logic (not that I quite got what your logic was here).
Interesting article from a Turing Award winner.
Summarized conclusion: the analysis suggests that no current AI systems are conscious, but also that there are no obvious barriers to building conscious AI systems.
Thanks for the tag, that confirmed my thoughts and it’s basically how I use it (in reference to the part about GPT).
It also gave me the idea of trying to follow some Japanese accounts on 𝕏. I think it could be a nice learning source, especially when I have no time to sit down and read something serious.
Maybe this would be more appropriate to post in my computer science thread, but since the subject was widely discussed here as well, and since this is really about checking the validity of GPT's responses, here we go:
I’m not sure it interpreted your question correctly. When you say “padding around the RGB triples grid”, that makes me think of padding outside the pixel array; its answer, however, is purely about padding inside the pixel array, at the end of each row.
Is that what you were asking about or did it answer the wrong question?
In case that’s unclear:
The file format is essentially
Metadata
[padding]
Pixel array
[padding]
Metadata
And the pixel array is essentially
Row of pixels
[padding]
Row of pixels
[padding]
Row of pixels
[padding]
Row of pixels
Extended as far as needed, of course.
Where I interpret your question as asking about the padding in the first block, but the actual answer is about the padding in the second block.
Ah right, in that case the answer is about the correct padding and it checks out to me.
The padding is added at the end of each row, for the reasons GPT mentions: every row has to be a multiple of 4 bytes long, so that each row starts on a 4-byte-aligned address, which is more efficient. There's no padding at the beginning of a row, only at the end, so each row is just the pixel data followed by however many bytes are needed to reach the next multiple of 4.
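To make the rule concrete, here's a minimal sketch assuming a 24-bit BMP (3 bytes per pixel); the helper name is just for illustration, not anything GPT produced:

```python
# Row padding in a 24-bit BMP: each row of pixel data is padded with
# trailing bytes so its total length is a multiple of 4.
def bmp_row_layout(width_px: int, bytes_per_pixel: int = 3):
    raw = width_px * bytes_per_pixel      # bytes of actual pixel data in one row
    padded = (raw + 3) // 4 * 4           # round up to the next multiple of 4
    return raw, padded - raw              # (data bytes, trailing padding bytes)

for w in range(1, 6):
    data, pad = bmp_row_layout(w)
    print(f"width {w}px: {data} data bytes + {pad} padding bytes")
```

So a 3-pixel-wide image gets 9 data bytes plus 3 padding bytes per row, while a 4-pixel-wide image needs no padding at all.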
I love how it doubles down on ほぼく, but when you start to go “… are you really really sure of that?” it falters, and decides to just erase the whole reading from existence.
Heh, it summarised a long text and decided to omit some details.
Ouch, this one’s pretty bad. I haven’t seen it make mistakes like that before. @mariodesu I think we used to get better results on questions about kanji readings, right?
Sounds almost like a Seven Seas translation of Classroom of the Elite, lol.
I think so. I keep seeing posts on 𝕏 from people claiming that its performance has been downgraded. I think it’s possible, partly because I imagine they’re prioritizing safety of use over accuracy.
I guess that’s fair. But if the model can’t do simple dictionary look-ups + extra context, it becomes significantly less useful for these types of questions.
Not that it should be used for such in the first place
I got access to the new multimodal GPT-4V today, which can accept images as input, so I tried giving it a screenshot of an N1 mock test. Unfortunately it made so many errors in scanning the text that it couldn’t answer correctly…
How did you access the image function? I got access to the voice mode by using a VPN, and it was able to speak Italian even before language support was added, haha.
@Arzar33 would you mind if I shared that screenshot on Twitter?
Anyway, I recently realized that the only reason GPT-4 is still so inaccurate in foreign languages like Japanese (and it occasionally produces senseless sentences in Italian as well) is that the current models aren't trained on enough data, or at least that's my guess.