How do Japanese read?


#22

Try to move a finger along the page sliiiiiiightly faster than you read with vocalisation and see how it goes? (Personally I subvocalise a word or two in a sentence because that’s when I catch up with myself when reading. Like a mental headshake.)


#23

That is an interesting point. Let me restrict my hypothesis further.

Let’s take a symbol, like △.

Like a kanji, △ can have many meanings, for instance triangle (geometry), warning (traffic sign), trine (astrology), etc.

Like a kanji, △ can have more than one phoneme (pronunciation), and each pronunciation is linked to one or more semantic concepts. Incidentally, each semantic concept may be linked to more than one pronunciation, each pronunciation may be linked to more than one kanji, and the same is true for symbols as well.

So instead of seeing the relationships between symbol (or kanji), pronunciation, and concept as starting from the symbol or kanji, going through a pronunciation, and landing on a concept (sorry, I couldn’t find the crow’s-foot one-to-many symbol):
{symbol | kanji} ⫷ pronunciations ⫷ concepts
I wonder whether our brain can go from symbol or kanji to concept without going through an intermediate pronunciation:
{symbol | kanji} ⫷ concepts
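
Just to make the shape of those two hypotheses concrete, here is a toy sketch in Python. The readings and meanings of △ are the ones above; the data structures and names are entirely my own invention, not a claim about how the brain stores anything:

```python
# Toy illustration only: the readings and meanings of △ come from the post above,
# the dictionaries and function names are made up.

# Hypothesis 1: {symbol | kanji} -> pronunciations -> concepts (each arrow is one-to-many)
pronunciations = {"△": ["triangle", "warning", "trine"]}
concepts = {
    "triangle": ["three-sided polygon (geometry)"],
    "warning": ["caution (traffic sign)"],
    "trine": ["120-degree aspect (astrology)"],
}

def concepts_via_pronunciation(symbol):
    """Go symbol -> pronunciation -> concept, the long way round."""
    return [c for p in pronunciations.get(symbol, []) for c in concepts.get(p, [])]

# Hypothesis 2: {symbol | kanji} -> concepts directly, skipping pronunciation
direct_concepts = {
    "△": [
        "three-sided polygon (geometry)",
        "caution (traffic sign)",
        "120-degree aspect (astrology)",
    ],
}

print(concepts_via_pronunciation("△"))  # same concepts, reached via a pronunciation
print(direct_concepts["△"])             # same concepts, no pronunciation step at all
```

The only point of the sketch is that in the second structure the lookup never touches a pronunciation at all.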

After all, we match a pronunciation to a symbol or kanji only after evaluating the context where we find it. At that point we have assigned a concept possibly before choosing a pronunciation.

If that is more feasible for a language written with little drawings that carry intrinsic meaning, whether alone or in a cluster (like Japanese), than for a language written with little drawings (letters) that have no intrinsic meaning alone and only acquire meaning in a cluster, then I wonder whether our brain can potentially read the first language faster than the second.

I know it’s a stretch, but I also wonder whether with the former we activate more synaptic links in the brain than with the latter.


#24

I once sat in a cafe with a Japanese friend who was a literature major at the time. We were both just reading fiction in our respective languages, but at some point I realized how many more pages she had read. I started asking how she read.

One thing was to have her read while using her finger to indicate where she was, and I could see her natural reading speed was much faster. I asked a lot of questions at the time, but my main memory now is simply that she didn’t need to sub-vocalize at all because she could get the meaning of words just at a glance. It struck us that Japanese (and Chinese) people tend to have a faster reading speed because of how compact the words are and because the image has one less layer of brain work to pass through (the abstraction layer that is phonetic letters).

A year or two after that I took a rapid-reading course and this all clicked into place for me. My education had not trained me to read like that up to that point, whereas ideographic languages kinda get that for free (well, it piggybacks on education without additional emphasis).


#25

Meh, I don’t think this post came out very well, please disregard.


#26

Tough one. When I’m absorbed in reading English, I’m not aware of any subvocalization, but that doesn’t mean it isn’t there somewhere in my mind. When I read through the responses to this thread, I’m hyper-aware of it, mainly because that’s the subject at hand. (Similarly, I don’t usually notice that I have constant tinnitus until I start thinking about it at which point (like right now) it becomes seriously annoying.)

As to Japanese, I did notice that during the JLPT test last December, I was unconsciously moving my lips as I tried to get through the reading passages, and that’s something I haven’t done in English since I was a small child.

I’ll just add that in Japanese, it’s quite possible to encounter a word written in kanji whose meaning you can guess, but whose reading you don’t know. So in that case, do you subvocalize your guess, or do you just understand the meaning without attempting to say the word in your head?

Update: I asked my wife (Japanese native) about it, and she said that she sounds words out in her head as she reads. So that’s a data point of one at least.


#27

If subvocalization leaves you with an irritated throat, it seems like you’re doing it wrong? It’s really no more than a voice in your head, not a hum from your throat, though apparently there can be small movements in the larynx when measured with lab equipment.

From Wikipedia: "This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading."

On the other hand, while the Wikipedia article makes it sound as though some muscle movement is always present in subvocalization, I don’t know that there’s a way to objectively measure whether someone “hears” the words as they read, or merely imagines hearing them, beyond measuring those muscles. Seems like an imprecise science to me.


#28

Thank you @daines

I’m happy to hear at least some anecdotal evidence that my speculation could in fact be true.


#29

Thank you @Sezme. I was totally unaware of this. Although I did read a paper some time ago claiming that if you imagine you are exercising, there is in fact some sort of muscle response that mimics the physical exercise (sorry for the over-simplification).


#30

I just thought of another example of when I do that.

When I play a new score at the organ, my eyes read the score and my fingers and feet respond by playing the keys. There is no inner voice sub-vocalising A, B♭, C♯, etc.


The notes on the score have a name and a pronunciation associated with them. In fact the same name and pronunciation are associated with more than one note; for instance A4 and A5 are conventionally just called A although they are one octave apart.

If I were to read aloud or sub-vocalise each note, as you do with solfeggio, that would considerably slow down the process of reading the score, to the point that the rhythm would be compromised. Also, you could only read one voice at a time, whereas by not sub-vocalising you can read 2, 3, or 4 voices at a time, in fact a whole orchestral score.
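
To put the A4/A5 point in concrete terms: in the common MIDI numbering (my choice here, purely for illustration, not something from the score itself) A4 is note 69 and A5 is note 81, and the letter name is just the note number modulo 12, so both come out as plain A:

```python
# Map a MIDI note number to its letter name and octave (middle C = 60 = C4).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi_number):
    return NOTE_NAMES[midi_number % 12], midi_number // 12 - 1

print(note_name(69))  # ('A', 4) -> A4
print(note_name(81))  # ('A', 5) -> A5: same letter name, one octave higher
```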


#31


konekush If I subvocalised all 3 of those pages I’d 1. become mentally exhausted, 2. take over 15 minutes to read those pages, 3. have an irritated throat, 4. more likely to be distracted by something

All of this seems strange to me. I subvocalize everything I read, and none of it applies to me. The mental exhaustion I feel from reading depends on the complexity of what I’m reading; it certainly doesn’t take me 15 minutes to read 3 pages unless they’re extremely dense and I’m taking notes; my throat shouldn’t be irritated if it’s not doing anything?? And I find myself harder to distract because I’m listening to the internal monologue. But hey, maybe all this just points to it varying from person to person more than anything else.


Yalmar
I just thought of another example of when I do that.
When I play a new score at the organ, my eyes read the score and my fingers and feet respond by playing the keys. There is no inner voice sub-vocalising A, B♭, C♯, etc.
The notes on the score have a name and a pronunciation associated with it. In fact the same name and pronunciation is associated with more than one note, for instance A4 and A5 are conventionally just called A although they are one octave apart.
If I was to read aloud or sub-vocalise each note, as you do with solfeggio, that would considerably slow down the process of reading the score, to the point that the rhythm would be compromised. Also you could only read one voice at a time, while by not sub-vocalising, you can read 2, 3, or 4 voices at a time, in fact a whole orchestra score.

As a fellow musician, I think this is a bit of a flawed analogy. You’re still having a reaction to the notes; it’s just in your fingers instead of your oral cavity (or both for wind players like myself). And the speed limitation only applies to music that is clearly not intended for voice. Vocal pieces obviously don’t have their rhythms compromised by being sung, but I’m going off track a bit there. As for “reading” an entire orchestral score, if you’re looking at a score for the first time, it is absolutely not possible to take in each and every line of music at once. To argue otherwise is purely asinine, and any good conductor in fact does hear the line they’re looking at internally as they read. That should really go for any experienced musician looking at a piece of music. You don’t have to say the note names to hear the notes.


#32

I would like this thread to feel like a safe and friendly place for everybody to test and share their thoughts.

When I compose or read an orchestral score I perform three types of reading.

In no particular order, the first one is horizontal, one line at a time. While doing that, I can internally hear one voice at a time.

The second one is vertical. While doing that I imagine internally how the notes from different parts would sound together. I obviously keep an internal ear for notes that are, for example, an octave or a 13th apart.

The third one is also vertical: it takes into consideration the notes that are written on the score and sounding at one moment in time, together with the notes of the overtone series that are not written on the score but whose frequencies are also generated.
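
For anyone who wants the overtone part in numbers: the overtones sit at integer multiples of the fundamental frequency. A tiny sketch, using A2 at roughly 110 Hz as an arbitrary example of my own:

```python
def overtone_series(fundamental_hz, count=6):
    """Fundamental plus its overtones: integer multiples of the fundamental frequency."""
    return [n * fundamental_hz for n in range(1, count + 1)]

# A2 is roughly 110 Hz; its overtones land near A3, E4, A4, a slightly flat C#5, E5, ...
print(overtone_series(110.0))  # [110.0, 220.0, 330.0, 440.0, 550.0, 660.0]
```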

In all three types of reading I may or may not sub-vocalise or read aloud the individual notes.

What I find interesting is that in some cases I completely bypass sub-vocalisation, similarly to what I’ve found myself doing while reading some kanji or vocabulary. For me there is a similarity in that in both cases I go from a symbol - be it simple or made up of a number of individual symbols - to a concept (in a way, a Platonic idea) and bypass other linked entities, such as the name and pronunciation given to that concept.

I’m totally prepared to accept that some of you may think that this is all poppycock :slight_smile: I still find comfort however when I hear that someone else may have a similar experience.


#33

This is probably more a question for people in general rather than Japanese people haha


#34

I wonder whether people who are learning a foreign language in which you cannot always guess the pronunciation of a word from its written form notice it more.

Apart from Japanese, I seem to do so with Danish, where I often know the meaning of a word, but I’m also aware that it may be pronounced very differently from how it’s written, like tredive, rugbrød, or hvedemel.


#35

Speaking to whether we always go through the sounds of a word to access its meaning when we read, I can share what I learned when we were studying cognitive models of reading last year in grad school (Yay Speech Pathology! We learn cool language things!)

Current theory suggests that a skilled reader can access words in one of two ways: either they sound the word out (usually because they haven’t seen it very many times), or they pull meaning directly from the shape of the word. When kids are learning to read, because they have little experience, they always have to go through the sound to the meaning. However, as they become fluent readers, they start to bypass that step for familiar words. (Interesting sidenote: it seems that people with dyslexia typically don’t shift to using the automatic recognition system, which makes reading a lot slower and more laborious for them.)
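
Not the real model, of course, but here is a toy sketch of that dual-route idea in code form; every name, threshold, and dictionary in it is invented purely for illustration:

```python
# Toy sketch of the dual-route idea: familiar written words go straight from shape to
# meaning; unfamiliar ones get sounded out first. Not an actual cognitive model.

FAMILIAR = 10  # arbitrary "seen often enough to recognize by shape" threshold

meanings_by_spelling = {"cat": "small domestic feline"}   # direct route: shape -> meaning
meanings_by_sound = {"c-a-t": "small domestic feline"}    # spoken-word lexicon
times_seen = {"cat": 42}

def sound_out(word):
    """Crude stand-in for grapheme-to-phoneme decoding."""
    return "-".join(word)  # "cat" -> "c-a-t" (a toy; real decoding gives actual sounds)

def read_word(word):
    if times_seen.get(word, 0) >= FAMILIAR:
        return meanings_by_spelling[word]          # lexical (direct) route
    sounds = sound_out(word)                       # phonological route
    return meanings_by_sound.get(sounds, f"unknown word sounding like {sounds}")

print(read_word("cat"))   # familiar: direct route, no sounding out needed
print(read_word("cot"))   # unfamiliar: falls back to sounding it out
```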

Neither of these, incidentally, speaks to subvocalization. Currently, the generally held theory is that when you recognize a word by shape, you still activate the representation of the sound in your brain, even if you don’t need to use it in any way.

One more word about reading science - you might think, given that Chinese is not alphabetic, that phonological awareness (noticing that words are made up of sounds, being able to break words into sounds and blend sounds together to make words - obviously essential to taking the sounds k - a - t and figuring out they make the word “cat”) wouldn’t be that essential to learning to read. But all of the current research shows that phonological awareness is still a very important predictor of Chinese children’s reading ability. This is part of what makes us think that, regardless of writing system, early reading is governed by connecting sounds to symbols, not connecting symbols to meanings.

On a more subjective note, I find that the degree to which I hear my inner voice depends strongly on my current reading purpose. If I’m really trying to understand a complex research paper, I definitely hear the voice. If I’m reading through the paper quickly and looking more for specific pieces of information, I usually don’t hear the voice. In fiction, I always hear it when I’m reading dialogue (possibly because I enjoy dialogue, and will read out lines that I find particularly funny at home, just to hear them). I rarely hear it when I’m reading descriptions, unless something about the description was confusing.

Just my general thoughts, and what I know about the current research. I can cite you guys some research papers later if you’re curious (though most of them are not mega fun to read)


#36

Emojis have a distinct, single meaning.

Ahem, :eggplant:.


#37

I don’t understand how people can think without a voice in their head!


#38

When I learned how to speed read years ago, one of the main techniques was to turn off the inner voice. When I did this, not only could I read much faster, I also comprehended more, which the site that I used said would happen. The brain can comprehend much more information than can be communicated at the speed of speech, which is why our minds tend to wander when listening to a lecture. This is also why our minds tend to wander when we are reading; we are able to read, but also think about other stuff at the same time and before we know it, we are “reading” but not really paying attention to what we are reading anymore. Speed reading forces you to pay strict attention the whole time. Nevertheless, I rarely use speed reading because it’s so mentally exhausting.

Interestingly, I read somewhere just a few weeks ago (I wish I could find it so you can read it for yourself) that apparently Chinese people see meanings, not sounds, as they read. They said that Chinese people could even read and understand ancient texts because, while the pronunciation has changed, the meanings haven’t. If this is true, this is a very significant point. Perhaps we tend to approach reading kanji incorrectly because we are used to our phonetic alphabets, and so we are obsessed with readings, as if Japanese functioned like, say, English, where knowing the sounds of the letters means we at least know how to say a word we don’t know. The fact is, though, you could know every reading of a kanji and still not actually know how to read an unfamiliar word. I’m not saying learning readings is useless, but I think it’s important to understand that this isn’t just an alphabet of 2,200 letters. It can’t be approached the same way.

To me, this thought is actually encouraging, because it can feel like the kanji system has a huge disadvantage. However, in reality, each system has its pros and cons. In English, you can see an unfamiliar word, be able to read it, and yet have no idea whatsoever what it means. In Japanese, you can see an unfamiliar word and not know how to pronounce it, and yet have an idea of what it means.

Anyhow, judging by everyone’s responses so far, it seems that how people read may not be consistent across the board in any language. Perhaps, as with learning styles, it just varies from person to person, and the best thing would be to tailor the approach to what works best for your brain. I think this is an area worthy of more exploration.


#39

Theoretically the Chinese could still be assigning sounds to those characters, just different ones from what was intended. So they’d get full understanding with a ‘false’ pronunciation.

Conversely, learning English as a second language I had the opposite experience. :smiley: At some point I switched to original English content (usually books over movies and such, because of personal preferences) and, while now I can fairly confidently say I’ll know the meaning of most words I see in the wild (or understand them from context if it’s a technical term), pronunciation still trips me up regularly when I hear known words in actual spoken conversations and have to stop for a bit while my brain processes that. One instance I remember quite vividly is when I first watched BBC Sherlock - I don’t quite remember the episode, but JW was explaining something about ‘saliva’, which, until that point, my subconscious had been subvocalising wrong :stuck_out_tongue:

Really interesting discussion! I guess there’s as many different experiences as there are people, but it’s interesting to see the commonalities.


#40

in Japanese, it’s quite possible to encounter a word written in kanji whose meaning you can guess, but whose reading you don’t know. So in that case, do you subvocalize your guess

How the heck are you doing that (vocalizing) if you don’t know the reading? In a Western language I can at least try to sound it out according to the letters, but with kanji…?


#41

You can try to guess, just subvocalize the first reading that comes to your mind, or try to guess from the radicals, I suppose? At least that’s how I always do it. I also try to guess the reading while looking an unknown word up (before I draw it or use radical lookup), and if I’m lucky the IME finds it and I can be pretty sure that I got it right.

I only skip trying to subvocalize a word that I don’t know when I’m under time pressure, like during the JLPT or something, because I know it’s not worth spending time on trying to read a kanji whose pronunciation doesn’t matter anyway.