Any way to remove 飛び込み自殺?

I know this will make me sound like a jerk because of assumed tone, but I mean this in the most literal and neutral way possible: if you have a problem with suicide you picked the wrong culture’s language to learn. Contemporary Japan has a high suicide rate, is famous for ritual suicide into the modern era, and used suicide warfare within living memory. Suicide in Japanese culture is A Thing.


Ah, it might be; I don’t know how userscripts work (or JS/HTML/CSS). They could create a generic hide functionality, but they shouldn’t hide anything by default.

But no, client-side changes don’t need to be done on WK’s side at all. That’s how ad-blockers work too: when a website is loaded, all the data is read and checked against a blacklist before it’s handed to the web engine to render. So it can be done on the client side. I don’t know the WK codebase, though, so I can’t say whether it would be hard or easy to implement on their side either.
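To make the ad-blocker analogy concrete, here’s a minimal sketch of the idea: check content against a blacklist before it ever reaches the renderer. The names and the data shape are mine for illustration, not from any real blocker or WK code.

```javascript
// Sketch: drop any entry containing a blacklisted term before it is
// handed to the renderer. All names here are illustrative.
const BLACKLIST = new Set(['飛び込み自殺']);

function filterBeforeRender(entries, blacklist) {
  // Keep only entries that contain none of the blacklisted terms.
  return entries.filter(
    (entry) => ![...blacklist].some((term) => entry.includes(term))
  );
}
```

A real blocker does this at the network/DOM level rather than on plain strings, but the check-before-render principle is the same.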

Yes, I don’t see how else this could be done. Only WK can delete a word from their server.

If a script is made, I assume it would scan the batch of reviews when you start one and check whether any of the words on your delete list appear there. If one does, the script could keep it from showing up. But I wonder, how would someone actually do this? How could we block a word from appearing? Perhaps we could keep the definitions of the blocked words in a list and automatically submit them when the word is detected, so the items go through the normal SRS stages but you just never see them in action.
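The auto-submit idea above could look roughly like this. This is only a sketch: the item shape (`{ slug, … }`) and the blocklist-to-answer mapping are my assumptions, not WK’s actual review format.

```javascript
// Sketch: split a review batch into items to show and items to answer
// silently, using a user-maintained map from each blocked word to a
// known correct meaning. The item shape here is illustrative.
const BLOCKLIST = new Map([
  ['飛び込み自殺', 'suicide by jumping (in front of a train)'],
]);

function splitBatch(items, blocklist) {
  const visible = [];
  const autoAnswer = []; // { item, answer } pairs to submit behind the scenes
  for (const item of items) {
    const answer = blocklist.get(item.slug);
    if (answer !== undefined) autoAnswer.push({ item, answer });
    else visible.push(item);
  }
  return { visible, autoAnswer };
}
```

The script would then submit each `autoAnswer` entry without ever rendering the item, so it still advances through the SRS stages.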

Right. The information rendered on the client side can be manipulated, but the words themselves can’t be deleted. I can fetch and manipulate info from their API, either with an IDE or with Postman (which is easier, though you’re not free to do everything you want there). I haven’t learned how to do this via the web yet. It’s so much fun seeing all of these scripts that people make, and I want to make one too.

I’m gonna see if I can find tutorials on how to get started with scripts for the web. But yeah, I believe it could be done by requesting a batch of reviews from their server (whatever the user is set to do, either reviews or lessons) and then simply submitting the correct answers for those items automatically so that the user doesn’t see them.
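For what it’s worth, fetching the pending review queue might look something like the sketch below. The `/v2/assignments` endpoint, the `Wanikani-Revision` header, and the response envelope match my understanding of the public WK API v2 docs, but double-check them before relying on this; the rest is illustrative.

```javascript
// Sketch: fetch assignments currently up for review from the WK API v2,
// then pull out their subject IDs. Endpoint and headers are from my
// reading of the public API docs; verify before relying on them.
const API = 'https://api.wanikani.com/v2';

async function fetchReviewAssignments(apiToken) {
  const res = await fetch(`${API}/assignments?immediately_available_for_review=true`, {
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Wanikani-Revision': '20170710',
    },
  });
  if (!res.ok) throw new Error(`WK API error: ${res.status}`);
  return res.json();
}

// Collection responses wrap each resource as { data: { subject_id, ... } }.
function subjectIds(collection) {
  return collection.data.map((resource) => resource.data.subject_id);
}
```

Actually auto-answering an item would then be a separate write call (the API also has a reviews endpoint), but I haven’t verified that flow end to end.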

Edit: if someone who makes scripts could point me in the right direction, that would be great :slight_smile:
I am familiar with HTML and CSS and have recently ventured into fetching data from the WK API with success.

There is no need to remove those from the server at all. There only needs to be a way to hide items for specific users.

To do this on the server side, at least a few changes are needed (but as I said, I don’t know their codebase exactly, so there might be even more):

  • In the user database, store a list of the items that user has hidden.
  • Change the website to support this functionality (a checkbox on each item).
  • Hide those items from review/lesson sessions.

There is no need at all to remove them.

On the script side, what needs to be done:

  • Make changes on a few pages: a checkbox marking which items are hidden.
  • Keep the list somewhere on the client (e.g. Web Storage?).
  • If a hidden item appears, one of two possibilities:
    • Auto-complete the item and continue forward (you can keep the screen blank during this so the user won’t get triggered).
    • Keep the item out of the training session entirely by blocking it somehow (as I said, I don’t know how userscripts or the WK API work), and update the front-page training numbers to account for the ignored items.
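The “keep a list somewhere on the client” step could be as small as this, using Web Storage. The key name and helper functions are my own invention; in a real userscript the `storage` argument would be `window.localStorage` (passed in here so the sketch stays testable):

```javascript
// Sketch: a hidden-item list persisted in Web Storage. The storage
// argument is window.localStorage in a real userscript; the key and
// function names are mine, not from any existing script.
const STORAGE_KEY = 'wk_hidden_items';

function loadHidden(storage) {
  // Web Storage holds strings, so the list is kept as a JSON array.
  return new Set(JSON.parse(storage.getItem(STORAGE_KEY) || '[]'));
}

function hideItem(storage, slug) {
  const hidden = loadHidden(storage);
  hidden.add(slug);
  storage.setItem(STORAGE_KEY, JSON.stringify([...hidden]));
}

function isHidden(storage, slug) {
  return loadHidden(storage).has(slug);
}
```

This survives page reloads but lives per-browser, which is the main tradeoff versus a server-side list.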

Those are the two methods that came to mind. To be honest, I think a blocking script would be easier than doing it on the WK side.

But as far as I know, we can’t hide or alter anything from the server side. Only WK can do that.

Yes, that’s what I was thinking as far as the client side goes. The thing is, whoever makes the script would need somewhere to store the data (MongoDB, for example, but that might get expensive given the user base, I suppose, and the server would always have to be available). Still, I believe it could work with this method as well.

You don’t need to alter anything on the server side if doing client-side blocking.

No need to use MongoDB; Storage - Web APIs | MDN should be sufficient.

Edit: MongoDB is a NoSQL server-side database; you don’t use something that heavy on the client side.

Oh ok. So you were just mentioning what WK would have to do on their side if they wanted to block certain words from users.

Good to know :+1:

Well, they can do it on the server-side too.

But it can also be done on the client-side as I explained earlier.
You don’t really remove them but you just hide their existence.

Even putting aside this pretty unkind way of portraying those who have adverse reactions to discussions of suicide, I have to wonder why you’ve gone out of your way to portray the TC like this, given they’ve explicitly stated that they only want to remove it out of concern for others in their workplace.

Would also like to add my support for a SFW mode, if possible tied to a user-managed list of vocab items. As someone who works in an office in Japan, there are a couple of items I’d get nervous about appearing in big letters on my screen.


Yep. I’m gonna learn how to do web apps. Perhaps I could make a script for WK in the future.

I really thought that this script existed already but I couldn’t easily find it. Rather than work out all the integration stuff yourself it might be easier to add your filter logic to [Userscript] WaniKani Open Framework Additional Filters (Recent Lessons, Leech Training, Related Items, and more) instead.
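Building on that suggestion: a blocklist filter in the style of the WaniKani Open Framework (wkof) might look roughly like this. The registration shape below is from my memory of wkof’s item-filter conventions and may not be exact; the filter logic itself is mine.

```javascript
// Sketch: a blocklist item filter in the style of the WaniKani Open
// Framework (wkof). Registration shape is from memory of wkof's docs
// and may differ; verify against the framework before using.
const BLOCKED_SLUGS = new Set(['飛び込み自殺']);

// Return true for items that should be kept (i.e. not on the blocklist).
function keepItem(filterValue, item) {
  return !BLOCKED_SLUGS.has(item.data.slug);
}

// Only attempt registration where wkof actually exists (i.e. in a userscript).
if (typeof wkof !== 'undefined') {
  wkof.ItemData.registry.sources.wk_items.filters.blocklist = {
    type: 'none',
    label: 'Hide blocklisted items',
    filter_func: keepItem,
  };
}
```

Piggybacking on the framework’s existing filter machinery would save reimplementing all the page integration yourself.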

There are other motivations for this though. Some people don’t want 金玉 coming up in their reviews in front of students / faculty room colleagues during break. Also, others complain about the baseball terminology in WK.


Yes, indeed. That’s what this specific word issue is about. It’s very important to navigate this subject specifically in extra careful ways. This isn’t about users potentially being uncomfortable, or disliking a word on WK. It’s about keeping people safe and alive.


Worries about a trigger word and yet puts it into the topic name…

I am not a psychologist but I assume the OP isn’t either. Where is the evidence that this specific word will do the most harm to people? Mental health is not an issue to take lightly but censoring content based on a hunch is wrong IMO.


I will never understand a world where people have to be protected from a word. Life doesn’t come with a trigger warning. That level of censorship would kill everything I hold dear; it would be the end of movies, books, art, comedy… everyone is triggered and offended by something.


I may have been slightly harsh (not overly so, I think), but this constant push to neuter language in the never-ending quest to preserve the feelings of fragile people really irritates me. Don’t think that I don’t have empathy, though. I’ve personally been affected by suicide (two close family members). But I still want to learn these terms. And if someone really does want to cushion themselves from the impact of reading a word, I’d suggest taking a piece of advice I’ve seen here: look ahead to find out when that word is next going to come up for review so you can mentally prepare. In any event, words related to suicide and other sensitive topics are not rare in day-to-day life, so I view efforts to avoid them as completely in vain.

And as far as wanting to keep looky-loos from seeing these words on the screen, there are a couple of foolproof ways to fix this problem: A) don’t do reviews at work, or B) do them on your phone.


I’ve actually been in that spot. There have been very dark times. But I learned one thing…the only way out is through. Avoidance won’t help. Sometimes our triggers are there to tell us what we should be moving toward and confronting/dealing, not avoiding.


Censoring language because you feel uncomfortable is kind of weird. Instead of changing reality, how about dealing with your issues instead? I’ve had a few friends commit suicide, but I deal with my issues instead of telling everyone else to change their reality.


Going to go ahead and post this excerpt from the Community Goals (Welcome to the WaniKani Community [Please read this first!] ✨):

Goal 3 - Maintaining Healthy Relationships

Respect and recognize others’ boundaries, experiences, traumas and struggles when interacting with the community and when posting.

  • If you choose to discuss sensitive topics such as sex, addictive substances, violence and abuse, etc., please use content warnings. Some of our users are minors, and some may be struggling with some form of trauma, addiction or a combination of the above. Be mindful when you post about them.

Creating content warnings is done either by entering the following or clicking the gear icon on the far right side of the text editor

(screenshot: the gear-icon menu of the text editor, from “Latest Staff topics - WaniKani Community”, 2021-04-28)

which will generate the tags below:

[spoiler]This text will be blurred[/spoiler]
[details="Summary"] This text will be hidden [/details]

  • Do not post or link to 18+ material. Additionally, please refrain from, ahem, colorful language and keep it PG-13.