I’m still having problems both on the website itself and in the Android app.
Guru’ed items are reporting 0%. Of the locked items, some report 100% and some 0%. Items that are neither guru’ed nor locked seem OK though, so perhaps the script needs tweaking to match API changes.
Nope. For me, the next_review_date in the /study-queue endpoint is incorrect. It’s showing my next_review_date as 1498024800, which is about 593 minutes ago for some reason. It was working fine 24 hours ago.
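For anyone wanting to sanity-check this themselves, a quick sketch of the conversion (the epoch value is the one reported above; the field name follows the `/study-queue` response):

```python
from datetime import datetime, timezone

# The next_review_date value reported by /study-queue (Unix epoch seconds)
reported = 1498024800

next_review = datetime.fromtimestamp(reported, tz=timezone.utc)
now = datetime.now(tz=timezone.utc)

# A next_review_date in the past means the queue data you got back is stale
if next_review < now:
    minutes_ago = (now - next_review).total_seconds() / 60
    print(f"next_review_date is {minutes_ago:.0f} minutes in the past: {next_review}")
```

That timestamp decodes to 2017-06-21 06:00:00 UTC, so a past date like this is clearly wrong for a *next* review.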
Gotcha. Reviewing the code changes in the last 24 hours I think I know what is going on. I will review this with Darin and we’ll make the necessary fixes.
Is this why we just had a couple of outage blips?
This issue was due to some aggressive caching implemented yesterday.
I issued a hotfix that eases up on the caching, which should address it.
The app is working again! I just refreshed and everything went back to normal ^^ Thanks!
Great! Fix isn’t out yet though… Working its way up the pipeline, but looks like the cache expired for you and generated the correct info
You guys seem to have a lot of load issues (I’m assuming the cache project was to reduce API load), are you trying to downsize your instance(s), or is WK just getting that much more popular?
Hotfix is live now.
Really? Oo that’s funny, because I’ve been having this problem since yesterday. The website was working fine for me, but the app simply wouldn’t refresh after I did my reviews. The moment you commented, I tried again and everything went back to normal.
Oh, well ^^
That’s consistent with the caching Viet is talking about (the one they implemented yesterday). If they set the cache TTL long enough, it may have taken over a day before your entry was cleared/refreshed.
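To make the TTL point concrete, here’s a minimal sketch of how server-side TTL caching behaves (assumed behavior for illustration, not WK’s actual code): once a value is cached, the stale copy keeps getting served until the TTL runs out, no matter how many times you refresh.

```python
import time

# Minimal TTL cache sketch: key -> (value, stored_at)
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}

    def get(self, key, compute):
        entry = self.store.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.time() - stored_at < self.ttl:
                # Still inside the TTL window: serve the cached (possibly stale) value
                return value
        # Expired or missing: recompute and re-cache
        value = compute()
        self.store[key] = (value, time.time())
        return value

# A 1-day TTL would explain a full day of stale review data
cache = TTLCache(ttl_seconds=86400)
```

With a TTL that long, "it suddenly fixed itself when I refreshed" is exactly what expiry looks like from the outside.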
The updates over the last few weeks dramatically reduced response times and error rates and increased throughput. The additional caching is also helping with that.
However, we are now running into a different issue. As you probably know, we use Heroku to run WK. We procure X number of dynos to run the site. Every once in a while, one dyno’s memory footprint just explodes (we haven’t found the root cause yet) and it gets forced into memory swap, which makes that dyno very slow. We have no indication of a memory leak, since all the other dynos follow a nice memory profile (very, very slight bloat, but that’s about it). The dyno gets auto-restarted once it reaches 5x its allocated memory. I really wish this were user-configurable so we could set it to 1x.
Another unfortunate thing is that Heroku does not do smart load balancing… it uses a weird random algorithm. You can read more about it here. This sucks because it means requests keep getting funneled into the problematic dyno. All it takes is one bad dyno out of many to reduce performance dramatically ;\
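A tiny simulation shows why one swapping dyno hurts so much under random routing (all the numbers here are made up for illustration):

```python
import random

# Sketch: random routing across N dynos, one of which is swapping and slow.
random.seed(42)
N_DYNOS = 8
FAST_MS, SLOW_MS = 50, 2000   # hypothetical response times
REQUESTS = 10_000

latencies = []
for _ in range(REQUESTS):
    dyno = random.randrange(N_DYNOS)  # random routing: no health/load awareness
    latencies.append(SLOW_MS if dyno == 0 else FAST_MS)

mean = sum(latencies) / len(latencies)
# With 1 slow dyno out of 8, ~12.5% of requests hit it, so the mean
# latency jumps from 50ms to roughly 50 + (2000 - 50)/8 ≈ 294ms.
print(f"mean latency: {mean:.0f} ms")
```

So even with 7 of 8 dynos perfectly healthy, average latency is several times worse, and about one in eight requests is painfully slow.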
Gotcha, so the goal is to reduce load, which in turn reduces response times, because you have a hardware bottleneck somewhere in your Heroku sizing.
We had a problem like this in our data center once: the default JVM garbage collector was waiting too long between runs, so it was a big job every time it finally did run. Increasing the instance’s memory fixed it, although I can’t explain how.
Good info about Heroku, I haven’t used it before (only AWS) and I was considering it for a side project.
I use the WaniKani mobile app, and since last night it’s been stuck on 20 reviews. Even when I actually do have reviews, it just sends me to the summary of my last review session rather than starting the new reviews.
Awesome, my API calls are now returning the right data for next_review_date. Thanks!
Oh, I see!! Sorry for my ignorance I clearly have 0 knowledge in this subject D: I’m more of a Food guy (majoring in Nutrition Sciences).
Appreciate the explanation ^^
No worries man, always happy to explain what I know (and occasionally a little I don’t >.>)
Yep, all fine now. Thanks for the prompt attention!