There are a few rate limits in play. API v1 and v2 each have a 60 RPM limit, and the two are independent of each other. There's also an overall per-IP throttle, but it's pretty generous and shouldn't count API requests.
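For script authors who want to stay under that 60 RPM, here's a rough sketch of a client-side throttle (TypeScript, assuming a fetch-based userscript; the 429 check and the one-minute backoff are illustrative assumptions, not documented API behavior):

```typescript
// Rough sketch of a client-side throttle that keeps a script under 60 RPM.
// Endpoint and auth details are up to the script; the pacing and backoff are the point.
const MAX_RPM = 60;
const MIN_INTERVAL_MS = 60_000 / MAX_RPM; // ~1 request per second

let lastRequestAt = 0;

async function throttledGet(url: string, init?: RequestInit): Promise<Response> {
  // Wait until at least MIN_INTERVAL_MS has passed since the previous request.
  const wait = lastRequestAt + MIN_INTERVAL_MS - Date.now();
  if (wait > 0) {
    await new Promise((resolve) => setTimeout(resolve, wait));
  }
  lastRequestAt = Date.now();

  const response = await fetch(url, init);
  if (response.status === 429) {
    // Got throttled anyway; back off for a full minute and retry once.
    await new Promise((resolve) => setTimeout(resolve, 60_000));
    return fetch(url, init);
  }
  return response;
}
```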
As @viet mentioned, there appears to be some kind of misbehaved tool or script out there that polls the API v1 study-queue endpoint as rapidly as possible. While monitoring our throttling changes, I found a few API keys querying that endpoint at roughly 1500 RPM, well above the rate limit, and all from the same IP address.
If people wanted to append a query argument to their GET requests, it would show up in our logs. It could also be set in a header, although I'd need to check whether our logging records extra headers without any extra setup on our part.
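Something like this is all it would take on the script side. The script name, the "client" query argument, and the X-Client header below are all hypothetical names a script author might pick, and the URL is just a v1-style placeholder:

```typescript
// Sketch of tagging requests so a script is identifiable in server logs.
// SCRIPT_ID, the "client" parameter, and X-Client are made-up names; the URL
// is a placeholder, not a guaranteed endpoint shape.
const SCRIPT_ID = "my-study-queue-widget/1.2.0";

const url = new URL("https://www.wanikani.com/api/user/YOUR_API_KEY/study-queue");
url.searchParams.set("client", SCRIPT_ID); // shows up in access logs as a query arg

fetch(url.toString(), {
  headers: { "X-Client": SCRIPT_ID }, // header route; may need extra logging setup on our end
})
  .then((res) => res.json())
  .then((data) => console.log(data));
```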
With 60 RPM, that only leaves about 5 requests per script per minute. That should be plenty, since it's all the same data, but the scripts are probably all fetching it independently and eating up the API limit. (Points quietly at @rfindley's excellent framework for caching all that goodness as the solution to that problem…)
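Just to illustrate the caching idea (this is a toy version of the pattern, not the framework's actual API): one script makes the request, everyone else reads the cached copy. The key name and refresh window are arbitrary choices for this sketch.

```typescript
// Toy shared cache: keep one copy of the study-queue data in localStorage with a
// timestamp, so multiple scripts on the same page reuse a single API call.
const CACHE_KEY = "shared_study_queue";
const MAX_AGE_MS = 5 * 60_000; // refresh at most once every 5 minutes

async function getStudyQueue(fetchFresh: () => Promise<unknown>): Promise<unknown> {
  const raw = localStorage.getItem(CACHE_KEY);
  if (raw) {
    const cached = JSON.parse(raw) as { fetchedAt: number; data: unknown };
    if (Date.now() - cached.fetchedAt < MAX_AGE_MS) {
      return cached.data; // another script already fetched it recently
    }
  }
  const data = await fetchFresh(); // only one real API request per refresh window
  localStorage.setItem(CACHE_KEY, JSON.stringify({ fetchedAt: Date.now(), data }));
  return data;
}
```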
Let us know if you figure out which one is causing the problem so we can share it with the author or the community at large. Like @viet said, we’re open to feedback on the limits. We want to find the right balance between keeping server load steady (and therefore the responses speedy) and giving people what they need to use the site and the API.