Stroke order images and a JavaScript LZMA decompression utility

This is information for userscript developers.

I have released on GitHub files containing the stroke order SVG images. You may use them in your projects. This is an alternative to querying jisho every time an image is needed.

The data originally comes from Ulrich Apel’s KanjiVG project and is freely available under the Creative Commons Attribution-Share Alike 3.0 license. Jisho used this data to produce the SVG files. Their files are released here and are also available under the Creative Commons Attribution-Share Alike 3.0 license.

I have collected these files into objects indexed by their kanji. These objects are in JSON format and are also available under the Creative Commons Attribution-Share Alike 3.0 license.
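A minimal sketch of what such an object looks like (the kanji and SVG contents shown here are illustrative placeholders, not taken from the real files):

```javascript
// Hypothetical sketch of the JSON layout: each key is a kanji,
// each value is the SVG markup string for its stroke order image.
var strokeOrderSvgImages = {
    "一": "<svg viewBox=\"0 0 109 109\"><!-- stroke paths --></svg>",
    "水": "<svg viewBox=\"0 0 109 109\"><!-- stroke paths --></svg>"
};

// Looking up an image is a plain property access:
var svg = strokeOrderSvgImages["水"];
```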

The original files are too large to be hosted on GitHub, so they are provided in LZMA-compressed format.

  • All_svg.json.compressed - All the SVG files provided by jisho.
  • WK_svg_normal.json.compressed - The SVGs for the kanji taught by WaniKani, at their original height of 109px.
  • WK_svg_smaller.json.compressed - The same as the previous file, but proportionally scaled down to a height of 63px.

The URLs are given in the sample code below.

The SVGs in these files differ from the jisho originals in that the px units are omitted from the viewBox attribute. This change is inconsequential: when units are missing from a viewBox attribute, SVG defaults to px. The change is needed because some JavaScript on WaniKani reacts badly to the presence of units in viewBox attributes and generates error messages on the JavaScript console. Removing the units works around this problem.
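As an illustration, the transformation amounts to something like this (a hypothetical helper for this post, not code from the released files):

```javascript
// Strip "px" units from viewBox attributes, e.g.
// viewBox="0 0 109px 109px" -> viewBox="0 0 109 109"
function stripViewBoxUnits(svgText) {
    return svgText.replace(/viewBox="([^"]*)"/g, function(_, value) {
        return 'viewBox="' + value.replace(/px/g, '') + '"';
    });
}

console.log(stripViewBoxUnits('<svg viewBox="0 0 109px 109px"></svg>'));
// -> <svg viewBox="0 0 109 109"></svg>
```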

In a Tampermonkey userscript you may decompress the files with jcmellado’s js-lzma JavaScript port of LZMA.

You will need @require directives for these two files:

// @require
// @require

LZMA-compressed files are binary files. You can’t load them with the wkof.load_file() function as it stands now, because it only downloads strings. A modified version, together with sample decompression code, is provided below.

Sample code
    // Begin code lifted from wkof core module and adapted to transfer binary data

    function split_list(str) {return str.replace(/^\s+|\s*(,)\s*|\s+$/g, '$1').split(',').filter(function(name) {return (name.length > 0);});}
    function promise(){var a,b,c=new Promise(function(d,e){a=d;b=e;});c.resolve=a;c.reject=b;return c;}

    // Load a file asynchronously, and pass the file as resolved Promise data.
    function load_file(url, use_cache, options) {
        var fetch_promise = promise();
        var no_cache = split_list(localStorage.getItem('wkof.load_file.nocache') || '');
        if (no_cache.indexOf(url) >= 0 || no_cache.indexOf('*') >= 0) use_cache = false;
        if (use_cache === true) {
            return wkof.file_cache.load(url, use_cache).catch(fetch_url);
        } else {
            return fetch_url();
        }

        // Retrieve file from server
        function fetch_url(){
            var request = new XMLHttpRequest();
            request.onreadystatechange = process_result;
  'GET', url, true);
            if (options && options.responseType) request.responseType = options.responseType;
            request.send();
            return fetch_promise;
        }

        // Pass the response to the caller, caching it first if requested
        function process_result(event){
            if ( !== 4) return;
            if ( >= 400 || === 0) return fetch_promise.reject(;
            if (use_cache) {
      ,
                    .then(fetch_promise.resolve.bind(null,;
            } else {
            }
        }
    }
    // End code lifted from wkof core module and adapted to transfer binary data

    // the object that will hold the images
    var strokeOrderSvgImages;

    // function to be invoked in the startup sequence
    function get_stroke_order_file(){
        //let strokeOrderFileName = '';
        let strokeOrderFileName = '';
        //let strokeOrderFileName = '';
        return load_file(strokeOrderFileName, true, {responseType: "arraybuffer"})
            .then(lzmaDecompressAndProcess);
    }

    function lzmaDecompressAndProcess(data){
        let inStream = new LZMA.iStream(data);
        let outStream = LZMA.decompressFile(inStream);
        let string = streamToString(outStream);
        strokeOrderSvgImages = JSON.parse(string);
    }

    // converts the stream to a javascript string
    // the native stream.toString() method of the LZMA
    // script is abysmally slow and a major source of latency
    function streamToString(outStream){
        var buffers = outStream.buffers, charList = [];
        if (window.TextDecoder){
            // TextDecoder supported - do it the fast way
            var decoder = new TextDecoder();
            for (let n = 0, nL = buffers.length; n < nL; n++) {
                charList.push(decoder.decode(buffers[n], {stream: true}));
            }
            charList.push(decoder.decode()); // flush any buffered trailing bytes
        } else {
            // TextDecoder unsupported - do it but a bit slower
            var conv = String.fromCharCode;
            for (let n = 0, nL = buffers.length; n < nL; n++) {
                var buf = buffers[n];
                for (var i = 0, iL = buf.length; i < iL; i++) {
                    charList.push(conv(buf[i]));
                }
            }
        }
        return charList.join('');
    }
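The TextDecoder branch is where the speed comes from, but per-chunk decoding needs care: a multi-byte UTF-8 character can be split across two buffers. As a self-contained illustration (not code from the script above), TextDecoder’s streaming mode joins such split bytes correctly:

```javascript
// Decode an array of Uint8Array chunks into one string.
// {stream: true} tells TextDecoder to hold incomplete multi-byte
// sequences until the next chunk arrives.
function chunksToString(chunks) {
    var decoder = new TextDecoder();
    var parts = [];
    for (var i = 0; i < chunks.length; i++) {
        parts.push(decoder.decode(chunks[i], { stream: true }));
    }
    parts.push(decoder.decode()); // flush any remaining bytes
    return parts.join('');
}

// "水" is E6 B0 B4 in UTF-8; split it across two chunks.
var chunks = [new Uint8Array([0xE6, 0xB0]), new Uint8Array([0xB4])];
console.log(chunksToString(chunks)); // "水"
```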

If you want to use LZMA compression with some other files, be aware that the JavaScript port of LZMA is picky about which compression software produced the file. Python’s lzma module doesn’t work, and the 7-zip packages don’t work either. The lzma utility available here works: it is the lzma.exe file buried in the 7-zip archive. Just copy it to your working directory.


@rfindley I changed my mind again. I will download binary files after all. The files are those mentioned in the top post.

If you plan to upload the userscript to greasyfork, you have to use one of the whitelisted sources for your required scripts, or greasyfork won’t accept it:

// @require
// @require

( is not whitelisted)


I didn’t know this. Thanks for finding the alternate URL. I will update the top post.
