Thank you!
Great idea and amazing AI, really like the concept-art style.
Would be amazing to read or speak Japanese and have those images flick through your mind.
Small note for the AI mnemonic images userscript:
I updated it because it caused the website to halt for a fraction of a second on new lessons while it tried to load the images first. I adjusted it so the loading happens in the background, and the check is no longer noticeable. So, if you’d like, please update it here.
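Roughly, the fix amounts to firing off the image loads asynchronously instead of waiting for them before the lesson renders. Here is a simplified sketch of the idea, not the actual script; `imageUrlForSubject` and `insertIntoMnemonicSection` are made-up stand-ins for whatever the real script uses:

```javascript
// Sketch only: preload mnemonic images without blocking the lesson page.
function preloadMnemonicImage(url) {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(img);   // browser has the image cached now
    img.onerror = () => resolve(null); // missing image: skip it, don't block
    img.src = url;
  });
}

// Fire-and-forget: start the loads without awaiting them, so the lesson UI
// shows up immediately and each image is inserted once it has arrived.
function preloadInBackground(subjectIds) {
  subjectIds.forEach((id) => {
    preloadMnemonicImage(imageUrlForSubject(id)).then((img) => {
      if (img) insertIntoMnemonicSection(id, img); // hypothetical helper
    });
  });
}
```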
P.S. I love the samurai! Although he sadly isn’t missing most of his body like in the mnemonic. lol
That was probably too much to imagine even for an AI
Maybe something like a samurai head and samurai legs would work? Since it handles much more complex ideas, I’d imagine it wouldn’t struggle with this.
Is it possible to save the image from midjourney with an alpha channel?
It would be easy to exchange the background (if there is one).
As a follow-up to this, I have created a Discord server for this thread with the intention of collecting and discussing visual mnemonics to be used in @saraqael’s userscript. I have also invited the MidJourney bot to that server, so you can generate images there directly. But honestly, as long as MidJourney is a paid service, this project is not sustainable imho. It may or may not end this month, but yeah, we’ll see.
Still, everyone is very welcome to join; the invitation link is here. The link will expire in a week, so feel free to reply.
Hey fellow WaniKani Midjourney human! Nice to meet someone with several shared interests! I’ve been loving playing around with this over the weekend, exploring Japanese influences - I hadn’t even considered making up kanji-based illustrations!
Is an alpha channel a layer kind of thing, like when you edit in Photoshop, etc.? If so, afaik there is none. We can only download the image as it is.
Wow, great idea! I’m definitely joining right now.
Sadly, from what I have seen of their service, I think the AI does not create the character/foreground and the background separately but generates the whole image from scratch. Therefore it doesn’t even have a “sense” of what could be background to make transparent in an alpha channel.
Exactly, it is for photoshopping. In rendering software you can usually save the image as a Targa file (I think, haven’t done this for a while), and then the transparency is stored in an extra channel (the alpha channel).
So you can easily just exchange the background with another image without having to select the foreground first.
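For example, if an image did come with an alpha channel (say, a transparent PNG), swapping the background from a script is just a matter of drawing the foreground over a new backdrop. A rough sketch using the browser canvas, not any particular tool:

```javascript
// Rough sketch: composite a foreground image that has an alpha channel
// over a new background image, with no manual selection step.
function swapBackground(foregroundImg, backgroundImg) {
  const canvas = document.createElement('canvas');
  canvas.width = foregroundImg.naturalWidth;
  canvas.height = foregroundImg.naturalHeight;
  const ctx = canvas.getContext('2d');
  // Draw the new backdrop first, scaled to fill the canvas.
  ctx.drawImage(backgroundImg, 0, 0, canvas.width, canvas.height);
  // Then draw the foreground; its alpha channel handles the blending.
  ctx.drawImage(foregroundImg, 0, 0);
  return canvas.toDataURL('image/png');
}
```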
I see; looking at the results, it seems to have an idea of background and foreground, etc.
Yeah, it knows from the training data what each of them looks like but does not render them separately, I think. But I could also be wrong.
It’s more like, it knows what images generally look like. Computers can’t really look at an image and go “oh, this is foreground and that’s background” the way humans can, though. Computer image recognition is closer to “oh, this part is a distinctly different colour from this part”.
Do you know this or is it an assumption?
Because it looks like there is some notion of treating the background differently.
For example, in the image of the gargoyle behind the leaf, the background is blurred.
I guess with recent AI models and algorithms, it kind of knows what the background and foreground are. For example, if you give a prompt like “a picture of X with a background of Y”, it can generate a relatively good image as intended.
I think the WaniKani Wall of Shame Fortune Telling thread would be a good source of inspiration to feed into the Midjourney AI.
There is a tiny leader on Wolverine’s forehead sitting on a stool and commanding his troops to invade Wolverine’s brain.
There is always the problem that details are hard to add, so I just photoshopped the leader onto the stool.
If anyone has a better idea I would be happy to hear about it…
When a spirit lives in a sheep it becomes the most auspicious creature on the planet. Lights shine out of it and the whole world weeps at its beauty. Look at the sheep’s fluffy glowing head. Nothing could be more auspicious and glorious than this.
This is simple, but can someone do the 主 radical?
I’ve already been doing this with Crayon and with DALLE for a while (ref https://www.reddit.com/r/LearnJapanese/comments/vk52d7/new_trick_for_kanji_learning_using_ai_to_generate/) and I think it’s an excellent use of it! I think we should start building up a shared database of all these and make a serious project of this! I know there are already scripts to add images to Wanikani pages.

What is the best way for us to systematically collaborate on these mnemonic images, construct a database that covers all the kanji and vocabulary, and get them inserted into the pages? Who wants to participate? I just got my DALLE 2 access, so I’m chomping at the bit to get this project started!
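For what it’s worth, the shared database could start as something as simple as a JSON manifest that the existing userscripts fetch. Just a sketch to get the discussion going; every name, field, and URL here is made up rather than taken from any existing script:

```javascript
// Made-up example of what one manifest entry could look like:
// one record per WaniKani subject, pointing at the generated image.
const exampleEntry = {
  subject: "主",                                          // kanji, radical, or vocab item
  type: "radical",
  imageUrl: "https://example.com/mnemonics/master.png",   // placeholder host
  prompt: "the Midjourney/DALL-E prompt used",            // placeholder text
  generator: "Midjourney",
  author: "forum-username",
};

// A userscript could fetch the manifest once and index it by subject:
async function loadManifest(url) {
  const response = await fetch(url);
  const entries = await response.json(); // array of entries like the one above
  return new Map(entries.map((e) => [e.subject, e]));
}
```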