Fragment from the production of Language Arts, a series of generative adversarial networks trained to interpolate a constrained visual space of text.
Font: Code2000 (an old freeware font with good Unicode coverage).
Architecture: stylegan2-pytorch, with attention, AMP, and augmentation.
Script: I rebuilt the script from scratch because it was the worst Python code I’ve literally ever worked with. And I’ve worked with my own code.
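For context, here's a minimal sketch of what two of those pieces look like in plain PyTorch: AMP via torch.cuda.amp, and augmentation applied to images before the discriminator ever sees them. This is not the rebuilt script; the networks, the loss, and the flip-only augment are stand-ins, and the attention layers live inside G and D so they don't show up in the step itself.

```python
import torch
import torch.nn.functional as F

def augment(images, prob=0.25):
    # stand-in for the real augmentation pipeline: random horizontal flip on an NCHW batch
    if torch.rand(()).item() < prob:
        images = torch.flip(images, dims=[3])
    return images

def discriminator_step(D, G, reals, latent_dim, opt_d, scaler, aug_prob=0.25):
    # one AMP-enabled discriminator update; both reals and fakes get augmented
    opt_d.zero_grad(set_to_none=True)
    z = torch.randn(reals.size(0), latent_dim, device=reals.device)
    with torch.cuda.amp.autocast():
        fakes = G(z).detach()
        d_real = D(augment(reals, aug_prob))
        d_fake = D(augment(fakes, aug_prob))
        # non-saturating GAN loss for the discriminator
        loss = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    scaler.scale(loss).backward()   # loss scaling keeps fp16 gradients from underflowing
    scaler.step(opt_d)
    scaler.update()
    return loss.item()
```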
Oh, this is 7 hours of training btw.
Haha, I found how to slow it down and now it’s l i q u i d i n k.
36 hours of training. Slowed to 1/4 speed compared to the one in the OP.
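For anyone who wants the mechanics: "slowing it down" just means sampling the latent walk more densely, so the same two keyframe latents get more in-between frames. A sketch, assuming G is a trained generator that maps a latent vector to an image; the function name and step counts are made up for illustration.

```python
import torch

def interpolation_frames(G, z_a, z_b, steps=60, speed=0.25):
    # speed=0.25 -> four times as many frames between the same two latents,
    # which is what turns the motion into slow liquid ink
    n = max(2, int(steps / speed))
    for t in torch.linspace(0.0, 1.0, n):
        z = torch.lerp(z_a, z_b, t)      # straight-line walk in latent space
        with torch.no_grad():
            yield G(z.unsqueeze(0))      # one frame per interpolated latent
```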
57 hours. Halfway through the first training session, but it should go faster from here because I adjusted some parameters (back to the defaults lol).
62-ish hours. The shapes are now so pretty that I just want to watch one at a time.
100 hours. It keeps learning new letters, but it’s also developing a bunch of “whitespace” modes. I think this is from the image augmentation? Maybe I will turn that off to kind of polish it here at the end of the first training run.
It worked! At least, it seems like it’s producing fewer whitespace characters. But I do kind of like the whitespace in the interpolations. It looks like a blank page with magic ink appearing on it. GF says this one is “Soothing.”
It wasn’t just the image augmentation. There were like five images in there that were literally whitespace, and quite a few that were super tiny. So I removed the whitespace and like 600 little punctuation marks; hopefully now it’ll learn some of the more complex characters.
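For anyone curious how the cleanup went, it was roughly this; a sketch rather than the exact script. It assumes the glyphs are rendered as dark-ink-on-white PNGs in a folder, and the directory names and the 0.5% ink threshold are illustrative, not the real numbers.

```python
from pathlib import Path
import numpy as np
from PIL import Image

DATASET_DIR = Path("glyphs")            # rendered glyph images (assumed location)
REJECT_DIR = Path("glyphs_rejected")    # set rejects aside instead of deleting them
REJECT_DIR.mkdir(exist_ok=True)
MIN_INK_FRACTION = 0.005                # below ~0.5% dark pixels: whitespace or tiny punctuation

for path in sorted(DATASET_DIR.glob("*.png")):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    ink_fraction = float((gray < 0.5).mean())   # fraction of pixels darker than mid-gray
    if ink_fraction < MIN_INK_FRACTION:
        path.rename(REJECT_DIR / path.name)
```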
This is definitely working: more complicated characters, less whitespace. The color blobs and noise textures are because I’ve caught it at a phase where the generator is learning something new. 10 hrs of training (with a high gradient-accumulate-every setting) on top of the old net.
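Since I keep mentioning it: gradient-accumulate-every just means summing gradients over several mini-batches before each optimizer step, so a small-VRAM card behaves as if it trained on a bigger batch. A generic sketch, where model, loss_fn, and batches are placeholders rather than the real training loop.

```python
import torch

def accumulated_step(model, loss_fn, batches, optimizer, accumulate_every=8):
    # run several mini-batches, then take one optimizer step on the summed gradients
    optimizer.zero_grad(set_to_none=True)
    for i, batch in enumerate(batches):
        loss = loss_fn(model, batch) / accumulate_every   # scale so summed grads average out
        loss.backward()
        if (i + 1) % accumulate_every == 0:
            optimizer.step()                              # one update per accumulate_every batches
            optimizer.zero_grad(set_to_none=True)
```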