With the animation I’m working on I want some help breaking down the
lip movements that Bamboo is going to have to make while talking.
I saw that Synfig has support for loading Papagayo lip sync files
so I took a look at the 2.0, C++ version of the project and saw it was quite abandoned.
I decided to incorporate a bunch of the great fixes to the software over
the years, as well as a tweak of my own, to the all-the-fixes
branch on my forked repo. And, to make sure it’s working as
intended, I created a lip sync from my animatic audio in the forked version:
…and loaded it into Synfig:
I’m not looking to become a hardcore maintainer of Papagayo. I just want it
working well enough for my animation work. If you want to help keep it going,
pull requests are best!
Update: I found the Python-based 1.x fork called Papagayo-NG
which seems better maintained and has AppImages. I may try this one out
as well. The project is also more active
than the original one.
So you want to do hand-drawn animation on Linux and/or entirely with
Free Open Source Software, and that animation involves lip syncing? Well,
here are some notes as I work on my first decent-sized animation in Synfig/Krita/Kdenlive
and get my pipeline ready for smooth production work on future animations.
Get the latest development version
if you’re on Ubuntu: the version (as of 2020-06-03) in the Ubuntu repos
has a bug where you can’t enter 0 for the angle of a rotation,
and you also can’t use Ctrl-A to select the contents of a field in a
group’s Properties. I’ll risk any other weirdness in the app to be
able to do both of those things.
If you want better (imho) Papagayo import that doesn’t forcibly put a
rest at the end of every word, grab this branch
and build it. The build process for Synfig is very very nice, by the way! I
wish all large apps were this smooth to build and rebuild.
You should try out this version of Papagayo, cobbled together from various
forks over the years, that is fast and supports single-frame words and has other
nice quality of life improvements. Please help me get it building on other platforms!
If you’re running a modern Ubuntu and using the Synfig AppImage,
you’re probably gonna get a ton of fontconfig
errors and the UI will look weird. Lots of errors mentioning “unknown element.”
This happens to me with Krita as well on another machine when
run via AppImage, though the fix below did not help Krita there.
The solution is to
run the AppImage and provide the system fontconfig libraries via LD_PRELOAD.
Hunt down the two libraries in /usr/lib, point LD_PRELOAD at the paths you
find, and the fonts will look all pretty and stuff.
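For illustration, the invocation ends up looking something like the line below. The library names and paths are my best guess for a 64-bit Ubuntu, and the AppImage filename is a placeholder, so swap in whatever you actually find on your system:

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libfontconfig.so.1:/usr/lib/x86_64-linux-gnu/libfreetype.so.6 ./SynfigStudio.AppImage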
Or build Synfig from source and you won’t have this issue.
Inkscape .sif export is not well supported, even for things like
rectangles. Synfig also doesn’t seem to like Inkscape SVG files.
This is fine, as I’m doing titling in Kdenlive and only importing
hand-drawn bitmaps from Krita into Synfig.
I was going to do the whole thing in Synfig, but I found a combination of
Krita, Synfig, and Kdenlive is what you’ll probably want. I envision a
much larger, more involved blog post/video series once my Papagayo
fix is in for Synfig. This also means separate Synfig files for, say,
camera distances from things, so you don’t need to worry as much about
RAM usage in Synfig, as you’re making a lot of smaller scenes (exported
as HuffYUV-compressed MP4 files) and assembling them in Kdenlive.
By default, importing a bitmap is…wrong. The dimensions are all messed up.
Edit > Preferences > Editing > Imported Image > Scale to fit canvas
is what you want. This means you’re importing everything at full size, so
export everything at full size from Krita. The exports are compressed PNG
files, and disk space is cheap nowadays.
It looks like you can easily build at a lower resolution, say 640x360, and
then export to a larger resolution, say 2560x1440, and everything scales
as it should. As long as the assets are high enough quality, of course.
This pipeline works as expected because Synfig is storing references to
imported images and not the whole image data:
Export rough animation frames in Krita
Import frames as image sequence into Synfig
Export cleaned up animation frames with same names in Krita
Reopen the Synfig file
The new, cleaned up frames will appear in Synfig
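If you peek inside the .sif file, you can see why this works: an imported bitmap is just an import layer pointing at a filename. The exact attributes vary by Synfig version, and the filename and layer description here are made up, but the relevant part looks roughly like this:

<layer type="import" active="true" desc="frame_0001">
  <param name="filename">
    <string>frames/frame_0001.png</string>
  </param>
</layer>

Since only the path is stored, re-exporting a frame with the same name from Krita is enough for Synfig to pick up the new art when the file is reopened.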
Only put one voice into a Papagayo file, otherwise
none of the voices will appear when imported into Synfig.
It seems like it’s best to split the audio files into
one clip per Papagayo file per character pose. In one scene,
Bamboo will have two body/face positions, so I’ll need
two clips and two Papagayo files.
The parts you need for your lip sync from Papagayo are
listed on this lip sync chart,
with a few changes:
L is L, Th
etc is C, D, …
Which means you’ll, at most, need art for: AI, O, E, U, L, WQ, MBP, FV, etc, rest
The best way I found to make the mouth parts and export them in a way that
makes Synfig’s handling of them less of a pain is:
Create a new group layer above the layer where the character’s head
is located.
Draw each mouth, in a new layer, with the layer named after the part.
The exporter can also export group layers, with visible layers merged down, so
my standard inks -> shading -> colors layer setup will work just fine.
Import the Papagayo file into Synfig.
This is actually a link! The frames are reloaded if the Papagayo file is updated.
(or if you modify Synfig’s source code to process a Papagayo file differently!)
Import each mouth image individually, and drag the imported image into the appropriate
folder in the Papagayo group object.
If the Papagayo file isn’t openable in the Layer docker in Synfig, put the file in a
Group, then remove it from the Group. I tend to put everything imported in a Group now.
BOOM, mouth animation.
Since Synfig supports Python plugins that work by
manipulating a temporary Synfig (XML) document
during an import, I’ll figure out a way to import all of the mouth patterns for a
Papagayo file somehow. Or, I’ll just make a Python or Ruby script to modify the
SIF file and put in the links via the command line. Yay XML! :(
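As a starting point for that kind of script, here’s a bare-bones sketch that walks a .sif file with Python’s standard XML tools and lists its layers. It assumes layers show up as layer elements with type and desc attributes under the root canvas element (treat those names as assumptions and adjust if your files differ); actually inserting the image links would build on this traversal:

import sys
import xml.etree.ElementTree as ET

# Parse the .sif file given on the command line; .sif is plain XML.
tree = ET.parse(sys.argv[1])
canvas = tree.getroot()

# Walk every layer, including ones nested inside group canvases,
# and print its type and description so you can spot the Papagayo groups.
for layer in canvas.iter("layer"):
    print(layer.get("type"), "-", layer.get("desc", "(no description)"))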
Below is a Python script to run in Scripter in Krita to create layers for each mouth pattern
used in a Papagayo file. You only get layers for the mouth patterns you’re
actually using, which will save on drawing.
Select a Group layer where the new layers should be deposited and run it. This sets
up the layers I will typically use for an art piece: Pencils, Colors,
Shading (set to the Multiply blending mode), and Inks. This works fine in Krita 4.3.0:
from PyQt5.QtWidgets import QFileDialog
from krita import Krita

KID = Krita.instance().activeDocument()
active = KID.activeNode()

file, _ = QFileDialog.getOpenFileName(None, "Create phoneme layers for Papagayo file", "", "Papagayo File (*.pgo)")

# rest is the default when a frame doesn't have a specified phoneme
phonemes = ["rest"]

with open(file) as f:
    for line in f:
        line = line.rstrip()

        # Phoneme entries are the lines indented with four tabs: "<frame> <phoneme>"
        if line.startswith("\t\t\t\t"):
            _, phoneme = line.split(" ")

            if phoneme not in phonemes:
                phonemes.append(phoneme)

# Create a group per phoneme under the selected layer, each with my usual
# Pencils/Colors/Shading/Inks paint layers inside.
for name in phonemes:
    layer = KID.createGroupLayer(name)
    active.addChildNode(layer, None)

    for artName in ["Pencils", "Colors", "Shading", "Inks"]:
        artLayer = KID.createNode(artName, "paintlayer")

        if artName == "Shading":
            artLayer.setBlendingMode("multiply")

        layer.addChildNode(artLayer, None)
All in all, the process was pretty decent. It took some time to figure out
the Synfig way of doing things, but once I got that, the actual pipeline of
Krita -> Synfig -> Kdenlive was very smooth. Expect something way more
substantial describing the process in the coming months.
I host all my sites on Sandstorm using Hugo
and hugo-sandstorm. This is super
convenient for a couple reasons:
I don’t have to worry about deploys for new or existing sites, a git push
takes care of it.
I don’t need to architect the git push setup, the app does that for me.
I can easily run the blog locally for testing.
Because of sandcats,
each site magically gets a subdomain I can point at, either with a CNAME or
a proxy like HAProxy or nginx.
However, there are limitations:
No 404 pages.
No easy way to use your own HTTPS certificates like Let’s Encrypt ones.
I want to clamp down on subdomains so www.johnbintz.com redirects properly.
I was solving the latter two for a while with HAProxy, but after setting up the
site for The Industrious Rabbit and
realizing I wanted to shift pages and sections
around as the dust settles on this hit new comic, I wanted 404 pages
so folks could still find what they were looking for.
Here’s the nginx config I eventually came up with. You’ll still have to
put the site’s public ID as a TXT record into your domain name as
indicated in your static config setup, and make sure the Host header is
sent along correctly so Sandstorm can do the DNS lookup correctly:
# redirect http to https without wildcards
server {
    listen 80;
    server_name ~^.*\.johnbintz.com$;

    return 301 https://johnbintz.com$request_uri;
}

# serve subdomainless from sandstorm
server {
    listen 443 ssl;

    # let's encrypt certificates go here

    server_name ~^johnbintz.com$;

    location / {
        proxy_set_header Host johnbintz.com;
        proxy_pass https://the-sandcats-url-sandstorm-static-publishing-gives-you;
        proxy_intercept_errors on;
        error_page 404 /404/;
    }
}
Then, if you’re using Hugo, create content/404.md with the contents of
your 404 page. In the event of a missing page, the user will be handed the content
of this 404 page along with an HTTP 404 Not Found status, so search engines will
do the right thing, and it’s way better than the blank Cannot GET /this-page-does-not-exist
page you get normally with Sandstorm static publishing.
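A minimal content/404.md can be as simple as the sketch below. The title and body text are just examples, and the url front matter value is my way of pinning the rendered page to /404/ so the nginx error_page above can find it; check how your theme renders single pages:

---
title: "Page not found"
url: "/404/"
---

Sorry, that page has moved or doesn't exist anymore. Head back to the front page to find what you're looking for.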
I love the board game Root. It’s my favorite board game of all time.
I love the RPG, too. I’ve been working on making a one-page
player guide I use when I teach the game. You should buy the game, then
use my player guide for when you want to help teach the game faster.
I’m building a streaming/gaming PC and got Windows 10 Home because some games
are just not fans of Wine yet. :( Making a USB flash drive to install
Windows 10 from, however, was tricky, and it was all due to the size of the
installer file Windows 10 uses, install.wim. In recent Windows 10
ISOs it’s larger than 4GB, so it won’t fit onto a FAT32 filesystem.
FAT32 filesystems are easily booted and read by most motherboards,
and in order to use these larger ISOs, there’s an NTFS shim you can install.
With my combo of an older MSI motherboard,
WoeUSB
with
that shim from the Rufus project,
the ISO for Windows 10 1909, and a perfectly
acceptable 16 GB USB flash drive that I’ve used for countless
Linux installs, I could make the installer boot, but it would fail on
install.wim with no details about what was going on except error
code 0x8007000D. This was with both Legacy Boot and Secure Boot enabled,
and the file size on the USB drive matched the file size in the mounted
ISO.
I forgot how much fun it is to work with Windows.
After trying to use either WoeUSB’s GUI or command line with the 1909
ISO, I decided to hunt down an older ISO where install.wim is smaller
than 4GB. I ended up getting the 1709 ISO from this site
so I could build the USB drive using a FAT32 filesystem.
It installed from there, and after logging in Windows Update seems happy
and it activated and everything, so I guess it’s safe and OK?
Microsoft only offers the absolute latest ISO on their site, or I could’ve ordered Windows
on a USB drive, but I’ve been burning Linux images for years now with
no problems, so how hard could this have been?
I’m also wondering why Microsoft can’t, like, make install.wim smaller.
Does it all need to be in one big file? (no). Is this a way to get folks to
upgrade to newer computers that can better handle the NTFS boot
process? (probably). Would the Microsoft media manager tool have magically
worked right, despite this being, like, a solved problem
technically? (also probably) Do they just have a ton of old USB drives from
conferences they’re trying to unload on us who want to build after-market
PCs to legally install Windows on? (this is the likeliest scenario)