The problem with machine-learning-style animation generation is that it takes a long time. Quote from the paper (https://www.goatstream.com/research/papers/SA2013/SA2013.pdf): “On a standard PC, optimization time takes between 2 and 12 hours”. Yeah, we are not going to show the player a progress bar for 12 hours after they make edits to their creature.
I’m not sure how feasible it is to make a library of premade body types and animations for how they move. It’s very unlikely that a player-made creature would exactly match any of them, so there needs to be some code that maps the library animations onto the currently made creature as well as possible. And it needs to be fast. That neural network for blending animations seems very nice, but training it on a character probably takes a long time (so it is only useful for premade characters).
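To make that concrete, here’s a minimal sketch of what such a mapping could look like. This is just an illustration, not a worked-out design: `BodyPlan`, the feature list, and the weights are all made up by me, and a real version would need far richer shape features than this.

```python
from dataclasses import dataclass

@dataclass
class BodyPlan:
    """Crude feature vector describing a creature's overall shape.
    These features are placeholders; a real version would need many more."""
    leg_pairs: int
    leg_length: float   # relative to body length
    body_length: float
    has_tail: bool

def plan_distance(a: BodyPlan, b: BodyPlan) -> float:
    """Lower is better. The weights here are guesses that would need tuning."""
    return (
        3.0 * abs(a.leg_pairs - b.leg_pairs)
        + 1.0 * abs(a.leg_length - b.leg_length)
        + 0.5 * abs(a.body_length - b.body_length)
        + 1.0 * (a.has_tail != b.has_tail)
    )

def closest_premade(creature: BodyPlan, library: dict[str, BodyPlan]) -> str:
    """Pick the premade body type whose animations we retarget from."""
    return min(library, key=lambda name: plan_distance(creature, library[name]))

library = {
    "quadruped": BodyPlan(leg_pairs=2, leg_length=0.8, body_length=1.0, has_tail=True),
    "hexapod":   BodyPlan(leg_pairs=3, leg_length=0.5, body_length=1.2, has_tail=False),
    "biped":     BodyPlan(leg_pairs=1, leg_length=1.1, body_length=0.7, has_tail=False),
}

player_creature = BodyPlan(leg_pairs=2, leg_length=0.7, body_length=1.1, has_tail=False)
print(closest_premade(player_creature, library))  # -> "quadruped"
```

In practice picking a single best match probably isn’t enough; we’d likely want the top few matches so a blending step has something to work with.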
This is also a classic case of people making fancy videos and/or papers WITHOUT PUBLISHING ANY SOURCE CODE. It just drives me wild. We can’t replicate that work without major, major effort: we’d have to understand the underlying concepts from their explanation alone and work from that to build up a similar solution.
So for us the most likely thing to work is to have a set of basic animations and a fairly complicated algorithm that compares the creature the player made against them and blends together parts of the different animations to end up with a good-looking animation set for the made creature. And again, it needs to be fast: it should take just a couple of seconds.
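As a very rough sketch of the blending step (building on the `plan_distance()` idea above), we could weight the nearest library animations by inverse distance. Again, everything here is invented for illustration: real tracks would be per-bone transforms blended with something like quaternion slerp, and establishing limb-to-limb correspondence between different rigs is the genuinely hard part.

```python
def blend_keyframes(anim_a: list[float], anim_b: list[float],
                    dist_a: float, dist_b: float) -> list[float]:
    """Blend two animation tracks, weighting the closer body plan more.
    Here a 'track' is just a list of joint angles per frame; a real
    implementation would blend per-bone rotations instead."""
    eps = 1e-6  # avoid division by zero on an exact match
    wa = 1.0 / (dist_a + eps)
    wb = 1.0 / (dist_b + eps)
    total = wa + wb
    wa, wb = wa / total, wb / total
    return [wa * a + wb * b for a, b in zip(anim_a, anim_b)]

# Hypothetical walk-cycle hip angles (degrees) from two premade rigs,
# assumed to have the same frame count and matching limbs:
quadruped_walk = [10.0, 25.0, 40.0, 25.0, 10.0]
hexapod_walk   = [ 5.0, 15.0, 30.0, 15.0,  5.0]

# Distances as computed by plan_distance() in the earlier sketch:
blended = blend_keyframes(quadruped_walk, hexapod_walk, dist_a=1.15, dist_b=3.25)
print([round(x, 1) for x in blended])
```

The blending math itself is trivially fast (a weighted average per keyframe), so the couple-of-seconds budget would be spent on the correspondence and cleanup work, not on this part.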