Aware stage movement and animation

I know we won’t be allowed to make the animations for our animals ourselves, but will we be able to pick certain movement animations? For example, instead of a normal walking animation, have the creature hop like a frog or kangaroo, or instead of chewing, swallow something whole. Other options could include mating rituals, fighting (e.g. using horns instead of the mouth), or communication.

Other mechanics that could be implemented include having young follow their parents or go off on their own, and hunting in packs, alone, or by ambush.

I think it’s way too general to have a thread called “future gameplay mechanics”. So I renamed this thread.

I think these are good ideas, but I don’t know whether they would be too hard to implement. I also think it would be better if the animation were chosen by the game rather than the player, based on the type of creature and its stats. A frog-legged creature would then hop on its own.

I think this might be incredibly hard to implement, but might be worth it in the long run (plus the aware stage is still years away). Hear me out.
We might use machine learning like this one either to generate at least the keyframes for every animation on the fly, OR to create a library based on body types and then assign your creature the type it most resembles. Both have benefits and downsides.
When generating the animations on the fly, the result would look much more natural and the creature wouldn’t feel weightless, too heavy, or just too silly, but I doubt it would be a quick process.
On the other hand, if we create a library of hundreds of body types and body shapes, each with their assigned keyframes, the result might sometimes look a bit off, but at least it wouldn’t take ages. The bigger problem would be if someone makes a creature whose body type nobody thought of. Yes, the library could just get updated, but that doesn’t seem right.
Once we have the keyframes, the animation should also be seamless and not look too clunky. Of course, we could have an animation for every single movement and every single situation, or we could take a different approach. I was absolutely amazed by this video and how seamless the animation is with so few keyframes. Another example would be this video, though it did not amaze me as much as the first one did.
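Just to illustrate what I mean by getting away with few keyframes, here is a rough, purely illustrative sketch (one joint angle, made-up numbers, nothing engine-specific) of interpolating the in-between frames with easing:

```python
def smoothstep(t):
    """Ease in and out so the limb doesn't start or stop abruptly."""
    return t * t * (3.0 - 2.0 * t)

def sample_pose(keyframes, time):
    """keyframes: list of (time, joint_angle_degrees) sorted by time."""
    # Clamp outside the animated range.
    if time <= keyframes[0][0]:
        return keyframes[0][1]
    if time >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the two keyframes we are between and blend them.
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= time <= t1:
            t = smoothstep((time - t0) / (t1 - t0))
            return a0 + (a1 - a0) * t

# Four keyframes describe a whole leg swing; every other frame is interpolated.
leg_swing = [(0.0, -30.0), (0.4, 25.0), (0.7, 35.0), (1.0, -30.0)]
print(sample_pose(leg_swing, 0.55))  # 30.0, an in-between pose with no keyframe
```

The same idea extends to every joint; the point is only that sparse keyframes plus decent interpolation can already look smooth.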
What do you think? And would the first approach even be possible? (that last question is probably for @hhyyrylainen)

The problem with machine-learning-based animation generation is that it takes a long time. Quote from the paper (https://www.goatstream.com/research/papers/SA2013/SA2013.pdf): “On a standard PC, optimization time takes between 2 and 12 hours.” Yeah, we are not going to show the player a progress bar for 12 hours after they make edits to their creature.

I’m not sure how feasible it is to make a library of premade body types and how they move. It’s very unlikely the player would make something that matches exactly, so either way there needs to be code that maps the animations onto the currently made creature as well as possible. And it needs to be fast. That neural network for blending animations seems very nice, but training it on a character probably takes a long time (so it is only useful for premade characters).

This is also a classic case of people making fancy videos and/or papers WITHOUT PUBLISHING ANY SOURCE CODE. It just drives me wild. We can’t replicate that work without major, major effort in understanding the underlying concepts from their explanation and then building up a similar solution from that.

So the approach most likely to work for us is a set of basic animations and a really complicated algorithm that compares the player’s creature against them and blends together parts of the different animations to end up with a good-looking animation set for that creature. Additionally it needs to be fast; it should take just a couple of seconds.
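As a very rough sketch of the blending part (all names and numbers here are made up, this is not a concrete design), the same joint could be sampled from several premade cycles and blended by how closely the creature matches each one:

```python
def blend_pose(poses, weights):
    """poses: list of {joint_name: angle}, one per premade animation;
    weights: how strongly each animation applies to this creature."""
    total = sum(weights)
    return {
        joint: sum(p[joint] * w for p, w in zip(poses, weights)) / total
        for joint in poses[0]
    }

# Two premade walk cycles sampled at the same moment of their cycle.
stocky_walk = {"hip": 20.0, "knee": -35.0, "ankle": 10.0}
lanky_walk = {"hip": 35.0, "knee": -50.0, "ankle": 5.0}

# The player's creature is judged 70 % "stocky" and 30 % "lanky".
print(blend_pose([stocky_walk, lanky_walk], [0.7, 0.3]))
# {'hip': 24.5, 'knee': -39.5, 'ankle': 8.5}
```

In a real implementation the rotations would be blended as quaternions and the weights would likely differ per body part, but the principle is the same.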


Seems like I underestimated the size of the problem. Enormously. I was guessing the machine learning would be slow, but my guess was “yeah, maybe a few minutes”. Boy, was I wrong! As for the library, it would not have to match perfectly, just find the closest match: for example dozens of bipeds grouped by their body proportions, dozens of quadrupeds grouped by theirs, and so on. Maybe compare the sizes of certain body parts and, based on those ratios, assign the creature the most fitting model (imagine taking all the cubes the creatures in the video are composed of and ordering them by size)? I’m really not sure, and perhaps I’m just beating a dead horse now, as I have no experience with any of this.
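Something like this toy sketch is what I am imagining for the “order the cubes by size” comparison (all numbers and names invented):

```python
def proportions(part_sizes):
    """Describe a creature by its part sizes, largest first, relative to the biggest."""
    sizes = sorted(part_sizes, reverse=True)
    return [s / sizes[0] for s in sizes]

def difference(a, b):
    # Pad the shorter list with zeros so a missing part counts against the match.
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

LIBRARY = {
    "quadruped": proportions([8.0, 2.0, 2.0, 2.0, 2.0, 1.0]),  # torso, four legs, head
    "biped": proportions([6.0, 3.0, 3.0, 2.0]),                # torso, two legs, head
}

player_creature = proportions([7.0, 2.5, 2.4, 2.4, 2.3, 1.2])
best = min(LIBRARY, key=lambda name: difference(LIBRARY[name], player_creature))
print(best)  # quadruped
```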

Maybe a combination of the two: have a library, detect the closest match, then modify it slightly.
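For example (again only a toy sketch with made-up numbers), the matched walk cycle could be retimed from the creature’s actual leg length:

```python
def adapt_walk(template, creature_leg_length):
    """Reuse the matched template's walk cycle, retimed for the creature's legs."""
    ratio = creature_leg_length / template["leg_length"]
    return {
        # Longer legs: longer strides and a slower cadence.
        "stride_length": template["stride_length"] * ratio,
        "steps_per_second": template["steps_per_second"] / ratio,
    }

quadruped_walk = {"leg_length": 1.0, "stride_length": 0.8, "steps_per_second": 2.0}
print(adapt_walk(quadruped_walk, creature_leg_length=2.0))
# {'stride_length': 1.6, 'steps_per_second': 1.0}
```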