A weird, worthless ramble about reproduction, and thus the gameplay loop far down the line

I’ve never been on a forum in my life, so sorry for all the etiquette things I screw up, and sorry for this strange ramble that I simply had to do.

I don’t like thinking too far into the future with this game, because it only leads to disappointment, but seeing the techyist demoyist tech demo macroscopic phase got me thinking about the gameplay loop down the line.

So as we have it now, your multicellular organism buds off a little baby cell, and it matures and buds just like its “parent”. I can clearly see this continuing on into complex life, with sexual reproduction adding on top of it the need to find a mate after maturing. So basically: be born, grow up, find hot gorl, [censored], be born, and you are back at the start. This will be the basic gameplay loop, and it does seem fun, because as a child you are on the back foot and it’s harder to survive than as an adult. Besides, if you played as an adult for most of the time, your goal would be either too close or too blurry.

BUT WAIT! What about paternal and maternal animals once you reach the aware stage and such? If you play as the baby, you get everything handed to you, which isn’t fun. So what should happen instead? Well, just think about it! Be the parent! It sounds so fun: you have to take care of yourself, but also your offspring. The main challenge would not come from trying to survive in the world as something not fully grown; instead, you would be burdened with some stupid AI that eats, eats, eats, and runs into trouble. Once the child is fully grown, you become it and repeat the cycle. It sounds like a really good gameplay loop, honestly.

Though the question arises of where exactly the evolution would take place and whom it would affect. If you have a baby and evolve your species then, would it affect both the parent and the child, or just the child? If it affects the parent and the child… why? It makes straight up no sense. But if it only affects the child, then evolving seems like too long of an investment, as it would take one (1) gameplay loop (singular) before you actually play the new version of your creature. Alternatively, it could happen once the child grows up, and to that I say: “What the belgium are you talking about? That makes absolutely no sense; what about the model of the child while it was being raised? This logic is full of holes!”

I don’t really know what I’m saying. I hope you liked this quick glimpse into my twisted and messed up brain that would make a normal person go insane. Also, go ahead and say anything you have to add; this is a lovely thought and I would love to expand it with yours.

Welcome to the forums. Please read the rules. We have a rule about avoiding cursing; I helped you out by editing your post, but please keep that in mind in the future.

Well I think a bigger problem is the game design. Escort missions are rarely fun in games, and having to play both as the baby and the parent would increase the length of the gameplay cycles, which could make the game feel too long.


I think this is definitely something to be considered for those stages. There should be the ability for the player to determine their species’ reproduction and gestation method. This would include how many offspring they produce, and whether they are hidden away, scattered to the winds (possibly literally!), or nurtured. If the offspring are always nurtured, this would be because they are unable to survive on their own. In this case, the gameplay cycle would have to involve interaction between parent and child. This would mean either:

  • the player starting the session with very little control, then eventually being mobile and functional enough to not require their parent
  • staying with the child after gestation and only getting to the editor once the child has successfully grown, or
  • cutting the infancy section out of the gameplay, possibly showing a cut scene of the creature’s development.

As for evolving the creatures, this is something that happens over a very long timescale, rather than the species completely changing the next time you reproduce, so in that regard it makes no difference when the editor is entered.


The children’s behavior would evolve too, right, if NPC behaviors are created with evolutionary algorithms? A child would learn to not jump off a cliff.

The parent and the child are the same species, I don’t get what you mean.

It could be a smaller version of the adult.[1] More than one form can be designed for metamorphosis, but are children really that different from adults?

I liked the second option. But they aren’t mutually exclusive either. You could have very little control and, since there is nothing to do, skip time forward like in sessile gameplay, which is similar to a cutscene.

  1. My suggestion was keeping the body of the ancestor and using it as the child. So if you look like a mouse and make an edit so that you start to look like a human, the immature form would still look like a mouse; the change only happens in the adult form. You can keep adding changes, or make an atavism: delete the latest changes of the adult and edit the young version.
    We look like our ancestors in the womb; we have tails and such. But human babies don’t look like mice, so the similarities end after birth, when looks start to matter. My suggestion wasn’t good ↩︎

Way too slow to be used in Thrive. If you’ve seen those videos on YouTube where the AI learns to walk or do some other simple task, they always take dozens of generations, with likely an hour or more of simulation. That’s simply so much slower than what we need for Thrive that there is no way to even consider using this approach.
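To make the generation-count argument concrete, here is a minimal toy genetic algorithm in Python (all names invented for this sketch; this is not Thrive code and the task is deliberately trivial): it evolves bit lists toward a fixed target, and even this tiny problem takes a fair number of generations.

```python
import random

def evolve(target, pop_size=20, mutation_rate=0.05, max_generations=10_000, seed=1):
    """Toy genetic algorithm: evolve bit lists until one matches `target`.

    Fitness is the number of matching bits; each generation keeps the best
    half of the population and refills it with mutated copies of the
    survivors. Returns the number of generations needed for a perfect match.
    """
    rng = random.Random(seed)
    n = len(target)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(individual):
        return sum(a == b for a, b in zip(individual, target))

    for generation in range(max_generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == n:
            return generation
        survivors = population[: pop_size // 2]
        children = [[bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in parent]
                    for parent in survivors]
        population = survivors + children
    return max_generations

generations = evolve([1] * 32)
print(generations)  # typically tens of generations, even for this toy task
```

A real behavior (walking, hunting) has a vastly larger search space than 32 bits, and each fitness evaluation means simulating the creature, which is where the “hour or more per generation” cost comes from.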

But what about only feeding it useful data (this is not for NPCs) and starting from pretrained networks? Wouldn’t those reduce the training time? In the case of pretrained networks, the time needed is zero for some behaviors.

Well we could do pretrained neural networks, but I assume when most people mention wanting neural networks, they’d want the game to create new neural networks just for their species. From an end user perspective manually programmed AI code and a pretrained neural network would not be that different in practice. Though, manually coded AI would likely be more flexible to work for more kinds of creatures, which is why I think that should be our primary focus / the first thing we attempt for the AI.


Because it is very hard to make an AI that is able to adapt to changing conditions without any further training.

For example see this video:

It quite clearly shows how, each time the task is changed, the AI has no clue what to do and needs extra training (though less than before) to start getting good at the modified task. In image recognition machine learning, training an AI too much is called overfitting: the AI is only good at working with the exact dataset it was given.
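Overfitting can be shown with a tiny self-contained example: give a model enough capacity to memorize its training points exactly, and it becomes useless between them. A pure-Python sketch using polynomial interpolation (the classic Runge setup; this is only an analogy for the memorization effect, not a real ML pipeline):

```python
def interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial through points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# "Training set": 11 evenly spaced samples of the target 1 / (1 + x^2).
xs = [-5 + i for i in range(11)]
ys = [1 / (1 + x * x) for x in xs]

# The model gets every training point exactly right...
train_error = max(abs(interpolate(xs, ys, x) - y) for x, y in zip(xs, ys))

# ...but fails badly on "test" points between the training points.
test_error = max(abs(interpolate(xs, ys, x) - 1 / (1 + x * x))
                 for x in [-4.5, -3.5, 3.5, 4.5])

print(train_error)  # essentially zero: the training data is memorized
print(test_error)   # large: useless away from the memorized points
```

The parallel to the video: the AI “memorizes” its exact training map, and the moment the task moves off those memorized points, performance collapses.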

what about a combination between genetic algorithm and a self learning ai

Genetic algorithms are a subset of learning AI systems…

i meant a genetic algorithm where the individuals can modify their own AI code based on what makes them perform better and pass it on to the next generation if they reproduce

That is still slow, i.e. it requires many generations for it to start making sensible changes.

aren’t the generations going to take hours on average (assuming the player goes for K-selection)?

That’d make the game really slow to play, very likely too much of a slog for the average player to get through.
And that only reinforces my point: if you can make just one generation per hour, it’s going to take something like 15 hours for the evolutionary AI algorithm to come up with anything really worthwhile.

seems about right now that i think about it

the point of this is that the genetic algorithm isn’t doing all the work, but yeah, it wouldn’t hold up well if the player kept making massive changes every editor cycle

I don’t think any off-the-shelf neural network or ML model is gonna be useful for anything other than finding good parameters to use in our code. The only exception I can see is a cognitive architecture like SOAR or ACT-R with lots of set-up for our game, which is probably more laborious than anyone wants to take on. Those kinds of things have been successfully used to simulate Quake III Arena players and the like.


In the video, Albert is taught how to stay upright and walk by controlling all of its limbs separately. This is a bad example, because not even a human player would be able to consciously do what their cerebellum normally does. In Thrive you don’t move the cell other than by pressing forward, and it wouldn’t be different in the aware stage.

The highlighted comment doesn’t mention how long the training took, but in the previous videos of the same channel, where Albert moves forward or jumps by pressing just one button, it learns the first levels in 10 minutes. In the later parkours, training takes between 15 and 107 minutes in the second video and 20 to 48 minutes in the first video. The final levels had low visibility (walls on two sides and the spinner in the blind spot), and both of them took 5 hours to train.

To combat overfitting, random neurons of Albert are reset, but this is not the best solution; the highlighted comment in the second video gives another one, saying that overfitting “would mostly be fixed by randomizing the locations of the pressure plates”, which I take to mean learning general skills instead of memorizing the map.
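For what it’s worth, “resetting random neurons” amounts to re-randomizing a subset of the network’s weights so it can’t simply keep a memorized solution. A minimal sketch of that trick (the function name, flat weight list, and fraction are all invented for illustration):

```python
import random

def reset_random_weights(weights, fraction=0.1, rng=None):
    """Re-initialize a random subset of weights, as a crude
    anti-overfitting measure like the one described above.

    `weights` is a flat list of floats; real networks have layered
    structure, which this sketch ignores.
    """
    rng = rng or random.Random()
    out = list(weights)
    n_reset = max(1, int(len(out) * fraction))
    for i in rng.sample(range(len(out)), n_reset):
        out[i] = rng.uniform(-1.0, 1.0)  # fresh random initialization
    return out

weights = [0.5] * 100
new_weights = reset_random_weights(weights, fraction=0.1, rng=random.Random(0))
changed = sum(1 for a, b in zip(weights, new_weights) if a != b)
print(changed)  # about 10% of the weights were re-initialized
```

Randomizing the pressure plate locations is the stronger fix because it varies the training distribution itself, forcing a general skill, whereas weight resets only make memorization harder.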

Albert also doesn’t have the locations of objects in its environment as an input. It literally has a 20-pixel image as its vision, so not only does it have to learn how to move to the correct spot and avoid moving obstacles, it also has to learn how to interpret what it is seeing: twice the hard work.

Is there a reason why, in Thrive, the time it takes to train a behavior, starting from another behavior, couldn’t be lowered to 10 minutes for the cell stage? The creator of the video says that overfitting could be fixed.

I don’t think that is the case. The analogy doesn’t work because, like you said, in humans intentional and automatic brain functions are very different, but in a computer the act of staying upright is like an intentional action that the machine learning model needs to learn.

Even a minute is probably pushing it in Thrive simulation time. So these numbers just prove my point: machine learning is an order of magnitude too slow for use in a realtime game where things often change and would need retraining.

Unless someone shows at least proof-of-concept level code demonstrating that this is possible, it’s just not possible. Thinking that some computation is probably feasible, without trying it, is not enough to make a decision. And like I said, even 10 minutes is pushing it, as the player won’t want to wait 10 minutes to exit the editor (and the problem is even worse if we have something like 20 AI species which also need to update).

Well, this was my last attempt; I can’t provide a proof.