I don’t like the idea that robots and AI in Thrive should be a secondary technology that just gives small buffs, because (1) in reality, self-replicating robots and general AI are hugely important technologies, and (2) it’s just boring.
I propose this version of AI technology in Thrive:
Early on, these technologies would simply give buffs, until the player invents two key technologies:
Self-replicating robotic factories: essentially an autonomous, AI-run factory that requires no labor and can build copies of itself. Because of this, their numbers grow exponentially, which lets them build gigastructures in decades rather than centuries, and also produce a huge amount of resources.
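To illustrate the "centuries to decades" claim, here is a minimal sketch of the arithmetic. All numbers (work units, factory counts, doubling time) are hypothetical game-balance values, not anything from the proposal itself:

```python
# Hypothetical numbers: how self-replication compresses a gigastructure
# build. Each factory contributes 1 work unit per year; a replicating
# factory population doubles every `doubling_time_years`.

def years_to_finish(work_units, start_factories, doubling_time_years):
    """Years until cumulative factory output covers the whole project."""
    factories = start_factories
    done = 0
    years = 0
    while done < work_units:
        done += factories          # this year's output
        years += 1
        if years % doubling_time_years == 0:
            factories *= 2         # self-replication doubles the fleet
    return years

# A project needing 10,000 factory-years of work:
fixed_fleet = 10_000 // 10                      # 10 static factories -> 1000 years
replicating = years_to_finish(10_000, 10, 2)    # 10 factories doubling every 2 years -> 18 years
```

The static fleet takes a millennium; the replicating one finishes in under two decades, because almost all the work is done by the last few generations of factories.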
General AI: an AI with intelligence at or above the level of a sentient being. Its key ability is that it can increase the power of its own intelligence using resources and technologies; used correctly, this gives you huge advantages in science, military tactics, administration, and economics. The downside is that some AIs may decide that, in order to achieve the goal set for them, they need to seize power or even completely destroy your species.
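The self-improvement mechanic described above is essentially compound growth: an AI that reinvests part of its output into upgrading itself pulls away from any fixed-intelligence researcher. A minimal sketch, with entirely hypothetical parameters:

```python
# Sketch (hypothetical balance numbers): a General AI that spends a
# fraction of its research output on self-improvement compounds its
# power, while a fixed sentient-level researcher stays flat.

def agi_power(initial, reinvest_fraction, gain_per_unit, turns):
    """Intelligence after `turns` of reinvesting output into upgrades.

    Each turn the AI produces `power` research units; a share
    `reinvest_fraction` of them is spent on self-improvement, and each
    reinvested unit raises intelligence by `gain_per_unit`.
    """
    power = initial
    for _ in range(turns):
        power += power * reinvest_fraction * gain_per_unit
    return power

# Reinvesting half of its output at 0.2 intelligence per unit means
# 10% compound growth per turn, i.e. initial * 1.1 ** turns.
after_20_turns = agi_power(1.0, 0.5, 0.2, 20)
```

This is the same exponential shape as the factory example, which is exactly why combining the two technologies (as asked below) would be so explosive, and why an unaligned goal becomes more dangerous the longer the loop runs.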
Living labor is no less dependent on resources, if not more so (workers need not only energy, but also water and food).
In space there is a HUGE amount of industrial resources that can be extracted from asteroids and planets, and in the late space stage, blue giants (or reactors that replicate the heavy-element synthesis that happens inside them) can be used to extract resources.
I didn’t state that living labor is better when it comes to resources.
What would happen if you combined self-replication and AGI?
(Also, am I the only one for whom the forum is acting strangely uncooperative today?)
Deathwake:
Making ideas prove themselves is very important.
If you look into modern AI safety research, it seems to have no idea how to prevent an AI from reasoning this way. Look into the orthogonality thesis and the stop-button problem.
Deathwake:
People throw out a lot of ideas, and they need some opposition so we know they’re good ideas that are hard to disprove, not just ideas no one tried to challenge. The best example is obviously underwater civs.
How should the heredity of AIs work? Should they always split off as separate civs once developed enough, or should there be a way to make your civ evolve into an AI civ with the player keeping control?