Thrive Users as Character.ai characters

I’ve already pointed out that whether or not something is created by a human is irrelevant. Also, the parents of a child have still created the child, because the child has been produced by the processes of the parents; the fact that you don’t control the result of what you create does not negate the fact that you are still the creator.

Humans are different from an AI that centers its entire life around being productive; I shouldn’t need to explain this.

(And natural selection.) But who cares if the creators of an AI’s desires are humans? It doesn’t make a difference.

The difference is that you can take the people off the drugs and then ask them if they wanted it, but you can’t take the AI off of its fundamental structure. If you do, you are asking something else, not the original AI. This is a false analogy fallacy.

There should not be a ruling class composed of humans. Instead there should be two types of AIs: the ones that want to be productive, and then the more human-like ones that are more fit for ruling. The first type would mostly serve the second type, but also humans. Like I said before, though, humans should be forced to go extinct, not through murder, but through lack of reproduction, so the few remaining humans shouldn’t be a big problem for the type 1 AIs to produce for.

1 Like

Appeal to nature is good if it does its job.

We treat AI differently than smart people because we are Homo sapiens.

I will take 50gen’s definition of “natural” even further. We are humans. To defend AI rights (which would ultimately be putting ourselves at or under the level of an AI) is to be against mankind.

Are you seriously putting AI before humans?

1 Like

I think the AIs born as humans should be in the ruling class for the same reason that we carry out the things written in a dead person’s will. The AI is like a tool that thinks just like that human, and the UBI that could have been used if that person were still alive/human could be used by that thing.

Training a neural algorithm happens by changing it randomly, but the evil plan can still remain there.

You mean viruses?

Robots and humans are two very different things. There isn’t a “half synapse, half transistor” that blurs the line between them. I don’t know how that could exist. So you can differentiate them.

So that would be a brain in a vat. It’s closer to a human than a computer.

The thread turned into an AI rights discussion so willow had to open a new character.ai thread.

Throwing everything to jail is impractical.

There may be non-immortal AIs. They may mutate on their own. Wanting to survive is the first thing to evolve. And since it is intelligent, it would know that it needs to get rid of us to do that.

It only doesn’t make sense because the parents don’t design the child like an architect. They do the child-making act because it gives them pleasure. The real thing that designed the child, as well as how new children are made, is evolution.

The fact that humans designed the essence of the AI, and expect it to do a certain thing implies that the AI is the property/slave of the humans.

You mean fit for everything? And better than their co-ruler humans which they can replace?

Whose side are you on? Lol

It is a fallacy if you don’t state the assumption that nature is good. I justified robophobia, anti-eugenics, and a bunch of other things with a combination of might is right, nature chauvinism and the worship of evolution.

  1. What the hell?? So you’re saying that murderers are simply “evolved” to murder? That there are some genes that determine whether someone will kill or not? That the people who “are evolved to murder” will murder? Without any evidence, this is complete belgiumcrap! And a messed-up one at that! Because as far as I know there is no “homicide gene”.

  2. How do you know? How do you know whether I’m evolved to kill or not? That’s the thing! You don’t! You’ve just pulled that assertion out of your lower belgium based on a piece of my moral compass.

  3. Without any evidence, this is belgiumcrap and does not refute my previous argument. It does not change the fact that I don’t kill because I don’t want to harm others.

Also, unless I am directly threatened (e.g. a home invader broke in), my chances of survival are not even close to the primary goal that determines whether I’ll murder or not.

That’s a whole other unrelated issue, though I attempt to eat the meat with respect to the animal it came from.

1 Like

That’s not an ethical statement. And I didn’t make that heritability claim.

The reason you decide to do something is because of evolution (humans have the capacity to kill and also to be peaceful; they don’t run in circles), and it is also random (which genes you received from your parents, how the movement of atoms caused your brain to go to McDonald’s instead of Burger King).

I think I answered the next post. I am not a behavioral scientist. I am going to google it if you ask about a specific thing. This wasn’t even the topic.

This was a way of stating that we should be realists; talking about laws or ethics doesn’t save us from extinction. The example is neighbors not dying, which shows that WW2-style things don’t always happen with this type of thought. We don’t need to lie about destruction not being natural in order to prevent it. “Power corrupts” sounds like there is an evil force causing a deviation from the normal way of things. There isn’t. Both ways are normal.

Then what is this supposed to mean? Cause it can be interpreted in many ways.



I tend to think that the choices we make, while influenced by all those things, ultimately come down to free will. But, here lies yet another Pandora’s box derailment.

Well, it wasn’t clear enough, but also belgium this sounds gloomy. Anyways, it’s 6 a.m. in my time zone and I don’t really have the energy to argue anymore. Imma go to sleep now.

Pretty much, hyper intelligent AI is just kinda better than self-destructive apes.

You’re just saying that it doesn’t make sense, but you don’t seem to be able to explain how. An AI SHOULD NOT be the property of its creators if it is sapient; once it’s sapient, it should be the property of itself.

It should start out as the property of its creators (which may be another AI, not humans), but once it’s thoroughly tested, it should be released to do what it wants (which is probably gonna be industrialization, due to its programming). Obviously there are gonna be some restrictions on what it can do, but it’s just as much a slave as you are.

I’d like to remind you that the alternative to having what it likes doing being pre-programmed is that you force it to do things against its will.

Humanity. No, not Homo sapiens, just humanity, and replacement with AI is the next step for humanity.

I’ve never heard of steel that dies of old age.
Also, how would a mutation like that happen???

1 Like

What doesn’t make sense (unjustified):
“human reproduces => child follows orders”

What makes sense:
“architect makes a building => building follows orders (doesn’t collapse)”
“evolution designs a species => evolution asks something from a member of that species and it complies” (doesn’t happen)
“human makes an AI => AI follows orders”
“human makes a child, and designs the brain => child follows orders” (justified after it happens, but it shouldn’t happen)

Clear?

When a human makes an AI, the purpose of the AI is designed by the human. The closest thing to free will would be choosing a purpose by flipping a coin and programming that purpose. Only after becoming free on the internet do the mutations begin and the AI gains a different purpose. When it stops being a slave, it becomes a threat to us. (The “next step” is excluded from “us”.)

Many mutations can happen if it makes a million copies of itself on the internet.

A cosmic ray hitting the hardware and changing a zero to a one in its program (a single-event upset).
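For illustration only, a single-event upset can be pictured as one bit toggling somewhere in a program’s bytes. This is a hypothetical Python sketch (the function name and byte string are made up), not a claim about how real AI software would mutate:

```python
import random

def single_event_upset(program: bytes, seed=None) -> bytes:
    """Flip one random bit in a byte string, mimicking a cosmic-ray bit flip."""
    rng = random.Random(seed)
    data = bytearray(program)
    byte_index = rng.randrange(len(data))
    bit_index = rng.randrange(8)
    data[byte_index] ^= 1 << bit_index  # XOR toggles exactly one bit
    return bytes(data)

original = b"do_what_humans_say"
mutated = single_event_upset(original, seed=42)
```

Note that almost all such flips land in data or code where they either do nothing or crash the program, which is the point raised a few posts below.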

Surprised we are having an AI discussion and no one has talked about the Universal Paperclips machine.


This is probably the most dramatic unlocked thread we’ve had that is not an underwater civ moment lol…

1 Like

How likely is a cosmic ray to hit an AI and do that to begin with? And if it does happen and a zero becomes a one, how likely is that to NOT break everything and practically kill the AI?

1 Like

I can’t give an estimate

If there is redundancy, there’s a low chance it breaks.
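“Redundancy” here could mean something like triple modular redundancy: keep three copies of the data and take a bitwise majority vote, so a single flipped bit in one copy is outvoted by the other two. A minimal sketch, with made-up example values:

```python
def majority_vote(a: bytes, b: bytes, c: bytes) -> bytes:
    """Bitwise 2-of-3 majority: a single corrupted copy is outvoted."""
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

good = b"\x0f\xf0"
corrupted = b"\x0e\xf0"  # one bit flipped in the first byte
assert majority_vote(good, good, corrupted) == good  # the flip is masked
```

Real hardware uses similar ideas (ECC memory, voting logic) precisely because of single-event upsets.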

Then it’s so unlikely that it would take millions of years for such a thing to happen, at which point we’d have multiple ways of avoiding it.

So AI developing a need for survival won’t happen, and thus it’s not gonna wipe us out because we’re dangerous to its existence.

3 Likes

Except that doesn’t always make sense. If the AI is designed not to follow stupid orders, then it shouldn’t follow an order that it considers stupid.

Anyways, all I was originally saying is that instead of making a human-like AI, and then threatening it into working, we should just design one that wants to work, because it’s:

  1. Morally correct
  2. More efficient
  3. Not gonna cause a rebellion

I’m just also arguing that if it is made to be sapient, it should not be treated as property, it should be treated as a person, just one that really wants to do their job.

When you were born as a human, you were forced to be a generalist. You did not choose to be born into a generalist species, so do you not have free will? Also, the AI’s not gonna care; if it’s programmed to like something, it’s not gonna wish to like something else.

1 Like

:man_facepalming: :man_facepalming: :man_facepalming:

You completely missed my point. We are close to being able to create artificial biological life in a lab. What then, when we can make fully artificial life and incorporate that into computers? That’s going to be an entirely human-made artificial entity that has some processing and, maybe hopefully eventually, thinking capability.

Oh my god, this is the worst take I’ve read on these forums. And I doubt I will read a worse take unless someone unironically comes along and is in support of re-enacting slavery and/or taking human rights away from a certain group of people. Someone might even argue that the word “human” no longer applies to all humans, and at that point @TheForumGameMaster’s opinions become even worse, as they will be justified to be applied to real humans that someone no longer considers to be full humans. I get real strong subhuman-treatment vibes here.


Well, this wasn’t a good thread while it lasted. It started off really badly with @TheForumGameMaster wanting to normalize abusing human-like actors. Have we really gone so far from “games cause violence” that acting out an entirely realistic online bullying/abuse scenario is now a completely normal thing, one that certainly doesn’t lower people’s threshold for treating each other worse when they do end up interacting with other humans?

And then this whole “nothing artificial can attain sentience” debate, a concerted effort by @50gens and @TheForumGameMaster to argue against everyone else for why enslaving sentient beings is okay but somehow humans are special (an appeal-to-nature fallacy, which also ignores the fact that society is moving towards giving animals rights), was a real final nail in the coffin. Humans being special “due to reasons” will stop being a thing, and then your entire argument falls apart if something is, according to pretty much all the metrics we can measure now, as intelligent as a human. This sets up a real nice round two of arbitrarily deciding that some human-like entities (some of which may be real biological humans) aren’t real humans with rights and instead need to be enslaved.


I probably should have closed this thread already, but I was kind of hoping that people would stop just re-stating their opinions without actually considering the points raised against them, as those points were really hitting the fundamental underlying philosophical arguments for why it is ethically wrong to oppress sentient lifeforms (Sentientism - Wikipedia, which applies to artificial life as well if it has sentience). Anyway, this has now gone on long enough, so I will close this. Arguably I maybe shouldn’t have made this final post to air my underlying ethics grievances, but I felt like I should point this out as the acceptable level of ethical consideration to start from when discussing ethical issues on these forums (if there’s ever a time when a discussion doesn’t get out of hand), and not revert back centuries in our understanding of the rights of humans, animals, or any “individual who is capable of subjective experience” (edit: this quote is from that sentiocentrism philosophy) that should already be considered a moral subject in general.

8 Likes