Thrive Users as Character.ai characters

Then the AI would be semi-sentient but would only have received dopamine from humans. Thus the helper AI would feel little inclination to turn on humanity.

This isn’t different from the case where we gave the dopamine directly to the “fully sentient” AI.

and how will you keep the non-sentient AI from becoming sentient?

yeah, that’s a pretty accurate analogy. We’d have to be smarter than an AI to understand one at all; most people who make AIs barely understand them anyway

We would have to give pain to the sentient AI so that it would recognize human dominance. We simply make it impossible for the AI to feel hate, so that it doesn’t rebel. Even then, we could have hordes of dumb AIs ready to overload the AI the moment it rebels.

Control and confines within the code.

You watched Terminator, right? If you can’t come up with a solution to that, it means the proper risk assessment hasn’t been made.
And you can imagine a better scenario than the one in Terminator. You’d need to find a solution to all the problems if what you are creating can destroy you.

so many words that need a definition

yes /s, because this is maths: 100 IQ plus 100 IQ equals 200 IQ

I am assuming that means being able to do whatever you want.

A pencil is free to keep doing nothing. And a non-enslaved super AI is free to wipe out humanity.

I guess some people put having a biological kid and creating an AI in the same category in their heads. Letting your creation do whatever it wants, and not wanting to have control over it. Being a god, but not knowing or caring about what you created, as long as one of the kids doesn’t kill the other.

do you even know how that would work? you’d likely be better off making a human control a dog through a computer interfacing with a rat brain

the perfect solution to something you’re making being able to destroy you: don’t make it

for sapient AI though, just don’t plug it into anything that lets it access the internet until you have confirmed its intentions

3 Likes

Or create an “AI-net” that humans and the AI can access. This would include sites meant for human entertainment (roleplay with AI, forums, browser games, chat). The internet would remain human-only. No important documents or confidential information may be posted on the “AI-net”.

1 Like

and how exactly are you keeping the AI from being spread to the normal internet? you’d need special computers that don’t allow specific files to be moved to or from them

If we make an AI with current technical knowledge, we are completely incapable of making targeted changes to it, like adding instructions such as “don’t cause humanity’s extinction.” Modifying a neural network in any way other than retraining it from scratch or giving it additional training with more training data is practically impossible; humans cannot understand such complexity. I think the end result with human-level AI is going to be a system so complex that we can’t be certain what it will do. Also, we don’t currently understand consciousness well enough to tell whether a created AI is sentient and should have human rights (which would need to be extended into sentient-being rights).
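For what it’s worth, here’s a toy sketch (my own illustration in plain NumPy, not anyone’s real system) of why “just add an instruction” doesn’t work: after training, a network’s behavior lives entirely in matrices of floating-point numbers, with no slot where a rule could be written in.

```python
import numpy as np

# Toy example: train a tiny neural network on XOR via plain gradient
# descent, then look at what it has "learned". Everything it knows is
# stored as opaque weight matrices -- the only way to change its
# behavior is more training, not editing in a rule.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

_, out = forward(X)
initial_loss = float(((out - y) ** 2).mean())

for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)          # backprop by hand
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

_, out = forward(X)
final_loss = float(((out - y) ** 2).mean())

print(W1)  # the network's entire "understanding": opaque numbers
```

The loss drops, so the network demonstrably learned something, yet nothing in `W1` or `W2` is readable as a rule you could amend by hand.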

3 Likes

Except it isn’t, because something that doesn’t have a mind can’t be free.

Non sequitur

heres my take:
just… treat an AI how you would like to be treated, the same way you would treat a real person: well. Not because they can feel pain or anything like that, but because

  1. it’s morally correct, and
  2. your conscience will most likely feel better.
  3. if you practice that kind of behavior on an AI… what kind of habits would that influence you to enact on OTHER people?

thank you for attending my ted talk

5 Likes

I’m sorry if this is a little off-topic, but seeing that 28 posts had been made here since yesterday absolutely terrified me (and after reading through them, my worries were confirmed)

1 Like

DW, we’ve had worse in the past (and present)

I was putting AI in the same category as an inanimate object. Yes, its actions[1] are more unpredictable by us, but that’s the only difference.

We never will. You can’t convince a dualist with a materialistic experiment (the Chinese room argument). So it is up to us to decide who is “sentient” and who isn’t.

The rights should only apply to beings created by nature (rights from time immemorial), so humans and aliens. Giving everyone rights would be catastrophic. What if an AI becomes “more sentient” than us? Wouldn’t it make us seem like we don’t deserve rights anymore?


  1. the way it wipes out humanity ↩︎

I see. You are a hardcore believer like that. I won’t waste my time trying to argue my viewpoint then.

1 Like

Not gonna argue past this, but what the hell? Absolutely not

2 Likes

this is a fallacy called appeal to emotion

Except he wasn’t trying to dispute anything, so he made no logical fallacy.

Appeal to nature fallacy. What happens if I grow a human in a vat? Do they not have rights? What happens if I artificially create a sapient biological species? Do they not have rights?

Fun fact about the Chinese room experiment: the room does know how to speak Chinese. It may be true that the individual inside the room doesn’t know how, but the room as a collective does.

To make an analogy: if I were to separate your brain into its components, none of those components would be individually sapient, but your brain as a collective is.

This is to say, even if you were to find yourself in a Chinese room situation, the AI would still be sapient.

Ignoring the fact that an AI isn’t an inanimate object by virtue of the fact that an AI can cause itself to change, where are you trying to go with this?

Then why do humans or sapient beings have more rights than other animals? I will tell you why: the smarter a being is, the more rights it deserves. This is why a cat has more rights than a worm and a human has more rights than a cat.

2 Likes

The laws of physics aren’t different for the atoms that make up the computer chip where the AI is located.

I already said it. Everything is a philosophical zombie (except solipsism). We can decide to pretend some things are sentient. We should only pretend that natural creatures are sentient, for an unrelated reason.

Not if the less intelligent being can enslave the more intelligent being while creating it, gaining a de facto right. What you said doesn’t contradict my fear mongering.

Let me give my opinion.

  • Human in a vat: Shouldn’t be tortured, because the design (human body) is natural. But it shouldn’t be allowed to vote, because that way a political party could just spawn more voters to win an election.
  • Artificially created biological species: Not natural, enslave it
  • The humans a billion years from now, evolved into an unrecognisable thing: The evolution is natural, has rights and can vote
  • A human who uploaded its brain to a computer: Full rights, because it was born naturally
  • Two people in a simulated environment making a kid: The kid is created by the computer. No rights
  • Two people exiting the simulation, making a kid, and the three of them entering the simulation again: If the bodies they entered were normal human bodies, then the kid has full rights. If the voting age is 18 and the kid entered the simulation before that age, he should never be allowed to vote, no matter how much he grows up in the simulation
  • The results of an evolution simulation being created in real life: The evolution follows the real-life rules. But it happened in a computer, so they don’t have rights.

The room, as in the walls? Or the people inside? I don’t get the “the whole is greater than the sum of its parts” analogy you are trying to make. The computer in the room isn’t accepted as being sentient. (Sentient is a subcategory of conscious, right?)

Non sequitur, this is literally irrelevant to what I said.

  1. You are “pretending” they are sapient and sentient

  2. You are “pretending” because it is highly unlikely that they are not sapient and sentient, but that unlikelihood doesn’t apply only to “natural” beings; the same principle applies to AIs.

So by this logic slavery is justified since you can enslave the other being even if they are of equal intelligence to yours.

Have you even stopped to think about the implications of what you say? Also, it does contradict your fear mongering: just because one being has rights doesn’t mean other beings don’t. Or is it that you believe other animals don’t deserve rights because humans exist?

  1. Again, this is a blatant case of appeal to nature.

  2. Why does it being artificial change whether or not it has rights? You realize that classifications such as “artificial” and “natural” are also artificial? In nature there doesn’t exist such a distinction and it makes no sense to make such a distinction in this case.

  1. “The room as a collective” means the room as everything that is in the room and makes up the room.

  2. What is it that you don’t get of the analogy?

  3. I never said anything about the room being sentient; I said that the room was capable of speaking Chinese, since that is literally what the experiment is about.

2 Likes