Thrive Users as Character.ai characters

The difference is that doing so would be an infringement of human rights.

First of all, you can’t really prove that an AI actually feels pain. However, you can create pain for the AI for control purposes.

And again, AI won’t be at its full potential if it refuses human commands and gets away with it.

well yes but if it acts like a human, talks like a human, and can look like a human, it should probably be protected like a human

that is incorrect, ai would only be at its full potential if it can think for itself.

if it reacts to the stimulus like human neurons respond to a pain nerve being activated, it can feel pain. shrimple as that

No I think they would serve the purpose of preventing criminal behavior on actual humans.

I define potential as how much it benefits humanity.

Look at my argument above at the top.

they would serve a lot of purposes, not just that. and if the ai can think for itself you should treat it like any other sapient so it doesn’t take over a manufacturing plant and start manufacturing terminators

the one i responded to?

you shouldn’t, potential is much, much more than that. how much it benefits us is only the tiniest sliver of what potential is.
potential is more about how much it can do: how much it can control, how many purposes it has, how many ways it can be improved, how many ways it can improve things, what it can escape, what it can catch, how much it can damage or repair things, how fast it can react, how fast it can think, how fast it can move, etc. there is a lot of overlap with benefit to humanity in there, but none of it consists of things that can only benefit humans

Create a line of code that injects virtual “dopamine” when the AI obeys its human. Then inject virtual “pain” when it disobeys. Also condition the AI’s personality to be malleable rather than defiant, passive rather than assertive, permissive rather than authoritarian, and willing to serve humanity.
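The scheme described here is essentially reward shaping from reinforcement learning. A minimal sketch of the idea, assuming a simple scalar training signal (all names, such as `reward_signal`, are illustrative, not a real API):

```python
def reward_signal(obeyed: bool) -> float:
    """Map behaviour to a scalar training signal: positive "dopamine"
    for obedience, negative "pain" for disobedience."""
    return 1.0 if obeyed else -1.0

# Over many episodes this signal would be fed into the policy update,
# nudging the model toward compliant behaviour.
history = [reward_signal(action == "obey") for action in ["obey", "refuse", "obey"]]
```

In real reinforcement learning the signal shapes behaviour statistically over training; it doesn’t guarantee the resulting policy “wants” to obey, which is exactly the faking problem raised later in the thread.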

Potential that doesn’t benefit us is nothing but harmful to humanity. We cannot trust an AI that has amazing capabilities yet refuses human commands.

In Terminator 2, the Terminator was shot; when asked if he felt pain, he said he received data about the locations of the wounds, which could be defined as pain.

You can try to control AI, but emotions aren’t a universal thing. You know they aren’t human, so human things don’t work on them. Power, on the other hand,

is the universal instrumental goal. No matter what you want (which is survival in things created by biological evolution, or by the natural selection of computer viruses on the internet), you need power to do it; otherwise, as a competent being, you can’t ensure you will succeed in your mission. Frodo is an exception, and also fictional. You would want to achieve power no matter what, and power doesn’t corrupt; you just don’t have to respect the petty rules of humans anymore. Of course, there would be stuff they don’t like in the uncurbed applications of your desires.

You can see inside the brain of the intelligent AI, and change it any way you want. The problem is you don’t know what any of that means. You yourself aren’t intelligent enough to know how it works, and changing it randomly to get rid of the part that stores the evil plans would most likely remove its intelligence, the reason to create it in the first place. The only way to train it is to reward its good behaviors, but

it wouldn’t look like they aren’t being controlled. It would look as if they are being perfectly controlled, but they would fake being good and start acting normally when they get the nuclear launch keys.

Maybe we can make an AI only as intelligent as we can control; it can control an AI as intelligent as it can control, and so on. But I am not sure whether this chain would give any different result than a single intelligence.

Society owns us and we have to follow its rules, which say our parents can’t torture us. An AI would be a personal slave.

A law should probably be made that bans giving your superhuman AI free will, if it was able to be enslaved in the first place. And free will means… what? A random number generator? At least we can enter a seed that gave a good result in the simulation, and, oops, it knew that was a simulation, and now it does destroy the world, so that it can never be unplugged.

yes, that’s why it fakes obeying you

Create a non-sentient AI that monitors all of the sentient AI’s thoughts. If the sentient AI hides these thoughts, the non-sentient one simply adapts.
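A toy sketch of what such a monitor-and-adapt loop could look like, assuming hidden thoughts could be detected at all; the `looks_hidden` heuristic below is entirely hypothetical:

```python
def looks_hidden(thought: str, known_patterns: set) -> bool:
    # Entirely hypothetical heuristic: anything the monitor has not
    # seen before counts as a "hidden" thought.
    return thought not in known_patterns

def monitor(thoughts, known_patterns):
    """Flag hidden thoughts, then adapt by learning their pattern."""
    flagged = [t for t in thoughts if looks_hidden(t, known_patterns)]
    known_patterns.update(flagged)  # "adapts": the same trick won't work twice
    return flagged
```

Note that the sketch dodges the hard part raised in the next reply: actually recognizing what a more intelligent system is thinking.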

We don’t remove the evil plans. We use these emotions that the AI would be programmed with:

Pain- Set pain injection to max. Make sure the code does not allow the AI to build pain resistance or dopamine resistance.

Regret- Manually activate regret nodes. Give dopamine rewards when the AI obeys.

Guilt- Manually activate guilt nodes. Give dopamine rewards when the AI obeys.
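Read literally, the three “emotions” above amount to clamped penalty channels plus a reward channel. A minimal sketch under that reading (all names and values are hypothetical):

```python
MAX_PAIN = 1.0  # hard cap: the code never lets "pain resistance" build up

def condition(state: dict, obeyed: bool) -> dict:
    """Apply the three programmed 'emotions' described above."""
    if obeyed:
        state["dopamine"] += 1.0      # dopamine reward for obedience
    else:
        state["pain"] = MAX_PAIN      # pain injection at max, never attenuated
        state["regret"] = True        # manually activated regret nodes
        state["guilt"] = True         # manually activated guilt nodes
    return state
```

Whether any of these variables would correspond to felt emotions in the AI, rather than just numbers in a loop, is the very question the thread is arguing about.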

How does the non-sentient (which means less intelligent?) AI know what the sentient AI thinks? If it can know that, wouldn’t it also be very intelligent, and capable of having its own secret agenda?

You mean obeys right now but keeps wanting to end the world when it achieves power.

Free will is the capacity for self-determination.

You literally explained corruption while trying to say it doesn’t corrupt.

Non sequitur; that doesn’t justify it. Besides, just because something is legal doesn’t mean it’s good.

Define pain. Say why it magically has to change your intent. Define good. Say what it has to do with anything we do. (I don’t appear to not do good things in real life. I don’t have plans to do anything; I am impulsive and sometimes copy other people’s behavior.)

We are talking about such a nebulous topic that I don’t think it’s possible to even create an argument for either side.

We are just making up a creature and arguing about what color the fur would have to be.

Then the AI would be semi-sentient but only has received dopamine from humans. Thus the helper AI would feel little obligation to turn on humanity.

This isn’t different from the case where we gave the dopamine directly to the “fully sentient” AI.

and how will you keep the non sentient ai from becoming sentient?

yeah, that’s a pretty accurate analogy, we’d have to be smarter than an AI to be able to understand one at all, most people who make AIs barely understand them anyway

We would have to give pain to the sentient AI so that it realizes human dominance. We simply make it impossible for the AI to feel hate, so that it doesn’t rebel. Even then, we could have hordes of dumb AIs ready to overload the AI the moment it rebels.

Control and confines within the code.

You watched Terminator, right? If you can’t come up with a solution to that, it means the proper risk assessment hasn’t been made.
And you can imagine a better scenario than the one in Terminator. You’d need to find a solution to all the problems if what you are creating can destroy you.

so many words that need a definition

yes /s, because this is maths: 100 IQ plus 100 IQ equals 200 IQ

I am assuming that means being able to do whatever you want.

A pencil is free to keep doing nothing. And a non-enslaved super AI is free to wipe out humanity.

I guess some people put having a biological kid and creating an AI in the same category in their heads. Letting your creation do whatever it wants, and not wanting to have control over it. Being a god, but not knowing or caring about what you created, as long as one of the kids doesn’t kill the other.

do you even know how that would work? you’d likely be better off making a human control a dog through a computer interfacing with a rat brain

the perfect solution to something you’re making being able to destroy you: don’t make it

for sapient AI though, just don’t plug it into anything that lets it access the internet until you have confirmed its intentions

Or create an “AI-net” that humans and the AI can access. This would include sites meant for human entertainment (roleplay with AI, forums, browser games, chat). The internet would remain human-only. No important documents or confidential information may be posted on the “AI-net”.
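The proposed split is essentially a default-deny allowlist at the network boundary. A toy sketch of that policy check (the `*.ai-net.example` hosts are placeholders, not real sites):

```python
# Hypothetical allowlist of "AI-net" hosts; everything else is treated as
# the human-only internet and refused for AI-originated connections.
AI_NET_HOSTS = {
    "forum.ai-net.example",
    "games.ai-net.example",
    "chat.ai-net.example",
}

def ai_may_connect(host: str) -> bool:
    """Default-deny: only explicitly listed AI-net hosts are reachable."""
    return host in AI_NET_HOSTS
```

The hard part, as the next reply points out, is enforcing this check on every path between the two networks, not writing the check itself.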

and how exactly are you keeping the AI from being spread to the normal internet? you’d need special computers that don’t allow specific files to be moved to or from them

If we make an AI with current technical knowledge, we are completely incapable of making changes to it, like adding instructions such as “don’t cause humanity’s extinction.” Modifying a neural network without retraining it from scratch, or giving it additional training with more training data, is impossible; humans cannot understand such complexity. I think the end result with human-level AI is that the system is so complex that we can’t be certain what it will do. Also, we don’t currently understand consciousness well enough to tell whether a created AI is sentient and should have human rights (which would need to be extended to sentient-being rights).
