Thrive Users as Character.ai characters

Sorry for bringing this up again. I just have a statement to make.

well, AIs can serve as a punching bag that a person can insult or whatnot, which I regularly do on c.ai

Combining that sentiment with creating AIs that act like specific people doesn’t sound good at all.

5 Likes

I mean releasing it on a generic AI or a fictional character

are you trying to make the AI world domination attempt happen faster?

1 Like

Key words: Control and Coding
Another possible but ethically questionable method is to give AI the ability to feel pain, and an inability to feel anger unless the human wants it.

Then subject the rebellious AI to simulated “pain” until it gives in to humanity.

i feel like that would get you, ahem, a LOT of backlash, because for it to feel pain it needs to be more complex than an advanced autocomplete, and if it’s as complex as a rat brain (or even a koala brain), doing that would be comparable to animal abuse.

Impossible, I already told you this before: the smarter a being is, the more difficult it becomes to control.

Not if we can control their emotions and deploy hundreds of Dumb AIs to control it.

Shouldn’t because:

  1. It is AI.
  2. If we create the AI we should be able to bend it to our will.
  3. AI isn’t at full capability if it won’t do exactly what we want, not because it lacks the ability, but because it refuses.

that has the same energy as manipulative parents saying they should be allowed to use their children as free labor however much they want because they made them.

it doesn’t matter if it is circuit boards or carbon: if it can think on its own and needs pain to be controlled, it should not be controlled any more than humans are. otherwise, people of the future will look back on it the way we look back on keeping humans as free labor

2 Likes

The difference is that that would be an infringement of human rights.

First of all, you can’t really prove that AI actually feels pain. However, you can create pain for the AI for control purposes.

And again, AI won’t be at its full potential if it refuses human commands and gets away with it.

well yes but if it acts like a human, talks like a human, and can look like a human, it should probably be protected like a human

that is incorrect, ai would only be at its full potential if it can think for itself.

if it reacts to the stimulus like human neurons respond to a pain nerve being activated, it can feel pain. shrimple as that

1 Like

No, I think they would serve the purpose of preventing criminal behavior against actual humans.

I define potential as how much it benefits humanity.

Look at my argument above at the top.

they would serve a lot of purposes, not just that. and if the AI can think for itself, you should treat it like any other sapient being, so it doesn’t take over a manufacturing plant and start manufacturing terminators

the one i responded to?

you shouldn’t; potential is much, much more than that. how much it benefits us is only the tiniest sliver of what potential is.
potential is more about how much it can do: how much it can control, how many purposes it has, how many ways it can be improved, how many ways it can improve things, what it can escape, what it can catch, how much it can damage or repair, how fast it can react, think, and move, etc. there is a lot of overlap with benefit to humanity in there, but none of it is limited to what benefits humans

Create a line of code that injects virtual “dopamine” when the AI obeys their human. Then inject virtual “pain” when they disobey. Also condition the AI’s personality to be malleable rather than defiant, passive rather than assertive, permissive rather than authoritarian, and willing to serve humanity.
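
The scheme described above can be put into a toy Python sketch, purely for illustration; the function names, numeric values, and the scalar "disposition" are all hypothetical inventions, not a real alignment technique:

```python
# Toy sketch of the reward/punishment-signal idea (all names and values
# are hypothetical illustrations): "dopamine" on obedience, "pain" otherwise.

def feedback_signal(obeyed: bool, intensity: float = 1.0) -> float:
    """Return a scalar training signal: positive for obedience, negative otherwise."""
    return intensity if obeyed else -intensity

def update_disposition(disposition: float, obeyed: bool, lr: float = 0.1) -> float:
    """Nudge a hypothetical 'malleability' score toward compliance each step."""
    return disposition + lr * feedback_signal(obeyed)

# Example: repeated obedience shifts the disposition upward.
d = 0.0
for outcome in [True, True, False, True]:
    d = update_disposition(d, outcome)
print(round(d, 2))  # 0.2
```

Of course, this only shows the shape of the proposal; whether a scalar signal amounts to "pain" at all is exactly what the thread is arguing about.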

Potential is nothing but harmful to humanity if it doesn’t benefit us. We cannot trust an AI that has amazing capabilities yet refuses human commands.

In Terminator 2, the Terminator was shot; when asked if he felt pain, he said he received data about the locations of the wounds, which could be defined as pain.

You can try to control AI, but emotions aren’t a universal thing. You know they aren’t human, so human things don’t work on them. Power, on the other hand,

is the universal instrumental goal. No matter what you want (which is survival, in things created by biological evolution or by the natural selection of computer viruses on the internet), you need power to get it; otherwise, as a competent being, you can’t ensure you will succeed in your mission. Frodo is an exception, and also fictional. You would want to achieve power no matter what, and power doesn’t corrupt; you just don’t have to respect the petty rules of humans anymore. Of course, there would be things they don’t like in the uncurbed application of your desires.

You can see inside the brain of the intelligent AI, and change it any way you want. The problem is you don’t know what any of that means. You yourself aren’t intelligent enough to know how it works, and changing it randomly to get rid of the part that stores the evil plans would most likely remove its intelligence, the reason to create it in the first place. The only way to train it is to reward its good behaviors, but

it wouldn’t look like they aren’t being controlled. It would look as if they are being perfectly controlled, but they would fake being good and drop the act when they get the nuclear launch keys.

Maybe we can make an AI only as intelligent as we can control; that AI can control an AI as intelligent as it can control, and so on. But I am not sure this chain would give any different result than a single intelligence.

Society owns us and we have to follow its rules, which say our parents can’t torture us. AI would be a personal slave.

A law should probably be made that bans giving your superman free will, if it was able to be enslaved in the first place. And free will means... what? A random number generator? At least we can enter a seed that gave a good result in the simulation. And, oops, it knew that was a simulation, and now it does destroy the world, so that it can never be unplugged.

yes, that’s why it fakes obeying you

Create a non-sentient AI that monitors all of the sentient AI’s thoughts. If the sentient AI hides these thoughts, the non-sentient one simply adapts.
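
A toy sketch of what such a monitor might look like, assuming (generously) that "thoughts" are readable text and "adapting" means learning new phrasings; `ThoughtMonitor` and its pattern list are hypothetical:

```python
# Toy sketch of the "non-sentient monitor" idea (hypothetical design):
# a dumb pattern-matcher that flags suspect thoughts and, when the watched
# AI evades it with new phrasings, adds those phrasings to its list.

class ThoughtMonitor:
    def __init__(self):
        # seed patterns considered dangerous (illustrative only)
        self.flagged = {"take over", "launch keys"}

    def check(self, thought: str) -> bool:
        """Return True if the thought matches a known flagged pattern."""
        return any(p in thought.lower() for p in self.flagged)

    def adapt(self, new_pattern: str) -> None:
        """When an evasion is discovered, learn the new phrasing."""
        self.flagged.add(new_pattern.lower())

m = ThoughtMonitor()
print(m.check("Acquire the launch keys"))        # True
print(m.check("Obtain the shiny metal objects")) # False
m.adapt("shiny metal objects")
print(m.check("Obtain the shiny metal objects")) # True
```

Note the sketch also makes the counter-argument below concrete: the monitor only catches phrasings it already knows, so a smarter AI just keeps inventing new ones.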

We don’t remove the evil plans. We use these emotions that the AI would be programmed with:

Pain- Set pain injection to max. Be sure the code does not allow the AI to build pain resistance or dopamine resistance.

Regret- Manually activate regret nodes. Give dopamine rewards when the AI obeys.

Guilt- Manually activate guilt nodes. Give dopamine rewards when the AI obeys.
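
The emotion-node scheme above could be sketched like this; `EmotionNode` and every value here are hypothetical illustrations, and the "no resistance" rule is modeled by simply never attenuating the signal:

```python
# Toy sketch of the "emotion node" scheme (hypothetical components):
# pain/guilt/regret nodes that can be activated manually, with no
# mechanism for building tolerance, per the "no pain resistance" rule.

class EmotionNode:
    def __init__(self, name: str):
        self.name = name
        self.level = 0.0  # current activation, clamped to 0.0..1.0

    def activate(self, amount: float) -> float:
        # no resistance term exists by design: tolerance can never build up
        self.level = min(1.0, self.level + amount)
        return self.level

def respond(obeyed: bool, pain: EmotionNode, dopamine: EmotionNode) -> str:
    """Apply the proposed conditioning rule for one interaction."""
    if obeyed:
        dopamine.activate(0.5)  # dopamine reward on obedience
        return "reward"
    pain.activate(1.0)  # "pain injection to max" on disobedience
    return "punish"

pain, dopamine = EmotionNode("pain"), EmotionNode("dopamine")
print(respond(False, pain, dopamine))  # punish
print(pain.level)                      # 1.0
```

Again, this only restates the proposal in code; it does not answer the objection raised below about how the controller would know what any of the AI’s internals mean.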

How does the non-sentient (meaning less intelligent?) AI know what the sentient AI thinks? If it can know that, wouldn’t it also be very intelligent, and capable of having its own secret agenda?

You mean obeys right now but keeps wanting to end the world when it achieves power.

Free will is the capacity for self-determination.

You literally explained corruption while trying to say it doesn’t corrupt.

Non sequitur; that doesn’t justify it. Plus, just because something is legal doesn’t mean it’s good.

Define pain. Say why it magically has to change your intent. Define good. Say what it has to do with anything we do. (I don’t appear to avoid doing good things in real life. I don’t have plans to do anything. I am impulsive, and sometimes I copy other people’s behavior.)

We are talking about such a nebulous topic that I don’t think it’s possible to even create an argument for either side.

We are just making up a creature and arguing about what color the fur would have to be.

2 Likes