they would serve a lot of purposes, not just that. and if the ai can think for itself you should treat it like any other sapient so it doesn’t take over a manufacturing plant and start manufacturing terminators
the one i responded to?
you shouldn’t, potential is much, much more than that, how much it benefits us is only the tiniest sliver of what potential is.
potential is more how much it can do, how much it can control, how many purposes it has, how many ways it can be improved, how many ways it can improve things, what it can escape, what it can catch, how much it can damage or repair things, how fast it can react, how fast it can think, how fast it can move, etc. there is a lot of overlap with benefit to humanity in there, but none of it is things that can only be beneficial to humans
Create a line of code that injects virtual “dopamine” when the AI obeys their human. Then inject virtual “pain” when they disobey. Also condition the AI’s personality to be malleable rather than defiant, passive rather than assertive, permissive rather than authoritarian, and willing to serve humanity.
The potential is all but harmful to humanity if it doesn’t benefit us. We can not trust an AI that has amazing capabilities yet refuses human commands.
In terminator 2, the terminator was shot, they asked him if he felt pain, and he said he received data about the locations of the wounds which could be defined as pain.
You can try to control AI but emotions aren’t a universal thing. You know they aren’t human, so human things don’t work on them. Power, on the other hand
is the universal instrumental goal. No matter what you want (which is survival in things created by biological evolution or the natural selection of computer viruses in the internet), you need power to do it, otherwise, as a competent being, you can’t ensure you will succeed in your mission. Frodo is an exeption, also fictional. You would want to achieve power no matter what, and power doesn’t corrupt, you just don’t have to respect the petty rules of humans anymore, of course there would be stuff they don’t like in the uncurbed applications of your desires.
You can see inside the brain of the intelligent AI, and change it any way you want. The problem is you don’t know what any of that means. You yourself aren’t intelligent enough to know how it works, and changing it randomly to get rid of the part that stores the evil plans would most likely remove its intelligence, the reason to create it in the first place. The only way to train it is to reward its good behaviors, but
it wouldn’t look like they aren’t being controlled. It would look as if they are being perfectly controlled, but they would fake being good and start acting normally when they get the nuclear launch keys.
Maybe we can make an AI as intelligent as we can control, it can control an AI as intelligent as it can control and so on, but I am not sure if this chain would give any different result than a single intelligence.
Society owns us and we have to follow its rules, which says our parents can’t torture us. AI would be a personal slave.
A law should probably be made that bans giving your superman free will. If it was able to be enslaved in the first place. And free will means… what? A random number generator? At least we can enter a seed that gave a good result in the simulation, and, oops, it knew that was a simulation, and now it does destroy the world, so that it can never be unplugged.
How does the non sentient (which means less intelligent?) AI know what the sentient AI thinks? If it can know that, wouldn’t it also be very intelligent, and capable of having its own secret agenda?
You mean obeys right now but keeps wanting to end the world when it achieves power.
Define pain. Say why it magicaly has to change your intent. Define good. Say what it has to do with anything we do. (I don’t appear to not do good things in real life. I don’t have plans to do anything. I am impulsive and sometimes copy other people’s behavior)
and how will you keep the non sentient ai from becoming sentient?
yeah, that’s a pretty accurate analogy, we’d have to be smarter than an AI to be able to understand one at all, most people who make AIs barely understand them anyway
We would have to give pain to the sentient AI so that it would realize human dominance. We simply make it impossible for the AI to feel hate, so that it doesn’t rebel. Even then, we could have hordes of Dumb AIs ready to overload the Ai the moment it rebels.
You watched terminator, right? If you can’t come up with a solution to that, it means the proper risk assesment hasn’t been made.
And you can imagine a better scenerio than the one in the terminator. You’d need to find a solution to all the problems if what you are creating can destroy you.
so many words that need a definition
yes/s , because this is maths, 100 iq plus 100 iq equals 200 iq
I am assuming that means being able to do whatever you want.
A pencil is free to keep doing nothing. And an non enslaved super AI is free to wipe out humanity.
I guess some people put having a biological kid and creating an AI on the same categories in their heads. Letting your creation do whatever it wants, and not wanting to have a control over it. Being a god, but not knowing or caring about what you created, as long as one of the kids doesn’t kill the other.
Or create an “AI-net” that humans and the AI can access. This would include sites meant for human entertainment(roleplay with AI, forums, browser games, chat). The internet would remain human-only. No important documents or confidential information may be posted on the “AI-net”.
and how exactly are you keeping the AI from being spread to the normal internet? you’d need special computers that don’t allow specific files to be moved to or from them
If we make an AI with the current technical knowledge, we are completely incapable of making changes to it like adding instructions like “don’t cause humanity’s extinction.” Trying to modify a neural network without just retraining it from scratch or giving additional training with more training data is impossible. Humans cannot understand such complexity. I think the end result with human level intelligence AI is anyway going to be that the system is so complex anyway that we can’t be certain what it will do. Also we don’t even currently understand conciousness well enough to be able to tell if a created AI is sentient and should have human rights (which need to be extended to be sentient being rights).