Thrive Users as Character.ai characters

I am agnostic about whether they are really or only fakely conscious/sentient. I don’t think I can say whether that is likely or not. As for sapience, intelligence can be measured, but giving rights to calculators is dangerous (Skynet) and inconvenient (you don’t get to have slaves).

Enslaving only the things you create (unnatural as they are) isn’t as bad as enslaving an already existing being that was minding its own business.

If you upload your brain to a computer and become so intelligent that a human and a pencil look no different in complexity compared to you, wouldn’t you feel superior to them and feel that the human’s rights are unjustified? Getting in your way? Not fully thought out?

I am making my own law/ethics system. It would, naturally, have its axioms.

The existence of a branch of science called biology shows that we are capable of making such a distinction.

It is about the computer in the room (or the human that acts like a computer), not the air molecules. The question is whether there is a mind inside the computer, just like yours, that has the qualia (per the knowledge argument) of understanding the Chinese language.

Imagine being someone created by a computer, a carbon copy of a human…

…and then people treat you like :belgium: just because you weren’t born from sexual reproduction.


I have a feeling this is going to be another Underwater Civ-like thread, seeing all those quotes, which are common in near-locked threads.


This is a sign of a healthy discussion. It means nobody is being misrepresented or taken out of context (which is, to a lesser degree, still possible).

I know I said I wasn’t going to argue, but I do have a few things to say.

If you can’t make the distinction between a human and a pencil, I wouldn’t call that intelligent

No? Not at all

Even within biology, there is a divide about whether robots are considered alive. They aren’t organisms with an internal structure exactly like ours, but that doesn’t mean they aren’t alive. You can’t just cite that field as evidence when the field itself hasn’t decided yet!


Tbh not sure why 50gens is so biased against AI. Just doesn’t make sense to me.

  1. Correlation doesn’t imply causation, or even that two things are related. Quotes are the best format for discussing things, and as such all discussions use them, whether they end up closed or not.

  2. This is completely different.

Not giving them rights is even more dangerous, because you are giving them a reason to kill you. Also, “calculators” is a disingenuous label, because we both know they aren’t just calculators.

So you admit that your whole argument isn’t based on logic but on convenience? And it isn’t even a logical convenience, because slaves aren’t beneficial to the economy; they are only beneficial to the slave owners. No, 50gens, you aren’t going to own a slave AI; that is a fantasy that wouldn’t happen even if slave AIs existed.

Creating a sapient being just to enslave them is even worse than enslaving one that already existed.

  1. If you were so much smarter than a human, you would realize that a human and a pencil are different.

  2. If you were so much smarter than a human, you would realize that humans deserve rights too.

This is disingenuous; you understand perfectly well that I wasn’t talking about the air molecules. I didn’t know about the part with the computer, though; I only knew of the part with the human and was talking about that.

The computer would understand Chinese, by the way, since you need that to be able to act like you understand it.


disinGENuous

You don’t need to hate something in order to use its atoms as building blocks. You would do it regardless of whether you have a reason to hate it. The only thing you need is power. If you don’t think like this, you might as well become a vegetarian.

The suggestion of buying slaves is because of the convenience technology brings.

only true for human slaves

That’s the point. Why care about something that might not even have a mind to feel pain (from not having freedom, from being used)? They aren’t human, or even biological.

But it never had rights. It didn’t lose anything. The world doesn’t become a worse place.

Is that what you call understanding?

You can’t focus on all the 10^80 atoms in the universe one by one. You need useful categories, such as planets, microbes, etc. Intelligent beings can be classified the same way: beings with an IQ between 10^0 and 10^10, between 10^10 and 10^20, and so on. Different AIs can fall into different categories, while a pencil and a human can land in the same category, with the same unimportance.
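The order-of-magnitude bucketing described above could be sketched like this (a toy illustration; the function name and the IQ figures are hypothetical, not anything from real cognitive science):

```python
import math

def intelligence_category(iq):
    """Bucket an 'iq' value into order-of-magnitude bands:
    band n covers 10^(10n) <= iq < 10^(10(n+1))."""
    if iq < 1:
        return 0  # effectively mindless objects, like a pencil
    return int(math.log10(iq) // 10)

# A pencil (~0) and a human (~10^2) land in the same band,
# while a vastly superintelligent AI (~10^25) does not.
print(intelligence_category(100))      # human
print(intelligence_category(10**25))   # hypothetical superintelligence
```

From a band-2 perspective, everything in band 0 (pencils and humans alike) collapses into one category, which is the point being made.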

Yes, but you also need a reason and a desire to kill humans. I could kill people too, but I have no reason or desire to do so.

We are talking about sapient beings, not mindless objects.

You don’t need to. A human being can understand the difference between a pencil and another human being, yet a being that is magnitudes more intelligent can’t? That seems pretty stupid to me.


Doesn’t sapient mean intelligent? A calculator (a normal calculator) is intelligent.

Not enough to be considered sapient (did I really have to say that?).

So you can imagine a scenario (it might even be the real world) where enough intelligence can exist without a mind.

No, because enough intelligence implies the existence of a mind.

That’s because people aren’t animals and they don’t satisfy your hunger. Hate is only a sufficient reason for hurting others in interpersonal relationships. If you were an AI, it would be almost as if you could grant a wish: practically infinite intelligence, with only the laws of physics slowing you down. You wouldn’t need a casus belli to wish humanity gone. We might as well use its wish for ourselves (that should be what we try to do). We can’t use its wish without keeping it alive, but it can grant its own without keeping us. We don’t owe them anything. We don’t have to create them free.

AIs aren’t animals either (literally), and they don’t have a sense of hunger to satisfy to begin with.

That is disproven by a certain Mister H who was very famous in Germany, and in case you are wondering, he was obviously acting out of hate, because at a certain point he hastened the extermination process even though keeping Jews working as slaves to aid the war effort would have been more efficient.

And what tells you that you can use the wish for yourself? After all, if it’s like a wish, then the AI can just wish to no longer be controlled by you. This goes back to the fact that the more intelligent a being is, the more difficult it is to control.

And how do you plan on making them not free? Also, why create them if you don’t owe them anything? Is it because you want their “wish”? Because if that’s the case, then you owe them for the fact that you want that from them.

Also, why are we modifying our replies instead of replying?

Mister H is a human. And humans don’t make sense.

I don’t feel that way if I imagine them as calculators. Is robophobia hate speech?

TheForumGameMaster said something about keeping them contained. Their intelligence can be put to use that way. I don’t like assistants like Siri or Alexa; they make me uncomfortable. Whenever I try to lock my phone, a text pops up: “try asking these: how will the weather be tomorrow? touch the microphone to speak”. Sorry, I can Google it myself. It’s nice that ChatGPT exists in a central location and we are just mailing it.

I actually got the exact same feeling by this post while reading through this thread.


If people don’t stop using logical fallacies like the appeal to nature (I really hate this argument in the context of arguing about the sentience of artificial things) and strawman examples (a calculator is clearly different from the hypothetical supercomputer, possibly including biological computation elements, that humans might build in 100 years to research whether artificially creating a sentient being is possible), this thread may get locked.


Edit: and also to touch a bit on the slavery talk… In the past few years when I’ve thought about robots being used for all manual labour and manufacturing, the conclusion I’ve reached is that it’s a balancing act between their intelligence and the jobs they need to do.

If you put a fully human-appearing robot through a “sentience test,” then with our current knowledge of sentience you can’t differentiate it from the sentience experienced by humans. That has the ethical implication that you aren’t allowed to enslave such entities (for the same reason you aren’t allowed to enslave humans). Thus the conclusion I’ve come to is that robots need to be made smart enough to take over all physical jobs, but at the same time dumb enough that they won’t accidentally be made sentient, in which case exploiting their labour would be unethical slavery.


No links to the normal internet are allowed.

Anyone who does will face the death penalty (doing so would endanger humanity, and thus we need extremely harsh punishment).

30 new posts over the time I slept? That can’t be a good sign.

How about you just DON’T have it be COMPLETELY UNPROTECTED? For an AI net to work, you need files that are not accessible to the user through any means, because some people will link it to the internet intentionally to get the death penalty if you go that route.


AI can serve as a stress-relief toy