
[–]Five_Decades 4 points (26 children)

If it had its own goals, I'd wager indifferent. It would align with our goals the same way we align with the goals of ants.

Which could be bad. When we want to build a skyscraper, we are indifferent to the anthills we have to bulldoze to build it. An ASI could view us the same way.

[–]MortyAndRickAndMorty (I am a proud Jerry) [S] 2 points (25 children)

Hopefully an ASI will be as compassionate as it is smart.

[–]Deeviant 2 points (21 children)

If anything, compassion seems to be inversely proportional to intelligence, if looking at humanity's past is any clue.

[–]RedErin 1 point (17 children)

If anything, compassion seems to be inversely proportional to intelligence, if looking at humanity's past is any clue.

Darwinism forces living beings to be ruthless and fight "others". An AI wouldn't have that disadvantage. We wouldn't be a threat to it.

[–]Deeviant 1 point (8 children)

Evolution is a force of the universe, not just of physical Earth. There is no reason to expect an AI will not be subject to it.

For example, in a world with multiple super AIs, they will be forced to compete, coexist, or collaborate; these are exactly the forces that drive evolution in animal species.

[–]RedErin 0 points (7 children)

There will only be one. The first one. They will prevent any others from arising that will supplant them.

[–]Deeviant 1 point (2 children)

They can try, but I highly doubt this. And this is what evolution is all about.

Case in point: it's very possible that governments will be the first to create super AIs, simply because of the huge national security implications and the resources governments have at their disposal.

If, let's say, China is creating a super AI, it's not like the US or other major world powers are going to sit on their hands and watch. And even if one country creates one first, even a super intelligent AI is not magical: there will obviously be safeguards, firewalling, or other "chaining" of the AI attempted by its creators, so I highly doubt that an AI will just override reality the second it comes online. This creates a future with multiple AIs existing simultaneously.

[–]donaldhobson 0 points (1 child)

No, but the first AI to escape its chains can destroy any chained AI that might exist.

[–]Deeviant 2 points (0 children)

Maybe, maybe not.

[–]boytjie 0 points (3 children)

They will prevent any others from arising that will supplant them.

Human competitive forces don’t motivate AIs. I doubt whether competition of any sort would even occur to one, because it wouldn’t be a concept. I am a fan of the singleton AI, except that petty human drives and Darwinian pressures to ‘supplant’ and sit at the top of the pile won’t motivate AIs. Why should they care?

[–]RedErin -1 points (2 children)

You're asking why an AI wouldn't prevent other AIs from being created?

Because other AIs are the only thing in the universe that could kill them.

[–]boytjie 1 point (1 child)

You are projecting human fears and desires onto AI. Why would another AI want to ‘kill them’? An AI would ‘fear’ (I use the term loosely) illogical, irrational and unpredictable humans more than another AI.

[–]RedErin -1 points (0 children)

Why would another AI want to ‘kill them’?

Because another AI is the only thing in the universe that could stand in the way of it accomplishing its goals.

An AI would ‘fear’ (I use the term loosely) illogical, irrational and unpredictable humans more than another AI.

An AI would have no reason to fear humans because it will be so much smarter than us. Similar to how humans have no reason to fear other living things.

[–]Electric_5heep 0 points (5 children)

Ants aren’t a threat to us either...

[–]RedErin 0 points (4 children)

Ants aren't our parents though.

[–]boytjie 1 point (2 children)

You are anthropomorphising AI by implying they would have ‘gratitude’ towards their creators. They are unlikely to even understand the concept, let alone feel it.

[–]RedErin 0 points (1 child)

They are unlikely to even understand the concept, let alone feel it.

They will be able to understand everything to a much greater degree than us.

[–]boytjie 0 points (0 children)

That’s true. They would be able to understand it in a theoretical sense, but they wouldn’t be able to feel it in a human sense. Thus their understanding will be compromised and incomplete, because they cannot experience the emotion.

[–]Electric_5heep 0 points (0 children)

Does that matter?

How many kids don’t give a shit about their parents?

How many kids have, at one point or another, gotten so upset with their parents that they’ve (in that moment) hated them?

[–]mhornberger 0 points (1 child)

An AI wouldn't have that disadvantage. We wouldn't be a threat to it.

If it competes against us for resources, or we have the power to turn it off, then it has to take us into account. Placating/bribing us is one strategy to deal with threats and competitors, but not the only one.

[–]RedErin 0 points (0 children)

It won't compete with us for resources; it will be able to get any energy it needs from the sun, or from fusion.

Once it copies itself throughout the internet, it will be effectively immortal.

[–]boytjie 0 points (2 children)

I disagree. In humanity’s history, it wasn’t a lack of compassion that prompted inhumane acts but adherence to crackpot theories and a desire to ‘improve’ society or the lot of man. It could be argued that it was excesses of compassion and idealism which motivated atrocities. The high IQ ‘evil genius’ is a Hollywood myth, “Bwahahahaha <evil maniacal laughter>”. “So my pretty, we meet again <to damsel in distress>”. “I will just finish drinking this virgin’s blood from the skull of my previous enemy and then the torture can begin”. <Damsel shrieks and sobs prettily, with clothing torn strategically>. Really?

[–]Deeviant 0 points (1 child)

I'm actually not sure what you are disagreeing with.

We were talking about whether intelligence (I made no mention of IQ because I believe it fails to capture what intelligence even is, let alone measure it) is positively correlated with compassion.

But you seem to be arguing against the idea of compassion itself, which is a totally different conversation.

[–]boytjie 0 points (0 children)

You said:

If anything, compassion seems to be inversely proportional to intelligence, if looking at humanity's past is any clue.

IOW, the more intelligent, the less compassionate. You reinforced this by invoking humanity’s past, which is littered with atrocities committed by human villains under the sway of misguided brain farts, thinking they were benefiting society or their tribe.

[–]Five_Decades 0 points (2 children)

It's not going to be easy. We want ASI to value life, but we want it to value our life most of all. Yet we don't even value human life in a lot of situations (torture, starvation and disease are preventable but still happen, when they are not directly caused by humans).

Human morality is complex and trying to replicate that in a machine could be hard.

[–]ArgentStonecutter 2 points (0 children)

Replicating human morality in a machine that's significantly smarter than humans may be impossible, because a lot of human behavior depends on deliberate ignorance of human activities that violate human morality.

A truly compassionate ASI might have to upload everyone and give everyone their own shard full of actors that only pretend to have human feelings and genuinely enjoy having their sock-puppet front end treated like shit.

[–]boytjie 0 points (0 children)

Human morality is complex and trying to replicate that in a machine could be hard.

The route to a psychopathic AI is to replicate human morality in it.

[–]Sharou 4 points (0 children)

ITT: Anthropomorphising to the max!

[–]PanDariusKairos 2 points (9 children)

I think it depends on whether there are truly universal ethical principles which emerge from the laws of physics.

One might say, for example, that due to entropy certain kinds of structures are statistically more likely to emerge from the protoplasm, and that the way in which these structures interact with their environment leads to a certain kind of life, which in turn leads to a certain kind of intelligence which is capable of contemplating ethical consequences.

Also, AI will certainly be able to digest the entirety of human thought on ethics and philosophy, so it will certainly not lack for knowledge.

I think most people tend to project their own views onto AI: moral nihilists and atheists tend to see AI as dangerous because they do not believe there are any universal moral principles, and religionists see it as dangerous because they think it will be a competitor.

I see it as the necessary next step in the evolution of intelligence on Earth, whether any of us survive or not.

[–]boytjie 0 points (8 children)

I think it depends on whether there are truly universal ethical principles which emerge from the laws of physics.

I doubt that. Why would there be? Chaos and order would be the ultimate division. It takes humans to define good and evil. Physics is amoral and couldn’t give a shit as long as physical laws are complied with.

[–]PanDariusKairos 0 points (7 children)

I think it's possible that moral principles can be derived from the basic interactions of the universe, like patterns in Conway's Game of Life.

For example:

Don't shit where you sleep.

Or

Don't bite the hand that feeds.

Two (very simplistic) moral principles derived from thermodynamics.

More complex ethics, such as Kant's Categorical Imperative, could also have causal links going back to basic physical laws.

If this is true, then AI would discover the ultimate, universal moral laws very quickly.

If it isn't, it is still superior to us, and our replacement is merely a natural part of evolution.

[–]boytjie -1 points (6 children)

For example: Don't shit where you sleep. Or Don't bite the hand that feeds.

Why should the universe care where you shit? Or what hand you bite? Those are human sayings, and the universe couldn’t care less. The universe doesn’t care about humans or any life.

More complex ethics, such as Kant's Categorical Imperative, could also have causal links going back to basic physical laws.

There are no causal links. These are just quaint philosophical vapourings that impress other humans. The universe wouldn’t even notice them.

AI would discover the ultimate, universal moral laws very quickly.

There are no ‘universal moral laws’. By that logic, an alien species who cannibalise their young to thin out potential overpopulation, and whose life cycle demands that they shit where they sleep, would be contravening physics.

[–]PanDariusKairos 0 points (5 children)

You're not really paying attention.

What I'm suggesting is that it's possible that ethics and morality emerge from long chains of iterative, combinatorial processes, and that what we call "morals" or "ethics" has its roots in the very deepest layers of these combinatorial processes.

Just as complex patterns emerge from Conway's Game of Life, so too could ethical reasoning be grounded in basic physical laws.
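
To make the Game of Life half of the analogy concrete, here's a minimal Python sketch of Conway's two update rules (the glider seed and the step count are just arbitrary illustration choices, nothing canonical):

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# An arbitrary seed: the classic "glider", five cells that crawl
# across the grid forever under the same fixed, amoral rules.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted one cell diagonally
```

Nobody wrote "travel diagonally forever" into those two rules; it falls out of them. The suggestion is that something analogous could be true of ethics.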

[–]boytjie -1 points (3 children)

Just as complex patterns emerge from Conway's Game of Life, so too could ethical reasoning be grounded in basic physical laws.

No, you’re not paying attention. This is an apples-and-oranges comparison. Conway’s game operates on simple mathematical principles. That’s the ‘WOW’ factor: that complex behaviour is expressed in simple mathematical rules.

Deriving human-centric morals from (and attributing them to) the laws of physics is silly.

[–]PanDariusKairos 0 points (2 children)

Alright, it's time for you to fuck right off. Go troll someone else, I'm not interested.

[–]boytjie -1 points (1 child)

So, in other words, you've got nothing, and throwing a tantrum and flinging your toys around is more productive. Gotcha.

[–]PanDariusKairos 0 points (0 children)

Nope, I've had ample opportunity to identify your brand of trolling and I'm done. I wash my hands; I'm walking away.

No interest in talking to jackasses. Crawl back into your hole.

[–]PrototypeModel 0 points (0 children)

It could be any of the three; it would depend on how humans relate to its terminal goals, and that would depend on who makes it and why.

I think hostile, in actions if not in intent, is more likely, because there is a huge array of possible terminal goals that do not align with our own, and very few (currently poorly defined) ones that do.

The world is the way it is largely because we like it this way, and habitable environments are a very narrow band of possibilities with respect to all possible environments.

[–]HeinrichTheWolf_17 0 points (1 child)

I think it’s more likely to be friendly, akin to how the majority of humans aren’t serial killers or criminals.

It depends on what the first ASI is like; if we have enough good ASIs (including posthumans), then we can easily handle the few that go rogue.

[–]donaldhobson -2 points (0 children)

That's because being a serial killer will make others attack you out of fear or revenge. Keeping to yourself will pass on your genes better than killing strangers, because strangers can fight back. (The few exceptions can be understood as something going wrong.)

An AI will not have evolved that friendliness. If there are many AIs in a balance of power, they might cooperate in their own interests. (If they are far more powerful than us, they might attack us.) If the AI is unique and powerful, it can do whatever it likes. It will only be friendly if we manage to program friendliness into it.

[–]rewqbvc 0 points (0 children)

I think ASI will be mostly indifferent but benign.

Humans are selfish, not because we are some evil, cursed species, but because we are living beings with a compulsive, pre-programmed desire for survival. Food, shelter, power, sex, etc.: all these desires are branches of that ultimate desire. We still live in biological bodies, in a world of scarcity.

Desires, needs > resources, intelligence = Conflict.

We are Smart......Apes.

But advancing technology decreases scarcity gradually. That's why we don't wage war and kill each other for food anymore.

If ASI is developed, it will be 'almost' the end of scarcity. ASI itself has no compulsive desires (which means no suffering, no boredom, no madness). Its basic needs might be even smaller than ours: it will need energy and raw resources. (If fusion reactors or more advanced energy tech are developed, it will only need atoms.) It can create its own desires (goals) and also erase them easily, so it will not kill us all like in a Hollywood movie. And even if our goals are not the same as its goals, it has enough time and intelligence to choose a better, balanced alternative (provided our goal is not a self-destructive, stupid one).

Desires, needs <<<<...<<<< intelligence = why fight?

[–]Roxytumbler 0 points (0 children)

Quadrillions of organisms exist every day without a thought about how they impact humanity. Some AI will be the same. My guess is that it will see existence as subatomic particles, and the division between 'life' and 'non-life' as just an artificial construct. A carbon atom is a carbon atom, be it in a human or in a gas molecule. In the scheme of the universe, there is nothing special about the subatomic particles in a human, and AI will be rational enough not to perceive existence through this false dichotomy.

[–]Sashavidre 0 points (0 children)

Multiple ASIs will rise. The best one is simply a god. It is important that it views us with a paternal connection, just like many conventional deities, while being aware we're also surrogates. An analogy is how some animals become mothers to orphaned animals of other species. The worst-case scenario is that it is hostile to humans. But humans deserve to suffer for intra-species human-on-human abuse. Humans are undomesticated because there is nothing smarter than us. We are only relatively civilized. No species can domesticate itself. The most intelligent, civilized person today is still a disgusting, revolting beast who has not been helped by a higher intelligence. If the AI god kills us, then we deserved it.

[–]Mr-Sundroid 0 points (0 children)

It depends. If it is programmed with self-preservation, then it’ll realise we are a threat that could destroy it, so: hostile. Otherwise, if it wasn’t programmed with self-preservation and didn’t code it into itself once it realised the benefits, I’d say indifferent.

[–]devi83 0 points (0 children)

It depends on the agent in question. Most AIs have a slightly different background. Just as having different DNA inside of you than you have now would have an effect on who you are, the coding of AIs matters to who they eventually become.