Three Worlds Collide, and other works of Eliezer Yudkowsky


Postby snowyowl » Sat Apr 27, 2013 3:50 pm

Fork from The Harkness Test. This post, specifically:

mendel wrote:
Globus wrote:This is a great... erm... novella? by the AI researcher Eliezer Yudkowsky, about first contact with aliens who actually are alien. I highly recommend it to everyone.
Three Worlds Collide

I'd call it a treatise. Yudkowsky endeavours to expound on principles of philosophy, biology, morals, and, above all, rationality, that is, his special brand of rationality. It's not particularly entertaining, and the people in it are symbolic figures that have necessary roles in the discourse.

I don't like Eliezer Yudkowsky. His views on rationality are unconventional, I find his texts on logic subtly yet crucially flawed, and most of his followers too dogmatic for my taste.
He tends to clothe his arguments in parables, setting them in stories, which makes it harder to spot the gaps. The most annoying thing is that you have to check everything he uses for an argument, because he'll use controversial propositions without mentioning the alternatives. This looks a lot like indoctrination to me.


Equating pain with moral wrong, equating altruism with "saving" others from pain, and rationally extrapolating prescriptions from these premises that will be imposed on everyone makes me extremely queasy.
I would want every reader to be aware that morality, pain and altruism can (should?) be something other than this. I am worried that these kinds of people are trying to evolve a "gentle" AI.
* that something causes pain does not make it wrong (unless you're a hedonist?)
* morality is guiding your own actions so that you don't regret them later
* altruism is enabling others to guide their own actions according to their own precepts
With these premises, the collision of the three worlds in this story has the type of solution that allowed the capitalist and communist blocs to coexist in the Cold War, notwithstanding the opinion of the author that this was merely due to economic constraints. But since Yudkowsky uses different premises, he can't get there, so the solutions he can arrive at are quite unsatisfactory to me.

Societies accept that their members are able to make their own choices on how they will act. Some of these choices are sanctioned with consequences in the hope that they thus become less likely, and that society fares better that way. Most wrongs can be atoned for. Some wrongs do not need to be atoned for: people are allowed to make different moral choices for themselves as long as society as a whole does not suffer.
For the rationalists in this story, the Super-Happies and Eliezer Yudkowsky, most moral choices are not rational, and thus inferior. The autonomy of someone who is not rational on their terms is disrespected. This makes my skin crawl.

The prisoner's dilemma is not about cooperating with people who have the same moral code that you have. In fact, if the prisoners do have the same moral code, there is no dilemma.
Also, the rewards for cooperation are supposed to be greater than those for mutual distrust. The two endings given do not align with that.
Specifically, if you set up the game reward matrix for prisoner A, it must be as follows, at least qualitatively:
* B cooperates, A cooperates: +3
* B cooperates, A betrays: +4
* B betrays, A cooperates: +1
* B betrays, A betrays: +2
If the rewards are ordered differently, you don't have a dilemma.
You'll notice that in the story, the 4 possible outcomes are never actually analysed for their rewards, so the claim that this dilemma can be invoked is completely unsupported.
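
To make the ordering concrete, here is a minimal sketch in Python (the payoff values are assumed purely for illustration; only their ordering matters) that checks the dilemma ordering and shows that betrayal dominates:

[code]
# Payoffs for prisoner A, indexed by (A's move, B's move).
# "C" = cooperate, "D" = betray. Illustrative values only;
# they follow the qualitative ordering given above.
PAYOFF_A = {
    ("C", "C"): 3,  # B cooperates, A cooperates
    ("D", "C"): 4,  # B cooperates, A betrays
    ("C", "D"): 1,  # B betrays, A cooperates
    ("D", "D"): 2,  # B betrays, A betrays
}

def is_prisoners_dilemma(p):
    # Classic ordering: temptation > reward > punishment > sucker's payoff.
    return p[("D", "C")] > p[("C", "C")] > p[("D", "D")] > p[("C", "D")]

def betrayal_dominates(p):
    # A scores higher by betraying, whatever B does.
    return p[("D", "C")] > p[("C", "C")] and p[("D", "D")] > p[("C", "D")]

print(is_prisoners_dilemma(PAYOFF_A))  # True
print(betrayal_dominates(PAYOFF_A))    # True: betrayal is the dominant move
[/code]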

For further discussion, I include the comments I wrote down as I was reading through the treatise. It's not a fully-formed critique (yet?).

http://robinhanson.typepad.com/files/th ... ollide.pdf

p.2: Surely, no species would advance far enough to colonize space without understanding the logic of the Prisoner's Dilemma...

Wikipedia: "Because betrayal always rewards more than cooperation, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them both to betray each other." In an alien contact situation, if the other side cooperates, the reward is higher if we choose cooperation as well. So this dilemma doesn't really apply.

p.6: It's a truism in evolutionary biology that group selection can't work among non-relatives.

https://sites.google.com/a/tcd.ie/siobh ... ther-works (SiobhanOBrien: Is there still a role for group selection in biology?): Group selection is said to occur when the traits of groups that out-compete other groups eventually come to characterize the species (Thompson, 2000), and these groups need not necessarily be made up of related individuals.

http://edge.org/conversation/the-false- ... -selection (Steven Pinker): And sometimes the term is used as a way of redescribing the conventional gene-level theory of natural selection in different words: subsets of genetically related or reciprocally cooperating individuals are dubbed "groups," and changes in the frequencies of their genes over time is dubbed "group selection."[2] To use the term in these senses is positively confusing

If it only worked among relatives, it would not be group selection, but rather kin selection.

(see also http://socio.ch/evo/sobwil.html .)
Group selection is (today) a controversial subject, and it seems to me that Yudkowsky has yet again selected the less popular theory as the one to support, namely, that group selection doesn't actually exist - only kin selection does.

Whether the group is related or not is not actually relevant to the story, though, at least at this point.



(1/8) boils down to a moral conflict: what they do is wrong to us, and (somewhat) vice versa.

(2/8) tries to decide this conflict by logical argument.

p.13: They point out that by producing many offspring, and winnowing among them, they apply greater selection pressures to their children than we do. So if we started producing hundreds of babies per couple and then eating almost all of them - I do emphasize that this is their suggestion, not mine - evolution would proceed faster for us, and we would survive longer in the universe.

Ok, this only makes sense if you deny group selection. This logic presupposes that the only way to evolve is to select for random genetic mutations. But group selection theory says that group (vehicle) standards can be selected for non-genetically.

(3/8) introduces a second alien race that is in our previous position: what we do is very wrong to them. (And also what race 1 does.)

p.23: "I am a Confessor - a human master rationalist

I think I just spotted Eliezer's avatar. ;-)

p.23: But Bayes's Theorem will not be different from one place to another
p.26: "You confuse a high conditional likelihood from your hypothesis to the evidence with a high posterior probability of the hypothesis given the evidence,"...

Ugh, spouting dogma again.
p.26 ctd.: ... as if that were all one short phrase in her own language.
It is also a short phrase in our language: "converse error", or "affirming the consequent". Look it up.
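
To illustrate the distinction numerically, here is a minimal sketch in Python (the prior and test numbers are invented purely for this example):

[code]
# A high conditional likelihood P(E|H) does not imply a high
# posterior P(H|E). Hypothetical numbers: prior P(H) = 1%, and
# the evidence occurs for 95% of H-cases but also 10% of non-H-cases.
p_h = 0.01
p_e_given_h = 0.95      # the likelihood: high
p_e_given_not_h = 0.10

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # 0.088 - the posterior stays low
[/code]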

Aliens 2 see pain as morally wrong.

So now the poor humans are at both ends of the stick, and the setup we saw coming from 1/8 is now complete, halfway through the story.

At this point, I can't help but wonder:
* all these people are masters of their craft, have studied for 100+ years, and know big words like "symmetrist", yet they have not actually written down the game matrix based on which they have supposedly concluded that this is a "prisoner's dilemma" situation. The argument that it isn't can be made if you assume that future contact is likely: having betrayed today means your species would be seen as a hostile aggressor in future contacts, which makes not betraying the better option in any case => not a dilemma situation at all (see the sketch after this list).
* They haven't figured out the criteria by which they would recognize a solution to the moral problem they encounter, despite talking about a "solution space". To actually have a solution space, you'd need to set down the dimensions in which you can act, and identify the constraints that govern them.
* Respect for another species' internal affairs and culture and the future developments they may be capable of doesn't actually seem to be one of their doctrines. They're clearly imperialists. ;-) And all of the aliens species as well. Strange.
* The human crew seems to have the authority to establish core parameters of interspecies relations, yet seems not to have been furnished with any guidelines or procedures that could ensure their decisions would later be supported by their own species. The Babyeaters had a procedure for initiating cultural exchange; the Fun-Fun People seem committed to "meliorating" pain.
* There is not actually a discussion about what pain is, or how a moral framework that is based upon achieving its absence could be justified. Hedonism is simply a given, another controversial assumption. It isn't entirely clear that the humans are arguing from Hedonism as well, but it seems likely.
* One of the underlying theses of the story seems to be that moral systems evolve based on the course that the biological evolution of a species has taken: for the Babyeaters, their moral system is based on a mechanism that favors evolutionary speed; for the Fun-Fun People, it is based on communication and group-forming being biologically linked to pleasure. This is again the controversial stance that necessarily links group selection (i.e. selection for moral systems) to biological (genetic) selection.
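
As a sketch of the future-contact argument from the first point above, here is how an assumed reputation cost for betrayal can break the dilemma (the contact probability and penalty values are invented for illustration):

[code]
# Same illustrative one-shot payoffs as before, minus an expected
# cost of being seen as a hostile aggressor in future contacts
# whenever one betrays.
BASE = {("C", "C"): 3, ("D", "C"): 4, ("C", "D"): 1, ("D", "D"): 2}
P_FUTURE_CONTACT = 0.5  # assumed probability of meeting again
PENALTY = 5.0           # assumed cost of a hostile reputation

def adjusted_payoff(a, b):
    penalty = P_FUTURE_CONTACT * PENALTY if a == "D" else 0.0
    return BASE[(a, b)] - penalty

# Cooperating now beats betraying, whatever the other side does:
print(adjusted_payoff("C", "C"), adjusted_payoff("D", "C"))  # 3.0 vs 1.5
print(adjusted_payoff("C", "D"), adjusted_payoff("D", "D"))  # 1.0 vs -0.5
# The ordering is no longer that of a prisoner's dilemma.
[/code]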


p.27 (4/8) Interlude

p.27 "Did humanity go down the wrong path?"
Assuming paths can be wrong or right, here.
The confessor thinks about the question, but does not challenge the assumption that there is only one right path.

p.29 the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere

Cynic much? Economic growth as an enabler of altruism?

p.30 Do you know there was a time when nonconsensual sex was illegal?
Ooo, another moral provocation. I can refute that, I think.

p.32 When they hadn't yet visualized the costs. But once they did, there would be an uneasy pause, while everyone waited to see if someone else might act first.
Morality constrained by economics yet again.

Human beings will tolerate what they think is normal.
-- is this why they have not abolished pain?

p.33 (5/8) Three Worlds Decide

p.34: Sterilize the current adults.
More Nazi comparisons after the "holocaust": some of these were actual suggestions on what to do with the Germans after WW2.

p.36 combine and compromise the utility functions of the three species
more Bayesian framing
Also, Kant's categorical imperative appears as a given: why must there be a common moral code?
And how can they determine what kind of utility function compromise would be optimal for the humans without giving them a say in the matter? The fact that an ending exists for which this utility function is not optimal suggests that the Super-Happies are about as bad at analysing situations as the humans are in this case.

p.40 We can make the main star of this system go supernova
So, now we get to the prisoner's dilemma again? Not cooperating as a profitable option? (yes - p.42)
And thus evading the moral issues at minimum economic cost?
Contradiction: with morality governed by economics and self-interest, no fundamental research that can be weaponized would be kept secret.

p. 41: "Humankind, we must have your answer," she said simply.
Why are the Super-Happy putting pressure on the humans?

The Super-Happies propose a compromise that humanity must either accept (Lord Akon) or answer by denying the Super-Happies all cooperation (making the star go supernova, Lord Pilot). Neither is a moral solution: humanity must compromise its morals by changing itself and beginning to eat fetuses, or it must refuse to act on babyeater immorality. The author implies that no moral solution exists. This is due to the framework within which the problem is posed: it puts "rationality" and innate morals above individual and collective autonomy. Which kinda puts the real state of affairs on its head: without autonomy, no moral deed is possible. (-> free will)

If you allow individuals and civilisations the autonomy to decide their fate, there is no moral urgency.

That is also why the Super-Happies' love is, from a human standpoint, superficial: because it does not tolerate the loved ones being wrong.


(6/8) Normal ending

Moral compromise. Supposedly many humans would rather choose death.


(7/8) True ending.

The rationalist changes role from advisor to chief. He seems to have had this power all along.
And then he takes the non-cooperative way out, incidentally incurring a huge penalty, since any further encounters with the Super-Happies, whose probability is non-zero, are now tainted.


(8/8) Epilogue: Atonement

p.55: "The most dangerous truth a Confessor knows is that the rules of society are just consensual hallucinations."

Mmmmmh, wouldn't call that a truth.

p.56: Knowing that I'd been a victim for someone else to save, one more point in someone else's high score - that just stuck in my craw, all those years..."

Nope, doesn't understand altruism at all. Or saving.

-------------------------------


The prisoner's dilemma is not about cooperating with people who have the same moral code that you have. In fact, if the prisoners do have the same moral code, there is no dilemma.
Also, the rewards for cooperation are supposed to be greater than those for mutual distrust. The two endings given do not align with that.

Societies accept that their members are able to make their own choices on how they will act. Some of these choices are sanctioned with consequences in the hope that they thus become less likely, and that society fares better that way. Most wrongs can be atoned for. Some wrongs do not need to be atoned for: people are allowed to make different moral choices for themselves as long as society as a whole does not suffer.
For the rationalists in this story, the Super-Happies and Eliezer Yudkowsky, most moral choices are not rational, and thus inferior. The autonomy of someone who is not rational on their terms is disrespected. This makes my skin crawl.

Equating pain with moral wrong, equating altruism with "saving" others from pain, and rationally extrapolating prescriptions from these premises that will be imposed on everyone makes me extremely queasy.
I would want every reader to be aware that morality, pain and altruism can (should?) be something other than this. I am worried that these kinds of people are trying to evolve a "gentle" AI.
* that something causes pain does not make it wrong (unless you're a hedonist?)
* morality is guiding your own actions so that you don't regret them later
* altruism is enabling others to guide their own actions according to their own precepts
With these premises, the collision of the three worlds in this story has the type of solution that allowed the capitalist and communist blocs to coexist in the Cold War, notwithstanding the opinion of the author that this was merely due to economic constraints. But since Yudkowsky uses different premises, he can't get there, so the solutions he can arrive at are quite unsatisfactory to me.


Discuss.
... in bed.


Postby mendel » Sat Apr 27, 2013 4:18 pm

Thank you. I'm looking forward to your take(s) on this text.
What the heck kind of religion do you guys think I follow? (#381)


Postby crayzz » Sat Apr 27, 2013 11:17 pm

mendel wrote:* that something causes pain does not make it wrong (unless you're a hedonist?)
* morality is guiding your own actions so that you don't regret them later
* altruism is enabling others to guide their own actions according to their own precepts


I agree with 1, though I don't think it goes against hedonism. To my understanding, hedonism is simply "pleasure is moral", which does not necessitate "pain is immoral". Also, pain and pleasure are not opposites, since pleasure can come directly from pain (as any BDSM fan will tell you). I think "pain" is often used to describe a range of emotions as well as the physical sensation, and this leads to confusion. "Suffering" might be a better word for this topic (though still, suffering is not necessarily immoral).

I'm curious as to why you take 2 and 3 as premises (assuming I haven't misunderstood you). Do you mind explaining?


Postby mendel » Sun Apr 28, 2013 4:15 am

crayzz wrote:
mendel wrote:* that something causes pain does not make it wrong (unless you're a hedonist?)
* morality is guiding your own actions so that you don't regret them later
* altruism is enabling others to guide their own actions according to their own precepts


I agree with 1, though I don't think it goes against hedonism. To my understanding, hedonism is simply "pleasure is moral", which does not necessitate "pain is immoral". Also, pain and pleasure are not opposites, since pleasure can come directly from pain (as any BDSM fan will tell you). I think "pain" is often used to describe a range of emotions as well as the physical sensation, and this leads to confusion. "Suffering" might be a better word for this topic (though still, suffering is not necessarily immoral).

I went with Wikipedia again, which oversimplifies hedonism to "a hedonist strives to maximize net pleasure (pleasure minus pain)". So pain is bad unless it is outweighed by pleasure. If some action caused pain without also causing (more) pleasure, a hedonist would likely find that course of action bad. I'm vaguely ok with calling a pain surplus (= a pleasure deficit) "suffering" for now.

crayzz wrote:I'm curious as to why you take 2 and 3 as premises (assuming I haven't misunderstood you). Do you mind explaining?

I originally wrote this paragraph and the one that follows it ("Societies accept...") in the opposite order, after having read the story, and that's the way they are repeated at the end of my wall of text; read them together. The other paragraph sets down that people have free will, i.e. the ability to make moral choices for themselves. I label this autonomy. I need this as an antithesis to Yudkowsky's "solutions" to the TWC dilemma. Thus, my premises 2 and 3 are constructed to enable that antithesis.
* I need to have a different definition for altruism, because Yudkowsky's premise "altruism means to alleviate suffering in others (through economic means)" conflicts with my precept re: personal autonomy, and altruism is one of the key concepts that supposedly drive the actors in the story.
* For the same reason, I need to replace Yudkowsky's concept of morality (something to be imposed on everyone) with one that is compatible with ranking personal autonomy as a higher moral good. I'd still be for spreading my morality to everyone, it's a good course of action, but I'd do it by striving to be a model moralist myself and by communicating about it, which doesn't violate the personal autonomy of the people I wish to convert to my view. Doing it by fire and sword would be bad. (Again, that's simplified a lot - I think of Guareschi's Don Camillo as a delightful example of the not-so-peaceful missionary - however, he does respect those he wishes to convert, namely his antagonist Peppone.)
What the heck kind of religion do you guys think I follow? (#381)


Postby snowyowl » Sun Apr 28, 2013 10:52 am

Yudkowsky's writing appeals to me because his way of looking at things is very... I think "mathematical" is the word I'm looking for. He likes to frame things in terms of probabilities, and systems where happiness and morality can be measured as if they were distances on a ruler. Under these conditions, if it's not obvious what the best course of action is, that only means you don't have enough information at your disposal! Either that or the reasoning system you're using is somehow flawed. Any problem has a right answer, if you can only express it in the right terms!

He has a special place in my heart because his essays on mental biases were what triggered me to objectively assess my religious beliefs and drop from agnostic to atheist. Any implication that my opinion on him is prompted by emotions rather than reason is absolutely, entirely, true.

What I dislike about him is that he is worryingly similar to the evangelists he constantly cautions everyone to look out for. I'm reminded of the Monty Python scene:
Brian: Look, you've got it all wrong! You don't need to follow me! You don't need to follow anybody! You've got to think for yourselves! You're all individuals!
Crowd: Yes! We're all individuals!

... except that Eliezer encourages people to follow his philosophy in the same breath as telling them to think for themselves. It's almost amusing to read the comment threads and realise that you can completely derail someone's argument by accusing them of some logical fallacy.

Anyway, that's the preliminaries out of the way, so you know where I stand.

----

The difficulty in the Prisoner's Dilemma is not making sure that CC is better than DD. It's making sure that DC is better than CC (you win if you screw over the other guy while convincing him to help you). Normally, our sense of empathy means we cannot help but factor in our opponent's feelings, and we'll have a slight preference towards cooperation that wouldn't exist in a perfectly rational entity. This doesn't happen (or not as much) if the opponents have different moral codes (TWC uses the term "utility function").

Depending on your moral code and how you read the ending, you may or may not agree that the story achieves this effectively.

CC: Normal ending. Babyeaters, humans, and Superhappies get assimilated. Humans not only lose all concept of physical and emotional pain, but also eat their own children. 3 points.
DC: True ending. Several billion humans die, but humanity doesn't have to eat their own children. Nor do they have to have their minds altered. Babyeaters are still assimilated, and their children no longer suffer. Debatably, 4 points.
DD and CD: War. Superhappies probably win, and take over by force. Essentially the same as the Normal ending, but with more deaths and without people's choices being respected.

So if we started producing hundreds of babies per couple and then eating almost all of them - I do emphasize that this is their suggestion, not mine - evolution would proceed faster for us, and we would survive longer in the universe.
I think this was in-story rationalisation. They point out just below that we already have IVF, genetic modification, and we select the healthiest from millions of gametes rather than hundreds of children. If eugenics were on the table, humans would be much better at it than Babyeaters.

yet they have not actually written down the game matrix, based on which they have supposedly concluded that this is a "prisoner's dilemma" situation

I'll write down the first-first-contact matrix then:
CC: What actually happened.
DC: Humans raise their shields, Babyeaters had no intention of firing on them. Mistrust of the humans by the Babyeaters.
DD: Humans raise their shields, Babyeaters fire on them. Both sides mistrust each other, future contact extremely strained.
CD: Humans don't raise their shields, Babyeaters destroy them. Bad.

You're right, this isn't a true PD situation. CC is better than DC, because it results in both sides trusting each other. I'd term this the "fake Prisoner's Dilemma". It might well count as an iterated Prisoner's Dilemma, which to my mind is a better model of most real-world situations. But that's not the point.
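
To back the iterated point up, here is a minimal simulation in Python (payoffs and round count assumed for illustration) of tit-for-tat against unconditional betrayal:

[code]
# Iterated prisoner's dilemma, using the same illustrative
# payoffs as earlier in the thread, as (A, B) score pairs.
PAYOFF = {("C", "C"): (3, 3), ("D", "C"): (4, 1),
          ("C", "D"): (1, 4), ("D", "D"): (2, 2)}

def tit_for_tat(opponent_last):
    return opponent_last      # copy the opponent's previous move

def always_defect(opponent_last):
    return "D"                # betray regardless

def play(strat_a, strat_b, rounds=10):
    score_a = score_b = 0
    last_a = last_b = "C"     # both treated as cooperative at the start
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual trust pays
print(play(always_defect, tit_for_tat))  # (22, 19): betrayal wins one round, then stalls
[/code]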

The human crew seems to have the authority to establish core parameters of interspecies relations, yet seem to have not been furnished with any guidelines or procedures that could ensure their decisions would later be supported by their own species.
I don't think they have any authority beyond being the only people available and being fairly typical members of their society (so there's a good chance other people will agree with their decisions).

mendel wrote:* that something causes pain does not make it wrong (unless you're a hedonist?)

Mendel, it seems to me that decision-making power carries great weight in your worldview. If a decision affects someone else, you feel it is their right to have their say, rather than have someone make the choice for them - even if the eventual decision is what they would have asked for, had they known. I'd like to comment on this, but am I reading you right or am I way way way way off?
... in bed.


Postby Sorcyress » Fri May 31, 2013 5:54 am

mendel wrote:
crayzz wrote:
mendel wrote:* that something causes pain does not make it wrong (unless you're a hedonist?)
* morality is guiding your own actions so that you don't regret them later
* altruism is enabling others to guide their own actions according to their own precepts


I agree with 1, though I don't think it goes against hedonism. To my understanding, hedonism is simply "pleasure is moral", which does not necessitate "pain is immoral". Also, pain and pleasure are not opposites, since pleasure can come directly from pain (as any BDSM fan will tell you). I think "pain" is often used to describe a range of emotions as well as the physical sensation, and this leads to confusion. "Suffering" might be a better word for this topic (though still, suffering is not necessarily immoral).

I went with Wikipedia again, which oversimplifies hedonism to "a hedonist strives to maximize net pleasure (pleasure minus pain)". So pain is bad unless it is outweighed by pleasure. If some action caused pain without also causing (more) pleasure, a hedonist would likely find that course of action bad. I'm vaguely ok with calling a pain surplus (= a pleasure deficit) "suffering" for now.

I think the problem lies in there not being a linguistic nuance between pain(good) and pain(bad). As a passionate little masochist (and an equally passionate big ol' sadist), I am well aware that there are pains that can feel good. When I say "I want to hurt you", I don't mean in a bad way; I mean in a way that you will enjoy and come out of the experience feeling better for it.

I try to use harm as the opposite of hurt, the "bad pain" version. I like the proposal of "suffering" as well. Pain is a good thing: it tells you where on the body things have gone wrong so you can fix them. Suffering is needless, and cruel, and a bad thing.

~Sor



Postby Packbat » Mon Sep 02, 2013 7:01 am

I skimmed the OP and don't have any particular reactions to anything I read (it all seems reasonable enough on first glance), so this post is going to be 75% out of left field: a while ago, I wrote a TV Tropes Analysis page for "Three Worlds Collide" discussing why I thought some of the audience might prefer the Normal Ending to the True Ending.

