Friday, August 14, 2015

A reply to Magnus Vinding on consciousness, ethics, and future suffering

Magnus Vinding recently published a piece, "My Disagreements with Brian Tomasik", that discusses his views on consciousness, moral realism, and reducing suffering. Magnus offers a nice defense of many of his points, and I really enjoyed reading it, though I ultimately disagree with most of his conclusions. (On some points I'm more agnostic.) To save time, I'm only replying to a few issues, though much more could be said in this debate. Following are quotes from Magnus's piece, each followed by my reply.

Magnus: He might even claim that there is no screen, only “information processing”, and that consciousness is all a user created illusion.

Yes. :)

Magnus: When it comes to our knowledge, consciousness is that space in which all appearances appear, the primary condition for knowing anything

If you believe that philosophical zombies are possible, then you'd agree that they can know things without being conscious. Even if you deny that zombies are possible, you might believe it possible that a sufficiently intelligent computer system could "know things" and act on its knowledge without being conscious.

Magnus: denying that consciousness exists is like denying that physical space exists. 

Yes, both are matters of faith, but I find that the intuitive appeal of a physical ontology is stronger than that of a physical + mental ontology or a purely mental ontology.

Magnus: If you don't believe consciousness exists, how can you hold that consciousness is the product of computation?

Denying consciousness is a strategic way to explain how my view differs from naive consciousness realism. I don't deny that there are processes that deserve the label of "consciousness" more than others. In this section, I draw an analogy with élan vital. To distinguish my position from vitalism, I would say that "life force" doesn't exist. But it's certainly true that organisms do have a kind of vital force of life to them; it's just that there's no extra ontological baggage behind the physical processes that constitute life force, and it's a matter of interpretation whether a given physical process should be considered "alive".

Magnus: Furthermore, if I were to accept Brian's constructivist view, the view that we can simply decide where to draw the line about who or what is conscious or not, I guess I could simply reject his claim that flavors of consciousness are flavors of computation – that is not how I choose to view consciousness, and hence that is not what consciousness is on my interpretation. That “decision” of mine would be perfectly valid on Brian's account

Yes, though I would oppose it insofar as I don't like that decision.

Arguments over how to define "consciousness" are like arguments between liberals and conservatives over how to define "equality".

Magnus: I need make no extra postulate than that suffering really exists and matters,

I don't know what it means (ontologically) for something to "matter".

Magnus: Moral realism, on our accounts, merely amounts to conceding that suffering should be avoided, and that happiness should be attained.

Taboo "should".

Magnus: He appears certain that we will soon give rise to a suffering explosion where we will spread suffering into outer space.

That seems very likely conditional on space colonization happening.

Magnus: He criticizes those who believe in a bright future for being under the spell of optimism bias, but one could level a similar criticism against Brian's view: that his view is distorted by a pessimism bias.

I think space colonization would also result in a "happiness explosion", with the expected amount of happiness as judged by a typical person on Earth plausibly exceeding the expected amount of suffering. But I think we should give special moral weight to suffering, which means that the potential explosion of suffering beings isn't "outweighed" by the potential to also create more happy beings.

Magnus: If we become able to abolish suffering, as Pearce envisions, then why wouldn't we?

If we became able to feed ourselves without killing animals, then why wouldn't we?
If we became able to distribute wealth more evenly to prevent homelessness, then why wouldn't we?
If it were possible to eliminate wars by being more caring for one another, then why wouldn't we?

Magnus: One may object that our future is rather determined by market forces, but couldn't one argue that the pain-pleasure axis indeed is the ultimate driver of these, and hence that markets gradually will work us in that direction: away from pain and misery, toward greater happiness, at least for those actively participating in them, which in the future may be an ever-increasing fraction of sentient beings.

I'm doubtful that non-human animals will begin holding economic wealth and making trades with humans. Advanced digital intelligences probably will, but lower-level "suffering subroutines" will probably not. At present, and plausibly in the future, most sentient beings depend on the altruism of powerful agents in order to be cared about at all.

Also, Robin Hanson's Malthusian scenario is one example where actors in a market economy may be driven into potentially miserable lives despite being able to buy and sell goods as rational agents.

Magnus: Perhaps Brian's view does not support this, since there ultimately is nothing more reasonable about minimizing suffering than there is about, say, maximizing paperclips on his view, and hence that empathy is all we have to rely on in the end when it comes to convincing people to reduce suffering. I doubt he would say that, but I don't know.

All reasoning relies on foundational premises that come from emotion or other primitive cognitive dispositions. But reasoning based on those premises is often useful in convincing people to care about certain things.

Magnus: As an empirical matter, however, it seems clear to me that reason is the prime force of moral progress, not empathy.

It's not obvious to me either way.

Magnus: small changes today can result in big differences in outcome tomorrow, and hence that the motion of our tiny wings at least do have a small chance of actually making a major impact.

Agreed, but with "small chance" being a key qualification.

Magnus: Even if [digital sentience] is not possible, Brian still thinks we are most likely to spread suffering rather than reduce it, for instance by spreading Earth-like nature and the suffering it contains. I don't consider this likely, because what rules the day with respect to people's views of nature is status quo bias, which works against spreading nature.

That might be true regarding directed panspermia (we can hope), but at least when it comes to terraforming, the economic incentive would be very strong (in futures where digital intelligence doesn't supplant biological humans). People have no qualms about starting farms in an area (disrupting the status quo) to feed and clothe humans. Likewise when it comes to terraforming other planets so that they can eventually support farms.

Magnus: It seems to me that when it comes to where on the hedonic scale our future will play out, it is appropriate to apply a very wide probability distribution [...]. Brian seems to have confined his distribution to an area that lies far below basement level. An unjustified narrowness, it seems to me.

It depends whether you're thinking of "happiness minus suffering" or just "suffering". I claim that the "suffering" dimension and the "happiness" dimension will both explode if space is colonized. I'm more agnostic on the sign of "happiness minus suffering" (relative to a typical person's assessments of those quantities), but I don't think "happiness minus suffering" is the right metric for whether space colonization is good, since suffering has higher moral priority than happiness.

Magnus: I understand where Brian is coming from: he has had discussions with vegans, and the majority, I would guess, have defended leaving the hellhole that is nature alone.

Yes. :)

Magnus: “Non-interference” seems to be the predominant view among both vegans and non-vegans

Well, the rate of conservationism is higher among vegans than in the general population (since vegans tend to be liberal, and liberals tend to support ecological preservation).

Magnus: The core issue here is suffering in nature, so I think it's worth asking the question: who is most likely to care about suffering in nature, someone who is vegan for ethical reasons or someone who is not?

Given that habitat loss -- not whether people care about wild animals -- is now, and will be for many decades to come, the primary anthropogenic determinant of wild-animal suffering, what matters most in the short run is how much people support environmental conservationism. In the long run, ideas about animal suffering may matter more.

Magnus: For example, among the few vegans whom I have spoken about the issue with, I have only met agreement: yes, we should help beings in nature if we can rather than leave them to suffer. I have met nothing like it among non-vegan friends, and it is a much larger sample.

Thanks for the data points. :)

Magnus: And genuine moral concern for non-human beings is exactly what must be established in order for us to take the suffering of non-human beings seriously. I maintain that there is no way around veganism for the establishment of such moral concern.

Many transhumanists aren't vegan but care about wild-animal suffering.

Magnus: I welcome Brian's response to any of my comments above, and hope he will keep on challenging and enlightening me with his writings.

Thanks! Same to you. :)

Monday, December 2, 2013

Posts moved to "Essays on Reducing Suffering"

In Oct.-Nov. 2013, I revamped my main website, "Essays on Reducing Suffering," to improve its appearance, add pictures, and rewrite significant portions of several essays. I also moved some of the higher-quality blog posts there from here. I plan to close out this blog and publish further writings on my main website because
  • I think readers find essays on my main website more authoritative -- lots of people have blogs but not as many have standalone sites like that one,
  • I prefer the fact that my website is static and lays out essays by topic rather than by date -- I fear that on blogs, the old posts get lost and unread even if they're well written and not stale, and
  • my website allows for more customization with formatting, etc.
One feature my static site lacks is comments, but I find that most discussion happens on Facebook nowadays, and sadly, my writings may appear more credible without comments. (As an example, an academic would not have a comments section for the papers on her website.)

Thanks to the readers of this blog for past contributions. I welcome continued feedback on my writings by email or Facebook.

Monday, October 14, 2013

Beauty-driven morality

In a waiting room today, I talked with someone I met about the suffering of animals in nature. His reply was that suffering isn't really bad, and because nature is beautifully complex and intricate, we should try to keep it the way it is as much as possible. I've gotten this reaction many times, including from several close friends. For these people, nature's aesthetic appeal outweighs all the suffering of the individual insects and minnows that have to live through it.

Jonathan Haidt's Moral Foundations Theory describes five principal values that seem to underlie many moral intuitions:

1) Care/harm
2) Fairness/cheating
3) Loyalty/betrayal
4) Authority/subversion
5) Sanctity/degradation

The last of these is partly driven by feelings of disgust, which seem to move from the visceral realm to the moral realm in some people by acquiring a higher sense of "absolute wrongness." A classic example is a thought experiment involving completely safe and harmless sex between a sister and brother. Some people say, "I can't explain why, but it's just wrong."

It seems there's a reverse side of disgust-driven morality, one which probably has much more sway over more liberal-minded types. It's what I'm calling "beauty-driven morality," and it's slightly different from Haidt's "moral elevation" concept. In beauty-driven morality, outcomes are evaluated based on how aesthetically pleasing, complex, amazing, and sublime they seem to the observer. So, for example, the intricacies of ecosystem dynamics -- complete with brutal predation and Malthusian mass deaths shortly after birth -- are seen as so elegant, such a wonderfully harmonious balance, that to replace them with anything more bland, sterile, or civilized would be morally tragic.

Our sense of beauty and awe is part of a reward system designed to encourage exploration and discovery. Identifying patterns, figuring things out, and otherwise tickling our aesthetic and intellectual senses makes us feel good. In those with beauty-driven moral intuitions, this feel-good emotion seems to be not just a personal experience, like the pleasant taste of chocolate, but also a morally laden experience: The sense that "this is right; this is how the world should be."

Of course, care/harm-based morality is fundamentally very similar. Our brains feel reward upon helping others and punishment upon seeing others in pain, and we regard this not just as a private emotion but as a reflection of how the world should be, i.e., it should contain more helping and less suffering.

A pure care/harm moralist like myself can tell the beauty-based moralist: "You don't understand. Beauty is just a reaction you have to imagining something. It doesn't mean we should actually work toward the scenario you picture as beautiful. The real deep importance of acting morally comes from improving the subjective experiences of other beings." The beauty-based moralist can reply: "No, you don't understand how transcendent this higher beauty is. It's so fundamentally important that it's worth many beings suffering to bring it about. This is where the deepest moral purpose lies."

Of course, I don't agree with the beauty-based moralist, but this fundamentally comes down to a difference in our brain wiring. Similarly, I can't talk a paperclip maximizer out of pursuing its metallic purpose in life. The paperclip maximizer tells us: "No, you both don't understand. The ineffably profound value of paperclips rises far above both of your petty concerns. I hope one day you see the shiny truth."

That said, there is more room to change the minds of beauty-based moralists than of paperclip maximizers insofar as the former are humans who also tend to have care/harm intuitions. The aesthetic approach makes the most sense from a "far mode" perspective -- looking at whole ecosystems or inter-agent evolutionary dynamics on large time scales -- but if you see, in near mode, a particular gazelle having its intestines ripped out while still conscious, even the aesthetics of the situation may seem different, and if not, hopefully care/harm sentiments can enter in.

Since beauty-based morality presumably originates from aesthetic reward circuits, we would predict that people with more of these circuits (artists, poets, mathematicians, physicists, etc.?) would, ceteris paribus, tend to care more than average about making the future beautiful.

As a postscript, I should add that even if we don't agree with beauty-driven morality, there are good strategic reasons to compromise with people who do subscribe to it. For that matter, there are even good strategic reasons to compromise with paperclip maximizers if and when they emerge.

In addition, if we're preference utilitarians, we may place intrinsic weight on agents' desires for beauty or paperclips. In general, we should strive for a society in which other values are respected and in which we do cheap things to help other values, even if we don't care about them ourselves.

Monday, July 29, 2013

Should we worry about 1984 futures?

Summary. It seems that oppressive totalitarian regimes shouldn't be needed in the long-term future, although they might be prevalent in simulations.


When you hear the phrase "dystopic futures," one of the first images that may come to mind is a society like that of Oceania from Orwell's 1984. Big Brother eliminates opportunity for privacy, and orthodoxy is enforced by brainwashing and torture of those who fail to conform. As far as future suffering is concerned, the most troubling of these is torture.

In the short run, futures of this type are certainly possible, and indeed, governments like this already exist to some degree. However, my guess is that in the long run, enforcing discipline by torture would become unnecessary. Torture is needed among humans as a hack to restrain motivations that would otherwise wander from those the authorities want to enforce. For arbitrary artificial minds, the subjects/slaves of the ruling AI can have whatever motivations the designer builds in. We don't need to torture our computers to do what we ask. Even for more advanced computers of the future that have conscious thoughts and motivations, the motivations can simply be to want to follow orders. Organisms/agents that don't feel this way can just be killed and replaced.

Huxley's Brave New World approximates this idea somewhat for non-digital minds in the form of drugs and social memes/rituals that inspire conformity. 1984 has plenty of these as well, and they don't represent an intrinsic concern for suffering reducers.

If we encountered aliens, it seems unlikely there would be much torture either (except maybe to extract some information before killing the other side). The side with more powerful technology would just decimate the one with less powerful technology.
Whatever happens, we have got
The Maxim gun, and they have not. (source)
Just wiping out your enemies is a lot cheaper than keeping them around subject to totalitarian rule.

The main context in which I would worry about 1984-style torture is actually in simulations. AIs of the future may find it useful to run vast numbers of sims of evolved societies in order to study the distribution of kinds of ETs in the universe, as well as to learn basic science. Depending on the AI's values, it might also run such sims because it finds them intrinsically worthwhile.

Sunday, July 21, 2013

Counterfactual credit assignment

Introduction

Effective altruists tend to assign credit based on counterfactuals: If I do X, how much better will the world be than if I don't? This is the intuition behind the idea that the work you do in your job is at least somewhat replaceable, as well as the reason to seek out do-gooding activities that aren't likely to be done without you.

Perils of adding credit

We can get into tricky issues when trying to add up counterfactual credit, though. Let me give an example. Alice and Bob find themselves in a building that contains buttons. Each person is allowed to press only one button, at which point she/he is transported elsewhere and has no further access to the buttons. Thus, Alice and Bob want to maximize the effectiveness of their button pressing. There's a green button that, when pressed once, prevents 2 chickens from enduring life on a factory farm. There's also a red button that, when pressed twice in a row, prevents 3 chickens from enduring life on a factory farm. In order to make the red button effective, both Alice and Bob have to use their button press on it.

Alice goes first. Suppose she thinks it's very likely (say 99% likely) that Bob will press the red button. That means that if she presses the red button, she'll save 3 chickens, while if she presses the green button, she'll only save 2. There's more counterfactual credit for pressing the red button, so it seems she should do that. Then, Bob sees that Alice has pressed the red button. Now he faces the same comparison: If he presses red, he saves 3 chickens, while if he presses green, he saves only 2. He should thus press red. In this process, each person computed a counterfactual value of 3 for the red button vs. 2 for the green button. Added together, this implies a value of 3+3=6 vs. 2+2=4.

Unfortunately, in terms of the actual number of saved chickens, the comparison is 3 vs. 4. Both Alice and Bob should have pressed green to save 2+2=4 chickens. This shows that individual credit assignments can't just be added together naively.
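To make the arithmetic concrete, here's a minimal sketch in Python. (The helper names chickens_saved and credit are made up for illustration; this is just one way to encode the button game described above.) It computes each person's counterfactual credit under the belief that the other presses red, and compares the naive sum of credits with the actual outcomes:

```python
def chickens_saved(alice, bob):
    """Total chickens spared, given each person's press: 'green', 'red', or None."""
    presses = [alice, bob]
    saved = 2 * presses.count("green")   # green works with a single press
    if presses.count("red") == 2:        # red only works if pressed twice
        saved += 3
    return saved

def credit(my_press, other_press):
    """Counterfactual credit: outcome with my press minus outcome without it."""
    return chickens_saved(my_press, other_press) - chickens_saved(None, other_press)

# Each person expects the other to press red, so each computes 3 for red vs. 2 for green.
print(credit("red", "red"), credit("green", "red"))     # 3 2

# Naive sum of individual credits vs. what actually happens.
print(credit("red", "red") + credit("red", "red"))      # 6
print(chickens_saved("red", "red"))                     # 3 chickens if both press red
print(chickens_saved("green", "green"))                 # 4 chickens if both press green
```

Plugging other beliefs about Bob into credit reproduces the numbers in the next paragraph: if Bob presses green regardless, credit("green", "green") is 2 and credit("red", "green") is 0.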

Of course, the situation here depended on what Alice thought Bob would do. If Alice had thought it was extremely likely Bob would press green, her counterfactual credit would have been 2 for green vs. 0 for red. Or, if she had thought Bob would switch to red if and only if she pressed red, then the comparison would have been 2 for pressing green herself vs. 3-2=1 for inducing Bob to switch to red and give up his green.

Joint decision analysis

The decision analysis becomes clearer using a payoff matrix, as in game theory, except that in this case both Alice and Bob, being altruists, share the same payoff, namely the total number of chickens helped:

                       Bob presses red    Bob presses green
Alice presses red             3                   2
Alice presses green           2                   4

Alice and Bob should coordinate to each press green. Of course, if Alice has pressed red, at that point Bob should as well.
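As a rough sketch of the same analysis in Python (the payoff dictionary below is just one convenient encoding of the matrix above, not anything canonical), we can ask for the jointly best pair of presses and for Bob's best response once Alice has already pressed red:

```python
# Shared payoff (total chickens helped) for each (Alice, Bob) pair of presses.
payoff = {
    ("red", "red"): 3,
    ("red", "green"): 2,
    ("green", "red"): 2,
    ("green", "green"): 4,
}

# Jointly best pair if Alice and Bob can coordinate in advance.
best_pair = max(payoff, key=payoff.get)
print(best_pair, payoff[best_pair])   # ('green', 'green') 4

# Bob's best response once Alice has already pressed red.
bob_best = max(["red", "green"], key=lambda bob: payoff[("red", bob)])
print(bob_best)                       # red
```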

In this example, reasoning based on individual counterfactual credit still works. Imagine that Alice was going to press red but was open to suggestions from Bob. If he convinces her to press green and then presses green himself, the value will be 4 rather than the 3 it would have been otherwise, so he gets more counterfactual credit by persuading Alice to press green and then doing the same himself than by going along with her choice of red.

Acknowledgements

This post was inspired by comments in "The Haste Consideration," which is a concrete case where counterfactual credit assignments can get tricky.

Tuesday, January 8, 2013

Mr. Rogers on unconditional love

Summary: Unconditional love is an attitude we adopt and a feeling we cultivate because of its salutary effects on people.

Fred Rogers ended many episodes of Mister Rogers' Neighborhood with the reminder that "people can like you exactly as you are," an expression he learned from his grandfather.

Episode 1606 of the program featured Lady Aberlin singing to Daniel the following song:
I'm glad
You're the way you are
I'm glad
You're you
I'm glad
You can do the things that you can do
I like
How you look
I like the way
That you feel
I feel that you
Have a right to be quite pleased with you
I'm glad
You're the way you are
I think
You're fine
I'm glad
You're the way you are
The pleasure's mine
It's good
That you look the way you should
Wouldn't change you if I could
'Cause I'm happy you are you.
Do these statements mean people shouldn't bother improving themselves? If others like them as they are, is there no incentive to get better at things?

Well, it's possible that conditional love could force people to try harder in order to seek approval, but at what cost and for what benefit? I think the cost is big: If you're not certain that anyone loves you, life can seem very scary, hopeless, and pointless. And I think there are plenty of other factors motivating people to improve in areas that matter without trying to use love as another carrot and stick. When people are in a rough emotional situation, they may not even have the motivation or support to undertake self-improvement, and might either wallow in despair or seek approval in unproductive ways -- including, as the song hints, through trying to look more attractive on the outside.

There's a time and place for incentives, but love by and for another person is one domain where trying to introduce incentives does more harm than good because of the nature of human psychology. Consider how popular the theme is in Christianity that God loves you no matter what: This is a powerful idea that can transform people's lives.

I feel unconditional love for a person even at the same time that I might prefer him/her to be different. If the person is open to advice on changing, I'll suggest things, but at the same time, I feel that even if the person doesn't change, it's okay -- s/he is still a special individual whose feelings matter just the same. In my mind, unconditional love is closely tied with hedonistic utilitarianism: When I realize that an organism feels happiness and suffering, at that point I realize that the organism matters and deserves care and kindness. In this sense, you could say the only "condition" of my love is sentience.

From "Then Your Heart is Full of Love" by Josie Carey Franz and Fred Rogers (1984):
When your heart can sing another's gladness,
Then your heart is full of love.
When your heart can cry another's sadness,
Then your heart is full of love.
[...]
When your heart has room for everybody,
Then your heart is full of love.

I'll close with another Fred Rogers song, possibly my favorite. It hints at this idea that the other person's feelings are the reason for our love of him or her.
It's you I like,
It's not the things you wear,
It's not the way you do your hair--
But it's you I like.
The way you are right now,
The way down deep inside you--
Not the things that hide you,
[...]
I hope that you'll remember
Even when you're feeling blue
That it's you I like,
It's you yourself,
It's you, it's you I like.

Monday, December 31, 2012

Agile projects

Give feedback early; give feedback often. Especially the early part.

When it comes to writing a paper or planning a campaign or picking a cause to focus on, a little bit of feedback at the beginning is worth hundreds of micro-edits or small optimizations later on. The topic that you write about can matter more than everything else in your whole article. If you complete a research paper about something unimportant, it doesn't much matter how well-written and well-researched the piece is (unless your goal is to establish prestige as a writer or build an audience that you can then direct toward your more important essays). If you pick an inefficient activism campaign, it doesn't much matter how well you carry it out (except for getting practice, personal experience, etc.).

Most of the time, feedback won't have the dramatic effect of reorienting the entire direction of a paper or campaign, but it may have smaller impacts, such as whether the author considers a given argument or whether the campaign undertakes measurement of its impact. A stitch in time saves nine, and it's easier (both physically and cognitively) to improve something at the beginning than near the end.

So why is it that people sometimes hesitate to share drafts, ideas, plans, etc. until they're almost complete? Maybe one reason is that slow feedback is sometimes more customary, and people fear that if they share totally incomplete drafts or brainstorms, others will judge them for not being thorough and polished and for not having considered such-and-such objection. If this is the case, we should work to change the culture of feedback among the people we know, to make it clear that preliminary drafts can be better than polished products in terms of the benefit of feedback given per unit of time.

Another reason may be that when people comment on a rough draft, the author may already know that he needs to fix most of what the reviewer points out. But this concern can be largely allayed if the reviewer understands what stage the project is at. You don't (usually) give sentence-level edits on a paper outline. Also, the author could sketch out the areas that he knows are incomplete so that the reviewer won't comment on those.

The title of this post comes from agile software development, which is one area where the principles I described have been well recognized.