Posts Tagged semantics

My Sacrifice

Friday, November 12, 2010
What’s the biggest sacrifice you’ve ever made for another person? Was it worth it?

I actually spent a bit of time thinking about this today, but I realized that (a) I can’t remember a huge sacrifice I’ve made, and (b) I prefer to make lots of little sacrifices for my friends.  So instead, I figured I would talk about the difference between sacrifice and reward-seeking behavior.  I think I would define sacrifice as incurring a meaningful, irreversible, painful cost, either because it’s the right thing to do (based on one’s own principles), or because it greatly helps another person AND there is no expectation of reward.

Is it a sacrifice to take a big financial hit to help out a friend, if there’s an expectation of a favor in the future?  I would argue no – for it to be a real sacrifice, you need to be giving without any expectation that your loss will be made up.  This leads me to wonder about the utilitarian framework in which everything everyone does is meant to increase utility in some way – that is, would sacrifice (as I have defined it) have meaning in that model?  I think we can define it to do so, by looking at the idea that a person can become happier solely through the happiness of their friends and neighbors.  In such a case, the “reward” is seeing the other person happier, which does not in any way decrease their happiness (well, except in rare cases with crazy people).

So, in such a case, where I am sacrificing in the hope that my friend will be better off (and that their happiness will make me happier), I think I would further stipulate that I have to sacrifice without shoving it in their face.  By pointing out how great I am for giving away so much for their happiness, I am bound to generate feelings of guilt, and in some sense be a utility vampire.  Not cool.

As an aside, in talking about how my friend’s utility increases my own, it’s interesting to think about other ways in which an economy of utility is very non-zero-sum.


Bloodied

Today was one of the blood drives at Wizards; they park a big bloodmobile outside the building and you can set up appointments to donate.  I did so, and when I got there the place was packed.  It got me to thinking about altruism again, and exactly what the motivations are for spending an hour getting quizzed and then stabbed and then having your literal lifeblood taken… potentially to save the lives of unknown others.

Blood donation is very interesting to me in this respect because you don’t know who the blood is going to, and hence consideration of the character of the recipient doesn’t really factor into the equation.  There is definitely incentive in the sense that it increases “good guy” points for the donor in the eyes of others – as I was lying there thinking about all this, I definitely thought about how good I felt knowing my friends knew I was a good person for donating.  But is that all of it?  Is there a “true” altruistic impulse here?  Do people donate blood because they genuinely care enough about unknown others who will be in need, and want to help them?

I kept thinking about how I wanted to know if my blood was going to be used to save lives.  Could I ask for statistics on that?  Unlikely.  But even not knowing, I wanted to give.  The distinction between giving because it makes the giver feel good, and giving because you want to help others, is such a fine line that I’m not sure how relevant it is.  It does lead to another interesting question, though – is it the end result (helping others) that matters here?  If so, can a totally selfish person, who inadvertently helps other people, be considered altruistic?  I don’t really think so – in which case, I need to further define altruism to make it a useful concept.


Identification, Please

Some thought experiments before getting into definitions and discussions of identity (as discussed by me and some college friends during a recent visit; I can’t find the website with them in quiz form):

EDIT: Todd found it! The questions below, in quiz form.

  1. You are traveling to Mars.  You have two options for getting there: spacecraft or teletransportation.  The spacecraft is pretty dangerous: there’s a 50% chance you’ll die in transit.  The teletransportation method will disassemble you at the origin (Earth), transmit you as information, and reconstruct you at the destination (Mars).  Which do you choose?
  2. You are suffering from a peculiar neurological malady.  You have two options: you can let it run its course, or you can go through a process designed to fix it.  If it runs its course, your brain will be screwed up and you’ll have a radical personality shift.  The process to fix it involves replacing all of your brain, piece by piece, with new techy brainlike material – you’ll retain your current personality (totally cured) but when the process is complete, none of your original brain will remain.  Which do you choose?
  3. Scientists discover the nature of the soul (or agency, or consciousness, whatever you prefer), around the same time that you contract a deadly disease.  It turns out the soul will leave a dead body and attach to a newborn, a sort of reincarnation.  You have two options: you can let the deadly disease run its course, have about a 50/50 chance of dying, and your soul will reincarnate to a new host, or you can be put into cryostasis long enough to cure the disease.  The same research which determined the nature of the soul also found that this cryostasis operation will destroy your soul.  Which do you choose?

Another really interesting essay on identity, via Kelly: Where Am I?


Quantum Suicide

A thought experiment (borrowed from Wikipedia, as discussed by Todd and me down in San Diego):

Take a modified Schrödinger’s Cat Box, which has in it a weapon set to trigger if a particular quantum particle is spin-down upon measurement (assume the particle is equally likely to be measured spin-up or spin-down) – and place yourself inside.  Every ten seconds, another particle is measured; if it comes out spin-down, the weapon fires and you die.

If you believe in the Many Worlds Interpretation (MWI) of quantum mechanics, then entering the Box might be a way to have your consciousness “travel” into low-probability outcome universes.  MWI means each possible state is actually a parallel universe, and the observer happens to be in one of them measuring one possible state.  (This might sound crazy to you, but other possible interpretations, like the Copenhagen Interpretation, require the act of observation to force reality into a particular state, which I consider to be just as crazy, physics-ly speaking.)

I believe that I observe reality from the seat of my consciousness, my “self”, where I consider myself to be.  Even those of us (ahem) who believe there is no free will must still consider the seat of our sensory centers, and can call that the “self” for the purposes of this discussion.  So the question becomes: what happens to the “self” in this experiment?  You run the experiment, and with every 10-second measurement, the probability that you have survived is halved.  Each time you die, you make no further observations.  But each time you live, you observe that you survived.  There will always be a smaller but nonzero number of universes in which “you” survived and made that observation.
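The branch-counting here can be sketched numerically (a toy illustration of my own, not part of the thought experiment itself): under MWI, each measurement halves the fraction of universes containing a surviving “you”, but that fraction never reaches zero.

```python
# Toy illustration: fraction of MWI branches in which the observer
# survives n measurements, each with a 50% chance of the weapon firing.

def surviving_fraction(measurements: int, p_survive: float = 0.5) -> float:
    """Fraction of branches with a live observer after `measurements` rounds."""
    return p_survive ** measurements

# After one minute (six 10-second rounds), "you" persist in 1/64 of
# branches; after ten minutes, in fewer than one in a quintillion.
# Tiny, but never zero: some branch always contains a surviving observer.
for n in (6, 60):
    print(n, surviving_fraction(n))
```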

And then the question that Todd and I posed to ourselves: is there moral value here?  By running the experiment, you “kill” the spin-down self after 10 seconds.  Even if you ended up in the spin-up universe, what is the relationship between you-up and you-down?  Should it matter?

Nobody really understands quantum mechanics, which is sometimes really disturbing.  Thinking about this idea makes things even more disturbing in one sense, but in another sense it helps: it lets me think about the high-minded issues that bother me (some not particularly related to quantum mechanics!).


Influence and Trust

This is sort of a secret combined cruise-and-philosophy/psychology blog (mwa ha ha)… or maybe now not so secret.  Anyway!  On the cruise, I often found myself sitting with a group of people where I knew one of them from the previous Magic Cruise (or through previously established Seattle friendships) and was able to bootstrap my way to new friends via introductions through that person.   I’m sure everyone has experience with this phenomenon in multi-group situations: when one group meets another, there is nearly always bridging going on through mutual connections.

Thanks to Lindsey, Steve, Peter, Dwayne, Patrick and Roberto, I met a ton of awesome new folks and with their (often merely implicit) support, quickly developed friendships through them.  The development of these relationships is an interesting study for me, since if you observe carefully, you can tell that at some point you + new friend have a stronger connection than you + old friend.  On rare occasions (I can think of only a few times over the years this has happened to me), you + new friend can develop a stronger bond than old friend + new friend.

I want a word to describe the strength of the connection between the pairs in you-old-new; the unsatisfactory word I have used in the past is Influence, because generally speaking the stronger the connection, the more likely you are to ask and receive something (a favor, a thought) from that friend.  But really that just describes the effect; the cause is closer to comfort and trust.  When you are out far past your normal bedtime (which before the cruise was like, 9-10pm?) drinking and socializing with new friends until roughly 3am, and you do it many nights in a row (despite the pounding headaches in the morning!), that is definitely indicative of comfort.

But what about trust?  My stab at the source of trust is something like: as you interact with a person, you are subconsciously testing them for reasonable responses.  Reasonable, in this case, is what you consider reasonable (highly subjective).  Each time you get a reasonable response, or are “pleasantly surprised” in some way (I think because any positive emotion gets mapped at least a little into the trust-o-meter, at least in my case), your trust toward the other person ticks up a notch.

I have a thought experiment I’ll discuss another time that helps me consider the usefulness of an Influence metric.


Competitive

I used to be a lot more competitive than I am now.  I’m not sure when this change came about — my guess is partly as a result of tons of collaborative work in college, partly due to finding out the competition was insanely rougher in the wider world (e.g. past high school), and partly because of a slightly deeper understanding of my own motivations.

I think that for a lot of people, the meaningful “competition” is with one’s previous effort — continually striving to improve one’s skills and then demonstrate them tangibly by breaking personal bests.  This might even define the “casual competitive” player type, which we discuss a lot in talking about Magic and games in general.  In some sense, I think this is the default way to play games and sports.  It takes significant effort (for me, at least) not to care about how I perform relative to my previous experience, if I have done the activity before (and haven’t forgotten!).

Is the range of competitiveness affected by how much you track your own ability?  I totally understand the nature of competition that involves beating other people – there is certainly positive incentive to play against others, and feel good about doing better than them.  But if you don’t take any particular utility from smashing others (and I don’t, particularly), do you convert to being competitive when you realize – consciously or not – that you are reaching the limits of self-improvement and you need others to compete against in order to break your previous boundaries?

(All of these thoughts were spurred on due to me considering a more rigorous workout schedule – routines that I always base on beating personal bests.  I then thought about the nature of competition [since there are clear benefits to being driven to succeed based on external factors] and whether I could incorporate it into my routines without needing to make other people share my experience.)


Entropy and People

Entropy, my nemesis, is actually sort of a confusing concept.  My favorite re-statement of the three (four) laws of thermodynamics is:

0. Everyone must play. (Systems that interact reach thermal equilibrium.)
1. You can’t win. (Energy cannot be created or destroyed.)
2. You can’t break even. (Work done increases heat and heat cannot be converted completely into work.)
3. You can’t leave the game. (Entropy approaches a constant minimum as temperature approaches absolute zero, but you can’t actually reach absolute zero.)

Entropy is a measure of the number of ways a system can be arranged.  The higher it is, the more “disorder” a system has.  You can’t reduce this disorder overall, because forcing the system back into a smaller number of possible arrangements takes work, and work produces heat (which itself increases entropy).
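To make “number of arrangements” concrete, here is a small sketch of my own using Boltzmann’s formula S = k ln W (which the post doesn’t state explicitly): a macrostate that can be realized in more ways has higher entropy.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

def boltzmann_entropy(arrangements: int) -> float:
    """Entropy S = k_B * ln(W) for a system with W possible arrangements."""
    return k_B * math.log(arrangements)

# 100 coins: "all heads" can happen exactly one way, while "50 heads"
# can happen in about 1e29 ways -- so the mixed state has higher entropy.
w_ordered = math.comb(100, 100)  # 1 arrangement
w_mixed = math.comb(100, 50)     # ~1.01e29 arrangements
print(boltzmann_entropy(w_ordered))  # 0.0
print(boltzmann_entropy(w_mixed) > boltzmann_entropy(w_ordered))  # True
```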

The tendency of entropy to increase has two major physical consequences: it forbids perpetual motion (no process that extracts work is totally reversible), and it gives time a direction (the future is the direction in which entropy increases).

Alright, with those definitions out of the way, I had an intriguing thought: over time, relationships between people become more complicated.  Logic tells me that this is because of shared history – as the number of experiences you have with a person increases, your expectations and internal analysis of their behavior become more refined.  When I first meet a person, the number of things we have to talk about is small.  When I have known a person for years, the number is much larger.

Is the number of potential (plausible? likely?) interactions between people in some sense related to the entropy of a nonliving system?  That is, is the number of “possible person-to-person interactions” analogous to the number of “possible arrangements of a system”?  And if so, can we formulate Laws of Human Dynamics that are extensions of the Laws of Thermodynamics?
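As a toy model of that analogy (entirely my own speculation, in the spirit of the question above): if each shared experience can independently come up in an interaction or not, the number of possible “interaction states” grows like 2^n, so a log-count measure of it grows linearly with shared history, loosely mirroring S = k ln W.

```python
import math

def interaction_entropy(shared_experiences: int) -> float:
    """Log of the number of subsets of shared experiences that could
    surface in a single interaction (a made-up 'social entropy')."""
    # log(2**n) simplifies to n * log(2)
    return shared_experiences * math.log(2)

# A years-long friendship (hundreds of shared experiences) has a far
# higher "entropy" than a first meeting.
print(interaction_entropy(2))    # just met
print(interaction_entropy(500))  # old friend
```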

I expect that a more formal revisit of this topic (by which I mean, more logically structured rather than stream of consciousness) is necessary.


Fantasy vs. Science Fiction

When I was down in Los Angeles, with Dan O, staying with Todd and Tory, going to Sam and Kirsten’s wedding (namedrop c-c-c-combo!), we watched James Cameron’s Avatar.  On the car ride afterward, we started having a discussion about what the difference between fantasy and science fiction is.

For reference, here are the dictionary.com definitions:

• fantasy (Literature): an imaginative or fanciful work, esp. one dealing with supernatural or unnatural events or characters
• science fiction: a form of fiction that draws imaginatively on scientific knowledge and speculation in its plot, setting, theme, etc.

Interestingly, the most meaningful definition among us (and the other friends we asked) was descriptive; that is, fantasy is in a fantastical setting and science fiction is in a science-y fictional setting.  The dictionary definitions above also follow that basic rubric.

However, I was interested in a definition that was more prescriptive and thus capable of determining what Avatar was: fantasy or science fiction.  For although Cameron’s epic takes place on another world and with futuristic spacecraft and society, it also deals with the mystical and, well, the fantastic.

(By the way, if to you the fantasy/science fiction distinction is purely setting or descriptive, feel free to use different demarcations.  I’m just looking for a way to consider, say, Star Wars and Star Trek from a different perspective, one that allows us to make meaningful comparisons unrelated to setting.)

One thought I had was that in fantasy, there is some destiny or fate at work (someone is chosen), and in science fiction, the person or people to whom stuff is happening or with whom events are caught up could be anyone – even you!  This worked for me for a movie like Star Wars (which I consider to be in the “fantasy” camp) but unfortunately failed with Lord of the Rings, at least on the hobbit side of things.

Another thought I had much more recently on my own (and which prompted me to write this blog!) was that maybe it goes:

• science fiction: a logical extrapolation of starting conditions (either our world, or a parallel one)
• fantasy: something in the setting or action defies logic, or breaks with the logical extrapolation above – usually this would be something supernatural (magic, God/gods) but wouldn’t (necessarily) have to be… I am having a hard time coming up with such an example, though!

I guess that is really just a restatement of the descriptive dictionary.com definitions above – but it seems more useful.  It is also more subjective, which is probably a bad thing.
