
Contingent Morality

In my blog on Choice vs. Consequence, Pete called me out on my apparent subordination of the outcome to the choice.  If I make a choice, a Good choice (to help someone in need, for example), but botch it badly, should I get “credit” for said moral choice?  He asserts no, and I wanted to think it through today.

First off, I don’t feel like one can judge the morality of the environment.  That is, I feel things can be shitty in many different circumstances, but if PEOPLE (or otherwise choosing entities) made it that way, then it’s the influence and decisions and actions of the people that make it immoral (not the shitty things themselves).  When you seek to take a moral action, how much does your eventual impact on the world matter?  I would say that your impact demonstrates your effectiveness, but not your morality.

If a very incompetent person sees a person in distress and goes to help (let’s consider this action “moral”), but ends up bungling it so badly he ensures the distressed person’s death (let’s consider this consequence “bad”), I don’t think of the person as immoral.  They are merely as described – incompetent.  From the other side – a competent person meaning to do harm but “accidentally” (or incidentally) doing good – I think it’s harder to home in on, but I do think that person is acting immorally (and also, hilariously ineffectually!).  I want to talk more about this later (not today) because there is value to result, maybe even moral value, but not in the primary sense that I care about most strongly.

We should strive to be both competent and moral, I believe.  But I don’t think a person’s competence or ability to remake the world for the better really has a bearing on whether they are essentially “good”.  There is a secondary level of consideration – if a person knows themselves to be awful at helping, but helps anyway (and therefore harms incidentally), I might consider that in the immoral space.  But overall, at the broadest level, I think it is choice that defines our moral nature, and not consequence.  Morality is not contingent on results.


The Fix

I’m leaving whether people are essentially good or evil to a later blog (I think it’s a bigger question), but I believe it is pretty apparent that what people do and how they behave is influenced (in, I believe, a negative way) by our “animal brains” – that is, the part of us that is still around from before we evolved to be big abstract thinkers.  That part is geared to survive, and often causes us (influences us) to do things that are cruel, thoughtless and ironically self-destructive (e.g. using drugs because they ‘feel good’).

(Whew, that was a lot of parentheticals!)

So I got into a conversation with Kelly and then with Zac about the fix to this “problem” with humans – that we are still burdened with our unenlightened animal brains – with a thought experiment.  What if I (the hypothetical scientist) created a technology that could remove the animal part of our brains in a reasonable way?  Leaving to the side for the moment the question of whether this would be feasible, I could imagine that the resulting being is not really human – the being has engineered itself to be different from what natural selection would have brought about.  Call it a human-prime.  A human-prime might be a better person, ideally, than a human, because maybe they are more rational, or more compassionate, or any number of other things that we (as humans, now) value but have a hard time synthesizing with our more animal behavior.

So, we have (in the thought experiment) the technology.  Would you use it?  Would you recommend it to others, as a human now?  And finally, is it even okay to consider administering it to others?  What about without permission?

I have a deep-seated paradox in my thinking on this point.  I don’t, as Kelly so aptly put it, believe that “being human” has any intrinsic value, any more than, say, “being human-prime” does, just because I am human now.  If human-primes are just better – human emotion and thought without the baggage of “destructive” emotions and thoughts – then I would definitely support converting us to them.  But I also value freedom of the individual, and think that forcing someone to convert is morally abhorrent.  I hate giving someone the freedom to destroy themselves (which is essentially what you are doing if human-prime > human but you let a person stay human), but taking away freedom (the good and the bad kind) seems worse.

Of course, it’s possible that my initial thoughts on this matter are wrong, and human-primes are NOT better than humans… maybe this is another case of the good and the bad together and inseparable – losing the “humanity” that involves “destructive” emotions and thoughts also loses the other totally important part of “humanity” as well.  And who am I (or anyone) to judge where the boundary lies and how much of one can be traded for the other?
