Posts Tagged philosophy

God, Faith, and Belief

Monday, November 15, 2010
Are you spiritual, religious, agnostic, or atheist? Do you think there is one path to God, or many?

I would say that I am not religious.  However, I am unwilling to state with certainty that no God exists.  I don’t have faith – that is, I don’t believe without evidence in the existence of a God, Judeo-Christian or otherwise.  But by the same token, I can’t say I have enough evidence (in quantity or quality) to dismiss the idea of a divine being.  This position is often hard for my friends to come to terms with, so I thought I’d explore it a bit more in this post.  I think I am best described as an agnostic, and I don’t feel any particular spiritual devotion except to the idea of human agency (which is itself non-falsifiable).

I believe quite strongly that anything that is imaginable is possible.  Possible doesn’t mean probable, but it does mean not easily dismissed.  So, even though there is a perfectly reasonable explanation of the universe without divinity, I can’t say for certain that divinity doesn’t exist.  Critics of religion will often draw out Occam’s Razor, the principle that among competing hypotheses, the one that requires the fewest assumptions is best, but I don’t trust that line of logic here.  I do act in my practical life in accordance with Occam’s Razor – fewer assumptions mean fewer things *have* to be true, and therefore those scenarios are generally more likely.  But that’s a question of predictive power, and I don’t think divinity or religion needs predictive force to say something meaningful.

Since this question is about belief, I believe strongly that the Truth, such as it is, about the structure of reality is certainly not well-known to us simple humans, so the possibility still exists that something non-falsifiable (the existence of God, for example) is True.  I think the most important question about religion, spirituality, and faith is why a person believes the way they do, and I certainly can never have that conversation with a person if I start out by denying their belief.

Anyway, that’s a lot of run-around to say, simply: I believe in not discounting what is possible!

Review: Small Gods

Now, I am no particular Terry Pratchett fan.  (Wow, he is famous enough that I got a spelling auto-correct for his last name!)  I had only read Night Watch previously, because it was on a bookshelf and I was in need of a book.  That one was alright, nothing special, and I didn’t really understand why so many people – especially my roommate Sam – thought Pratchett’s Discworld books were so amazing.  But Sam, rational thinker and arguer that he is, convinced me to read Small Gods, holding it up as a better (maybe the best) example of Pratchett’s work in a single book.

I love it.  Small Gods is wonderfully irreverent, while at the same time saying so much more about faith, religion, and spirituality than many texts designed for that purpose ever do.  The characters are compelling, the integration of the story with the details of the world is excellent (Pratchett’s world is quite impressive, which makes it all the more impressive that it doesn’t overshadow the story), and it’s got quite a few laugh-out-loud funny moments.  Before Small Gods, I was dubious at the prospect of “humorous fantasy,” which is the genre I have always thought of Pratchett as belonging to, but now I am a believer.

I’m not sure that Small Gods made me want to read the rest of the Discworld books, so well-contained was its story, but it does leave a seed of interest in my mind where before there was disapproval.  So that must also be counted as a success.  I would definitely recommend Small Gods to anyone who is interested in fantasy and likes stories that make them think.

Overall: A
Balance of Philosophy and Fantasy: A+
Death’s Review: IT WAS PRETTY ALRIGHT, WOULD RECOMMEND

Game Universes: The Why of Gaming

For my last post of Game Design month, I wanted to think through my reasons for enjoying gaming so much.  I have skipped social commitments to play games, I sometimes interact with friends solely through the lens of a game during a meetup, and I (obviously) enjoy thinking about and designing games.  But why?

Games are an escape from the real world, yes, but I don’t think that’s the reason.  More likely, the allure comes from the fact that each game can be a world unto itself.  As a scientist, I understand fundamentally that although I may learn a lot about how the universe works, I will never know enough on my own to make sense of most of it.  In the universe of a game, however, the rules are much more limited and understandable, and that’s enjoyable in and of itself.

But the real draw comes from the fact that even very, very simple games – those with a handful of easily understood rules – have emergent behaviors, especially when many human players are involved.  Even as a game’s simplicity promises solvability, its emergent complexity means that truly “solving” some games – the strategy kind, and the social kind – is impossible.  Maybe it’s just my personality, always searching for puzzles to fiddle with that have no final solutions, but I really enjoy exploring the space that games create.  It’s like a little bubble world that often gives insight into the minds and behaviors of the players, the designer, and even the greater world around us when the game is a model for something about the real world.

It’s pretty remarkable, really, when you think about it.

The Tool Trap

Often I rail against money, or scarcity.  I also rail against injustice served up by free markets.  But I don’t rail against technological tools (like teleportation or power generation) that could be leveraged to evil ends.  What gives?  I was thinking about this because I do believe in the usefulness of tools, and I wanted to speak to what I believe are their moral implications.

Tools are powerful.  More than any other idea or thing, I feel that tools are the means by which humans gain mastery over their environment.  But tools have a secret cost (in addition to their not-secret costs, like materials or skill): they give us a wider range of choices, and therefore demand more responsibility from us each time we have the opportunity to make a good choice.  The advancement of technology happens at the speed of science and engineering, but the advancement of reason and wisdom can happen at a different rate.  If we don’t “grow up,” so to speak, as quickly as we generate more and more powerful tools, then we are bound to screw up and make bad choices with bad consequences.

It’s troublesome, because often once a tool is created, it is impossible to know who will use it.  Thus the moral cost of the creation of the tool is unknown.  It’s something to keep in mind – are the creators of the atomic bomb (or perhaps more ambiguously, those who figured out it was possible) morally responsible for the deaths of those the bombs have been used against?  I don’t know what my answer to that question is, but it certainly isn’t “definitely not.”

I don’t think we should stop making better tools, but I do think we need to be aware that a tool is only as Good as its wielder.

One Wish

Patrick wrote an article – about a week ago? – that was an interesting treatment of the “what would you wish for?” thought experiment.  He defied other gamers in the room and claimed he would not wish for anything.  Why?  You can read it here, but basically, I boil it down to two fundamental reasons:

  1. The universe is in the correct state right now, even though it might not seem like it.
  2. The result of a wish has little value compared to the result of one’s own experience and actions.

I couldn’t agree with Patrick’s conclusion (not to use the wish), but I had a hard time framing why until today, having mulled it over quite a bit.  I simply disagree with the first reason, which I think is by far the weaker of the two – my ideal universe bears only a small resemblance to the current universe, and others may not assign the same moral weight to my ideals.  I do agree with the second reason – human experience is very valuable.  I believe things are essentially worth what you pay for them (the value you assign to them), not what others are willing to pay.  However, though I am a fan of promoting this particular virtue, I think there is a spectrum of reasonableness in promoting any virtue, and this falls outside the line I would set for myself.

Yes, I do want to live my own life and make my own experience – the good and the bad.  But I also think there are things more valuable than one’s personal, human experience and those are the kinds of things I would wish for.  It is the value that we assign to things that makes them matter, and I am quite capable of receiving a boon or gift from an altruistic person I’ve never met – it becomes a tool I can use to do good.  (I may write more about tools in a later post.)

I do think it is admirable in a way to push the virtue of self-sufficiency (doing things on one’s own, enjoying the labors of one’s own hand(s)) so far, but I would not do the same.

Contingent Morality

In my post on Choice vs. Consequence, Pete called me out on my apparent subordination of the outcome to the choice.  If I make a choice, a Good choice (to help someone in need, for example), but botch it badly, should I get “credit” for said moral choice?  He asserts no, and I wanted to think it through today.

First off, I don’t feel like one can judge the morality of the environment.  That is, I feel things can be shitty in many different circumstances, but if PEOPLE (or other choosing entities) made it that way, then it’s the influence and decisions and actions of those people that make it immoral (not the shitty things themselves).  When you seek to take a moral action, how much does your eventual impact on the world matter?  I would say that your impact demonstrates your effectiveness, but not your morality.

If a very incompetent person sees a person in distress and goes to help (let’s consider this action “moral”), but ends up bungling it so badly that he ensures the distressed person’s death (let’s consider this consequence “bad”), I don’t think of the person as immoral.  They are merely as described – incompetent.  From the other side – a competent person meaning to do harm but “accidentally” (or incidentally) doing good – I think it’s harder to home in on, but I do think that person is acting immorally (and also, hilariously ineffectually!).  I want to talk more about this later (not today), because there is value in the result, maybe even moral value, but not in the primary sense that I care about most strongly.

We should strive to be both competent and moral, I believe.  But I don’t think a person’s competence or ability to remake the world for the better really has a bearing on whether they are essentially “good”.  There is a secondary level of consideration – if a person knows themselves to be awful at helping, but helps anyway (and therefore harms incidentally), I might consider that in the immoral space.  But overall, at the broadest level, I think it is choice that defines our moral nature, and not consequence.  Morality is not contingent on results.

Patience and Serenity

I had a minor epiphany (not a major one – maybe one day!) on the drive home from work yesterday, that went like this: there are really only two things that frustrate me.  The first is not knowing something, and the second is not having control over something.  Interestingly, the two qualities that indicate capacity to handle those frustrating states – patience and serenity – are two qualities that I both lack and want to improve upon.

Now, it’s not that I want to know everything.  There are some things I am okay with not knowing (for example, how to manage a sewage plant, or the current angular momentum of Sagittarius A*), but there are a ton of things I wish I knew, even though a lot of them aren’t realistic (for example, what my friends are actually thinking at any given moment, and how the universe came to be).  One of my core identifiers as a person is my pursuit of knowledge, so I have a hard time accepting that some things are not for me to know (lies! of *course* they are for me to know!)… rather than accept it (serenity) or wait for deeper elucidation through experience (patience), I just need to know now now now.

I also don’t really want to control everything.  I am quite happy allowing people I don’t know personally to live their lives, the planets and the stars to do their thing (orbit, rotate, accelerate), and in general for things outside of my sphere of influence to do what they will.  But I do want more (direct) control over my life, and often that involves other people who I accept as free-willed individuals, and therefore should be outside of my control.  It is a very hard lesson, one I understand intellectually but which still escapes me instinctively, to relinquish control over others and external situations, and by so doing achieve greater happiness.

Choice vs. Consequence

I had an interesting set of conversations with Mark (Rosewater) today, which was great, but I also identified something in my own thinking about morality and “good” that I need to define a bit better: the relevance of choice versus the outcome (consequence) it produces.

Mark brought up the following question: if a person is in need, and you help them, and they are happy/better off/no longer in need, is that good?  To me, there are two factors at work here:

  • My decision to help a person, apparently in need
  • Whether they are in fact helped by my action (or harmed by my inaction, to look at it a different way)

I am pretty sure I can only assign moral weight to my decision, because I can’t necessarily affect the outcome significantly.  For example, if the person is lying and doesn’t need help, does that make my decision to help them any less “right”?  What if they are truly in need, I help, and they end up no better off because I couldn’t give them what they needed?  In both cases, I believe that my choice to help the person is just as moral as the choice to help the actually-in-need-and-I-end-up-helping person in distress.  The consequence is relevant, in the sense that I think the world could be better off in some utilitarian way if the outcome is positive, but not as relevant as the choice.

What is Good?

I think there’s a lot that goes into the question of “what’s Good” with a capital G, and sure, much of it is probably subjective, but I wanted to focus on two components that I believe in pretty strongly: fundamental principles, and reasoned insight.

By fundamental principles, I mean that there are things about people and the world that you can deduce as intrinsic.  You could imagine situations that contradict these “guidelines”, but for the most part they form a reasonable baseline for goodness.  One of them is the idea of fairness, as in there’s no basic reason to treat one individual differently from any other.  Another is the value of freedom: to thinking beings, having choices is generally preferable to predetermined outcomes, so actions that preserve choice are generally “good”er and actions that constrain choice are generally “bad”er.  (I understand that there can be value in violating fairness or freedom, but I think those are much more edge cases, and I am focusing on the core of what’s good.)

By reasoned insight, I mean the kinds of conclusions that rational beings can draw, given these fundamental principles.  This is where the blurriness starts to creep in, because two people acting rationally can either (1) disagree on the correct application of principles to situations, or (2) disagree entirely on which principles apply and in what priority.  Some of the conclusions that people reach (and that I agree with) are:

  • Faithful/loyal service should be rewarded (projecting fairness backward)
  • Treat strangers well (projecting fairness forward)
  • Promises should be kept (expectations of outcome that constrain individual freedom should be met)

This is all a highly logical view of things.  I think you could unravel a lot of systems of thought down to these principles-and-conclusions, and I may try to tackle that one day.  I do know for sure that if I can identify which principles and conclusions a person I am interacting with uses to define “good,” I’m in a much better place to understand them (and myself, in relation to them)!

The Better Me

Sometimes, I have friends ask me why I am so hard on myself.  The answer is not a particularly long one, but it might take time to explain: I think it is worthy to strive to be a more ideal version of oneself.

To unpack that principle, I want to first speak to worthiness.  I don’t know about other people, but I like to analyze why I feel the way I do when I have a strong feeling about something but don’t have an immediate logical answer.  This is a process of self-discovery, sure, but I think it’s also like flexing the muscles of awareness and of logical-emotional connection – two things I am terrified might atrophy!  (Well, maybe not terrified of them atrophying, per se, but I don’t think I would like the Dave who had, e.g., less awareness and/or less logical-emotional connection.)  I feel guilt and regret when I have made a decision or taken an action that I feel was “wrong”.  But why do I feel this way?  That led me to consider what outcomes are worthy, and to think of decisions/choices as a means to achieve those worthy ends.

I would even go so far as to say that for thinking free-willed beings, the essential purpose of choice is to have the opportunity to achieve a “good” path from among many alternatives (however the thinking being defines it).

So, how do I determine what ends are worthy?  Well, I know I am not perfect – this I can determine via self-observation, and general feel, and even comparison to other individuals.  So I identify areas in which I’d like to improve, often subconsciously, and predict-project a Dave who has made those improvements.  This is a Better Me, a more ideal version of myself.  I don’t like to think in terms of Best Me, or Ideal Dave, when I am making decisions.  I kind of have a rough sketch of that guy in the back of my mind, but for choices, I always want to be moving toward a Better Me.

Yeah, doing this can make me hard on myself – sometimes shockingly so – but I think it’s worth it, and I think it’s the Right thing to do. :)
