Wednesday, 29 February 2012

Theory and Fact

     "My son is taking piano lessons, and I'm pleased with that. I just wish they wouldn't teach music theory as if it were fact."

     There's a widespread misconception about what academics and scientists mean by the word "theory". In common usage, we often speak of "theory" as if it were a guess, a tentative fact-in-waiting yet to be confirmed, and so we'll often hear a complaint like the above with respect to evolution: people object to it being taught as if it's fact when it's "only a theory".

     Of course, that's not what "theory" means at all, and thinking about music theory is a good way to understand that. As you probably know, music theory includes all those complicated symbols composers and arrangers use to write down musical scores, which was the only way to preserve a record of compositions back in the days before audio recording. But there is more to musical theory than simply notes, rests, time and key signatures. Musical theory is a systematic way of describing music, relying on a set of defined terms and symbols and rules relating those terms to one another in various ways. With musical theory one can not only write down a melody so someone else can play it, but analyze and manipulate musical propositions, and even make predictions such as which notes will make for pleasant harmonies with which others.
     In short, musical theory is a cognitive framework for understanding music. The key word here is "understanding", not "knowing". Musical theory is not something you can know in the sense of justified true belief; it simply isn't a matter of being true or false. Rather, it's a matter of being more or less helpful in making sense of music.
     So it is with other theories: game theory, number theory, relativity theory and yes, evolutionary theory. A good theory enhances our ability to make meaningful and accurate inferences or predictions about the subject matter. These inferences or predictions themselves are factual claims, and a theory lives or dies on the accuracy of these predictions; if a theory leads to inferences that turn out to be false, well, the usefulness of the theory is reduced. Useless theories aren't false; they're just not used.
     Mind you, a theory that always makes true predictions isn't necessarily useful; it may just be ridiculously vague. Creationism (which explains all observed phenomena with "Because God made it that way") always gives a "true" prediction to the question "What will happen if we do this experiment?" But "Whatever God makes happen" is a very uninformative answer, even if it is true (which, if an omnipotent God exists, it must be by definition). This is why scientists talk about good theories making falsifiable predictions.

     So to complain that evolution is "just a theory" and therefore shouldn't be taught as "fact" is simply to misunderstand what a theory is. Of course, that doesn't really answer the objection; the creationist who uses the word "theory" correctly will complain instead that the predictions of evolutionary theory, which often are presented as facts (and justifiably so), are false. And that's fine, because then we can have an intelligent conversation about what the facts are, with reference to evidence and principles of reasoning and so forth, and such a conversation does properly answer the objection.

Monday, 27 February 2012

A Tip on Parking in Snow

     We've had a sudden dumping of snow on our streets this past weekend, and while it isn't really a stupendous amount in historical terms, it is enough to get stuck in if you don't know how to drive in it. Sadly, I find a lot of people don't, which includes many SUV owners who claim to have bought their vehicles specifically to deal with Alberta winters. It's as if they think an SUV is magically immune to weather, without considering what exactly it is that makes an SUV better able to handle certain kinds of driving conditions. And so, one winter a couple of years ago when I had an hour-long highway commute, I counted 38 vehicles in the ditch, the disproportionate majority of which were SUVs, an alarming number of which had rolled over after leaving the road (owing in part to their high centre of gravity).
     I'm not a fan of SUVs generally, at least not as private vehicles for ordinary use. It's not that I don't see their utility in certain contexts, but the same can be said of SCUBA gear. If you want to wear an air tank around town as a Cousteau-chic fashion statement, fine. You'll look silly. But if you wear it on a crowded elevator, you can expect to annoy people. Likewise oversize vehicles that take up more parking space than is warranted, with high suspensions that make even your low-beam headlights shine down directly into the eyes of drivers of smaller vehicles. (This latter problem is made worse by following too closely.) But I'll grant that it's true that the higher ground clearance does make it possible in principle to go through deeper snow than I can safely manage in my sanely sized car.
     Last winter, an SUV got stuck in the lane behind our house, and when we went out to help dig it free, it became clear how the driver's confused thinking about snow caused the problem in the first place: he seemed to think that more power was the solution. To be fair, it's not necessarily a completely stupid idea; if you think of snow as creating more resistance to the movement of the vehicle, then greater force to overcome that resistance is a natural inference.
     Of course, snow doesn't just create more resistance; it also decreases traction, and this is usually the bigger problem. Indeed, ordinary small cars like mine have more than enough power to get through even fairly deep snow, provided the tires can firmly grip a solid surface underneath. So one of the tricks to driving in snow is to manage the surface under the tires. Be aware of the effect your wheels are having on the snow, and use it to your advantage.
     An example: Last night, I had to drive to pick up my son at the home of a classmate, which was in a residential neighbourhood where the snow had piled up, especially along the curb where I would normally have parked. I parked in the snowdrift anyway, and had no trouble extricating myself. Here's how I did it.
    First, I approached with just enough power to keep me moving forward into the snow drift, relying primarily on my vehicle's momentum to get me to the parking spot. I was careful to keep the wheels rolling through the snow, not spinning free but maintaining firm contact with the snow underneath. Like a rolling pin going over pie dough, the wheels compressed the snow into a firm track under the tires. I let the snow itself bring me to a halt, not using the brake at all, so that the tires never scraped the snow beneath. Spinning or sliding tires will polish the snow underneath them into slippery ice, so avoid that at all costs.
     Then, when it was time to leave, I knew that there was lots of deep snow ahead of me, and the fronts of my tires were right up against it. To try to drive straight out, as I would in summer, would require enough traction not just to accelerate the mass of the car but also to overcome the resistance of that snow. However, because I had rolled gently to a stop in the snowdrift, there was a flat track of compressed (but not polished slippery!) snow under and behind my wheels. So, I very gently backed up in that track until I had enough room to build up the momentum to roll through the deeper snow and out into the main thoroughfare. And that was that.

     So, to put it another way, the lesson is this: Do not treat the snow as an enemy to be overcome with force. Treat it gently with your tires, so that it becomes your ally. 

Monday, 20 February 2012

Faith, Pretence and the Tragedy of Abortion

     In my first posting to this blog, I spoke about the difference between faith and belief, and argued that faith is a matter of acting as if some proposition is true, regardless of whether or not one actually believes it to be so. It's a matter of acting in good faith, even if one suspects that one's counterparts may not live up to their own obligations. In such cases, it's a matter of suspending one's personal judgments (such as if a defence lawyer happens to believe her client is guilty despite his claiming otherwise) and acting in accordance with the "belief" that duty requires (the assumption the client is not guilty, and that the court will reach the correct conclusion in light of all the evidence).
     However, the line blurs sometimes, especially when some people have trouble with that level of abstraction. It's not easy for everyone to believe one thing and act as if they don't; indeed, it may seem dishonest to them to do so. ("What kind of lawyer could, in good conscience, defend a client he believes is guilty?" is a question I'm often asked, and my answer is always that a good lawyer suspends judgment, and recognizes that her own gut feelings about something might well be wrong.) Perhaps it's simpler for such people simply to believe the article of faith, and not worry about such distinctions.

     This ties into abortion with respect to people's beliefs about the personhood of the fetus. If one believes that personhood begins at conception, then it follows one would believe that abortion is murder; if one believes personhood begins at birth, then abortion is not murder.
     Now, I think we ought to err on the side of caution, especially when what's at stake is something as serious as killing a person, but the experience of watching my own son's personhood emerge has shifted the boundaries of the debate somewhat. While I used to accept unreflectively that obviously a newborn baby had to be considered a person, and so the question for me was at what point during pregnancy it becomes a person, it now seems to me that the morally relevant characteristics of personhood aren't really all there at birth. Conscious thought? Moral accountability? No, probably not. Or at least, for the first several months, the personhood claims of a human baby don't appear to be significantly weightier than the personhood claims of any other mammal, some of which we happily kill and eat. Just about any objective trait we might point at and say "A human baby is a person because it can do X" would also commit us to recognizing as persons some animals we might not want to grant legal personhood. So, my own powerful emotional entanglements aside (I adore babies), I have had to conclude that objectively, these little bundles of joy probably aren't actually persons yet.
     But at the same time, it's extremely useful to their development into persons that we treat them as such from day one. It's true, of course, that trying to engage a baby in meaningful conversation is futile; the baby won't understand a word you say, and won't be able to respond with anything intelligible. Yet we have to treat babies as persons, and talk to them and pay attention to them and all that entails, in order for them to learn and develop healthily. We owe it to the persons they will become to treat them as persons now, even if they aren't actually persons yet.
     Indeed, it's almost certainly a good thing that we develop these habits of treating them with love and affection even before they are born. I confess that we put earphones over my wife's belly when she was pregnant with our son, though not out of any belief in the "Mozart effect"; it was just one of many acts of belly-interaction (patting, talking, feeling for little baby-kicks) that formed the foundation of a growing relationship with what we would eventually come to know as our son. And when he was born, of course, we adoringly held him and cuddled him and talked to him and treasured every little hint of a response, conscious or not.
     To be sure, we didn't act this way out of a conscious choice to act "in good faith" that he was a person, despite objective evidence to the contrary. We instinctively identified him as our son, a distinct individuated person, and bonded with him on that basis, regardless of whether or not it was objectively accurate.

     So I am torn about the whole abortion debate. On the one hand, I think it is objectively incorrect to characterize the unborn fetus as a person in the moral sense, and I now strongly suspect that actual, genuine, honest-to-goodness capital-P Personhood doesn't really emerge until a baby is several months or even a year old. It follows that, strictly speaking, I cannot call infanticide murder. But I still want to err on the side of caution, and so it seems to me eminently reasonable and practical to establish the moment of birth as the onset of legal personhood.
     On the other hand, the emotional and developmental benefits of starting early on bonding with and establishing a personal relationship with a pre-person are so great that I do not wish to upset the belief held by many that a fetus is a person. It's a good belief for parents to act upon, even if it's not objectively true. It would be nice if we could encourage the behaviour without the belief, but I'm not sure that's possible for everyone. Our natural and healthy parental instincts will make it impossible for some people to acknowledge, even intellectually, that a fetus could be anything other than a person. Abortion is legal, as it should be (and, like a root canal, no one should ever want to need one), but I don't think we'll ever truly be at peace over that. And in a strange way, I'm sort of glad about that, because I want everyone who will become a parent to love their future children from even before the moment of conception.

Thursday, 16 February 2012

Stupid Stigmas

     For some time I've been puzzled by the way we seem to think of intelligence or the lack thereof as a morally relevant trait. That is, we seem to think that stupidity is a moral failing, rather than simply a lowered intellectual capacity. For example, most of us think nothing of saying "Anyone that stupid deserves to get ripped off!" Yet we'd not talk that way about other weaknesses: "Anyone that short deserves not to be able to reach things on the top shelf!" or "Anyone that slow deserves to be caught and eaten by a pack of wolves!" Why is this?

     One reason might be that genuine stupidity is so difficult to confirm, in contrast to many other handicaps. Most of us will quite cheerfully hold a door open for someone in a wheelchair, or fetch things from the top shelf for someone who can't reach as high as we can, because it's visually obvious: she's in a wheelchair, and he's shorter than I am. If we were to discover that the person in the wheelchair could, like Guy Caballero, stand and walk and dance just fine, but just liked having other people do things for her, we'd resent it.

     So people who make stupid decisions might be suffering from genuine cognitive handicaps, but they might also just be too lazy to think things through for themselves. What's more, for some of us, thinking is something that just comes naturally, and doesn't seem to take a lot of effort, so it's not always easy to identify with someone for whom it's difficult. Certainly it's frustrating to try to deal with someone who can't (or won't) understand and apply what appears to be a rather simple concept, but why should this be different from getting something down from the top shelf for someone? It's not THAT high, why can't you get it yourself? Are you REALLY that short?

     I suspect that's a big part of why we tend to judge stupidity in moral terms, while we are more forgiving of other defects. Another part of it may be rehabilitative; most of us learn by making mistakes, and when a mistake is embarrassing, the lesson is that much more effective. In other words, in the normal course of events, ridiculing or chastising someone for doing something stupid is a natural part of helping to make them smarter. But in any case, our tendency to stigmatize stupidity in moral terms actually has some rather nasty side effects.

     First, however therapeutic it might be to the not-yet-smart, it's terribly unfair to those who really do suffer from cognitive defects. There are genuinely stupid people out there, who would really love to be smarter than they are, but something about the way their brains are wired or their neurotransmitters are secreted just won't let them. The genuinely stupid are handicapped in one of the worst possible ways, and suffer tremendous disadvantages as a result; they deserve our help and our sympathy, not our moral contempt. (Of course, one of the disadvantages of stupidity is that you don't necessarily know you're stupid, and you may not be able to recognize or appreciate when someone is being helpful. This makes it especially frustrating to try to help the genuinely stupid; they often reject what they need most.)

     Second, though, is a backlash against smart people, or perhaps more precisely against people who give the impression that they think of themselves as smart. If we attach a moral stigma to being stupid, it follows that we imply a moral superiority to being smart. Yet this goes against deeply held egalitarian values; no one is just better than anyone else. And so there is in some circles a powerful resentment against intellectuals, nerds, brains, or other elitist scum who think they're better than the rest of us. This is also unfair; many smart people do not think of themselves as being above anyone else. Indeed, a good many smart people steadfastly refuse to think of themselves as smarter than average, much as Socrates stood in disbelief of the Oracle's claim that none was wiser than he.

     The backlash against smarts isn't just unfair, though, and even the unfairness is more than balanced by the natural advantages of being smart. The real problem, I think, manifests in the choices we make as a society. When we vote for a candidate because he or she is just a regular guy, someone we might have a beer with, rather than a highly educated and intelligent solver-of-big-problems, we hurt ourselves. And when we treat it as a social faux pas to win an argument with facts and reason, and take it as a personal offence to be shown wrong (rather than a valuable opportunity to improve our own understanding), we discourage meaningful and constructive discourse, and the means by which we all become smarter.

     So I'd like us to de-stigmatize stupid. Being stupid isn't a good thing by any means, any more than any other handicap is something we should willingly choose. All other things being equal, it's better to be smarter. We should all be ready to admit that we're not as smart as we'd like to be, and strive always to become smarter, without vilifying those who aren't as smart as we are, or who have hit the wall in the quest to become smarter. And, equally, we should not resent those who seem to be (or think they are) smarter than we are, but encourage them to demonstrate their intelligence if they've got it (and be willing to constructively, respectfully and critically assess their claims). We're all morally significant, and we all have things to learn from each other, even if some have more to offer than others. Let's all try to get smarter together.

Monday, 13 February 2012

An Invitation to Convert Me (via the comments thread)

This isn't the kind of post I'd normally make. What I'm trying to do, normally, is to share ideas I find interesting or puzzling, and perhaps provoke some discussion of them in the comments thread so as to get new and better insights. I may illustrate some of these ideas with personal anecdotes, or give a bit of personal background as to how I came to be thinking of them, but the posts themselves are not intended to be about me. They're about the ideas.

Yet I have noticed, particularly when I post about religious ideas, that some commenters feel the need to try to convert me to their religion. In the past I've had long religious discussions in email as a result of articles on my old web page, also aimed at convincing me to adopt this or that set of beliefs. I've also enjoyed my discussions with the Jehovah's Witnesses and other proselytizers who come to my door on occasion.

I do enjoy such discussions, but they're not always directly appropriate to the topic at hand. While it might well be true that if I believed this or that claim about the Gospels, I'd no longer care about the question posed in a blog post, the fact is that I am at the moment interested in the question, and so might any reader who took the time to read the original post.

So the purpose of this blog posting is to provide an appropriate comment thread in which to persuade me that I should accept the Bible as the literal Word of God, and of whatever it is that you might think follows from that claim. I don't mean to imply that people posting in this thread are unwelcome to contribute in other comment threads as well, but I do request that comments be related directly to the subject of the posting that starts the thread.

I suppose I should begin by giving my reasons for not currently accepting the Bible as the literal word of God. For one, there's a basic epistemic hurdle: the only basis I've seen for taking it as God's Word is that, well, other people have earnestly said it is, and while I tend to trust people as being generally honest, I don't trust them never to be mistaken. True, there are some passages in the Scriptures that make some claims to divine authority, but those also suffer from a similar problem: just because someone claims to be speaking the Word of God doesn't mean he is. He could be lying, or sincere but deluded.

But another reason is that, in my reading of the Bible so far, most of the books do not claim to be divinely authoritative, and in many cases identify themselves as the works of identifiable mortal humans. Indeed, pretty much all of the New Testament consists of epistles from this or that disciple (Paul being the most voluminous) to various recipients, preaching about God but for the most part not claiming to be God. The Gospels, too, are presented as the testimonies of authors known as Matthew, Mark, Luke and John, and thus presumably represent the differing perspectives of individual humans on the events described within. And in the Old Testament, though tradition states that Moses himself wrote the Pentateuch, that seems unlikely, given that his own death is recounted at the end of Deuteronomy, and no explanation is given as to how he'd be a reliable authority on the events of Genesis. (One might assume God told him, but that can only work to sustain the premise that the Bible is God's word; it cannot count as evidence for the premise.) Plus, having just finished Psalms and now slogging my way through Proverbs, I can't help but see a lot of this stuff as historical text written by people with political agendas (Solomon proposing to cut the baby in half as a veiled threat to tear apart the kingdom in a civil war, for example) and self-indulgent ego-fests like Mao's Little Red Book. (Seriously, Psalms seems like a collection of King David's Greatest Hits (and some not-so-great which no one would dare to call less than great), while Proverbs reads like Solomon sat down and scribbled out a couple of hundred, perhaps planning on selecting just the best for a small collection of pithy sayings, and some toadying yes-man of an editor told him, "Sire, they're ALL so good! I couldn't bear to cut a single one!" Even though an awful lot look like edits and re-edits of the same trite observation.)

So in other words, it really doesn't read like God wrote it. Not that I have any idea how God writes (though I'd kind of expect high standards), but I don't see anything about the style, structure or organization that sets the book apart from any other human creation. In short, it looks to me like exactly what you'd expect from a collection of ancient myths, historical/political texts (and a few straight attempts at narrative fiction, poetry and philosophy, like Job, Psalms, Proverbs and the Song of Solomon), combined with the newer quasi-historical accounts and evangelical letters of the New Testament, all arbitrarily selected by a series of early church councils in accordance with the dominant religious agendas of the time. I see no compelling reason to see it as anything other than a human text, about God, perhaps, but not by God except in the same empty sense that everything else is as well.

That's how I see it, anyway. I hereby cordially invite any and all of you to use the comments thread to persuade me why I should (or shouldn't) adopt a different view of the Bible.

Saturday, 11 February 2012

The Care and Feeding of Trolls

     Trolls have been in online forums for as long as there have been online forums, and there are generally two types: the good and the bad. The good ones are more akin to satirists; they present an idea or a statement that is demonstrably false, but plausible, and do so in the spirit of play; people who clue into it are welcome to play along and share in the joke. The bad trolls, on the other hand, get their amusement by stirring up trouble with inflammatory posts, starting flamewars, and just generally seeing other people get worked up over nothing; the angrier people get, the happier the bad troll.
     The conventional wisdom on how to deal with the latter type is simple: don't feed the trolls. The theory is sound; trolls are looking to provoke a reaction, so if you ignore them, they'll eventually get bored and go away. Personally, however, this is not my policy. I never feel comfortable with assuming someone is a troll, in part because I know there are real people out there who genuinely believe outrageous things, and so someone I dismiss as a troll could actually be sincere. Moreover, in a public forum, even if someone is trolling, it doesn't necessarily follow that everyone in the audience of lurkers recognizes that; there might be someone who might actually agree with what the troll says, and who therefore is in need of a healthy dialogue on the subject. So my policy has always been to take posts at face value, and not to concern myself with whether or not the person posting it actually believes it.
     I first started consciously using this policy about 14 or 15 years ago, when I decided to put my money where my mouth was on the subject of free speech. (I've always argued that the solution to hate speech is not censorship, but vigorously debunking the hate speech. In other words, vaccination rather than quarantine.) I spent a few months arguing in the Usenet newsgroup alt.politics.white-power.
     Now, I didn't have any illusions going into this. I didn't expect to convert any racists into champions of tolerance. I assumed that anyone actually arguing with me probably had made up their mind and would be at least as resistant to changing it as I was (and I've invested a fair bit into my anti-racist position, having not only entered into an interracial marriage but produced a healthy hybrid child by it). My purpose there was primarily for the lurkers, and those among them who might not have completely made up their minds.
     That's why I adopted for myself some ground rules. I committed myself to always be as polite and respectful as possible, and to consider the ideas presented as fairly and rigorously as possible. I never wanted to leave a lurker wondering why I left some question unanswered, or worse, indirectly insulted them by saying that anyone who could believe such rubbish was an idiot. Calling people idiots (even if they are) doesn't really gain you much credibility or respect, and if anything, makes them more resistant to your arguments.
     Of course, I have no idea how successful this approach was with the lurkers, since of course they're lurkers; I wouldn't likely hear from them if I made any difference. But I did enjoy a rather surprising indicator of success: several months after I retired from that particular arena, I got an email from one of the people who had been openly arguing with me in the newsgroup, thanking me for taking the time and being patient and respectful, and telling me that he'd come to see he had been wrong about the race thing. Admittedly, he wasn't one of the most hardcore and dogmatic there, but it was immensely gratifying to have had a tangible effect. And I like to think that if my approach was effective enough for that, perhaps it helped a good number of lurkers, as well.

     That's why I've chosen to use the same approach in dealing with trolls. Remember that for the most part, the bad trolls are trying to provoke a flamewar, so responding emotionally to inflammatory posts plays right into their hands. If you can't respond rationally, then of course it's best to ignore them, but if you can remain calm and logical, a polite and respectful response is often more effective. And it might even be useful to someone some day, googling for an answer to a question no one else seems to think it worth taking the time to answer.

    So that's my policy on trolls. Except for the good ones, whom I hope I'm clever enough to catch on to and play along with.

Friday, 10 February 2012

A Problem with "Because God Says So!" as a Basis for Morality

     Atheism is often criticized by Christians on the grounds that it does not provide any basis for morality. To these Christians, morality is a matter of following God's commands, and if there's no God to give this guidance, to establish values, then how can there be any morality?
     In return, atheists often point out that obeying the literal commands of God (assuming that the literal commands of God are in fact recorded in the Bible) leads to some pretty horrific results, particularly if you look at the Old Testament. Are we really expected to stone adulterers to death? How can that possibly be a moral thing to do?
     One answer I've often heard is that most of that Old Testament stuff no longer applies. Most recently, a Christian friend of mine argued that Jesus fulfilled the Levitical commandments (including the commandment to stone adulterers, but also the many kosher rules still observed by Jews today), and so they no longer need to be followed.
     Okay, so that gets us off the hook for having to follow those old rules, and maybe the only obligations remaining are the ones that we all can feel nice and warm about, the ones that make moral sense to us today because they're rooted in the general principle, "love thy neighbour". And that's how I used to feel about the argument, until just after my friend brought it to my attention again, when it suddenly occurred to me that this doctrine creates greater problems than it solves.
     Consider what it means for people living before Jesus came along to fulfil the commandments. It means that in those days, God wanted us to stone adulterers, and consequently that it was not just culturally acceptable, but morally right to do so! Something which, today, we consider morally outrageous and barbaric -- partially burying a living human being and then throwing rocks at the exposed parts until the person dies -- was good and proper at some time in the past. The very nature of morality has apparently changed.
     I find this argument rather shocking, especially coming from a tradition that condemns moral relativism. I, too, have little patience with people who look at the various cultural practices going on today in other parts of the world, such as female genital mutilation and, yes, stoning adulterers to death, and say we can't judge it to be wrong because it's a different culture. Wrong is wrong, and just because something is culturally accepted somewhere doesn't make it right. I rather think it would go without saying that this is true across time as well as across space; if it's wrong to stone adulterers today, then it was wrong yesterday and wrong three thousand years ago. Conversely, if it was right three thousand years ago, it should be right today.
     It also seems to me to be far more damaging to the idea of biblical literalism than even young-earth creationism. After all, there is something to the creationist rebuttal to scientific claims about origins: "Were you there? Did you see any of this?" No, of course, I wasn't there, and didn't see what happened with my own eyes, and it is conceivable that things unfolded differently from what modern science has inferred from the evidence. But claims about moral principles are different. I may not know the circumstances of any particular act of ancient adultery, but I don't need to have lived three thousand years ago to say with great confidence that it would generally be wrong to stone someone to death for adultery. Wrong is wrong, and fundamental moral principles do not change over time, although our understanding of them certainly does evolve. Slavery was always wrong; it just took us a while collectively to recognize that.
     I don't see a way out here for the theory that God's commandments are definitive of morality, or at least not one that also preserves the biblical literalism so many Christians insist upon (and which I've criticized as idolatrous (typo corrected Feb 13, 2012: I originally wrote "adulterous" by mistake) in an earlier post). You could say that the fundamental principles of morality are given by God, but encoded into the fabric of reality like the laws of physics or mathematics, rather than accurately described in the Bible, and that ethical philosophers like Kant, Mill, Jesus, Lao Tzu and Confucius have been adding to our understanding over time. Or you could just bite the bullet and insist that yes, the principles of morality really did change drastically when Jesus fulfilled the Levitican commandments, in which case it becomes rather puzzling to consider how Confucius came up with his version of the Golden Rule back in the days when the Golden Rule couldn't possibly have applied, because it really was morally right to stone people to death, to slaughter the Canaanites, and so on.

Tuesday, 7 February 2012

Pro-Business versus Pro-Business: Mistaking Universals for Particulars

     One of the ways in which our thoughts often seem to go astray is in the ambiguity of how we can use nouns in English. If I say, for example, "I believe in the rights of the individual to freedom of speech and belief," you might reasonably ask (particularly if you're not a native English speaker), "Which individual?"
     Of course, that's a pretty obvious one, and just about everyone knows that in that kind of context, speaking of "the individual" really refers to all individuals. Yet I think that sometimes it's dangerously easy to slip into that trap, and to mistake the particular for the universal and vice versa.
     In particular, I'm thinking of how often politicians fall into the trap of thinking that if they help a particular business or even a particular industry sector, they're "pro-business" in the abstract universal sense, rather than just being pro-that-business. I used to think, when I was younger and even more cynical, that it was simply corruption, that the politicians were deliberately helping their cronies and disingenuously proclaiming themselves as pro-business or pro-free-enterprise. But now I'm not so sure, and even if that is the case, I feel obliged in good faith to charitably assume it's a subtle (and corrigible) cognitive error, perhaps the very one I'm describing here.
     It's not that far-fetched, after all. The arguments that implementing policies to benefit a particularly important employer in the community will create jobs and bring prosperity can be pretty persuasive, and may actually be true in many cases. It might well be the case sometimes that keeping a particular factory or mine open is in the general interest of a community. And so it can be very easy to slide into the ambiguity of thinking, without any deliberate corruption or cronyism, that being (generally) pro-business actually means helping out particular companies or businesses, even awarding them monopolies at the expense of other businesses.
     Of course, that's not what it means at all, or it's not what it is supposed to mean. Being pro-business in terms of policy means fostering an environment in which businesses generally can flourish, not simply where one particular business is better off. But I can understand, now, how one could make that honest mistake.
     It's even easier to make that mistake when a well-organized business interest lobbies for some concession or handout and manages to present itself as representing the entire industry, or a majority of the community. This is a common tactic whenever a professional sports team asks for a new stadium; they play up how popular it'll be with all the sports fans (and of course every respectable politician is a loyal fan of the home team, right?), how many jobs will be created, and how local businesses will benefit. It's not always true, but the idea gains momentum, and it becomes hard to challenge the conventional wisdom once it's established: new arena = boost to local business generally.
     I think we're seeing the same thing right now with the music and film recording industries, and their aggressive lobbying for reforms to copyright law. They present themselves as representing artists, and of course it's true they do represent some artists, but by no means all of them; in fact, they represent only a rather tiny minority of artists, generally those who have enjoyed some degree of commercial success. Those who don't have a contract with a label, or roles in Hollywood establishment films, are simply not included.
     Here's an example of how measures intended to help "artists" can end up harming artists. Years ago, when I was in a garage band, we set out to make a demo tape. (In those days, the late '80s or so, we actually used tape.) Since we were going to be renting a high-quality reel-to-reel unit, rather than an ordinary old cassette recorder, we had to go buy a proper reel of tape for it. We were a little surprised to discover that there was a surtax to be paid on such recording media, to go towards paying royalties to the established artists whose work would presumably be pirated onto some of the recording media sold.
     Now, think about that for a moment. I'm not insensitive to the plight of recording artists who lose sales of their music to piracy, but look who was paying for it! Here we were, wanting to record our own performance of our own original compositions, and we were being forced to pay good money to musicians (or perhaps more accurately, their recording companies) who were already established and commercially successful. It was a regressive tax that rewarded established musicians for their past success, while discouraging new creative talent from even getting started.
     Yet I don't doubt for a minute that the politicians who passed that law sincerely felt that they were really helping artists generally, even though they were helping some particular artists at the expense of a great many more. And they were able to think that, in part, because our language has an inherent ambiguity between particulars and universals: "the artist" can mean "this artist" or "all artists", just as "the individual" or "business" can be arbitrarily extended or narrowed by context.

     So, I suppose, the point of today's sermon is this: Be wary whenever someone (including you) speaks in apparent generalities, and if there's any doubt, insist on clarification. Are you really talking about all artists, or all businesses, or all citizens, or only a particular subset? It's okay to speak for just the subset, of course, but just be clear about it.

Monday, 6 February 2012

The Paradox of Unemployment

     I have always found something a little bit paradoxical about the whole issue of unemployment. When unemployment is high, we often say there's no work available. This is a problem, of course, because without work, the unemployed cannot earn the money needed for food, shelter, clothing and so forth. Therein lies the paradox, because it seems to me that if there are people without adequate food, clothing and shelter, there's work to be done: people to feed, clothe and shelter.

     So the problem isn't really unemployment itself; if there were really no work to be done, then we'd all have cause to celebrate, relax, engage in leisure activities and so forth. Rather, unemployment is a market problem; no one is willing or able to pay for the work that needs to be done. Or more generally, it's a distortion introduced by a market system that only measures value in certain ways.

     Let me be clear: I'm not condemning free market capitalism here. Any economic system involving more than one participant will introduce some kind of market distortion, and the overall gains in productivity and general welfare that accrue from free markets usually outweigh the costs of the distortions, often by a huge margin. But it does seem obvious that we can make our system better in various ways, some of which I expect I'll talk about in future posts.

     Talking about unemployment is further complicated, though, by some unconscious biases associated with the work ethic, which made perfect sense in a time when communities desperately needed every available hand to help out with bringing in the harvest, threshing the grain, and countless other tasks that needed to be done to ensure people wouldn't starve that winter. It was entirely appropriate to condemn idleness when there was so much work to be done. And, to be sure, laziness is still properly considered a vice (of which I am dreadfully guilty).
     This attitude, condemning idleness as a moral failing, is rather misplaced in our current economy, and I suspect it gets in the way somewhat when we try to think about solutions, because it attaches a stigma to unemployment that is inconsistent with the economic reality. We just don't need everyone to help out with the harvest anymore to ensure that no one starves this winter. And as the enormous growth in the labour-efficiency of agriculture spreads to manufacturing and other industries, it's actually possible in principle to support the entire population at a reasonably high standard of living on the labour of a relatively small number of people. So the historical reason for condemning idleness is now irrelevant.
     That's not to say I believe a few people should support the rest of us, free of charge. That would be unfair to the few working, and bad for the freeloaders' psychological health as well. However, there's clearly something wrong when large numbers of people have only their labour to sell, and no one's buying.