
Saturday, 26 January 2013

Little Joys of Discovery #3: More Adventures in Neuroscience

     I was always puzzled as to why soldiers standing at attention are trained to lock their gazes straight ahead, especially in the case of the guards outside Buckingham Palace, who are famous for staring straight ahead despite any distraction tourists may offer. It always seemed to me that an alert guard should be scanning the whole field of view constantly, not simply staring at a single spot.
     Just recently, I found myself thinking about this while waiting for my wife to complete a transaction at the market, and I decided to try it out. I picked a spot on the wall and stared at that spot, and sure enough, it was hard to stay focussed, especially when someone walked in front of that spot.
     But then I tried something else. I tried to pay attention instead to my peripheral vision, all the other things that were going on in my field of view besides the spot right in front of me. Taking in the entire picture, noting the presence and movement of everything, rather than trying to pick out the specific details that we look for when we focus on something.
     And suddenly, I began to understand why the guards might be trained to stand at attention that way. Turns out, my field of view spans almost a full 180 degrees! So by attending to the entire image, I can actually monitor more targets than if I were to focus on just the portion on or near the fovea (the small central patch of the retina that gives the most detail). What's more, while I couldn't make out much detail on anyone in particular, I was surprised to notice just how much I was able to fill in even without that detail. We humans read each other's body language very well, for the most part; I could tell that the tall person in the blue coat to my right was probably looking at his cell phone, without losing sight of my wife in her red coat to my left. My situational awareness felt much keener, studying the entire scene this way, which is precisely what you want in a guard.
      Interestingly, I also realized that this would go a long way to explaining why it's so difficult for tourists to distract the guards by flashing their breasts or other such silliness. (Not that anyone flashed their breasts at me at the market. And I think I would have noticed.) I found that since my attention was actually spread more or less evenly across my field of view, things happening right in front of me were easier to dismiss. That is, I didn't exactly ignore it when someone walked in front of me and lingered for a bit, obscuring my view of the letter "E" on the sign I had been gazing towards. I knew they were there, of course, and what they were doing more or less; I just had so many other things going on in my field of view that the person who happened to be front and center didn't monopolize my attention. Also, because I was now tracking my field of view based on the positions of things at the periphery, I didn't need to remain locked on the letter "E" to maintain a steady gaze.
     It also occurred to me that there's another reason why this kind of attention is useful for certain kinds of guards. Someone who is intently scanning back and forth like a searchlight may be better able to pick out fine details and identify anomalies, but they have bigger blind spots. Worse, those blind spots are more identifiable, and can be exploited. If you watch a scanning sentry, you may be able to tell when his attention is focused on the far end of his field of view, and use that opportunity to creep forward. A guard who stands, eyes fixed straight ahead, is much harder to read. What's more, the straight-ahead gaze is perhaps better adapted to pick up movement.

     I did some modest web searching, to see if there are any documents about why guards stand at attention this way, but haven't found anything to confirm my theory. It makes sense to me, and my informal experiments seem to support it, but really it's just conjecture on my part. As always, discussion is welcome in the comments section.

Saturday, 28 July 2012

Little Joys of Discovery #1A

     This post is just a follow-up to the one in which I mentioned leafcutter bees, as now I've had the chance to capture a couple of photographs of some.

      On occasion you might notice the signs of leafcutter bee activity before you see the bees themselves. This is a picture of some leaves on flowers in our garden, where you can see a few neatly cut-out sections. I'd seen this sort of thing many times before I learned of leafcutter bees, and assumed some caterpillar was at work, but I was puzzled as to why it seemed to have eaten in such regular patterns and left, rather than just devouring the whole leaf.



     I didn't get a picture of any bees actually harvesting these leaves, so maybe these were done by fickle caterpillars after all, but my money's on the bees.

     When I first went outside with camera in hand, hoping to stake out a nesting site where I'd seen a bee the day before, I crouched down to look around the porch steps to see if I could find the entrance to the nest. Then, I happened to notice that sitting right there on the step was a resting bee, complete with a piece of leaf, almost as if waiting for me to come take her picture.


 
     And as soon as I took this shot, she flew away.

     Unlike honeybees, leafcutters are not eusocial; all the females lay their own eggs, rather than tending to the eggs of their mother the queen. According to this site, all leafcutter species are solitary, but alfalfa leafcutter bees (Megachile rotundata) are happy to build nests in close proximity to each other, which is why I think the ones in my backyard are of that species. Each of the drainage holes in the base of these flowerpots is the front door to a leafcutter's burrow; I saw bees come and go from all of them, but they're pretty quick, and it was hard to catch good still photos of any of them. I did manage to catch a little video of some, which I've put on my YouTube channel. Boy, nature photography takes patience!

 

Saturday, 21 July 2012

Was the Apollo Mission a Waste of Money?

You are probably familiar with this photograph.


This very famous image of Earth was taken by the astronauts of Apollo 17. It has been seen by pretty much everybody, and has been used so much as to have become a cliché. And yet, it's still an amazingly beautiful and inspiring shot.

But was it worth all the money spent to go to the Moon? There were many people at the time who thought it was a waste, and people today still make that argument. Advocates of space exploration point out that the technologies developed as part of the Apollo program played an invaluable role in advancing our standard of living here on Earth, and there's truth to that, certainly. We have a lot of neat gadgets that we probably never would have developed had it not been for Apollo.

But I want to talk about something else. Look at that image again, and think of how often you've seen it before. It's appeared in books and magazines, T-shirts and posters, advertising campaigns... you name it. It's such a compelling image, it's used everywhere.

Now, copyright and piracy are very much on people's minds these days, and the RIAA in particular is complaining about losing staggering amounts of money to unauthorized copying. Whether their claims are accurate or not, we can agree that images like this photo have commercial value. So I'd like you to consider for a moment just how rich you'd expect to be from royalty payments if you owned the copyright to that iconic photo of the Earth.

Of course, if NASA charged royalties for the use of that photo, it probably wouldn't have been used by nearly so many people, and they almost certainly would have had the same problem that RIAA complains of when it comes to collecting from everyone who uses it. But that isn't really the point. What I want to argue is that if you were to sit down and put a dollar value on the intellectual property of that one, single photograph, taking into account how many people have used it for how many different purposes, the amount of value generated would be staggering. Now, think about these images:







NASA doesn't charge us royalties on using these images. They are part of our culture, and belong to all of us. We are richer for having them. I don't know what dollar value to put on them, but it's got to be pretty large, especially if we listen to RIAA and the film industry.

So forget about all the fancy technology we enjoy as a result of the Moon landings. I think we may even have turned a profit just on the intellectual property assets alone.

Tuesday, 10 July 2012

Thinking about Mersenne Primes

     It is both a blessing and a curse not to be formally trained in a field like mathematics. The blessing is that every once in a while, I get to enjoy the sublime delight of figuring out something on my own. The curse is that when I go to share these exciting discoveries with others, as I'm about to do in this posting, it's almost certain that I'm putting my grave ignorance of the field on display for everyone who knows anything about the subject to see. Fortunately, I'm quite shameless in my ignorance, and after all, admitting you have a problem is the first step towards a solution.

     Anyway, I was thinking about Mersenne primes a while ago, prime numbers that take the form 2^n-1, where n is a prime number. I remember testing it out: 2^3-1 is 7, which is prime. 2^5-1=31, also prime. 2^7-1=127, ALSO prime. Hmmm. Now, I remember reading that not all numbers of this form will be prime, and as it turns out, 2^11-1=2047, which is divisible by 23 and 89, but it is certain that if 2^n-1 is prime, then n must also be prime.

    That's the part that puzzled me. What was so special about 2, and what was it about prime numbers that gave rise to other prime numbers this way? Eventually, I did figure it out, and it's that proof that I want to share with you here.

     Forget about powers of 2 for the time being, and consider a number like 13,131,313. Is it prime? Actually, you can tell that it's not, because its digits show a repeated pattern: 13 repeated four times. So it ought to be divisible by 13, and sure enough, you get 1,010,101 when you divide by 13.
     The same principle can be generalized to any number that consists of a repeating pattern of digits. Simply take the pattern once, and the full number must be divisible by it. So 123456712345671234567 is necessarily divisible by 1234567 (you get 100000010000001).
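     If you'd like to verify that sort of thing without doing the long division by hand, here's a throwaway bit of Python I might use to check it (the repeat_block function is just my own name for it):

    def repeat_block(block, times):
        # Build a number by writing the digit block 'times' times in a row.
        return int(str(block) * times)

    n = repeat_block(13, 4)                  # 13131313
    print(n % 13 == 0, n // 13)              # True 1010101

    m = repeat_block(1234567, 3)             # 123456712345671234567
    print(m % 1234567 == 0, m // 1234567)    # True 100000010000001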

     Now, take a number that is made up of nothing but repeated 1s. Obviously, that means it's divisible by 1, but all numbers are, so let's ignore that pattern, and look for longer ones. If the number of digits is even, then the number's pattern can be described as a repeated "11", and thus the number itself must be divisible by 11. If the number of digits is a multiple of 3, then it can be described as a repeated "111". And so on. So we can see, for example, that 111,111,111,111 must be divisible by 11; 111; 1,111; and 111,111. (Of those, only 11 is actually a prime factor; you can see that both 1,111 and 111,111 must themselves be divisible by 11. 111 is divisible by 3, but that's due to a different divisibility test; its digits add up to a multiple of 3.)
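     Again, a couple of lines of Python will confirm this, if you don't feel like trusting my arithmetic:

    r12 = int("1" * 12)             # 111,111,111,111
    for d in (11, 111, 1111, 111111):
        print(d, r12 % d == 0)      # True for every one of them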
   
     So let's get back to the Mersenne primes, the ones that are expressible as 2^n-1. The thing to notice about the formula 2^n-1 is that if you write out the number in binary, you get a sequence of nothing but 1s. 2^n will give you a 1 followed by n zeros, so subtracting 1 you just get a sequence of n 1s.
    And so, if n is an even number, 2^n-1 will be divisible by 11 in binary, which is 3. If n is divisible by 3, then 2^n-1 will be divisible by 111 in binary, which is 7. n=6 gives us 63, which is divisible by both 3 and 7.
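    If you prefer algebra to binary digits, the same fact falls out of the geometric series: when n = a×b, 2^n-1 = (2^a-1) × (2^(a(b-1)) + 2^(a(b-2)) + ... + 2^a + 1), so 2^a-1 always divides 2^n-1. And here's a quick sanity check in Python (just my own scratch code, nothing rigorous):

    # For each n, list the factors of the form 2**a - 1 where a is a
    # proper divisor of n greater than 1; each one divides 2**n - 1.
    for n in range(2, 13):
        divisors = [a for a in range(2, n) if n % a == 0]
        factors = [2**a - 1 for a in divisors if (2**n - 1) % (2**a - 1) == 0]
        print(n, 2**n - 1, factors)   # e.g. n=6 prints 63 [3, 7]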
    Tada! That's it. I can't tell you how pleased I was when this hit me. It was one of those aha! moments that make life good. But it also led me to some other interesting questions that I'm still thinking about today.

    In particular, I'm thinking about how it applies in other bases besides 2. Now, 10^n-1 just gives you a series of n 9s in a row, which is obviously divisible by 9, so we need to tweak the formula a little to cancel that out and give us a series of n 1s instead of 9s. Easy enough. I'm interested in numbers of the form (a^n-1)/(a-1). Obviously they must be composite when n is a composite number, but how many of them are prime when n is prime?
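    And in case anyone wants to poke at that question numerically, here's the sort of quick-and-dirty script I've been using to explore it (plain trial division, so it only makes sense for small exponents):

    def is_prime(k):
        # Simple trial division; fine for the modest numbers tested here.
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    small_primes = [p for p in range(2, 18) if is_prime(p)]
    for a in range(2, 7):
        hits = [n for n in small_primes if is_prime((a**n - 1) // (a - 1))]
        print("base", a, "-> (a^n-1)/(a-1) is prime for n in", hits)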

    So, in atonement for depriving you of the opportunity to figure out that Mersenne prime thing on your own, I leave you with that question. And if you're a mathematician for whom this is already all well known, I hope you at least enjoy the opportunity to tell me something new in the comments section.

Saturday, 7 July 2012

Straight Talk on DHMO

     By now you've probably heard from some of those agitating for a ban on DHMO (dihydrogen monoxide). I'm somewhat distressed at the misinformation and distortions offered by these activists, and felt  it was high time someone provided a more balanced perspective.

Where does DHMO come from?
     Although it is frequently and easily synthesized in laboratories and as a by-product of industrial processes, most DHMO is actually extracted from naturally occurring deposits. In fact, Canada is blessed with some of the most abundant and high-quality DHMO of any country in the world, and although most of it is consumed domestically, we do export a fair bit of it, both in pure form and as an additive to other products.

Is it dangerous?
    The dangers described by the anti-DHMO activists are real. DHMO can be very dangerous indeed. Inhalation of DHMO can interfere with the lungs' ability to absorb oxygen. Prolonged exposure to DHMO in any form can cause harm, and although the harm caused by the gaseous and solid forms is more severe, even liquid DHMO is known to cause skin to become prematurely wrinkled. The damage from last year's earthquake in Japan, which crippled a nuclear power plant, was greatly exacerbated by a massive spill of contaminated DHMO.
    I don't mean to downplay the seriousness of these and other dangers. However, as anyone who's ever worked with the stuff (as I have) knows, it's perfectly harmless when you take just a few common-sense precautions. You can literally drink a glass of pure, room-temperature DHMO and suffer no ill effects; your kidneys are actually more efficient at removing DHMO from your system than at removing any other compound. Not only that, but a surprising amount of DHMO is excreted through your sweat glands.
    To be sure, humans have an especially high tolerance for DHMO among land mammals, but most creatures do have considerable resistance to mild exposure. Marine creatures are even more resilient. I once kept a goldfish alive in a jar of pure DHMO for a week.
    It's true that we dump truly astonishing amounts of DHMO into our rivers and streams through sewage and industrial waste. However, exposure to solar radiation removes many times more DHMO from the ocean than we humans release into it. We could double, even triple our industrial and municipal output of DHMO, and the oceans would scarcely notice. Desert ecosystems are the most vulnerable to damage by DHMO dumping, but even there, sunlight quickly cleans it up; you'd have to dump an awful lot of the stuff to destroy the desert habitat.

Are there alternatives?
    DHMO is one of the most useful compounds ever discovered. It is used as a coolant, a propellant, a solvent, a hydraulic fluid, a disinfectant, a fire retardant, and even as a food additive. It is a vital reactant, consumed in the production of concrete. It is widely used in health care. It is indispensable to modern agriculture, and is even heavily used by organic farmers. However, more than 90% of the world's DHMO is reserved for use in fisheries and transportation, to control the buoyancy and stability of ocean-going vessels. It is no exaggeration to say that without DHMO, there would be a lot of ships lying useless on the seafloor.
     There are substitutes for DHMO for many of these uses, though not all. Unfortunately, we can't really reduce our reliance on DHMO by simply adopting substitutes, because in those cases where the substitute is as good or better than DHMO, it's already been adopted. In the remaining cases, the substitute is even more dangerous than DHMO. Most importantly from an economic point of view, DHMO is cheap and plentiful, and the substitutes simply cannot compete. And let's not forget how many jobs are dependent, directly and indirectly, on DHMO.
     And those industries where there is no substitute at all are the most critical. Agriculture and fisheries are the most committed to using DHMO, and scientists have no idea even where to look for a viable alternative. The cold, hard fact is that there are 7 billion people on this planet, and we have to feed them somehow. Without DHMO, even the most advanced modern agricultural and fishing techniques could never hope to feed more than a tiny fraction of that number.

     So let's be realistic. Yes, DHMO has its dangers, but the dangers of doing without this vitally important chemical are greater still. There's no such thing as perfect safety, and while maybe someday scientists will find a better solution, that day is not here yet. Until then, we're stuck with DHMO.

Sunday, 17 June 2012

The Man-Cold: Why are Men Such Wimps?

     Having spent the last couple of days largely incapacitated on the couch by a simple common rhinovirus (which may have affected the quality of my last two posts), I've been wondering about the phenomenon of the man-cold, which seems to be more than just an advertising trope. The evidence I have is anecdotal, but it does seem like colds hit men harder than they hit women, and I've been wondering why this would be the case.
     The first thing that occurred to me was that women probably do have somewhat more robust immune systems than men, and that makes a certain amount of evolutionary sense on several levels, mostly having to do with reproduction and child-rearing. Also, let's face it, men are a bit more expendable than women in the gene-propagating game; biologically, dad can check out right after conception and baby can survive and even prosper, while mom needs to stick around for a couple of years after that at least for a decent chance at grandchildren.
     But I don't think that really answers the question. I don't think the symptoms I suffer when I have a cold are really all that much worse than those affecting my wife when she gets it. No, I think it really does come down to men being wimps, at least with respect to things like colds. And I think there's actually a plausible evolutionary reason for that.
     Here's what I mean, first of all, by "wimp": someone with a low tolerance for discomfort. I don't mean pain threshold, although that's a related concept, because men pride themselves on their ability to press on despite terribly painful injuries. Not that women can't also do this, but rather that we don't see the same kind of gender-linked phenomenon as we do with the man-cold. We don't have "man-sprains" or "man-sunburns". What I'm talking about, at least in the context of men-with-colds-being-wimps, is the kind of discomfort that is not a result of physical trauma gloriously won in battle, but of feeling sick from disease or poison. And I'm suggesting that a low tolerance for that kind of suffering, a propensity to lie in bed and moan from a relatively small degree of discomfort, might actually have served an evolutionary purpose.
     Consider our ancestral environment, and in particular the different social/economic roles filled in hunter/gatherer societies. By and large, hunting (and warfare) have historically and prehistorically been primarily male areas, while gathering has been more open to both genders (if perhaps tending to be dominated by females). Hunting and fighting are high-performance activities, with rather high stakes for failure. If you go to gather roots and berries with a head cold, well, you may come back with fewer roots and berries than if you were in peak health, but the risks of not coming back at all are not substantially higher; it's still worth it to go ahead and collect food. However, if you go off to raid another tribe's village, or try to bring down a giraffe or a mammoth, a head cold can put you at substantially greater risk of failure and possibly death. In other words, having a cold can make the odds of a successful hunt so low as to be not worth the trouble.
     Now, nature doesn't generally rely on our cool rational intellect to make these calculations for us. It doesn't lead us to carefully assess how many calories we've consumed recently, and decide how much more we should or shouldn't eat. Rather, it equips us with animal appetites and instincts; we eat because we FEEL hungry, not because we've made a rational calculation that we ought to do so. Likewise, rather than trusting our ancestors to coolly assess the odds of an ill-timed sneeze spoiling an ambush, nature has given us thresholds for various forms of discomfort that make us feel more or less horrible, and more or less enthusiastic about going out and hunting or gathering or whatever it is we normally do.
     And that, I think, is why men are such wimps when it comes to colds. In our ancestral environment, the better decision when you had a cold was not to go hunting, so men are laid low by relatively mild illness, while women will still go out gathering while suffering the very same symptoms. Of course, men aren't wimps in battle, and can ignore pain and fight on despite grievous injuries. But there, the evolutionary payoff is different; it's too late to stay home once you're fighting an angry aurochs, so your best chance is to go all-out physically and either kill it or run like hell.
   

Thursday, 14 June 2012

After Images: Amateur Adventures in Neuroscience

     Many years ago (perhaps as early as the 1980s?), I remember reading in a book about artificial intelligence something about attempts to develop vision systems where researchers were surprised to find the processed image essentially going blank after a few seconds, if the camera were left pointed at a static scene. I don't remember the details of the book or the experiment, but it's something I've found myself thinking about from time to time, and of course it makes a certain amount of sense when you think about how neural networks work. After all, visual processing isn't just a matter of detecting the intensity of each pixel, but involves picking out patterns: edges, lines, movement, and so on. Layer upon layer of neurons perform this task, each responding to a particular, relatively simple combination of inputs from several neighbouring neurons to decide things like whether or not its tiny area of the visual field includes a boundary between different regions.

     It has recently occurred to me that I may have observed this very same phenomenon with the visual system in my own brain. I had tried to duplicate the result by staring very intently at a single point long enough for things to fade, but that never worked; even in a very calm environment where nothing appears to be moving, the semi-random saccadic eye movements force your field of vision to twitch just enough to keep things dynamic for your neural net.
     But there is a way to imprint a static image directly onto your retina, without any possibility of its moving around. I noticed this one morning as I woke up, and the high contrast image of the daylight through my window against the darkness of the rest of my room left an afterimage when I closed my eyes again. At first the image was a very sharp outline of my window, with every slat of the Venetian blind distinctly visible. But soon it began to blur and fade.
     Afterimages are caused by the retinal cells in your eye being fatigued. The cells fire when exposed to light, sending a signal through the optic nerve to the brain, but firing uses up metabolic resources, and it takes time to regenerate those resources. Now, I thought the fading of the afterimage, after I closed my eyes again, was simply a matter of my retinal cells recovering from the effort. But then I found I could restore the afterimage for a short time by opening my eyes without looking in the direction of the window. That is, I controlled for the possibility of simply re-burning a similar afterimage, by looking at my darkened room rather than the window. And the image came back, though as before, only for a few seconds before it faded. I was able to make this happen again and again, for several minutes. Evidently it can take quite a while for retinal cells to recover fully.

     So here's what I think is happening. The afterimage on my retina is very static, because it's fixed to actual retinal cells. The layers in my neural net quickly cease to recognize it as a pattern, and stop reporting it, precisely because it is so static. In other words, the brain automatically filters out these artifacts of the optic system itself, compensating for the fact that one retinal cell might be giving less than its normal response while another nearby one is giving its full signal, and reconstructing what the scene "really" looks like without the local variations of sensitivity.
     And when you think about it, it makes perfect sense that our visual system would adapt this way, since it's supposed to tell us what's happening in the world around us, not irrelevant administrative details about how each retinal cell is performing.

Sunday, 3 June 2012

Little Joys of Discovery #1

     Every so often I learn something new in the most delightful way. Today I was out weeding the garden, turning soil to prepare for planting some vegetables, when I was surprised to come across this:


     As you can see, it's part of a wasp nest. The top image shows what surprised me: instead of hanging from the bottom of a branch somewhere, it's attached to the roots of the grass I was removing from the vegetable garden. That is, the paper nest had been constructed completely underground. I'm used to seeing wasp nests above ground, in trees or under the eaves of my garage, like this one, which I excised from our apple tree one autumn a few years ago and have been keeping in a sealed plastic box because it was just such a fabulous specimen:


    Now, you may have known this all along, but it simply never occurred to me that wasps might go to all the trouble to excavate an underground chamber and then go ahead and make this elaborate paper structure as well. I suppose I just assumed that what with the paper and the stings, wasps wouldn't bother to dig like this. And yet they do, and I feel richer for having found that out first-hand, rather than from a book or a blog, as much as I like books and blogs.

     I recall the same sort of experience a few years ago, lying on the grass and happening to notice a bee landing nearby carrying a piece of a leaf, before it disappeared into a tiny burrow in the ground. I was astonished. I had, of course, heard all about leafcutter ants, and their marvellous underground fungus farms, but somehow I had never heard of leafcutter bees. So I promptly went and looked them up, and it turns out they're very important pollinators for many crops. A few years later, I was replacing some rotten boards on our deck, and found tunnels lined with leaves, and packed with yellow powdery deposits I assume were pollen, stored for the bee's young. (Yes, singular possessive "bee's"; apparently leafcutter bees are a solitary species.) I wish I had taken a picture.

     While I had the camera out for the wasp nest, I also took a couple of other shots of delightful discoveries I happened upon today, though neither quite so surprising to me as learning that wasps built paper nests underground. After all, I knew that robins ate worms, though I was puzzled at why this one seemed to be just idly sitting there on the wire for so long without either eating its prey or taking it home to feed its chicks.

      I also knew that chives spread like weeds, but I was still pleased to find this one, almost as if it had been posing for a photo. Usually I find it disguised as tall grass, hiding from the lawnmower behind the raspberry bushes.


     So much for today's self-indulgent photo essay.

Wednesday, 30 May 2012

The Myth of Abiogenesis

     In discussions or debates about the development of life on Earth, I often hear a remark from the evolutionist side that "Evolution doesn't tell us how life began; it just explains how it evolves from simpler to more complex forms," or words to that effect. I suspect this is an attempt to be conciliatory, by leaving room for the creationists to attribute the first living thing to God.

     It's a nice sentiment, I suppose, but flawed in two ways. First, it's not going to satisfy the creationist who takes issue with evolution in the first place, since such creationists generally want to insist on a literal Biblical account. But second, it's just not true. It turns out that Darwin's principle of natural selection actually does account for the origins of life itself. Not the precise molecular details, of course, but a broad outline of the principle involved. 

     To explain, I first need to ask you to set aside a distinction that doesn't really exist: that between living and non-living material. Now, it seems obvious that there is a difference, and on the scale that we interact with the world, it's a practical distinction to make, but on the scale of molecules, there are just molecules, and no difference between a living one and a dead one. A water molecule in my blood is no different from one in my coffee, or one in the cloud I see through my kitchen door as I type this. Likewise more complex organic molecules like proteins and DNA: they're just molecules, ordinary, non-living matter.

     I should also make explicit what the theory of natural selection is all about, at its most basic, abstract level. It's about replication of information patterns, and which patterns will tend to become more common over time.  Natural selection is often expressed as a principle governing what happens when three basic assumptions are true. Those assumptions are as follows:

     (1) There is variation among a population, and those variations affect the likelihood of the individual successfully reproducing.
     (2) Offspring tend to resemble their parent(s).
     (3) More offspring are produced than can survive to adulthood and reproduce themselves.

     This is expressed in terms that assume we're already dealing with living creatures (which isn't surprising, because the theory is almost exclusively used for understanding biological phenomena), but as a Law of Nature, natural selection applies all the time to everything everywhere, and makes no distinction whatsoever between "living" and "nonliving" matter. So just bear in mind that the words "parent" and "offspring" should be understood more as "original" and "copy". What's really being copied with each generation is an information pattern, and subsequent generations are simply copies of copies of copies.

     Now, there are also two concepts concerning replication of patterns we need to be aware of: fecundity and fidelity. Fecundity relates to the number of copies made; a highly fecund creature will have lots and lots of babies. Fidelity relates to the accuracy of the copy, how closely it resembles the original. A duplicate with very high fidelity will be almost identical to the original, while one with very low fidelity might not even be recognized as a copy at all.

     Patterns of information exist in everything, although most of the time we'll not recognize them as particularly useful information, just random arrangements of things. Patterns also give rise to subsequent patterns all the time, simply by the operation of the laws of nature. A pattern characterized by lots of water molecules in clouds may lead to a pattern of liquid water droplets falling as rain, leading to a pattern where water molecules are arranged as standing or flowing water on the ground, and so on.
     This process, of patterns producing new patterns, is in fact a kind of replication. It's just that the value for fidelity tends to be very, very low; almost none of the original pattern of information is recognizable in the offspring. But not always. In fact, patterns are duplicated with surprisingly high fidelity quite naturally, and in ways that we don't often think of as replication. For example, a shadow of a mountain is actually a rather high-fidelity replication of the profile of the mountain itself from a particular perspective. Layers of ocean sediments record climate patterns over time, and so on. In most of these cases, the fecundity of the next generation is very low, however; there are few ways for a shadow to copy itself.

     But there are ways for simple, non-biological patterns to duplicate with both fidelity and fecundity. Consider a rock cleaving in two. The two newly exposed surfaces will have all sorts of random bumps and pits, but they will correspond to each other almost exactly so you can fit the two pieces back together perfectly. Each piece contains a very high fidelity copy of the inverse of the contours of the other piece, and if you were to press one half into some clay, the imprint left would be a pretty good copy of the other half. What's more, you'd be able to reuse the stone to make more copies. So both fidelity and fecundity are well above zero for this process. One can easily imagine, without any human intervention at all, scenarios where a pattern like this is duplicated many times. A rock, rolling down a hill, leaving multiple impressions of itself in the soft earth along its path.

     This process of cleaving to produce mirror-image duplicates can happen on a molecular level, as well. In fact, DNA is very much like the cleaving rock. Each side of the double helix is a sort of inverse copy of the other side. We know very little about how the first DNA molecule came to be, but we do not need to know the precise pathway to see that natural selection would be at play every step of the way. Any pattern of information that, when encoded into matter in some way happens to increase the fecundity and fidelity of the subsequent generation of patterns, will tend to become more common over time. It need not be particularly high in fidelity or fecundity to begin with; it merely needs to be slightly better than the other patterns around it. Its own copies will tend to vary as well (a lot, if the fidelity is low), but whichever pattern has the highest fidelity/fecundity will eventually win out.

     So there is, was, and always will be a natural selection pressure operating in the universe on all matter in every form everywhere, tending to select for higher fecundity and fidelity. In most places, there's not a lot of potential for either, but in some places, particularly planets with rich chemistry and just the right temperature range, there will be enough random patterns that some crystal, some organic polymer, or some other chemical reaction will have a fecundity/fidelity advantage. And that's all it takes to get started. The molecule that is just slightly better at preserving its information by copying will become more common than the next, and over time, ever more sophisticated systems of molecules will accumulate more and better ways to improve fidelity and fecundity.
     There is no point at which the spark of life suddenly appears and matter becomes living, no point at which maggots spontaneously appear in rotten flesh. Every organism, every pattern of matter arises as the result of some previous pattern of matter that it resembles in some way, which in turn arose from an earlier arrangement of matter, and so on back to the beginning of time.

Wednesday, 23 May 2012

Why is Life so Hard?

     The short answer: evolution.

     Natural selection is lazy. It doesn't work to make things the best they can be. It just makes them good enough to have a decent chance of survival. It didn't make cheetahs run 70 m.p.h. just for the fun of it; it did so because that's about how fast you need to run to catch a gazelle. And gazelles only run so fast because cheetahs eat the slower ones. It's a classic arms race. The cheetah can catch its meal, but usually only by really working hard at it, and the gazelle also pretty much has to give its all to escape. Life's not easy at all for either of them; they're both working at their very peak effort just to survive.
     So nature runs on the principle of "good enough" rather than "the best possible". Very rarely does nature equip some species with a trait that is far more than the job of survival calls for. The only reason pronghorn antelope run so ridiculously much faster than any North American predator is that up until roughly ten thousand years ago, there were cheetahs here too. I don't know of any measurements that would confirm this, but I'd be willing to bet that today's population of pronghorns, without cheetahs selecting for their speed, are slower on average than their ancestors. There are more ways for mutations and genetic drift to reduce the efficiency of a runner than there are to improve it.
    At first glance, we humans might appear to be an exception to this general rule. We are so much more linguistically, culturally and technologically powerful than even our closest primate relatives that it's tempting to think our relatively massive brains and corresponding intellect are a freakish anomaly. (I don't want to get into a debate about human vanity in assuming ourselves to be the smartest in the animal kingdom. I'm using "smart" in the fairly narrow technical sense of having bigger and more versatile brains, so please don't read into it any kind of value judgment. Plants are not "smarter" than we are because they "know" how to photosynthesize and we don't, and cockroaches are not "smarter" because they're more likely to survive as a species in the long run. For all their superior survival odds, cockroaches' brains are tiny and support very limited cognitive function. Plants don't even have brains. So that's all I mean by "smarter", and to argue otherwise is, well, not very smart.)
     On a survival level, our brains certainly seem to be disproportionately powerful. They've made us into one of the most effective hunters on the planet, having wiped out virtually all the edible megafauna on most continents within a few centuries of our arrival. We've figured out how to produce food surpluses through agriculture, and our population has exploded to the utterly outrageous figure of 7 billion relatively large mammals. Survival for many of us, at least in the developed world, isn't even a challenge any more; most of us die from cancer and heart disease instead of starvation or being eaten by predators. This is a direct result of our species' unprecedented technological prowess.
     So how does this figure into Nature's "good enough" approach? How come we got so absurdly smarter than we needed to be to survive?
     A big part of it is the arms race principle. We're a social species, but not a eusocial one. That is, while we tend to live together in groups and cooperate for mutual benefit, we're not completely selfless about it the way ants, bees, some wasps, termites and naked mole rats are. We cooperate and compete with each other, and when we compete, it's usually by way of our brains. Of course, it's a lot more complicated than simple competition, and often that competition takes place within a cooperative framework. A group of hunters may be genuinely trying to cooperate to bring down a mastodon, but they may also be competing to establish social dominance. Even the fully cooperative human has to be able to detect attempts to cheat, and the would-be cheater needs to be able to figure out and defeat those detection attempts, and so on. In short, humans with bigger brains than their fellow humans were more likely to pass on their genes, and this arms race has produced a species with way more smarts than we need simply to squeeze food from our environment and avoid getting eaten by bears.
     And there, in our competition with our fellow humans, nature has made us just barely good enough to have a decent chance at figuring out each other's (and even our own) motives and schemes, and not one bit better than we need to be. Sure, getting food might be relatively easy, and we don't even have to think about avoiding hungry wolves or tigers now, but the countless other struggles of social life remain as hard as they've ever been, and our brains pretty much have to work at peak capacity for that.
     In fact, in many ways, our brains are facing much harder problems than they ever evolved to solve. Fact is, as big as our brains are, they're really not that good at solving certain kinds of problems. They're good at forming judgments about the kinds of things we encountered as hunter-gatherers, but they're not so good at things like formal logic and statistics. We are equipped with a whole lot of shortcuts and quick and dirty heuristics that give "good enough" results for basic survival, but aren't always the optimum or rigorously correct solution. We can learn to do calculus or apply Bayes' Theorem, but it takes a lot of effort.

     So life is hard, and it pretty much always will be, because of the way we evolved and because of how evolution works in general. Our minds, our bodies, our willpower, all are the result of a process that makes things just barely good enough to survive in their environment, and not one bit better. And I'm not at all convinced that's a bad thing.

Monday, 30 April 2012

I don't believe in ghosts...

When people learn I'm a skeptic about things like ghosts and such, they'll sometimes relate to me some horrifically spooky experience they had, and then challenge me with "How do you explain THAT?" as if something supernatural is the only plausible explanation. Well, I can't always, but then, I rarely have enough information about the anecdotal situation to make sense of it, especially since it's been retold to me from the perspective of someone who has already chosen to see it in supernatural terms. So I sometimes respond with the following experience of my own, which took place a few years ago.


It’s never so quiet as right after a heavy fall of fluffy snow. I had just been visiting my parents one dark evening, and was walking out to my car, aware of the unnatural absence of the usual background noise of even this quiet residential neighbourhood, and listening intently to the only sound, the squeaky crunch of my shoes in the snow.

They say the ear can play tricks on you in such silence, so I didn’t quite know what to make of it when I heard my wife’s voice, faintly calling my name, as if from far away. I stopped dead in my tracks for a moment, then shook my head and continued. No, I knew my wife was at home and well out of earshot. But then I thought I heard it again.

I stopped and listened. Could I really have heard my wife’s voice? No. Of course not. The silence was messing with my imagination. After a few more long seconds of silence, I continued, a little faster, towards my car, when suddenly I heard my son’s voice, calling “Daaaaad!”

At that I froze. Something about being a parent makes one acutely sensitive to the voice of one’s own child. It’s absolutely unmistakable, and that was no mere trick of the imagination. My son was definitely calling for me.  I hurried to the car, started it up and drove out onto the main street, resisting the urge to go too fast.

Now, I’m not superstitious, and I don’t believe in ghosts or premonitions or anything of the sort. Yet it was difficult to avoid thinking in such terms. I tried to convince myself that I had not heard both my wife and son calling for me in the impossible silence, but it had been just too vivid to deny. I couldn’t shake a feeling of dread for what I might find when I arrived home.

Anxiously I pulled the car into the garage, hit the remote button to close the garage door behind me, and hastened to the house. Twenty paces from the back door, my cell phone rang in my breast pocket.

My cell phone. I stopped again, for just as long as it took to breathe a sigh of relief. As I mounted the steps of the back porch, my wife opened the door with a smug grin and the handset, my giggling son next to her. Somehow, while pulling on my winter parka, I had bumped against the speed dial button for our house, and not heard her “Hello?” over my crunching footsteps. So I HAD heard their voices calling, but in my early days of cell phone ownership, I was not yet in the habit of knowing I had it with me.

I made sure my next one was a flip phone.

Wednesday, 11 April 2012

Evolution is Not Your Friend

In my last post, I mentioned a class of objections to the theory of evolution which do not betray gross misunderstandings of the theory. That is, they acknowledge that while evolutionary theory might have objective scientific merit, it leads to implications about the nature of reality that are intuitively, aesthetically or morally objectionable. These objections take a variety of forms.

For example, some find the Hobbesian view of our nature, red in tooth and claw, particularly depressing. The idea that all living things, including us, are merely the temporary survivors of a brutal struggle of each against every other is not a very positive one for those of us who believe deeply in the values of love, tolerance and cooperation. Indeed, many evolutionary biologists, going all the way back to Peter Kropotkin (who was still in his teens when Darwin published On the Origin of Species), have emphasized the role of cooperation and mutual aid, rather than competition and conflict, as an important part of the struggle for survival. Yet while evolution can produce, and has produced, altruism and human instincts for morality, it still seems unsatisfying somehow to see these things as ultimately rooted in the self-interest of the genes that produce them. We want to feel that noble self-sacrifice really is noble self-sacrifice, not merely some roundabout way of ensuring one's own survival, or worse, a mistake of misfiring instincts.

As unhappy as that sounds, it doesn't really bother me that much. Regardless of how we happened to end up with our imperfect instincts for morality and justice, we have them and they have provoked the philosophers among us to contemplate the logic of it, using the capacity for generalized intelligence we evolved for other purposes to pursue problems that our ancestral environment never "intended" for us. And personally, I am largely persuaded by the efforts of Kant and Mill that there really is an inherent logic "out there" to morality as distinct from the dictates of natural selection. That we got here by means of "survival of the fittest" in no way means that we must adopt that as our moral compass.

Nor am I particularly upset with the absence of a divinely ordained purpose for our existence offered by evolution. There are people who say that the reason they believe in a Creator is that they feel there must be a purpose, some reason we're here, and that we're not just some accident of no importance in the grand scheme of things. I suppose there are two reasons why this objection doesn't really resonate with me: One, I've never really felt the need for authoritative answers from above. Even as a very young child, I frequently doubted the pronouncements of my parents and teachers, and I've never been able to overcome the epistemological hurdle of some mortal human claiming to speak for God; just because they say God wants me to do this or that doesn't mean that's really what God wants. But two: I've never really understood why it would be such a terrible thing if there were no purpose. We exist, and most of us feel a sense of some kind of purpose, whether it's real or not; why do we need it to be on some absolutely solid foundation before we invest ourselves into it? Isn't our own sense of purpose enough, without having to insist that it be dictated by God to have any real meaning?

No, the implication of evolutionary theory I find more dismal is this: we aren't built to be happy, and to some extent we may actually be built to be unhappy. Think about it: our emotional and intellectual capacities were selected for by evolution because they happened to make it likelier that we'd have offspring who would share these capacities. The things that bring us pleasure and fulfilment are not there for our benefit, but rather simply because they tend to motivate us in certain reproductively advantageous directions. Nature doesn't give a damn whether we're happy, and in fact it's not really in our genes' interests for us to be too happy or fulfilled, because then we're likelier to slack off in our gene-propagation activities; it's desire that drives us to do stuff, not satisfaction of those desires.

In our ancestral environment, our desires were rarely if ever entirely fulfilled. One of the reasons we love sweet or rich and fatty foods is to encourage us to stock up on them on those infrequent occasions they became available, such as when fruit comes into season, or we're lucky enough to be able to kill some tasty animal. Most of the time we subsisted on vegetables, and so we tend to view leafy green stuff as something to eat if you're really hungry and there's nothing better available. But today, of course, we have virtually unlimited access to sweets and meats, and we have no built-in instinct to regulate how much of it we eat, because we never needed such an instinct in the nearly constant scarcity of our evolutionary past. Our appetites evolved to make us crave things, and never truly be satisfied. There are very good evolutionary reasons for this, but that doesn't make it any easier to resist overindulging in unhealthy diets.

And the same is true of most of our other biologically determined appetites and instincts. It's not enough that we be well-fed and healthy; we have also evolved as complex social creatures for whom status in the group is a key to reproductive success, so we crave being demonstrably better off than our neighbours, or at the very least not worse off. And this is a game that most people can never win, since for anyone to win means for everyone else to lose, to some extent.

So that's what I mean when I say we weren't built to be happy. Not just that we're unlikely to attain happiness, but that it may actually be fundamentally built into our very makeup that we should always be unsatisfied. And so I am sympathetic to those who find the implications of Darwin's theory discouraging, even to the point of wanting to reject it.

Yet in my more cheerful moments, I find reason for optimism. Natural selection may have built us to be chronically dissatisfied, but at the same time, the products of human ingenuity are endlessly surprising. We have figured out ways to satiate ourselves with candy and pork chops. Our technology allows us to communicate with each other instantaneously from almost anywhere on the planet. We have devised ways to organize ourselves and relate to each other that our ancestors never could have imagined. And just as we worked out how to fly despite our lack of wings, we may yet figure out how to maximize human happiness in spite of our Darwinian legacy. And even the mere idea that such a thing could be possible should provide all the purpose anyone could need, divinely ordained or not.

Friday, 6 April 2012

Why Creationist Institutes Shouldn't Be Accredited to Grant Degrees in Science

A friend's Facebook status called my attention to a complaint about religious discrimination in that the Institute for Creation Research is being denied accreditation to give degrees in biology. I started to compose a reply to post in the comment thread there, but decided to write an essay here instead. But I'm going to start out by talking about a very important concept in physics: energy.

Energy's a pretty strange concept, when you think about it, and it doesn't help that the word is misused in so many ways. To a science geek like me, energy is measured in joules, kilowatt-hours or electron volts, so I bristle when someone starts explaining acupuncture and translates the word chi as "energy".

But even a simple, concrete unit like a joule is surprisingly abstract. It's not exactly an obvious quantity, like mass or distance or time. Nor is it directly observable; all our experiences of energy are inferred from our observations mediated by matter in some way. Energy is entirely a derived quantity, and the joule is a derived unit, defined as a kilogram meter squared per second squared. (You can get this by noting Einstein's famous formula, E=mc^2: mass times a squared velocity). We talk about potential energy, kinetic energy, thermal energy, energy stored in chemical bonds or atomic forces, the energy of photons, but in all cases we calculate the quantity through indirect means.
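To spell out the dimensional bookkeeping (this is just me working through the units, nothing more): kinetic energy is (1/2)mv^2, which in SI units is

    kg × (m/s)^2 = kg·m^2/s^2 = 1 joule,

and E=mc^2 has exactly the same shape, a mass multiplied by a velocity squared.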

So ultimately, energy is an entirely theoretical quantity, never directly observed but so completely pervasive in everything we do that no one would dream of denying its existence. Physics just wouldn't make any sense at all if we didn't postulate this mysterious energy stuff. In fact, this theoretical quantity is so deeply established in our understanding of the world around us that most of us don't even realize that it's really a completely theoretical concept. Certainly no one would seriously say that energy is unscientific because no one's ever actually seen the stuff, and anyone who did say that just doesn't understand what "scientific" means. And it may well be that energy doesn't exist, and our current physical theories are just a happy accident that happens to give good results, and a better, more parsimonious theory will come along some day that gives better predictions with fewer postulates, but anyone who did make the radical claim that modern physics is wrong because there's no such thing as energy would quite rightly be discounted as someone who just didn't understand physics. Unless she could provide that better, more parsimonious theory, which would be a staggeringly impressive accomplishment.

Now, the whole point of accreditation for academic institutions is to try to provide some assurance that a person who gets a degree in physics actually understands something about physics. That generally requires that the faculty providing the instruction should know what they're talking about, and if your faculty is trying to make a go of it without reference to energy, they may be teaching something but it probably isn't physics, and therefore should not be accredited to give out degrees in physics. (If they do have that more parsimonious theory, then they should not only be accredited but awarded Nobel prizes at the very least.)

It's important to note that this isn't strictly speaking about belief, but understanding. You don't need to believe in quantum mechanics (Einstein certainly didn't) or relativity to be a physicist, but you should at least understand the theories well enough to be able to provide legitimate criticisms, or to admit that your objections to it are basically intuitive, aesthetic, moral or otherwise unscientific (as with Einstein's famous statement about God not playing dice with the universe.) If you criticize the theory based on a clear misunderstanding of it, though, you call your expertise into question. Just as someone who claims to be a physicist yet denies the existence of energy is probably just profoundly ignorant of physics.

Which brings us at last to evolution. Like energy, large-scale evolution has never been directly observed (though unlike energy, small-scale evolution is observed all the time, from antibiotic-resistant infections to African cichlid speciation). And I do not exaggerate when I say that just as energy is crucial to physics, evolution is absolutely crucial to modern biology, as evolutionary theory provides a cognitive framework that gives order to and makes sense of all of the observed data so far.

Again, it's not about belief. You don't have to believe in evolution to call yourself a biologist, but if you clearly don't understand the theory, you have no right to be recognized as an expert. And this may sound unnecessarily harsh, but every single creationist criticism of evolutionary theory I've ever heard (with one exception that I'll get to in a moment) has been based on a profound (if in some cases subtle) misunderstanding of the theory. In other words, creationists who use these arguments demonstrate that they literally do not know what they're talking about. And that means they are not in any way qualified to hold degrees in biology, much less to be accredited to grant them.

I mentioned there was one exception. The only criticism of evolution I've ever encountered that didn't betray a grave misunderstanding of the theory goes something like this: "It may well be that the theory of evolution is the best one available so far to explain all the observed data, but it still feels wrong to me because it conflicts with my deeply held beliefs or intuitions that are not themselves scientific." That's the same form, essentially, as Einstein's objection to quantum mechanics; it just felt wrong to him that random chance could play so fundamental a role in the basic structure of the universe, notwithstanding that there was no solid scientific basis upon which to object to the theory. Likewise, one might feel intuitively that evolution is wrong because it's aesthetically unsatisfying, or one might believe it is false because one is committed to a literal reading of the Book of Genesis, or that it has destructive moral implications, but none of these are valid scientific objections, and none of them stand in the way of actually understanding the theory even if one happens to believe it is wrong.

Nor do any of these objections stand in the way of accreditation. There are probably lots of biologists who feel intuitively uneasy about some of the implications of evolutionary theory (and I'm working on a post about one of these implications I may get around to finishing in a few days), but they understand and apply the theory, and teach it, and they know what they're talking about. There are undoubtedly physicists who are somehow, deep down, convinced that relativity or quantum mechanics or both just somehow must be wrong, but they know what they're talking about. There may even be physicists who doubt the existence of energy, but they still have to know what they're talking about.

The reason creationist organizations like ICR don't get accredited to grant degrees in biology is not that they're uncomfortable with evolution. It's simply that when it comes to biology, they don't know what they're talking about.

Wednesday, 14 March 2012

Obsolete Ruminations on Obsolete Hominids

     I've always been intrigued by the fact that Neanderthals actually had brains that were, on average, bigger than ours, despite our habit of dismissing them as crude brutes who never had a chance against us refined and sophisticated Cro-Magnon Homo sapiens types. While it's a mistake to assume that intelligence is always directly correlated with brain size, it isn't completely ridiculous to wonder if maybe H. neanderthalensis was actually smarter in some ways than we are.
     I once read somewhere that there was some question as to whether or not our Neanderthal cousins had the capacity for language, based on an examination of their bones that seemed to suggest they didn't have the same vocalization abilities that we do. This got me thinking about the role of language in our own species' success, how it could have helped us out-compete the Neanderthals, and its implications for our culture generally. Individually, they might well have been smarter. But without language, each individual Neanderthal would have had to figure out new innovations largely on her own, perhaps with some hands-on demonstration by others, but essentially by discovering each one anew. In contrast, the somewhat dimmer H. sapiens might take longer to come up with a concept, but as soon as any member of a sapiens clan worked something out, everyone else would know it pretty quickly.
    As an example of this process, I think of my own experience in studying math in school. Although the rote memorization of the times table in elementary school bored me to tears, I eventually discovered the beauty of mathematics, and took great pleasure in exploring mathematical ideas. For most of junior high school and the first two years of high school, I did very well, ignoring the teachers for the most part and coming up with my own way of solving problems on the spot. Many of my classmates, however, struggled with trying to memorize and follow the teachers' instructions (and not "getting it" as intuitively and naturally as I did).
     This worked just fine for me, up until Grade 12, when the math suddenly seemed to get more complicated. It wasn't that I couldn't do it; I could still look at a problem, take it apart and derive a solution. But the complexity of the problems in calculus and polynomials was such that it would take me more than the available time to finish the exam. My grades plummeted, and while I did pass (barely), my classmates (who actually listened to the instructions) did much better, even if they didn't always have as deep a feel for the math as I did.
     So I wondered, then, if the Neanderthal predicament was like mine had been in math. On her own, she would have been more than a match for any individual modern human: stronger and smarter. But modern humans aren't on our own; we inherit, through language, a vast amount of knowledge that we don't have to rediscover for ourselves. We don't need to be so smart as individuals, because we share our smarts better.

     These speculations are, at least in the case of Neanderthals, obsolete. It turns out that they probably did have language, based in part on genetic evidence that they had a gene known as FOXP2, which is associated with language. Yet even if my speculations don't actually solve the mystery of how our ancestors survived and the Neanderthals didn't, they have at least given me an interesting insight into the role of authority in human culture. It's always puzzled and even alarmed me how willing people are to defer to an authority, whether it be a charismatic leader, a peer group or a text. If it's written down, or if it's the wisdom of our ancestors handed down from times past, we're inclined to accept it without question, and I suspect this isn't just something we're taught to do; it seems to come naturally. (It's also taught, of course; I certainly remember the emphasis on citing authority in law school.)
     It's tempting to lament this as a curse, the stifling of individuality and slavish unthinking obedience to tradition, but it's not altogether a bad thing, and neither is it all there is to human nature. After all, we also need to have a certain amount of individual creativity to provide the ideas that are then transmitted and received this way. And I have to admit, I found that, when I took linear algebra in university, studying hard and following instructions really did help me to do better within the time constraints, even if it wasn't in my nature.
     But I do think it's important for us to be at least aware of our innate tendency to defer to authority, and maintain a sensible balance. Tradition is useful, and ideas that have been around for a long time can generally be assumed to have passed some kind of test, so we should be inclined to take them seriously, but we should never be afraid to examine them critically as well. A bit of the Neanderthal nature is healthy.

Thursday, 8 March 2012

Some Observations on Neuropathy

     I've recently finished with chemotherapy for colon cancer, which has afforded me the opportunity to experience some things that, while not necessarily pleasant, have been kind of interesting. In particular, right now I'm thinking about the composite nature of tactile senses in a way it never occurred to me to consider before.

     We all know that the colours we see are really just distinct blends of only three basic colours: red, blue and green. That's because our retinas contain photoreceptor cells that are particularly sensitive to one of these three ranges of frequencies. Primates have better colour vision than most other mammals, which only have two different colour receptors. Most birds have four, giving them a greater sensitivity to distinct colours.
     Likewise, the many flavours we can distinguish are generally made up of composite signals from just five types of receptors on our tastebuds: bitter, salty, sour, sweet and umami. Our brains recognize one particular proportion of these signals as garlic, another pattern as lemon, and so on.
     Indeed, hearing is also a composite sense; tiny hair cells sit at points of different resonant length along the cochlea, and so each is sensitive to a different range of input frequencies. What we distinguish as a single sound (a violin, a trumpet, a human voice) is a complex blend of the inputs of hundreds of different audio frequencies.
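     Just to make the "composite signal" idea concrete, here's a toy sketch in Python. The sensitivity numbers are entirely my own invention for the sake of illustration, not real physiological data; the point is only that each receptor channel's response is a weighted sum of the stimulus, and everything downstream only ever sees that small set of numbers.

```python
# Toy model of colour as a composite of three receptor channels.
# All sensitivity values are invented for illustration; they are not
# real cone-response data.

# Relative sensitivity of each cone type at a few sample wavelengths (nm).
SENSITIVITY = {
    "long":   {450: 0.05, 550: 0.60, 600: 0.90, 650: 0.70},
    "medium": {450: 0.10, 550: 0.90, 600: 0.50, 650: 0.10},
    "short":  {450: 0.90, 550: 0.10, 600: 0.02, 650: 0.01},
}

def cone_responses(spectrum):
    """Collapse a light spectrum {wavelength: intensity} into three numbers,
    one per cone type: a weighted sum of intensity times sensitivity."""
    return {
        cone: sum(curve.get(wavelength, 0.0) * intensity
                  for wavelength, intensity in spectrum.items())
        for cone, curve in SENSITIVITY.items()
    }

# Two physically different spectra that happen to produce the same
# three-number summary would look like the same colour, because the
# brain never sees anything but the summary.
print(cone_responses({600: 0.5, 550: 0.5}))  # a red/green mixture
print(cone_responses({450: 1.0}))            # mostly short-wavelength light
```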

     Now, one of the side effects of oxaliplatin (one of the chemotherapy drugs I was on) is what they call peripheral neuropathy, or damage to the nerves leading to the remote (peripheral) parts of the body: fingers and toes. In short, my fingers and toes are uncomfortably numb. However, it seems that this neuropathy doesn't affect all receptors equally. While I've lost most of the pressure sensors in my fingertips, I can still receive signals from the pain and temperature receptors, which has had some interesting (if annoying) effects.
     First, the pressure receptors have by far the finest resolution, as Braille readers demonstrate. The pain and temperature receptors don't need to be as precise about location; it's enough that the brain knows a particular fingertip is risking damage without worrying about which square millimetre is at risk. This means I can't sense things as precisely with my fingertips as I could before the treatment. I have trouble buttoning my shirt, and I can't play the guitar without visually confirming I'm putting my fingers on the right strings and frets. It also means my fingers constantly feel kind of greasy to me, as if a fine layer of oil were preventing me from detecting the tiny variations in surface height that would normally show up as small local variations in the signals from the pressure sensors.
     But I can still sense textures to some extent, and I think this is due in part to the role of what we call the pain sensors. The pain sensors are triggered by extremes, stresses of pressure or temperature that threaten to damage tissues, but they seem to have a wide range of sensitivity, and I think that normally they must play a role in sensing things that are not strictly speaking "painful". For example, when feeling the edge of a knife, one doesn't apply enough pressure to actually suffer any damage, but the relatively extreme stresses on the tissues do trigger a mild pain signal which, combined with the pressure and temperature signals, forms a composite tactile image of a sharp edge. The composite signal doesn't register as painful at all, but the presence of a certain amount of the pain signal is part of what gives the feeling of sharpness.
     I am finding I sense the sharpness of surfaces, like the edge of a fingernail, as sharper than normal. I believe this is due to the "pain" signal making up a bigger portion of the composite input, since the pressure sensors are providing very little signal. It doesn't hurt, exactly; it just feels like edges I touch are sharper than they were before the treatment.
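     A crude way to picture what I think is going on, purely as a back-of-the-envelope model of my own (the numbers below are made up, not measured): if perceived sharpness depends on how large a share of the total tactile signal comes from the pain channel, then knocking out most of the pressure channel makes the very same edge register as sharper.

```python
# Back-of-the-envelope illustration of why attenuated pressure sensing
# might make edges feel sharper.  All numbers are invented, not measured.

def pain_share(pressure, pain, temperature):
    """Treat perceived 'sharpness' as the pain channel's share of the
    total composite tactile signal."""
    total = pressure + pain + temperature
    return pain / total if total else 0.0

# Touching the same knife edge before and after neuropathy: the edge
# (and so the mild pain signal) is unchanged, but the pressure channel
# is mostly knocked out.
before = pain_share(pressure=8.0, pain=1.0, temperature=1.0)  # 0.10
after  = pain_share(pressure=1.0, pain=1.0, temperature=1.0)  # ~0.33

print(f"pain share with normal fingertips: {before:.2f}")
print(f"pain share with neuropathy:        {after:.2f}")
```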
     In a way, it's sort of like the reaction to cold I had at the very first cycle. Touching cold things was startling; it's not that they felt colder than usual, but just more vividly cold. It was sort of like having just cleaned my glasses and then seeing the world much more clearly all of a sudden. (Subsequent rounds of chemo made the effect worse; instead of just feeling vividly cold, handling something right out of the fridge was like grabbing a live wire, a very nasty shock.) I am not sure how to interpret this experience, because the attenuated pressure sensitivity hadn't kicked in yet.

     Anyway, it's been fascinating to be able to take advantage of this experiment, much as I'd prefer to have learned this stuff second-hand.

Monday, 27 February 2012

A Tip on Parking in Snow

     We've had a sudden dumping of snow on our streets this past weekend, and while it isn't really a stupendous amount in historical terms, it is enough to get stuck in if you don't know how to drive in it. Sadly, I find a lot of people don't, including many SUV owners who claim to have bought their vehicles specifically to deal with Alberta winters. It's as if they think an SUV is magically immune to weather, without considering what exactly it is that makes an SUV better able to handle certain kinds of driving conditions. One winter a couple of years ago, when I had an hour-long highway commute, I counted 38 vehicles in the ditch, the disproportionate majority of which were SUVs, an alarming number of which had rolled over after leaving the road (owing in part to their high center of gravity).
     I'm not a fan of SUVs generally, at least not as private vehicles for ordinary use. It's not that I don't see their utility in certain contexts, but the same can be said of SCUBA gear. If you want to wear an air tank around town as a Cousteau-chic fashion statement, fine. You'll look silly. But if you wear it on a crowded elevator, you can expect to annoy people. Likewise oversize vehicles that take up more parking space than is warranted, with high suspensions that make even your low-beam headlights shine down directly into the eyes of drivers of smaller vehicles. (This latter problem is made worse by following too close.) But I'll grant that the higher ground clearance does make it possible in principle to go through deeper snow than I can safely manage in my sanely sized car.
     Last winter, an SUV got stuck in the lane behind our house, and when we went out to help dig it free, it became clear how the driver's confused thinking about snow caused the problem in the first place: he seemed to think that more power was the solution. To be fair, it's not necessarily a completely stupid idea; if you think of snow as creating more resistance to the movement of the vehicle, then greater force to overcome that resistance is a natural inference.
     Of course, snow doesn't just create more resistance; it also decreases traction, and this is usually the bigger problem. Indeed, ordinary small cars like mine have more than enough power to get through even fairly deep snow, provided the tires can firmly grip a solid surface underneath. So one of the tricks to driving in snow is to manage the surface under the tires. Be aware of the effect your wheels are having on the snow, and use it to your advantage.
     An example: Last night, I had to drive to pick up my son at the home of a classmate, which was in a residential neighbourhood where the snow had piled up, especially along the curb where I would normally have parked. I parked in the snowdrift anyway, and had no trouble extricating myself. Here's how I did it.
    First, I approached with just enough power to keep me moving forward into the snow drift, relying primarily on my vehicle's momentum to get me to the parking spot. I was careful to keep the wheels rolling through the snow, not spinning free but maintaining firm contact with the snow underneath. Like a rolling pin going over pie dough, the wheels compressed the snow into a firm track under the tires. I let the snow itself bring me to a halt, not using the brake at all, so that the tires never scraped the snow beneath. Spinning or sliding tires will polish the snow underneath them into slippery ice, so avoid that at all costs.
    Then, when it was time to leave, I knew that there was lots of deep snow ahead of me, and the fronts of my tires were right up against it. To drive straight out, as I would in summer, would require enough traction not just to accelerate the mass of the car but also to overcome the resistance of all that snow. However, the way I had rolled gently to a stop in the snowdrift meant that there was a flat track of compressed (but not polished slippery!) snow under and behind my wheels. So, I very gently backed up in that track until I had enough room to build up the momentum to roll through the deeper snow and out into the main thoroughfare. And that was that.

     So, to put it another way, the lesson is this: Do not treat the snow as an enemy to be overcome with force. Treat it gently with your tires, so that it becomes your ally.