Wednesday, 19 May 2021

Arguing with Authoritarians

    There are a few themes that come up again and again whenever I find myself arguing certain subjects with people on the internet and elsewhere, and I think I have finally figured out what ties them all together: authoritarianism. This came as a bit of a surprise to me, because many of these people present themselves as very much anti-authoritarian, rejecting the advice of those elitist academics and urging you to "do your own research". But what makes them authoritarian isn't that they necessarily respect any particular authority; it's that they tend to approach things with what I'm calling an authoritarian epistemology, one that's more concerned with power than with truth.

    To start, it seems to me that their model of knowledge and how it is acquired works something like this: One person knows something, and then tells it to another person, and now that other person knows it too. This isn't a bad theory as far as it goes, a decent first approximation for many simple interactions. Its chief failing is that it's incomplete, but I'll get to that later. For now, the point is that to the authoritarian, this is how it works: someone who knows (an authority) transmits knowledge to someone who doesn't.

    So why do I say the authoritarian epistemology is about power? Because in this model, the authority isn't just sharing knowledge; they're actually telling someone what to think. Now, that doesn't necessarily mean that the other person has to believe them, because of course people give orders they're not authorized to give all the time, and people disobey orders all the time too. All I mean here is that the authoritarian tends to regard most or even all interactions this way, as attempted exercises of power by either an authority or someone aspiring to authority.

    And this explains why the "do your own research" crowd so bitterly resents people whose job it is to tell us stuff: the news media and scientists/experts. Because they see the transmission of knowledge in terms of power hierarchies, they take it as an affront when someone tries to tell them something. After all, who are these "experts" to tell me what to think? They're not the boss of me!

    Hence the sneering contempt if you happen to mention something that comports with the Official Story: "Hah! You believe the mainstream media?" See, when you regard every utterance as an assertion of power, then believing what people tell you is a sign of weakness, and conversely, refusing to believe what you are told is strength. It's a year-round form of April Fools' Day skepticism, one that sees discourse as a childish game of tricking other people into believing falsehoods while trying to avoid believing anything anyone might tell you.

    When you approach discourse this way, any argument becomes a personal attack, an attempt to exploit your perceived weakness. One may peacefully "agree to disagree", but by golly if someone tries to force me to change my mind about something, I need to defend myself, right? So it's almost inevitable that someone with this mindset will come to identify with their opinions.

    Which is another aspect of this authoritarianism that has troubled me for some time: the tendency to think of opinions as a matter of identity rather than the result of a deliberative process. Ideally, describing yourself as a liberal or a conservative should just be shorthand for "my opinions on issues tend to be more [liberal/conservative]", but very often I find people saying things like, "I'm a conservative, so I'm against abortion", as if being conservative is the cause of the opinion rather than a description of it. And just as often, I find that regardless of whether they consciously identify themselves with a group or movement, the authoritarians I argue with will try to pigeonhole me as belonging to whatever group they happen to identify as their opposition.

    That's psychological projection, obviously, and of course we all do it to some extent. Indeed, it was projection of my own presumptions about what knowledge is and the purpose of debate that kept me so baffled about the authoritarian mindset for so long. I assumed they were arguing to provide evidence and logic that would lead me to adopt their view as my own, or to gain sufficient understanding of my own position so as to make a better informed decision as to whether to reject or adopt it. And so I was regularly astonished at the sorts of arguments they would make and what to me seemed like brazen hypocrisy.

    For example, I was recently arguing with someone who recited some mortality numbers she claimed were from Statistics Canada, intended to show that the pandemic wasn't real because there was no significant change over previous years. Since she hadn't provided a link or reference to the original source, I went and googled for some appropriate keywords, and found that Stats Can had in fact published a report on excess deaths related to the pandemic. Yet when I cited this source back, with a link, my opponent dismissed it as government propaganda that couldn't be trusted. Note that this was the very same government agency she had cited to prove her position, which she glibly dropped as not credible the instant it went against her. What could possibly be more hypocritical?

    But that hypocrisy disappears when you understand the authoritarian mindset, which values power and doesn't really consider "truth" except as the set of beliefs that distinguishes us (the good guys) from them, and which considers debate to be a kind of assault, a struggle to maintain one's own beliefs/identity against attacks while trying to weaken the resolve/identity of one's attackers. It's not about truth for them, but strength and determination, and all the little rhetorical devices we use in argumentation are just weapons to be used against the enemy, and to be deflected or dodged when they are used against us. There is, after all, nothing in the least bit hypocritical about trying to stab you with my epee while trying to parry your attempts to stab me. My opponent wasn't really making the claim that Stats Can is or isn't a reliable authority; she was simply trying to protect herself from having her Deeply Held Belief weakened by attempting to demoralize and discredit any belief/person (remember they tend to blur together beliefs and identities) that might appear to threaten it.

    I think this is why they use so many basic logical fallacies in argument, and why it doesn't bother them in the least to do so. If you call them on it, pointing out "That's an ad hominem fallacy", they don't pay any real attention to the substance of why ad hominem is a form of non sequitur: the conclusion ("you're wrong") does not follow from the premise ("you're stupid"). Rather, they just pick up the term "ad hominem", dimly aware that it has something to do with calling someone stupid, and use it as a weapon against you any time they perceive you to be calling them stupid. And there are countless other examples of terms with legitimate meanings that get co-opted this way, stripped of nuance and wielded as cudgels, not to prove any substantive point but to wear down and humiliate those perceived as attackers.


    I'm not sure how best to deal with the authoritarian mindset. They aren't arguing to convince you they're right; they're just arguing to prevent you from convincing them they're wrong, and so all they need to do is spread doubt. (Same strategy as the tobacco companies "questioning" the link between cigarettes and cancer, or the oil industry "questioning" global warming, or creationists' "teaching the controversy", etc.) The sad irony in all this is that, to the non-authoritarian, doubt is already in plentiful supply, and it isn't a weakness but a core assumption about everything. You can't win an argument with me by making me doubt my position if you don't do something to reduce the doubts I have about your position.

Thursday, 22 April 2021

Flat Earth Economics

    Say you're trying to navigate around your neighbourhood and you want to keep the total distance travelled to minimum. Let's pretend you're not limited to roads, and can always travel in a straight line to any particular destination. You go 4 km to your first destination, and then turn 90 degrees to your left to go 3 km to the next destination, and then you can go home. By the Pythagorean Theorem, the straight-line distance to get home is the square root of (3 squared + 4 squared), or 5 km.
    That's the correct answer, or close enough for any practical purpose. In reality the world is not a flat surface, but its curvature is so slight that for distances of only a few kilometres we can just treat it as flat without any significant error in calculation. Or, to put it another way, the 40,000 km circumference of the Earth is so big compared to 5 km that we might as well treat it as infinite, which is what it would be if the Earth were in fact a truly flat surface.
    Of course, the math breaks down when you start dealing with bigger portions of the whole planet. If you go 10,000 km on a flat surface, turn right and go another 10,000 km, you'll be about 14,000 km from your starting point, but on the Earth you'll still only be 10,000 km away. And if you travel 20,000 km from your starting point, the next step you take, in any direction, will take you closer to your starting point. The farthest you can possibly be from any other point on the planet is 20,000 km. (I am assuming all distances are measured along the surface, rather than taking a shortcut through the mantle...)
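    The arithmetic above is easy to check numerically. Here's a minimal Python sketch (the function names are my own) comparing the two answers: flat-plane Pythagoras versus the spherical law of cosines for a right-angled spherical triangle, cos(c/R) = cos(a/R) × cos(b/R), on a sphere of 40,000 km circumference:

```python
import math

CIRCUMFERENCE = 40_000.0           # km, as in the examples above
R = CIRCUMFERENCE / (2 * math.pi)  # the sphere's radius, ~6366 km

def flat_return_distance(a, b):
    """Straight-line trip home on a flat plane (Pythagoras),
    after legs of a and b km joined by a right-angle turn."""
    return math.hypot(a, b)

def spherical_return_distance(a, b):
    """Great-circle trip home on the sphere, same two legs.
    Spherical law of cosines with a right angle between the legs:
    cos(c/R) = cos(a/R) * cos(b/R)."""
    return R * math.acos(math.cos(a / R) * math.cos(b / R))

# Neighbourhood scale: both answers are effectively 5 km.
print(flat_return_distance(4, 3))       # 5.0
print(spherical_return_distance(4, 3))  # ~5.0 (differs by a hair)

# Planetary scale: the flat answer is off by roughly 40%.
print(flat_return_distance(10_000, 10_000))       # ~14142
print(spherical_return_distance(10_000, 10_000))  # ~10000
```

    At 3 and 4 km the two results agree to well under a millimetre; at 10,000 km legs, treating the planet as flat overshoots by more than 4,000 km.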

    Scale makes a difference. It's fine to approximate small portions of the planet's surface as flat, but it's a mistake to apply those assumptions to the whole. And the same principle applies to economies.
    When I sit down to balance my household budget, the total number of dollars I have anything to do with is a negligible fraction of all the Canadian dollars circulating around out there. For my purposes, I might as well treat the dollars that aren't mine as infinite in quantity. If I spend $100, it'll take me $100 of income to get back where I started. Simple, straight, flat-plane geometry. 
    And that's more or less true for most businesses and even local municipal governments or individual government agencies. If you spend X dollars, you generally need at least X dollars in revenue to cover it. 
    But that's when the dollars you have anything to do with are a negligible fraction of the sea of dollars sloshing around out there. At the level of a national government with a sovereign currency, that's no longer the case. The federal government of Canada has something to do with literally every Canadian dollar in circulation anywhere, because every Canadian dollar is a creation of the Bank of Canada. To continue with our analogy, the Canadian government operates at a scale that encompasses the entire globe of the Canadian economy. 

    There's a lot more to this than I've described here, and I don't mean to suggest that national governments are completely immune to the sorts of considerations that apply to private individuals and corporations. Nor am I making specific claims here about the mathematics of government debt and deficits and how much spending and taxation is appropriate. All I'm saying at this point, in this post, is that the common rhetoric about government spending is based on an inappropriate comparison. Yes, it's completely reasonable to worry about revenue and expenditures at the scale of the individual household or business, just as it's completely reasonable to use Pythagorean calculations while navigating around your neighbourhood. It's just that the math is different when you get to global scales; you have to take into account curvature if you don't want to get lost.

Thursday, 1 April 2021

Rhetorical Heavy Lifting

      I get into a fair number of arguments, and I daresay I'm reasonably good at it. I tend to "win" more often than I lose, if you define it in terms of persuading people that you're more likely to be right. (I actually think it's counterproductive to think of argument that way, as I elaborate on here.) I prefer to think of it as a process leading to greater understanding of the issue by all parties, a discussion rather than a debate, but there is usually some adversarial element, and I can usually hold my own pretty well when that's the case.

    So one thing that sometimes happens when I appear to be "winning" a debate is this: my opponent expresses some frustration that I'm only winning because I happen to be skilled at rhetoric, and if only they were able to express their ideas more clearly, they'd be able to convince me. And that's certainly a possibility; there are many subtle concepts that are very difficult to express clearly but which turn out to be true (or at least, to have great explanatory/predictive power). But often difficulty in communicating an idea isn't so much due to a lack of rhetorical ability as it is due to the idea itself just not being as well-formed and coherent as it feels. That is, the feeling of being right or of knowing something to be so isn't identical with actually being right. (Like when I dreamed I came up with a mathematical proof of the immortality of the soul.) Goshdarnit, I know I'm right, but I just can't put it into words!

    A metaphor I've found useful for the way debates like that go is that it's like a rock-lifting contest. I choose a rock and you choose a rock, and the winner is the one who can lift their rock the highest. Now, you might think that what you need to win such a contest is to be very strong, but in fact, most often the winner is the person who is able to choose the lightest rock. 

    Now, skill at rhetoric is like being very strong in that it allows you to lift bigger argumentative rocks higher than you would otherwise be able to lift them. But the bigger impact is that it makes you a better judge of the weight of rocks. So much of the time, when someone tells me they'd be winning this argument if only they were better at rhetoric, I want to say hey, don't feel bad. I couldn't lift that rock, either.

    And to do that, I have to make a good faith effort to try to lift their rock.


Monday, 29 March 2021

Not All Humans

     When I was very young, my feelings were hurt when the cute little squirrel ran away from me, when all I wanted to do was pat it. Why was it scared of me? Why did it think I was mean? 

    In time, I understood that in the eyes of a squirrel, I was just a big non-squirrel animal, in a world where squirrels are often eaten by big non-squirrel animals. And so I wished I could talk to animals and they could understand me, so I could tell them that I was a nice human and I just wanted to be their friend. Not all humans are mean!

    But then I learned about lying, and understood that even if I could tell squirrels not to be afraid of me, that's exactly the sort of thing a human would tell a squirrel to make it easier to catch and eat. And so a smart squirrel shouldn't believe me when I said I wasn't going to eat them. And that, too, hurt my feelings, to understand not only that there were bad people who told lies, but that because there were bad people who told lies, I should not expect people (or squirrels) to trust me by default, however earnestly I might believe myself to be worthy of trust.

    I also came to understand that, if I genuinely cared about the well-being of squirrels, I should not want them to trust humans (including me) too easily. It is in their best interest to be wary of potential predators, even if the majority of us larger animals have no interest whatsoever in eating a squirrel. And if I still really want to befriend a squirrel, at the very least I should expect to have to earn their trust.

    This applies to my fellow human beings as well, though obviously with somewhat different parameters, since humans are a social species and live in communities where a certain amount of default trust is essential, but where there is always potential for betrayal. And so there are appropriate boundaries. It's no big deal if a stranger at a bus stop asks you what time it is, but unsettling if even a fairly close friend asks to look through your smartphone. The very attempt to take shortcuts across these boundaries is itself deeply creepy; there aren't many bigger, redder flags than "What's the matter, don't you trust me?"

    And protesting #notallmen! whenever someone talks about misogyny or #metoo or anything of the sort is exactly that kind of red flag. Just as you shouldn't take it personally that a squirrel doesn't trust humans, you shouldn't take it as a personal affront that you're not immediately assumed to be one of the good ones, and you're not entitled to the benefit of the doubt. No one is, and it's childish to expect otherwise.

Saturday, 9 January 2021

What if it wasn't Trumpists?

      Some defenders of Donald Trump have been trying to claim that the real troublemakers at the Capitol riot on January 6 of this year might have been Antifa infiltrators, perhaps trying to make Trump look bad. This is a pretty preposterous idea, but let’s take it at face value for the sake of argument. In fact, let’s go all the way and presume that all of the rioters, every single one of them, were there specifically for the purpose of discrediting Trump and his movement, that all the real Trumpists were perfectly nonviolent and understood that they should under no circumstances give even the appearance of a threat of violence, and so they scrupulously avoided any kind of unlawfulness. Fine. Let’s say that’s the case, and that 100% of the mayhem was deliberately engineered by a coalition of Antifa, BLM, Democrats and never-Trumpers, just to embarrass Trump.

     That doesn’t exonerate him, and indeed maybe makes it even worse. Why? Because Trump's own actions made it way, way too easy for the rioters to fool us all into believing they were his supporters. He egged on a crowd with his speech, and whether or not he actually meant for them to force their way into the Capitol and break stuff, anyone in that crowd could be forgiven for believing that’s what he did mean. It was a completely plausible interpretation, and if it was a misinterpretation, it was a completely predictable one. In failing to predict it, Trump essentially gave his enemies a blank cheque, which they predictably would have cashed.


     Now, I don’t actually believe that any of the rioters were secretly Antifa. The simplest, most parsimonious explanation is that they were exactly what they appeared to be: a mob of Trump dupes and deplorables. There might also have been a few foreign intelligence agents seizing the opportunity to gain access to the Capitol and plant listening devices or steal laptops, but again, Trump himself bears the blame for creating that opportunity. It never could have happened but for his ill-conceived encouragement. 


     So it really isn’t a defence of Trump to allege that the riot was carried out by his enemies. It’s true, though, because he is and always has been his own worst enemy.

Saturday, 12 December 2020

"Do your own research"

     "Do your own research," they say when you ask for evidence or support for whatever conspiracy theory they're advocating. "I'm not going to do your homework for you," is another one they'll sometimes throw in. 

     This is an evasion tactic, of course. If they had good evidence and understood it well enough to be so confident in their conclusions, they'd be able to explain it to you. But it's not just about evasion; it's also about posturing, trying to imply that they're smarter and more informed than you and their time is too important to be wasted on imparting their vast knowledge to you. 

     As a general rule, if you make a claim then the onus is on you to provide support for the claim, so telling someone to "do their own research" when you tell them the latest conspiracy theory is just lazy bad form. But the reason this ploy resonates is that "do your own research" is generally good advice. The trouble is that the "research" they've done usually amounts to following whatever rabbit hole the YouTube algorithm generates for them, and those rabbit holes often lead to echo chambers.

     So what should it actually mean to do your own research? Broadly speaking, it means to gather evidence, evaluate its credibility and weighting, and attempt to synthesize it all into a coherent theory that allows you to make reliable inferences. The precise methods you use to gather primary evidence may vary from field to field, but in practice most of the evidence you gather will be the reports or testimony or work of other people, and so a good part of your task will be deciding how reliable these sources are.

     And here's where the "do your own research" crowd go horribly wrong, because the source whose credibility it's most important to evaluate is yourself. When they exhort you to do your own research, there's often the implication that it's lazy or stupid to trust the research of other people, as if your own research is inherently better or more reliable. But your opinions and beliefs are just as likely to be wrong as anyone else's, and you should not treat them as the answer key that everyone else's answer should match to be deemed reliable. 

     Here's an example I've used before. A doctor recommends surgery, and describes the procedure to you. Upon realizing that she's proposing cutting into you with a knife, you reject her expertise, because even you as a non-expert know that cutting people is bad. This is the wrong way to judge the doctor's expertise, because while it's true that any expert should know that cutting people is bad, the true expert may also know other things (such as how to stitch people back up, and how to cut them so as to make stitching them back up easier) that make it not so bad.

     Instead, the better approach is to say, "Wait, you're going to cut me with a knife? Won't that hurt? Won't that put me at risk of bleeding to death?" and listen -- listen -- to understand the answer. Obviously if the doctor is surprised to learn that cutting people is generally bad, they're probably not a real doctor, but a real doctor is prepared to answer this sort of question honestly and reliably. You don't necessarily need to understand every detail of what they're telling you, but if you have a good basic lay-person's grasp of the vocabulary and subject matter, you should be able to tell when they're just making up stuff and bluffing. 

     The trouble is, not everyone has a good basic lay-person's grasp of the vocabulary and subject matter, and even if you do, interrogating an expert to decide if they really know what they're talking about takes a lot of effort and time you don't always have. This is why we have shortcuts: credentials and certifications. If someone has a PhD from an accredited university in the subject matter, or a license to practice the profession in question, it's reasonable to assume they are in fact a qualified expert. It's not an absolute guarantee, of course, but in general it's reasonable to defer to their judgment in their particular field, and checking out their credentials is usually enough to qualify as having "done your own research" before you adopt the results of their research.

     It's important to be clear here. You do have to trust your own research in one sense: whatever conclusion you adopt, whether it originates with you or some other source, it is inescapably the result of your decision. You're stuck with it; there's no "I was just following orders" absolution. You have a choice what conclusion to adopt, but you cannot choose not to choose; even suspending judgment is a choice (and often the right one). And more often than not, the wisest judgment is to adopt the conclusion of the experts.


Saturday, 5 December 2020

About that vaccine video...

      A friend just sent me a video (which I will not share here) asking me for my synopsis of its content, and it occurs to me that many others may receive this video and be seeking answers to the same question, so I thought I'd do a post about it. The video is from someone examining the package of the new AstraZeneca vaccine, calling our attention to certain words printed on the packaging that they think we should be very alarmed about.

     Now, the usual caveat should apply here: I AM NOT A TRAINED SCIENTIST. However, I've learned a lot of the very very basics from a lifetime of being a nerd very interested in science, and I have a strong background in general critical thinking and making sense of stuff, which I suspect is why I have friends who come to me to ask about this kind of thing. As well, my son is currently studying cell biology, and it's been fascinating talking with him about how much more staggeringly complex and beautiful and interesting these things are than I ever imagined (and I've always imagined them to be pretty darned cool). What I'm offering here is not an expert explanation, because I'm not an expert. It is simply an educated lay-person's reading of the matter, intended as an alternative to the panicky less-educated lay-person's account given in the video.

     I'm not sharing the video here for a couple of reasons, but mainly because I don't think it needs more bandwidth, and also because I suspect there are multiple similar videos and forwarded emails making the same claims. I will attempt to charitably present the concerns it raises, and then explain why they aren't nearly as problematic as it seems from the video.

     The first term the video attends to on the vaccine packaging is "ChAdOx1-S (recombinant)". The narrator seems to think "ChAdOx1-S" is just a meaningless serial-number-style name for the vaccine itself, and focuses on the word "recombinant" as the frightening bit, which is in a way kind of amusing for reasons I'll get to in a moment. "Recombinant" means more or less what it sounds like: re-combining DNA from two or more different organisms. This is actually not a new process: it happens every time a baby is conceived or a flower is pollinated. Even asexually reproducing bacteria often absorb bits of genetic material from various sources, sometimes incorporating it into their own genome. And naturally occurring retroviruses splice their own code into the genome of the cells they infect.

     What is new is our ability to do this artificially in a test tube, which we've only been able to do for a few decades. And it's tremendously useful, first for research and later as we get better at it for practical and therapeutic applications. If you don't know exactly what a strip of DNA does, you can sometimes figure it out by snipping it out of a cell to see what happens when it's gone, and then splice it into a cell that normally doesn't have it to compare the results. And then when you understand things better, you can do stuff like take the gene that produces insulin and splice it into some E. coli to produce this important life-saving hormone in industrial quantities without having to harvest it from animals. 

     So the ChAdOx1-S (recombinant) vaccine is, presumably, a vaccine made by recombining a sample of genetic material from the pandemic virus with some other genetic material that made up the precursor to the ChAdOx1-S vaccine. In other words, the vaccine is the result of a recombinant process, not something that will cause a recombination in your own cells. But if you thought it was the latter, then the original vaccine is even scarier, because it turns out that "ChAdOx1-S" actually stands for Chimpanzee Adenovirus Oxford 1, evoking a terrifying Island of Dr. Moreau scenario. So I find it hilarious that the person in the video missed this detail. But of course, it's not actual chimpanzee DNA; it refers to an adenovirus that infects chimpanzees. And humans and chimps being very very similar, viruses that infect chimps can often infect humans and vice versa.

     A vaccine is often just a de-activated version of a virus, something that resembles the actual virus enough that the immune system learns to recognize it as something to be destroyed, but isn't actually itself infectious. Think of a wanted poster: it has an image of the face of the bad guy, so you know what he looks like, but the wanted poster can't rob your stagecoach. But a wanted poster is more than just a photo: it also contains information that alerts you to why you should beware of the guy in the picture, whom you should call if you see him, maybe a reward or other motivation for doing so, and so on. 

     So it's not enough to just present some molecule to the immune system. You have to present it in a way that the immune system will recognize it as a pathogen and start producing antibodies against it. It's like you have to include all the "WANTED" text from the poster, except that with molecules we don't know how to generate all the relevant text from scratch. So a recombinant vaccine is sort of like taking a successful wanted poster you already have for some other virus, and cutting and pasting a photo of the new virus into it.

     That's what I think they've done with ChAdOx1-S (recombinant). They've taken a vaccine that seems to work for the chimp adenovirus, and spliced in some part to make it work for the new pandemic virus.


     The next bit the video gets very alarmed about is something called "MRC-5", which turns out to be a human cell line derived from fetal lung tissue. It sounds like the person in the video is deeply disturbed that aborted human fetal tissue is an ingredient of the vaccine you'd be injected with. It's not. Human cell cultures are a kind of lab rat: they test the vaccine on those cultures, to see how it affects human cells. They do not use it as an ingredient in actually making the vaccine itself.

     Now, you might have moral reservations about using a product that was tested on aborted fetal cells, but there's an important detail I need to point out here. Nobody is getting pregnant to produce fetuses for the purpose of harvesting tissue. These are fetuses who were going to be aborted for other reasons, possibly even naturally as a miscarriage. Using fetal tissue for medical research is no different, morally, from using the tissues of an accident victim. You can regard the abortion itself as an appalling tragedy, and so it might well be, but so is the death of any other organ donor; that doesn't make deriving some good out of their tragic sacrifice inherently immoral, especially if we are appropriately respectful and appreciative.

     (Also, it's interesting to note that the particular fetus MRC-5 is descended from died in 1966. Fetal cells are particularly useful for culturing because they're so early in the development process, and have so much more growth ahead of them. It's usually easier to make them immortal than it is to do so with cells of a mature adult. That said, the first human cell line to be immortalized came from Henrietta Lacks, a cancer patient who died in 1951. Cancer's weird that way.)


     Finally, the video emphasizes a passage in some of the research documents about the vaccine calling for AI resources to go through the high volume of expected ADR ("Adverse Drug Reaction") reports and make sure every detail is recorded and analyzed. Yeah, at first glance, this sounds scary, like they expect the vaccine to be horribly dangerous and hurt a whole lot of people. But here I want to repeat a theme I brought up here: if they know the vaccine is going to have a lot of ADRs, and they still intend to go ahead with it, what are we missing? Either we should assume they're diabolically evil or stupid, or maybe, just maybe, having a lot of ADR reports isn't quite the terror it seems.

     We already know that developing a vaccine is going to take (has taken) a long time, and a very large part of that is safety testing. They know that there are always risks with developing any new therapy. And they know they're going to get a lot of reports of adverse reactions. A report of an adverse reaction, however, is just that: a report. Many, probably most, of those reports will turn out to be something else. Someone gets a shot, and happens to get totally unrelated food poisoning the next day. Someone else has a heart attack. Someone else doesn't realize yet she's pregnant and reports some of her symptoms as a potential adverse reaction. When you're testing a drug, you want all of this data, whether or not it's actually related to the drug, so you can look through it all for patterns to figure out what, if anything, really is due to the drug and what isn't. And finding those patterns is an absolutely monumental task, which is why an AI system would be so incredibly useful in sifting through all the data.


     I am not saying that there is nothing to be wary of with the AstraZeneca vaccine, or indeed any of the new vaccines they're bringing out. There's a lot of pressure to get these vaccines into use very quickly, and it's not unreasonable to fear that corners might have been cut, or that there just hasn't been enough time for unknown side effects to become apparent. Of course there are risks; the real question is, as always, are the risks of not being vaccinated greater or less than the risks of being vaccinated?

     What I am saying is that many of the fears people have of this new vaccine are unfounded and based on an extremely incomplete (even more incomplete than mine) understanding of what these words mean. When terrified people urge you to "do your own research", understand that research involves more than just googling; you need to know how to interpret the words you're looking up, and how they are actually used by the experts doing the work. 

     A little knowledge is a dangerous thing, especially when you think it's a lot.