tag:blogger.com,1999:blog-18835519961266683652024-03-05T22:59:29.365-07:00A Blog of TomMaybe we've been thinking about this the wrong way. An assortment of idle and not-so-idle thoughts on law, philosophy, religion, science and whatever else comes up.Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.comBlogger245125tag:blogger.com,1999:blog-1883551996126668365.post-37538126441447031082023-09-09T12:03:00.002-06:002024-01-22T23:22:41.652-07:00The Arrogance of Common Sense<p> <span> I know I haven't posted here in quite a while. It often happens that I have an idea for something I might want to write here, and then I realize I've already said what I wanted to say about the topic. There's stuff here I've forgotten I wrote, though of course I remember it once I look at it again.</span></p><p> A few days ago, a friend asked me about something he thought he'd seen here on my blog, but couldn't find it, and I realized it's something I had not, in fact, written about, not exactly. One of those frequently forwarded emails, supposedly an "obituary for common sense", the kind of thing I probably <i>would</i> have torn apart here, but in fact never did. So here I am, having thought about it a few days, and at last posting again on this long neglected blog.<br /></p><p><span> I'll not refer here much to the "obituary" or provide a link to it, as it's not especially remarkable in itself. What I want to talk about is not the specific details of that particular document, but rather the attitude it represents, a misplaced elevation of "common sense" above actual expertise.</span><br /></p><p><span><span> Don't get me wrong; I have nothing against common sense. Last week I was having a conversation with someone about the reason why we have jury trials, with juries made up of random citizens rather than recognized experts, and "common sense" is a big part of that. 
The legitimacy of the law's claim to our obedience depends on us, as common ordinary citizens, recognizing the judgments of the court as coherent and intelligible, and the best way to do that is to have a random selection of citizens actually make the judgment. </span><br /></span></p><p><span> But it's important to point out that experts still play a role in the process. The judge applying and enforcing the rules of court, and the lawyers for each side in the dispute, do need to have some actual expertise in how <i>law</i> is done, and expert witnesses may be needed to help the court to understand the implications of complex or subtle bits of evidence. In all cases, though, the matter must be made intelligible to the ordinary "reasonable person", who is presumed to possess the faculty of common sense.</span></p><p> So common sense is great. I'm all for common sense. But common sense is a <i>faculty</i>, not a set of <i>facts</i>. My complaint here is not about people <i>using</i> common sense, but about people mistaking their preconceptions and first impressions of a subject <i>for</i> common sense. "Gosh, I look out at the horizon and the world kinda seems flat, so common sense said the world is flat."</p><p> No it doesn't. Common sense says that you can be mistaken about stuff. Common sense says that experts who have studied a subject their entire lives probably know more about it than people who haven't. Experts <i>can</i> be wrong, of course. But common sense says you shouldn't just assume they're wrong because your conclusions are different, especially if you <i>haven't</i> spent as much time studying the matter as they have.<br /></p><p> "Common sense" is just used so often as a political rallying cry to reject the advice of experts as "elitist", and the tragedy of it is that it really does not need to be so, as the example of the jury trial demonstrates. 
At a jury trial, when there is some issue that may be difficult for the average lay-person to understand, you have legal experts examining and cross-examining an expert witness in order to make the relevant aspects of the issue clear enough for the jury to reach an informed verdict. You never have to just trust the expert testimony; the process gives you ample opportunity to decide for yourself if this guy sounds like he knows what he's talking about, especially when he's being cross-examined by the lawyer for the other side doing their best to discredit damaging testimony. <br /></p><p><span> And we can, to a large extent, do that ourselves. I have never known an academic or expert who was not more than willing to talk at length about their research, and explain how they had come to the conclusions they'd reached, and include all sorts of qualifiers and caveats about the uncertainties, and it doesn't take a huge amount of education to be able to ask intelligent questions and make a good faith effort to understand the answers. But it does take honesty and humility, to acknowledge when you don't understand an explanation and ask for clarification. </span><br /></p><p><span><span> Which is probably the point. Humble honesty is very hard for some people, and doesn't sell well to certain voting demographics. It's easier to get behind someone who confidently proclaims he has all the answers than it is to back someone who openly admits to uncertainty. And "common sense" as a phrase tends to suggest a comfortable certainty, so it's no surprise that populist politicians so frequently claim their policies are based on common sense when they reject the consensus of actually knowledgeable experts. 
</span></span></p><p><span><span> Ultimately, and ironically, this appeal to "common sense" is fundamentally <a href="https://tcantine.blogspot.com/2021/05/arguing-with-authoritarians.html">authoritarian</a>, in that declaring something to be common sense is a way of shutting down any kind of argument about it. It doesn't matter that someone might be able to produce terabytes of data and analysis that, once understood, conclusively prove beyond any doubt that their conclusions, however counterintuitive, are correct, if you can just dismiss it as contrary to common sense. <br /></span></span></p><p><span><span><span> And that strikes me as very dangerous.</span><br /></span></span></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com0tag:blogger.com,1999:blog-1883551996126668365.post-35590499307137630062022-04-21T18:50:00.000-06:002022-04-21T18:50:19.487-06:00For Your Consideration Some questions were raised about a parenthetical remark in a<a href="https://tcantine.blogspot.com/2021/10/making-economics-too-simple.html?showComment=1646058159660#c8319260443224757965"> previous post</a>, about preferring a minimum income to a minimum wage, that I'd like to address in greater detail, which may take a couple of posts. So in this one, I'd like to address the issue raised by the first comment, paraphrased thusly: "What is the value of money given 'free of charge'?"<div><br /></div><div> This is a common objection to minimum income plans, that they amount to giving people something for nothing. The most obvious counter to that is: So what? What's wrong with giving people something they didn't earn? People inherit wealth they didn't earn. People collect rent and interest and other income just from owning stuff, without actually doing anything beyond merely being the legally recognized owner of something, whether or not they did anything to deserve it. 
In our free market capitalist system, we don't object to people making money through the ownership of capital, including capital they merely inherited. So what exactly is the harm in giving people money they didn't do anything to deserve?</div><div><br /></div><div> But I want to argue that, in fact, we <i>do</i> deserve it, all of us, because of our collective ownership of the enterprise we call the state. And what is it that we have paid for our ownership, or as we say in contract law, what valuable consideration did we provide? I argued in an <a href="https://tcantine.blogspot.com/2014/07/the-liberty-dividend-why-we-ought-to.html">earlier post</a> about the liberty dividend that we invest our liberty, but that's a rather abstract and intangible thing, even if it does count as good consideration at law. When we agree to do or not to do something that we might otherwise have done, such as in a non-disclosure agreement when we agree not to exercise our freedom of speech in a way that reveals someone's secrets, that is a real and valuable consideration capable of supporting contractual obligations on the other side. </div><div><br /></div><div><p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-kerning: none;"> So what I want to argue here is that what we give the state in exchange for our share includes something much more tangible: everything that anyone else owns, that it is possible <i>to</i> own. I mean this literally, because the entire concept of ownership is a creature of the state, of society. What prevents you from using <i>"my" </i>property? You may say it's your morality, and that's all well and good, but people's moralities differ, as do their ideas of who should own what, and it is the courts, applying the laws of the state, which will decide that. 
In other words, I have surrendered my freedom to use (or attempt to use, and probably end up fighting over) everything in the world that is deemed by the state to belong to someone other than me. Now, the state might deem that there are things which belong to <i>me</i>, which means that you, too, have surrendered to the state the freedom to use my stuff (again, a subset of all of the stuff that belongs to people other than you).</span></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px; min-height: 19px;"><span style="font-kerning: none;"></span><br /></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-kerning: none;"> Therefore, since it is the state which exercises ultimate authority over who owns what property, we have each of us invested all the property in the state. We don’t usually speak of it this way, but this is exactly what sovereignty is: the exclusive right to make and enforce rules or policies about who can do what with what. In free countries like Canada, the state generally delegates much of the decision-making to individuals by defining in its laws what we think of as private property rights, but these are ultimately conventions, subject to legislative or judicial amendment, and thus in a very real sense we <i>have </i>endowed the state with all the property there is.</span></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px; min-height: 19px;"><span style="font-kerning: none;"></span><br /></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-kerning: none;"> I claim, then, that for this reason every citizen of the state should be considered a shareholder in it. When, as in Canada, the Crown retains the rights to natural resources like forests, fisheries and minerals, it should manage these assets to the benefit of its shareholders (citizens), whom it own a fiduciary duty closely analogous to that owed to a corporation by its board of directors. The shareholders are entitled to a voice in management, which usually involved electing delegates to represent them and advance the policies they favour.</span></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px; min-height: 19px;"><span style="font-kerning: none;"></span><br /></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-kerning: none;"> I do not intend to argue here that issuing regular dividends from the proceeds of the corporation is a good idea. I’ll probably address that in a later post. All I mean to establish here is that doing so would not at all be giving people “money for nothing”. We obtain our shares in the state not “for nothing” but by virtue of being bound by its laws, which is good and valuable consideration. And owning a share in the state means that any dividends that state may issue (such as a universal basic income, for example) are not charity, but a duly earned benefit.</span></p></div><div><span style="font-kerning: none;"><br /></span></div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com11tag:blogger.com,1999:blog-1883551996126668365.post-51538448373893859162022-04-04T12:17:00.000-06:002022-04-04T12:17:12.683-06:00False Flags<p> Every once in a while I see a meme shared that purports to be a letter from Sir Wilfrid Laurier, who was Canada's 7th Prime Minister, urging that newcomers to Canada from all nations are welcome but they must become <i>Canadian,</i> dagnabbit! and learn to speak English or French and leave behind all them thar furriner customs that clash with our beer, hockey and Timbits. (It didn't actually reference these thing, but it was very much about assimilating to the Canadian way of life, whatever that is.) One version of this meme even came with a grainy black and white photo attached, showing a man apparently giving a speech in what looked like a campaign rail stop. </p><p><span> </span>"Hmmm," I thought upon looking at the picture, "That looks nothing like Wilfrid Laurier. In fact, I think it looks like Teddy Roosevelt." 
So I googled a few key phrases from the letter, and sure enough, Laurier never wrote it: someone had taken the real letter, which <i>had </i>been written by Roosevelt, and just replaced all the references to the U.S. with Canadianized equivalents ("English and French" in place of "English", etc.). Apparently it had been circulating enough to warrant a French news service debunking it <a href="https://factcheck.afp.com/assimilation-immigrants-theodore-roosevelt-quote-falsely-attributed-former-canadian-pm-laurier">here</a>.<br /></p><p> It was at once both infuriating and hilarious. Infuriating because of the sheer dishonesty involved. There's no way this was an innocent mistake; whoever took that Roosevelt letter and altered it to present it as coming from Laurier knew perfectly well that they were fabricating a deliberate lie. What could possibly possess them to think this was an act of Canadian patriotism? If it was meant as a joke, though, it was hilarious, taking an expression of distinctly American nationalism and passing it off as Canadian patriotism by slapping on a few superficial (and <i>false</i>) Canadianisms. </p><p><span> I feel the same kind of insult whenever I see a vehicle drive by with a big Canadian flag waving, often supplemented with "F*CK TRUDEAU" or similar slogans. </span>It's not that I think a true Canadian shouldn't criticize the Prime Minister. Rather, it's that I feel like my flag is being co-opted to stand for something that's not really Canadian, as if I won't notice a cheap "LET'S GO BRANDON" knock-off.</p><p><span> </span><br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com4tag:blogger.com,1999:blog-1883551996126668365.post-20625229855517142162022-03-16T23:03:00.004-06:002022-03-16T23:03:59.365-06:00The Nameless Fallacy<p> I haven't seen this particular fallacy described elsewhere. 
It happens often enough that I thought it ought to have a name, and I considered calling it the fallacy fallacy, but that refers to a <a href="https://en.wikipedia.org/wiki/Argument_from_fallacy">different fallacy</a>, and it seems to me it's better to leave it unnamed, in light of its nature.</p><p><span> So what is it? It's the belief that calling out the name of a fallacy is an argument. "That's a straw man" or "That's a No True Scotsman" are things you'll hear all the time in debates, but it's almost never a good idea to use the name of a fallacy in an argument, for several reasons.</span><br /></p><p> First, it's <i>very</i> often misused. Knowing the names of fallacies is no guarantee that you actually know what makes them invalid. Many people seem to think that <i>ad hominem</i> just refers to name-calling, for example, or will call out "No True Scotsman" if you try to define a term in a way they don't like. <br /></p><p> Second, it's lazy. Even if you <i>do</i> happen to properly understand what the fallacy is and why it's a fallacy, simply naming it is seldom an efficient shortcut to making clear your opponent's error, in part because (remember the first reason) there's a good chance your opponent either won't know the fallacy or will misunderstand it, in which case you're wasting words.<br /></p><p><span> Third, it very often leads to completely unnecessary side-arguments about the definition of the fallacy itself. </span>"What? That wasn't an <i>ad hominem</i>! <i>Ad hominem </i>is when I say you're stupid, <i>therefore</i> your argument is wrong." "No, <i>ad hominem</i> means a personal attack!"</p><p><span> Fourth, it's likely to be seen (often correctly) as showing off, an attempt to telegraph that you know something about the technical aspects of argumentation and therefore are not to be messed with, you master of logic you. </span><br /></p><p><span><span> Finally, it's almost always completely unnecessary. 
Remember that a fallacy is a </span></span><i>flawed </i>argument, an error in reasoning that makes it vulnerable. You don't need to name the flaw in order to attack it. For example, recognizing that an <i>ad hominem</i> is a form of <i>non sequitur</i> where the premise ("You're stupid") does not lead to the conclusion ("therefore, you're wrong"), you can note that you do not need to refute the premise. "I may or may not be stupid, but stupid people can be right and smart people can be wrong. Show that my argument is wrong." </p><p><span> That is, by the way, why </span>it's definitely useful to have names for the various types of fallacies, so we can discuss and analyze them and learn how to recognize and counter them, and to avoid committing them ourselves. Shop talk about rhetoric benefits greatly from having terms of art like these. But naming them in the middle of an actual argument is rarely a good move. </p><p><span><span> </span><br /></span></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com13tag:blogger.com,1999:blog-1883551996126668365.post-57530656846264052192022-03-04T00:43:00.000-07:002022-03-04T00:43:15.386-07:00Understanding over Believing<p><span style="font-family: georgia;"> <span class="Apple-tab-span" style="white-space: pre;"> </span>I've been thinking about this <a href="https://tcantine.blogspot.com/2021/05/arguing-with-authoritarians.html">authoritarian epistemology </a>idea for some time now, and I think I have identified a potential remedy. The idea is to stop thinking in terms of <i>knowing</i> (or believing-what-is-true), and focus instead on <i>understanding</i>.</span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span class="Apple-tab-span" style="white-space: pre;"> </span>We have long put a great deal of emphasis on the value of knowledge. For most purposes this is a perfectly serviceable value, and the pursuit of knowledge is certainly a noble calling.</span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span class="Apple-tab-span" style="white-space: pre;"> </span>But the problem arises when we take the authoritarian mindset into account. As I argued before, the authoritarian tends to see such things in terms of power, and in particular construes the act of telling someone something and being believed as an exercise of power. That's not a completely irrational model; the ability to persuade someone to do something is very much a kind of power. And as Voltaire said, anyone who can make you believe absurdities can make you commit atrocities. So it's not at all unreasonable to be wary of anyone trying to assert that kind of control over you, even if it's something as innocent as informing you what time it is.</span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span class="Apple-tab-span" style="white-space: pre;"> </span>The problem is this: when you regard the transmission of information as an attempted exercise of power, it becomes very tempting to resist being informed of anything, because being told something and believing it becomes a kind of surrender of autonomy, a failure of will. You feel like someone may be laughing at you behind your back, as if it's April Fool's Day and you're not in on the joke. And so disbelieving whatever you are told becomes an act of defiance and a demonstration of personal strength. What matters isn't so much whether you're right or wrong in any objective sense, but how bravely you stand against a more powerful opponent, which is why this sort of conspiracy theory tends to target The Government, The Mainstream Media, The Academic Elite, or whoever else might represent the dominant establishment view. Which is all well and good, because the establishment really ought to be challenged regularly, but when you adopt your beliefs just to be contrary, you're very likely to be wrong most of the time. Worse, you deprive yourself of the capacity to be right, because the more evidence is compiled in favour of the opposing view, the more your clinging to your wrongness feels like a grand display of heroic resolve. </span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px; min-height: 13px;"><span style="font-family: georgia;"><br /></span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span> </span>That sense of heroic resolve isn't nothing. It's a legitimate emotional motivation, that need to feel some pride in one's accomplishments or worthiness, so maybe we can find a more constructive way to satisfy it.</span></p><p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><br /></span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span> </span>That's why I propose instead to focus on <i>understanding</i> instead of believing/knowing. Understanding is not a failure of will, but an accomplishment of intellect. If I explain my view to you, and you succeed in making sense of it and how my claims fit into your model of the world, that is <i>your </i>triumph<i>; </i>you <i>own</i> that understanding, and while I may have helped you attain it by making my explanation as clear and accessible to you as possible, I cannot command you to understand me. It is something you can only ever do for yourself.</span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span> </span>Moreover, once you understand a proposition, it's up to <i>you</i> to decide for yourself how likely it is to be true, given all the other things you understand about the world. Deciding that what I say is probably true is no longer a surrender of your autonomy to mine, but your own independent conclusion that you can take ownership of and can revise as you see fit, and thus not a sign of weakness. You can and should feel proud that you have successfully made sense of what someone else has to say, especially if it's something you might have been initially inclined to dismiss as obviously wrong.</span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px; min-height: 13px;"><span style="font-family: georgia;"><br /></span></p>
<p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span> </span>None of this is to say that you shouldn't be aware of the possibility of people lying. Quite the opposite: you should be always alert to the unreliability of anyone and everyone's testimony, whether they be lying, mistaken, confused or otherwise fallibly human. This is itself an important part of the process of understanding. It's just that you shouldn't fall into the laziness trap of dismissing everyone who disagrees with you as either dishonest or stupid, because that's generally just an excuse not to bother trying to actually understand them. They might well be dishonest or stupid, but it's dangerous to underestimate an opponent, or to see them as an opponent when they might not actually be one. </span></p><p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><br /></span></p><p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span> Nor should you completely abandon the possibility that the person might be mistaken in some way. You should try to start from the presumption that they're at least as smart as you are, and that if something seems wrong it's probably because you haven't properly understood it yet, but they might well have made some error in reasoning they haven't noticed yet. If you can identify and articulate exactly what this error is, that's a particularly glorious victory, but don't be too tempted to take shortcuts to it; you still have to really understand what they're trying to say to be able to pull this off effectively. 
</span><br /></span></p><p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span><br /></span></span></p><p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><span><span> Once you shift from knowing to understanding, your real opponent is no longer the person you're arguing with, but the problem you're arguing about. If you can defeat the problem, you'll never lose an argument.</span><br /></span></span></p><p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"><br /></span></p><p style="font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-family: georgia;"> </span></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com44tag:blogger.com,1999:blog-1883551996126668365.post-68376598973081730302022-02-04T13:06:00.001-07:002022-02-04T13:06:07.113-07:00With apologies to Martin Niemöller<p><span style="font-family: inherit; font-size: large;"> <span style="caret-color: rgb(228, 230, 235); white-space: pre-wrap;">First they came for the speeders, and I said nothing because I obey speed limits. 
</span></span></p><div class="cxmmr5t8 oygrvhab hcukyx3x c1et5uql o9v6fnle ii04i59q" style="caret-color: rgb(228, 230, 235); margin: 0.5em 0px 0px; white-space: pre-wrap; word-wrap: break-word;"><div dir="auto"><span style="font-family: inherit; font-size: large;">Then they came for the people practicing medicine without a license, and I said nothing because I don't practice medicine at all.</span></div></div><div class="cxmmr5t8 oygrvhab hcukyx3x c1et5uql o9v6fnle ii04i59q" style="caret-color: rgb(228, 230, 235); margin: 0.5em 0px 0px; white-space: pre-wrap; word-wrap: break-word;"><div dir="auto"><span style="font-family: inherit; font-size: large;">Then they came for the unvaccinated driving trucks across international borders, and I said, "Oh for Pete's sake, this is not a prelude to genocide you idiots! Most of you have paid speeding tickets in the past, and you're still here!"</span></div></div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-25374407393491768882022-01-29T14:55:00.006-07:002022-01-29T14:55:59.308-07:00Choices and Freedom<p><span> </span>Let me start by reiterating that freedom is, for me, the primary purpose of law and government. As I've argued many times on this blog, laws constraining our freedom are justifiable only if they lead to a net increase in our practical freedom. So freedom is really important to me.</p><p><span> That said, there's a certain kind of appeal to freedom argument that is just total nonsense, because there are some circumstances where alternatives are simply incompatible, where my exercising my freedom of choice makes it impossible for you to exercise yours. 
The best example of this is the complaint of smokers that their freedom to choose to smoke or not to smoke is violated by the imposition of anti-smoking bylaws in public spaces.</span><br /></p><p><span><span> On the face of it, yeah, it's absolutely true that such rules take away the freedom to choose to smoke in such public spaces. What's not quite so immediately obvious is that, owing to the nature of smoke and air, there is an enormous disparity in the power of smokers and nonsmokers to exercise choice. The choice not to smoke must be unanimous, whereas any single smoker may unilaterally decide that they and everyone else in that space will inhale smoke. </span><br /></span></p><p> I want to go eat in a restaurant without smoking. You want to go eat in the same restaurant and smoke. We can't both get what we want. <i>Someone's </i>options are going to be limited here, no matter what policy we choose. Allow smoking, and I can't go to a restaurant without giving up my freedom to breathe fresh air. Disallow smoking, and you can't go to a restaurant without giving up your freedom to light up a cigarette. Flag-waving about freedom is pointless; what you need to show is that your freedom to smoke is somehow more valuable than my freedom not to smoke. (This particular question has been mostly resolved by the widespread acknowledgment that smoking really does cause cancer and other diseases.)</p><p><br /></p><p> Although complicated by the fact that you can't see or smell viruses, the situation with vaccine mandates and various public health measures is analogous. I hear people complain that they can't go to church or to a restaurant or anything else without showing their vaccine passport. Oh no. Well, the alternative is that I can't go to these things without taking on a major risk of being infected with a deadly virus. <i>Someone's </i>options are going to be limited, one way or the other; we can't all get what we want. 
So whose options should we limit?<br /></p><p><span> I don't mean to answer that question here. There are an awful lot of factors involved, and an awful lot of different pandemic measures we could debate. Some may turn out to be good ideas, some may be bad. The only point I want to make here is that freedom can be violated in a whole lot of ways besides just imposing an explicit rule, and if we really want to maximize our freedom, we need to think about more than just whether or not someone is telling us what to do. Viruses can take away your choices, too. </span><br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-65059692125614160532021-11-01T14:26:00.002-06:002021-11-01T14:26:28.996-06:00A Comment on Executive Privilege<span> The other day, the Washington Post ran an op-ed by law professor Saikrishna Prakash, arguing that former President Trump's claim of executive privilege against releasing documents to the January 6 Commission has some merit. 
Professor Prakash</span> summarizes the reason for executive privilege:<div><span style="caret-color: rgb(42, 42, 42); color: #2a2a2a; font-family: georgia, "Times New Roman", serif;"><blockquote>"The policy rationale for executive privilege is that presidents will not receive candid, unvarnished advice from their aides if that advice becomes public as a result of subpoenas, judicial or legislative."</blockquote></span></div><div> And indeed, there is some textual support for this in the <a href="https://tile.loc.gov/storage-services/service/ll/usrep/usrep418/usrep418683/usrep418683.pdf">Supreme Court judgment </a>he cites: </div><div><span><blockquote>"Human experience teaches that those who expect public dissemination of their remarks may well temper their candor with a concern for appearances and for their own interests to the detriment of the decision making process."</blockquote></span></div><div> But I'll point out that the Court does not elaborate on this principle much, to the extent that even mentioning it should probably be considered <i>obiter dicta</i>. And indeed, the Court goes on to diminish the relevance of that principle in context:<br /></div><div><blockquote>"However, when the privilege depends solely on the broad, undifferentiated claim of public interest in the confidentiality of such conversations, a confrontation with other values arises. 
Absent a claim of need to protect military, diplomatic, or sensitive national security secrets, we find it difficult to accept the argument that even the very important interest in confidentiality of Presidential communications is significantly diminished by production of such material for <i>in camera</i> inspection with all the protection that a district court will be obliged to provide."</blockquote></div><div><span><span><span> In other words, yeah, confidentiality is good and all, but it's not enough by itself to override the legitimate interest of the court in deciding whether or not the claim of privilege is legitimate. Courts often do this: acknowledging the existence of a concern or argument without delving into its validity because they consider it irrelevant to the case at hand. (This heads off future objections that the Court failed to consider it. Not as much of an issue at the <i>Supreme</i> Court, from which there is no appeal, but it's still good judicial practice to cover all the bases.)</span></span></span></div><div><span><span><span> And because the issue was only raised cursorily for the purpose of dismissing it, the Court did not go into any kind of depth in analyzing that basis of privilege, which is a shame because I think if they <i>had</i> seriously considered this basis of privilege, they would have phrased the description of it quite differently. The notion of executive privilege here is, after all, closely analogous to attorney-client privilege, and exists for very similar reasons. Importantly, that privilege belongs to the <i>client</i>, and emphatically <i>not</i> to the lawyer. And here's why.</span></span></span></div><div><span><span><span><br /></span></span></span></div><div><span><span><span> We <i>want</i> people to seek out legal advice, so that they can conduct themselves in a lawful manner. The example I like to use is of someone considering whether or not to murder their rich uncle so they can inherit his estate. 
If they consult a competent lawyer, they will be told "No, that's a crime. You can't lawfully do that, and moreover, there is a longstanding common law principle that you cannot inherit from someone you've murdered, so even after you've served your sentence for the crime you will not receive any money for it." And, presumably, having had proper legal advice, they will know that this is not an option. So if the uncle then dies in suspicious circumstances, the fact that the nephew or niece might have received legal advice on this very subject should not be used as evidence against them. <br /></span></span></span></div><div><span><span><span> The privilege does <i>not</i> exist to protect the lawyer. Yes, of course, it is very important that the lawyer be able to give candid advice and not worry about being embarrassed, but there is something seriously wrong if a lawyer gives advice about which she would be embarrassed if it became public. About the only circumstance I can imagine in which a lawyer should be embarrassed by the advice she gave in private would be if it were <i>bad</i> advice, that is, unethical or incompetent. And she absolutely <i>ought</i> to be afraid of giving such advice, whether or not it is revealed to the public!</span></span></span></div><div><span><span><span><span> The same principle applies to advisors to the President, though obviously the scope of what constitutes good or bad advice may differ somewhat. But we can see clearly why the privilege belongs to the President and not the advisor through a simple example: suppose an advisor just straight up offers a hefty bribe to the President as an inducement to make some executive order. Should the advisor be able to rely on executive privilege to shield such an offer from scrutiny? Or would the President be within their rights to fire the advisor and refer the matter to the Justice Department for prosecution? I would hope it's pretty obvious that I favour the latter answer. 
</span><br /></span></span></span></div><div><span><span><span><span><br /></span></span></span></span></div><div><span><span><span><span><span> So my reading of the SCOTUS remark on executive privilege is that while there might be legitimate military, diplomatic or other national concerns supporting a claim of executive privilege, the idea that privilege must be protected to save the President's advisors from potential embarrassment has no real legal weight. It is the responsibility of the President to select and earn the trust of advisors who will give him that candid, unvarnished advice, knowing full well that the same President has the power to make that advice public.</span></span></span></span></span></div><div><br /></div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-25072550149435693602021-10-24T14:16:00.003-06:002021-10-24T14:16:34.631-06:00Discipline and Dissent A common argument made by various movements, particularly those of a conspiratorial bent, is to complain that certified experts in the relevant field aren't allowed to speak the truth for fear of losing their professional credentials or privileges. Ultimately this boils down to a freedom of speech argument, which, when you dissect it, is pretty weak. Freedom of speech means you should be free to express <i>any</i> opinion, so the argument that your opinion should be respected because of freedom of speech really offers absolutely no support whatsoever for the substance of the opinion itself. You're free to say you think 2+2=37.998 if you want. And the fact that other people call you an idiot for saying so doesn't in any way add weight to your claim. <div> But let's run with that freedom of speech argument, shall we? 
Because it does sound just vaguely plausible that if a biology professor is denied tenure for teaching creationism or a nurse is fired for refusing to be vaccinated, you could say they're being punished for their beliefs. After all, losing a job or benefit for what may well be a deeply held belief can certainly be framed as a violation of their right to free speech or conscience. Freedom of speech <i>is</i> in fact quite relevant here. It's just that it's not the speech of the biology professor or nurse.</div><div><br /></div><div> It's the speech of the certifying body. See, when you're licensed to practice medicine (or law or any other regulated profession), the regulating body that grants you that license or certification is, in effect, vouching for you. They are saying, "This person knows what they're talking about in the subject area, and we stand by their professional judgment." They are <i>endorsing</i> you.</div><div><span> Now, you don't actually have any right to that endorsement. You have to qualify. You have to earn their confidence and maintain it, and if they ever lose confidence in your competence, they are absolutely entitled to withdraw their endorsement. Indeed, I'd say they're obligated to do so.</span></div><div><br /></div><div> So if a doctor starts spouting conspiracy theories about vaccines, and has their license revoked as a result, they might well <i>feel</i> like they're being punished for their beliefs and their freedom of speech is being violated. But it isn't. They're still completely free to express those opinions. They just don't get to claim the authority of the certifying board is behind them if they do so. 
<br /></div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com7tag:blogger.com,1999:blog-1883551996126668365.post-91307831486770969582021-10-22T00:06:00.004-06:002022-04-13T23:35:38.323-06:00Making Economics Too Simple<p> Reading about the recent Nobel Prize in Economics (and the reaction to it) has got me thinking again about how I was taught introductory physics in high school. They started with the basics of Newtonian mechanics, Force = Mass times Acceleration, and so on. In order to get these ideas across, you kind of have to simplify a lot. In particular, you need to pretend friction doesn't exist. Now, friction isn't an <i>exception</i> to Newton's laws but rather a complex manifestation of them; still, it really does make the math much harder to grasp, and so it helps to just ignore it for the sake of explaining the basic concepts of mass and acceleration and momentum and all that.</p><p><span> Why did the Nobel Prize talk remind me of this? Well, it comes down to the way we were taught introductory microeconomics, with those nice simple supply and demand curves, and how it was presented as practically a Law of Nature that as the price of a commodity goes up, demand goes down. That is such a simple, elegant and straightforward principle that it's really hard to imagine it being otherwise. It's almost as powerful as the idea of objects in motion tending to continue in a straight line. After all, if the price of apples goes up, you're gonna buy fewer apples, either looking for alternatives or just deciding to do without apples for now.</span><br /></p><p><span><span> Half of this year's Nobel Prize went to David Card "for his empirical contributions to labour economics". It had been assumed that, as with raising the price of apples leading to lower demand for apples, raising the minimum wage would reduce demand for labour and thus cause higher unemployment. 
But Card and his collaborator (the late Alan Krueger) decided to test this by comparing two very similar areas as one raised its minimum wage and the other left its own unchanged. Surprisingly to the microeconomics dogmatists, there was no significant change in unemployment, and in some cases, employment went <i>up</i>.</span><br /></span></p><p><span> I'm not going to talk much here about Card's actual work, except to point out that it should have come as no surprise, even to the armchair theorists, whose basic assumptions weren't wrong but who, like the high schooler first learning physics and ignoring friction, neglected to think through all the ways those assumptions might actually play out in the real world.</span><br /></p><p> The principal assumption, of course, is that players in the economy are trying to maximize their utility with the scarce resources at their disposal. The simplest and most obvious way this plays out can be seen with the original apples example: if the price of apples goes up, then the utility I get per dollar from apples goes down, while the potential utility of my dollars remains more or less constant (that is, I can still get the same amount of stuff besides apples for those dollars). In other words, I'd rather keep a few dollars that I might otherwise have spent on apples.<br /></p><p> This is correct as far as it goes, but it doesn't go as far as it needs to in order to understand wages in all cases. Sure, it might apply to the wages you pay to someone to perform personal services for you, such as mowing your lawn or cleaning your house, and you might well consume less of those services as their price goes up. But for most businesses, labour is a factor of production; a business hires someone to produce value which is then sold to the consumer at a profit.</p><p> Let's say I own a facility with enough space and equipment to have ten workers producing widgets. 
For simplicity's sake, assume each worker can produce 1 widget an hour, which I can sell for $20. And suppose I pay each worker $10 an hour, so I make a profit of $10 on each widget sold. (There are other costs, of course, such as the space and equipment, but we'll ignore those as constants that don't affect the analysis here.) If the wages I must pay to my employees go up to $15, all that means is that I only make $5 profit on each widget. It does <i>not</i> mean I lay off any of my workers, because that would just mean forgoing the $5 they earn me per hour. The only person who's losing out from a rise in wages in this case is me, because I'm making less profit.</p><p><span> Now, it may be that my profit margin on each unit is already low. Maybe with all my other costs, I'm only making $4 profit per widget. In that case, raising my wage cost per unit by $5 means I'll actually lose money on each widget I produce, and yes, in that case I'll probably shut down and lay off my workers. Aha! Unemployment goes up then, right?</span><br /></p><p><span><span> Well, no, not necessarily. Remember those supply and demand curves, and that they apply to the whole market, not just an individual producer. Maybe I'll drop out of the market, but that means production drops below demand, which means the price rises. It probably won't rise high enough to bring me back into the market, but one of my more efficient competitors will hire the workers I laid off and scoop up more profits until their increased production brings the price back down again.</span></span></p><p><span> But yeah, <i>maybe</i> some workers will be laid off and not replaced, depending on a whole lot of factors, but once we start looking at a whole lot of factors and whole markets, another factor comes into play: workers (including workers in other industries and sectors) now have more disposable income, and will likely be spending that on buying stuff. 
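<p> (A quick aside for the numerically inclined: the keep-or-lay-off arithmetic from the widget example can be sketched in a few lines of Python. The price, wage and overhead figures are just the illustrative numbers assumed above, not real data.)</p>

```python
# Toy model of the widget example: a rational employer keeps a worker
# as long as that worker's output earns more than the worker costs,
# so a wage hike squeezes profit long before it forces layoffs.

WIDGETS_PER_HOUR = 1      # each worker makes 1 widget an hour
PRICE = 20.0              # each widget sells for $20

def profit_per_worker_hour(wage, overhead=0.0):
    """Hourly profit from one worker: revenue minus wage minus other costs."""
    return WIDGETS_PER_HOUR * PRICE - wage - overhead

def keeps_worker(wage, overhead=0.0):
    """The employer keeps the worker while the marginal profit is positive."""
    return profit_per_worker_hour(wage, overhead) > 0

# $10 wage: $10 profit per worker-hour; at a $15 wage it's still $5,
# so there is no reason to lay anyone off.
print(profit_per_worker_hour(10), keeps_worker(15))                          # 10.0 True

# With $6/hour of other costs, the $15 wage turns the margin negative,
# and only then does shutting down beat keeping the workers on.
print(profit_per_worker_hour(15, overhead=6), keeps_worker(15, overhead=6))  # -1.0 False
```

<p> The point the snippet makes is the one in the text: until the wage actually pushes the per-worker margin below zero, a wage increase comes out of the employer's profit, not out of employment.</p>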
So demand rises, and as production increases to meet that demand, workers get hired. <br /></span></p><p><span><br /></span></p><p> I'm not going to claim that this is exactly how it will play out in every case at all times when the minimum wage is raised. (In fact, I'm not actually in favour of minimum wages at all, since I much prefer the idea of a minimum <i>income</i> structured as a citizen dividend.) But the point here is that those who authoritatively intone very basic principles from Economics 101 as arguments for or against some policy are almost always committing the same kind of mistake as someone who tries to ignore friction, turbulence and all those other complications when trying to apply Newton's Laws. Real-world economies are much more complex, and those very same basic laws of supply and demand can produce results that seem utterly inconsistent with the simplest application of those laws. <br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com48tag:blogger.com,1999:blog-1883551996126668365.post-13401254250611902882021-10-16T18:35:00.003-06:002022-02-20T16:10:58.651-07:00The Freedom to Swing One's Fist<p> <span> There's a common saying that "your freedom to swing your fist ends at my nose". This is a pretty good way to express the fact that we must have limits on our freedoms, but it seems to me in this age of radically selfish rights-talk, it doesn't quite get through. After all, the radical egoist will say, "Why should I care about your nose? I don't care at all about anyone else's rights; it's MY rights that matter, and I insist upon them!"</span></p><p><span><span> So I've been thinking a better way to approach it is to put it this way: You have 100 points to spend on freedoms, and you have two freedoms here to choose from. They are the right to swing your fist, and the right not to be punched in the nose. 
You can put ALL your points into total freedom to swing your fist, but then you have no right not to be punched in the nose. Or you can put all your points into the freedom not to be punched in the nose, but then you won't be allowed even to make a fist, much less extend your arm, without very strict supervision. Or you can pick some combination of the two, say, 90% fist-freedom and 10% protection-from-punching, or any other mix you prefer.</span></span></p><p> See, laws can't really distinguish between you and me with respect to noses and fists. If you want to suggest that the law should protect your interests but not mine, you'll have to provide some kind of reason why I should agree to recognize those laws as valid. And if that reason boils down to "or else", well, we can just dispense with any pretence of law and commence punching each other.<br /></p><p> So any rule we make that respects the ideally symmetrical nature of this bargain is going to have to be a rule that applies equally to everyone's fists and noses. And so whatever mix we settle upon between fist-freedom and nose-protection is going to apply to you; whatever freedom to swing your fist you demand will take away from the protection of your nose in equal measure.<br /></p><p> Of course, we may not all agree on where to draw that line. You may feel that your freedom to swing your fist is adequate to protect your nose from my fist, and so you might advocate for a greater emphasis on fist-freedom, while I might prefer stricter limits on fist-swinging in favor of greater universal nose-protections. Most likely, the line will be drawn somewhere between our preferred positions in some kind of imperfect compromise; you will feel your fist-swinging interests are being violated, while I feel my nose is inadequately protected. 
But people cannot live in proximity to one another without some kind of compromise, and we need to be able to step back and assess the compromises from the other person's point of view before we insist our own sacrifice is too much to ask. </p><p><span> Some people think it is a sign of weakness to compromise. But I think it's a bigger sign of weakness to be afraid of appearing weak.</span><br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-715323619427681002021-09-29T13:31:00.002-06:002021-09-29T13:31:30.720-06:00Sympathy for the Anti-Vaxxer<p><span> </span>I have an irrational fear of eating mushrooms. I've had it since I was a child. Don't know why, though I sort of suspect it might be from having read somewhere about it being dangerous to pick wild mushrooms because it's easy to pick a deadly poisonous one by mistake, and I thought, "Oh no! How do the mushroom farmers know they have the right ones?" Or it could just be that I tried them once as a child and really didn't like them, and never got over that.</p><p> In any event, that childhood dislike and distrust of mushrooms has become an almost quasi-religious taboo. It isn't <i>just</i> that I don't want to eat mushrooms. It's that I feel if I <i>were</i> to eat one, even accidentally, I'd somehow violate the purity of my essence, or betray a deeply held principle, or something like that. There's no undoing the ingestion of a mushroom. I even find it difficult to bring myself to eat virtual mushroom stew in Minecraft. That's how powerful this purity superstition is.</p><p> What would it take for me to overcome this phobia, and just try a mushroom? I dunno. People keep telling me I might find them delicious, and maybe I would, but that's not enough. 
I already have a whole lot of other foods I find delicious, and limited time on this planet to enjoy <i>them</i>, so adding yet another to that list doesn't seem like a huge benefit in the big picture. Besides, there are countless other delicious foods I'll never get to try, so what makes trying mushrooms take priority over them, especially when the obstacle to doing so is one that would take such an enormous amount of emotional effort to overcome? Is the reward of another food I <i>might</i> enjoy, even one that might be a new favorite, worth that ordeal?<br /></p><p><span> No, it would have to be something much more important than just finding a new favorite food. If you told me that I had to eat a plate of fried mushrooms to save my life or someone else's, that might do it, assuming you had a credible explanation for how these mushrooms would save me. But I'd need pretty good evidence, and even then I'd still find it really difficult.</span></p><p><span> Fortunately, there aren't a lot of plausible scenarios in which anyone's life depends on my eating mushrooms. I'm not confronted with a difficult moral choice here: I can continue to abstain without fear of anything other than the occasional inconvenience for myself or others.</span></p><p> And also fortunately, I <i>know</i> my mycophobia is irrational and foolish. I do not try (except in jest) to justify it with pseudoscientific rationales or conspiracy theories about Big Fungus trying to control us. And there isn't a thriving industry devoted to reinforcing and exploiting my phobia for financial and political gain.</p><p><br /></p><p> So antivaxxers, I do not envy you. If I had strong, convincing evidence that my eating a mushroom would save someone's life, I would really have a hard time of it. 
I'd be sorely tempted to clutch after any argument that might give me an excuse to doubt the evidence, to give me an excuse not to violate my mushroom-purity, or at least to delay it until I had <i>proof </i>(and delaying something indefinitely is an effective way to just never do it). <br /></p><p> But here's the thing: there really <i>is</i> good, strong, convincing evidence that vaccines are an extremely powerful defense against disease, and that they are orders of magnitude safer than not being vaccinated. I know, you will provide link after link after link to articles and YouTube videos purporting to prove otherwise, but those links are just wrong. Every single one I've looked at has deep, fundamental flaws in methodology, employs embarrassing logical fallacies, or even straight up lies. Every single one. But when I point out these things, you just jump to another video or article or whatever that recycles mostly the same lies. It's exhausting. </p><p> So I <i>do</i> have some sympathy. I know how very difficult it is to try to overcome this purity superstition, and how much pride and honour you may have wrapped up in having kept yourself free of vaccines for so long, and how hard it is to give up that perfect score or end that winning streak. <br /></p><p> I'm sitting here, struggling with offering to post a video of myself eating a mushroom to show that it can be done, to help give you the courage to overcome your vaccine phobia, but I honestly don't think I can do that. There are too many excuses. You might not believe I'm actually afraid of eating mushrooms. You might dismiss it as just a stunt. You might say to yourself, "Yeah, but vaccines really <i>are</i> dangerous, unlike mushrooms, so it's different." <br /></p><p> And so I realize that it really isn't about getting you to take the vaccine. I have to respect that it's really really hard for you, as hard or harder than it would be for me to eat a mushroom. I can't ask you to do that. 
You have the right to bodily autonomy, and the right not to be vaccinated if you so choose. So that's not what I'm asking.<br /></p><p> Here's what I <i>do </i>ask, though, and I can ask this because it's something I can ask of myself. I have accepted that my fear of mushrooms is <i>my</i> problem, and nothing to do with mushrooms themselves. I don't seek pseudoscientific justifications for why I'm smart to avoid mushrooms, and I don't try to raise the alarm to alert other people to the dangers of a food that actually does them no harm and that they enjoy. I ask you to be honest with yourself, and accept that your fear of vaccines might be the same kind of fear I have of mushrooms. Go ahead and do your own research, but do it properly: don't try to find stuff that validates your belief, try to find stuff that shows you're <i>wrong. </i><br /></p><p><i> </i>Heck, I'm not even asking you to acknowledge you're wrong. I just want you to acknowledge the possibility that you <i>might</i> be wrong, and consider that <i>if</i> you're wrong, then promoting fear of vaccines just might be doing a whole lot of harm to people. I know it's a scary thought, especially in a pandemic where we're being told that the unvaccinated are suffering illness and death in such great numbers, to think you might bear some responsibility for that. 
But I'm urging you to have the courage to consider it seriously.<br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com3tag:blogger.com,1999:blog-1883551996126668365.post-16570854839266699972021-08-01T16:31:00.003-06:002021-08-01T16:31:26.111-06:00More Good News about Bad News<p> In a recent argument about alternative medicine, someone showed me the link to <a href="https://www.hopkinsmedicine.org/news/media/releases/study_suggests_medical_errors_now_third_leading_cause_of_death_in_the_us?fbclid=IwAR3ZDUQayT7F1UHmVTrpu-nuErek7xuaRBL3xGCKiD8yEnRvgxnE3C1HJvk">this article</a>, about how medical errors are the third leading cause of death in the U.S. My opponent in this debate was trying to make the point that we shouldn't trust modern medicine so much, because doctors make so many mistakes. Now, this is an infuriatingly common strategy: screaming and hollering about how often science has been wrong, concluding that we shouldn't trust those scientists, and so turning by default to whoever is doing the screaming and hollering, as if mistakes that aren't screamed and hollered about don't exist. <br /></p><p><span> But I'm not going to rant about that particular rhetorical ploy now. Instead, I want to talk about why it's actually amazingly good news, if more deaths are attributable to medical errors. Indeed, I would want to argue that the best possible case would be if 100% of deaths were caused by medical errors.</span><br /></p><p> Consider. If 100% of all deaths were caused by medical errors, that would mean that nobody ever died of anything else. In other words, there would be no circumstance in which a doctor could sadly tell a grieving family, "I'm sorry, we did everything we could, but the injuries were just too severe" or the disease had no cure or whatever. <i>Every</i> potential cause of death could be cured if only no errors had been made. 
If 100% of all deaths were the result of medical error, that would mean that medical science had advanced to a point where we were in principle immortal, because medicine applied <i>without</i> error could always save us. </p><p> Statistics are tricky. It's really easy to misinterpret what they mean, especially if you don't look at the full context. A similar argument can be made about rising cancer mortality rates, which is <i>also</i> potentially good news overall, depending on what is happening with average lifespans. Cancer is one of those things that you can, as a rough approximation, associate with old age: the older you get, the more opportunities there are for oncogenic mutations, and the more time there is for tiny proto-tumours to develop into life-threatening ones. (You, yes <i>you</i>, actually have several of these tiny prototumours right now, but they may take longer than your natural lifespan to become dangerous.) So if it's reported that more people are dying of cancer than they used to, that can actually be evidence that we're getting better at preventing earlier deaths from other things. The more people die of old age related illnesses like cancer, the more successful we have been at helping people live longer. It's not that cancer is becoming a bigger threat; it's that all the other threats are getting smaller.</p><p><br /></p><p> So obviously, if doctors are becoming more careless and making more mistakes, that's a bad thing, and naturally we want them to make fewer mistakes than they are, no matter how many or how few they're making. But statistically, pointing out that a larger proportion of deaths are due to medical error doesn't mean medicine is untrustworthy. 
It may actually mean the very opposite.<br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com3tag:blogger.com,1999:blog-1883551996126668365.post-50280302561878422472021-07-24T14:30:00.003-06:002021-07-24T14:30:35.866-06:00A Little Knowledge is a Dangerous Thing, if You Think It's a Lot<p> <span> Before I went to law school, I thought I knew a little something about the law. And I had every justification for thinking so. I mean, I'd finished an M.A. in philosophy, and written my thesis on principles of rule enforcement, basically a philosophy of law topic. I'd read H.L.A. Hart's <i>The Concept of Law</i>, and I had a strong grasp of ethics and epistemology. So I did really have a bit of a head start on understanding what law was all about. How hard could it be?</span></p><p> And in fact, it wasn't really that hard. I <i>did</i> have a head start, and a pretty good foundation to build on. There wasn't an awful lot in the little something I knew about law before law school that turned out to be flat out wrong. But it was underdeveloped and incomplete to a degree that is kind of amazing to me now. The little something I thought I knew was akin to knowing that force is equal to mass times acceleration and thinking I knew physics. It's true that a LOT of physics follows from Newton's three basic laws, but you really have to put in the effort to understand exactly how that explains gyroscopes and hydraulics and the gas laws. <br /></p><p> So three years of classes and writing papers and exams and arguing in moot court competitions taught me a whole heckuvalot of stuff I didn't realize was out there to learn, such that when it came time to actually practice law I found myself thinking I still didn't really know enough, but the fact is I <i>did</i> know what I needed to know, which was how to find out. 
I now knew enough of the vocabulary and basic concepts of law and how they related to each other that I knew where to look for the answers, and how to think about questions and try to solve them myself if they hadn't been addressed directly by someone else.<br /></p><p><br /></p><p> I'm mentioning this now not to present myself as an expert on law (I'm not practicing, and let my license to do so lapse, back around the time of that cancer adventure), but rather to emphasize how I know I'm most decidedly <i>not</i> an expert in other things. The little something I thought I knew about law before law school wasn't nothing, but it was nowhere near what I knew after earning my LLB and practicing for a few years. And after having actually studied the subject, I would often run into people without law degrees who would confidently lecture me about the little something they knew about law as if it was all there was to know. And then I knew how ridiculous I must have sounded, talking to lawyers about the law back when I only knew a little something about it. <br /></p><p> And that's the point I want to make here. I also know a little something about science, and a little something about technology, and a little something about medicine, and a little something about history and literature and a whole lot of topics. I may even know more than the average layperson about many of these. I can feel fairly confident about the little something I know, but then I remember how underdeveloped and incomplete my little something about law turned out to be. Now, my son is studying molecular biology at university, and while I'm glad that the little something I know about science is (barely) enough to make his talking to me about these things not be a complete waste of time, it reminds me just how underdeveloped and incomplete my understanding of genetics is. 
I can ask intelligent questions, and it's good exercise for him to try to make the answers intelligible to me, but I am painfully aware of how much I am not, and never will be, an expert in this stuff.</p><p> It is important to be aware of this, because it's so easy to feel like you're an expert in a subject just because you know a little something about it. A little something is more than nothing, but it's just enough to know it's more than nothing, and not enough to know it's less than everything. Maybe the most useful thing I gained from law school was not knowing the law, but an appreciation of just how much work goes into understanding <i>other</i> subjects well enough to become a licensed professional. Know enough to ask intelligent questions of the experts and understand the answers, and don't be afraid to challenge things that seem inconsistent with the little something you know, but never forget that it's only a little something.</p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com4tag:blogger.com,1999:blog-1883551996126668365.post-50294522285569087292021-06-28T18:17:00.002-06:002021-06-28T18:17:40.810-06:00Render unto Caesar<p> <span> There is a passage in the Gospel of Matthew in which some Pharisees ask Jesus whether it is right to pay taxes, and he asks them to identify whose head is pictured on a coin. It's Caesar, of course, and Jesus famously tells them therefore to render unto Caesar that which is Caesar's.</span></p><p><span><span> Now, Matthew frames this as Jesus cleverly avoiding a trap. As he tells it, the Pharisees were trying to trick Jesus into saying something that would get him in trouble with the authorities, and he just outsmarted them. 
Personally, I don't pretend to know what the historical Jesus actually meant here or whether this episode ever actually occurred, but there is a pretty important point that "Render unto Caesar" makes about the nature of money: it belongs to the sovereign.</span><br /></span></p><p><span><span><span> I mean that very literally. Not merely that the coinage might display a likeness of Her Majesty, but that the money system itself is created and maintained by the state and its laws, and that the value of money only has meaning within that system. This is especially true with fiat money, but it was still the case for the most part with precious metal coinage, because even though precious metals have some intrinsic value (in that they can be used to make goods people want), most people only use them for trade, <i>i.e.</i> because they know everyone <i>else</i> will accept them as valuable, so in practice the value of those coins is a result of social convention. And insofar as the sovereign state should be understood as representing the people of that society generally, the state is the entity with ownership/responsibility over the money system. </span></span></span></p><p><span><span><span> Think of it like the chips you use in a casino. They are provided by the casino for use according to their rules, within the system of games they operate. And <i>within</i> that system, you can use them to play the games, or transfer them to other people if you want, or use them pretty much however you like. But all the value they have derives completely from the fact that everyone understands they can be cashed in for "real" money as you leave the casino. 
Maybe you're allowed to take them with you when you leave, maybe they're cancelled when you do, but the only value they have is within the casino, and <i>because</i> it is understood that the casino operators will redeem them for "real" money as you leave.</span></span></span></p><p><span><span><span> Similarly, <i>within </i>the economic system of the sovereign state, its currency is "yours" to play all the various games we call markets, to facilitate various transactions, to settle debts in tort or contract, and otherwise participate in the economic life of the society. But just like the casino tokens, the dollars you're transacting with are in a very real sense the property of the society generally, subject to the rules that society makes just like the rules of the various casino games you might play with casino tokens. <br /></span></span></span></p><p><span><span><span> And among the rules that society can make for the use of its currency tokens is the rule that you have to pay taxes. Those taxes <i>can</i>, in principle, be 100%, because ultimately the state owns all of them, but of course that would utterly defeat the purpose of issuing the tokens in the first place because no one would accept them in trade knowing they would all be confiscated immediately. So in practice, a responsible government will need to design its taxation policy in such a way to preserve the function of the tokens, to keep the games playable and rewarding. <br /></span></span></span></p><p><span><span><span> So the upshot of all this is that there is a right way and a wrong way to advocate for tax policy. Complaining that the gubmint shouldn't be allowed to take MY money dammit is the wrong way, because it's <i>all</i> the gubmint's money. The right way is to argue that this or that tax rate is better <i>policy</i>, because it better serves the interests of society in some way or other. You might argue that tax rates should be low to encourage private enterprise. 
You might argue they should be higher to fight inflation. You might argue they should be progressive to limit inequality. And we can have those debates. But your <i>property </i>rights to "your" money do not and should not enter into it, because it's ultimately not your money in the final sense. </span></span></span></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com4tag:blogger.com,1999:blog-1883551996126668365.post-88528941622355975732021-05-19T17:49:00.000-06:002021-05-19T17:49:00.132-06:00Arguing with Authoritarians<p> There are a few themes that come up again and again whenever I find myself arguing certain subjects with people on the internet and elsewhere, and I think I have finally figured out what ties them all together: authoritarianism. This came as a bit of a surprise to me, because many of these people present themselves as very much anti-authoritarian, rejecting the advice of those elitist academics and urging you to "do your own research". But what makes them authoritarian isn't that they necessarily respect any particular authority; it's that they tend to approach things with what I'm calling an authoritarian epistemology, one that's more concerned with power than with truth.</p><p><span> To start, it seems to me that their model of knowledge and how it is acquired works something like this: One person knows something, and then tells it to another person, and now that other person knows it too. This isn't a bad theory as far as it goes, a decent first approximation for many simple interactions. Its chief failing is that it's incomplete, but I'll get to that later. For now, the point is that to the authoritarian, this is how it works: someone who knows (an authority) transmits knowledge to someone who doesn't.</span><br /></p><p><span><span> So why do I say the authoritarian mindset is about power? Because in this model, the authority isn't just sharing knowledge; they're actually telling someone what to think. 
Now, that doesn't necessarily mean that the other person has to believe them, because of course people give orders they're not authorized to give all the time, and people disobey orders all the time too. </span></span>All I mean here is that the authoritarian tends to regard most or even all interactions this way, as attempted exercises of power by either an authority or someone aspiring to authority.</p><p> And this explains why the "do your own research" crowd so bitterly resents people whose job it is to tell us stuff: the news media and scientists/experts. Because they see the transmission of knowledge in terms of power hierarchies, they take it as an affront when someone tries to tell them something. After all, who are these "experts" to tell <i>me</i> what to think? They're not the boss of me!<br /></p><p><span> Hence</span> the sneering contempt if you happen to mention something that comports with the Official Story: "Hah! you <i>believe</i> the mainstream media?" See, when you regard every utterance as an assertion of power, then believing what people tell you is a sign of weakness, and conversely, refusing to believe what you are told is strength. It's a year-round form of April Fools Day skepticism, that sees discourse as a childish game of tricking other people into believing falsehoods while trying to avoid believing anything anyone might tell you.</p><p><span> When you approach discourse this way, any argument becomes</span> a personal attack, an attempt to exploit your perceived weakness. One may peacefully "agree to disagree", but by golly if someone tries to force me to change my mind about something, I need to defend myself, right? So it's almost inevitable that someone with this mindset will come to identify with their opinions.</p><p> Which is another aspect of this authoritarianism that I've been troubled by for some time, the tendency to think of opinions as a matter of identity rather than the result of deliberative process. 
Ideally, describing yourself as <i>a </i>liberal or <i>a</i> conservative should just be shorthand for "my opinions on issues tend to be more [liberal/conservative]" but very often I find people saying things like, "I'm a conservative, so I'm against abortion" as if <i>being</i> conservative is the cause of the opinion, rather than a description of it. And just as often, I find that regardless of whether they consciously identify themselves with a group or movement, the authoritarians I argue with will try to pigeonhole me as belonging to whatever group it is they happen to identify as their opposition. </p><p><span> That's psychological projection, obviously, and of course we all do it to some extent. Indeed, it was projection of my own presumptions about what knowledge is and the purpose of debate that kept me so baffled about the authoritarian mindset for so long. I assumed they were arguing to provide evidence and logic that would lead me to adopt their view as my own, or to gain sufficient understanding of my own position so as to make a better informed decision as to whether to reject or adopt it. And so I was regularly astonished at the sorts of arguments they would make and what to me seemed like brazen hypocrisy.</span><br /></p><p> For example, I was recently arguing with someone who recited some mortality numbers she claimed were from Statistics Canada, intended to show that the pandemic wasn't real because there was no significant change over previous years. Since she hadn't provided a link or reference to the original source, I went and googled for some appropriate keywords, and found that Stats Can had in fact published a report on excess deaths related to the pandemic. Yet when I cited this source back, with a link, my opponent dismissed it as government propaganda that couldn't be trusted. 
Note that this was the very same government agency <i>she</i> had cited to prove<i> her</i> position, which she glibly dropped as not credible the instant it went against her. What could possibly be more hypocritical?</p><p> But that hypocrisy disappears when you understand the authoritarian mindset, which values power and doesn't really consider "truth" except as the set of beliefs that distinguishes <i>us</i> (the good guys) from <i>them, </i>and which considers debate to be a kind of assault, a struggle to maintain one's own beliefs/identity against attacks while trying to weaken the resolve/identity of one's attackers. It's not about truth for them, but strength and determination, and all the little rhetorical devices we use in argumentation are just weapons to be used against the enemy, and to be deflected or dodged when they are used against us. There is, after all, nothing in the least bit hypocritical about trying to stab you with my epee while trying to parry your attempts to stab me. My opponent wasn't really making the claim that Stats Can is or isn't a reliable authority; she was simply trying to protect herself from having her Deeply Held Belief weakened by attempting to demoralize and discredit any belief/person (remember they tend to blur together beliefs and identities) that might appear to threaten it.</p><p> I think this is why they use so many basic logical fallacies in argument, and why it doesn't bother them in the least to do so. If you call them on it, pointing out "That's an <i>ad hominem</i> fallacy", they don't pay any real attention to the substance of why <i>ad hominem</i> is a form of <i>non sequitur</i> because the conclusion ("you're wrong") does not follow from the premise ("you're stupid"). Rather, they just pick up the term "<i>ad hominem", </i>dimly aware that it has something to do with calling someone stupid, and use it as a weapon against you any time they perceive you to be calling them stupid. 
And there are countless other examples of terms with legitimate meanings that get coopted this way, stripped of nuance and wielded as cudgels not to prove any substantive point but to wear down and humiliate those perceived as attackers. </p><p><br /></p><p> I'm not sure how best to deal with the authoritarian mindset. They aren't arguing to convince you they're right; they're just arguing to prevent you from convincing them they're wrong, and so all they need to do is spread doubt. (Same strategy as the tobacco companies "questioning" the link between cigarettes and cancer, or the oil industry "questioning" global warming, or creationists' "teaching the controversy", etc.) The sad irony in all this is that, to the non-authoritarian, doubt is already in plentiful supply, and it isn't a weakness but a core assumption about everything. You can't win an argument with me by making me doubt <i>my</i> position if you don't do something to reduce the doubts I have about <i>your</i> position.<br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com9tag:blogger.com,1999:blog-1883551996126668365.post-68334480543809250122021-04-22T16:48:00.005-06:002021-04-22T16:48:53.522-06:00Flat Earth Economics<div><span> Say you're trying to navigate around your neighbourhood and you want to keep the total distance travelled to minimum. Let's pretend you're not limited to roads, and can always travel in a straight line to any particular destination. You go 4 km to your first destination, and then turn 90 degrees to your left to go 3 km to the next destination, and then you can go home. By the Pythagorean Theorem, the straight-line distance to get home is the square root of (3 squared + 4 squared), or 5 km.</span><br /></div><div><span><span> That's the correct answer, or close enough for any practical purpose. 
In reality the world is not a flat surface, but the curvature is so slight that for distances of only a few kilometres we can just treat it as flat without any significant error in calculation. Or, to put it another way, the 40,000 km circumference of the Earth is so big compared to 5 km that we might as well treat it as infinite, which is what it would be if the Earth were in fact a truly flat surface.</span><br /></span></div><div> Of course, the math breaks down when you start dealing with bigger portions of the whole planet. If you go 10,000 km on a flat surface, turn right and go another 10,000 km, you'll be about 14,000 km from your starting point, but on the Earth you'll still only be 10,000 km away. And if you travel 20,000 km from your starting point, the next step you take, in <i>any direction</i>, will take you <i>closer</i> to your starting point. The farthest you can possibly be from any other point on the planet is 20,000 km. (I am assuming all distances are measured along the surface, rather than taking a shortcut through the mantle...)<br /></div><div><br /></div><div><span> Scale makes a difference. It's fine to approximate small portions of the planet's surface as flat, but it's a mistake to apply those assumptions to the whole. And the same principle applies to economies.</span><br /></div><div> When I sit down to balance my household budget, the total number of dollars I have anything to do with is a negligible fraction of all the Canadian dollars circulating around out there. For <i>my </i>purposes, I might as well treat the dollars that aren't mine as infinite in quantity. If I spend $100, it'll take me $100 of income to get back where I started. Simple, straight, flat-plane geometry. <br /></div><div><span> And that's more or less true for most businesses and even local municipal governments or individual government agencies. If you spend X dollars, you generally need at least X dollars in revenue to cover it. 
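(For the numerically inclined, the distances in the geometry analogy above can be checked with a few lines of Python. This is just a sketch under the post's own assumption of a perfectly spherical Earth with a 40,000 km circumference; the function names are mine, not from any library.)

```python
import math

# Assume a perfectly spherical Earth with a 40,000 km circumference.
R = 40000 / (2 * math.pi)  # radius in km, roughly 6366 km

def flat_distance(a_km, b_km):
    """Distance home after two legs joined at a right angle,
    on a flat plane (the ordinary Pythagorean theorem)."""
    return math.hypot(a_km, b_km)

def sphere_distance(a_km, b_km):
    """The same trip on the sphere. For a right spherical triangle the
    spherical Pythagorean theorem says cos(c) = cos(a) * cos(b), where
    each side is expressed as an angle (arc length divided by radius)."""
    a, b = a_km / R, b_km / R
    return math.acos(math.cos(a) * math.cos(b)) * R

# Neighbourhood scale: both give essentially 5 km.
print(flat_distance(3, 4), sphere_distance(3, 4))

# Global scale: flat geometry says about 14,142 km,
# but on the sphere the answer is 10,000 km.
print(flat_distance(10000, 10000), sphere_distance(10000, 10000))
```

At the 3 km and 4 km legs, the two answers agree to well under a metre, which is why flat-plane math is fine around the neighbourhood; at 10,000 km legs they disagree by over 4,000 km, which is the whole point about scale.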
</span><br /></div><div><span><span> But that's when the dollars you have anything to do with are a negligible fraction of the sea of dollars sloshing around out there. At the level of a national government with a sovereign currency, that's no longer the case. The federal government of Canada has something to do with literally every Canadian dollar in circulation anywhere, because every Canadian dollar is a creation of the Bank of Canada. To continue with our analogy, the Canadian government operates at a scale that encompasses the entire globe of the Canadian economy. </span><br /></span></div><div><br /></div><div> There's a lot more to this than I've described here, and I don't mean to suggest that national governments are completely immune to the sorts of considerations that apply to private individuals and corporations. Nor am I making specific claims here about the mathematics of government debt and deficits and how much spending and taxation is appropriate. All I'm saying at this point, in this post, is that the common rhetoric about government spending is based on an inappropriate comparison. Yes, it's completely reasonable to worry about revenue and expenditures at the scale of the individual household or business, just as it's completely reasonable to use Pythagorean calculations while navigating around your neighbourhood. It's just that the math is <i>different</i> when you get to global scales; you have<i> </i>to take into account curvature if you don't want to get lost.<br /></div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com6tag:blogger.com,1999:blog-1883551996126668365.post-42661906329515622592021-04-01T12:50:00.000-06:002021-04-01T12:50:31.709-06:00Rhetorical Heavy Lifting<p> I get into a fair number of arguments, and I daresay I'm reasonably good at it. I tend to "win" more often than I lose, if you define it in terms of persuading people that you're more likely to be right. 
(I actually think it's counterproductive to think of argument that way, as I elaborate on <a href="https://tcantine.blogspot.com/2013/02/the-zen-of-rhetoric-avoid-bias-card.html">here</a>.) I prefer to think of it as a process leading to greater understanding of the issue by all parties, a discussion rather than a debate, but there is usually <i>some</i> adversarial element, and I can usually hold my own pretty well when that's the case.</p><p> So one thing that sometimes happens when I appear to be "winning" a debate is this: my opponent expresses some frustration that I'm only winning because I happen to be skilled at rhetoric, and if only they were better able to express their ideas more clearly, they'd be able to convince me. And that's certainly a possibility; there are many subtle concepts that are very difficult to express clearly but which turn out to be true (or at least, to have great explanatory/predictive power). But often difficulty in communicating an idea isn't so much due to a lack of rhetorical ability as it is due to the idea itself just not being as well-formed and coherent as it <i>feels</i>. That is, the <i>feeling </i>of being right or of knowing something to be so isn't identical with actually <i>being</i> right. (Like when I<a href="https://tcantine.blogspot.com/2014/10/dreams-of-certainty.html"> dreamed I came up with a mathematical proof of the immortality of the soul.</a>) Goshdarnit, I <i>know</i> I'm right, but I just can't put it into words!<br /></p><p><span> A metaphor I've found useful for the way debates like that go is that it's like a rock-lifting contest. I choose a rock and you choose a rock, and the winner is the one who can lift their rock the highest. Now, you might think that what you need to win such a contest is to be very strong, but in fact, most often the winner is the person who is able to choose the lightest rock. 
</span><br /></p><p><span> Now, skill at rhetoric is like being very strong in that it allows you to lift bigger argumentative rocks higher than you would otherwise be able to lift them. But the bigger impact is that it makes you a better judge of the weight of rocks. So much of the time, when someone tells me they'd be winning this argument if only they were better at rhetoric, I want to say hey, don't feel bad. I couldn't lift <i>that</i> rock, either.</span></p><p> And to do that, I have to make a good faith effort to try to lift their rock.<br /></p><p><span><span> </span><br /></span></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com3tag:blogger.com,1999:blog-1883551996126668365.post-33646116090975959962021-03-29T17:57:00.001-06:002021-03-29T17:57:26.113-06:00Not All Humans<p> When I was very young, my feelings were hurt when the cute little squirrel ran away from me, when all I wanted to do was pat it. Why was it scared of me? Why did it think I was mean? </p><p> In time, I understood that in the eyes of a squirrel, I was just a big non-squirrel animal, in a world where squirrels are often eaten by big non-squirrel animals. And so I wished I could talk to animals and they could understand me, so I could tell them that I was a <i>nice </i>human and I just wanted to be their friend. Not <i>all</i> humans are mean!<br /></p><p><span> But then I learned about lying, and understood that even if I <i>could </i>tell squirrels not to be afraid of me, that's exactly the sort of thing a human would tell a squirrel to make it easier to catch and eat. And so a <i>smart</i> squirrel shouldn't believe me when I said I wasn't going to eat them. 
And that, too, hurt my feelings, to understand not only that there were bad people who told lies, but that because there were bad people who told lies, <i>I</i> should not expect people (or squirrels) to trust <i>me</i> by default, however earnestly I might believe myself to be worthy of trust.</span></p><p> I also came to understand that, if I genuinely cared about the well-being of squirrels, I should not <i>want</i> them to trust humans (including me) too easily. It is in their best interest to be wary of potential predators, even if the majority of us larger animals have no interest whatsoever in eating a squirrel. And if I still really want to befriend a squirrel, at the very least I should expect to have to earn their trust.</p><p><span> This applies to my fellow human beings as well, though obviously with somewhat different parameters, since humans are a social species and live in communities where a certain amount of default trust is essential, but where there is always potential for betrayal. And so there are appropriate boundaries. It's no big deal if a stranger at a bus stop asks you what time it is, but unsettling if even a fairly close friend asks to look through your smartphone. </span>The very attempt to take shortcuts across these boundaries is itself deeply creepy; there aren't many bigger, redder flags than "What's the matter, don't you trust me?" </p><p><span><span> And protesting #notallmen! whenever someone talks about misogyny or #metoo or anything of the sort is exactly that kind of red flag. Just as you shouldn't take it personally that a squirrel doesn't trust humans, you shouldn't take it as a personal affront that you're not immediately assumed to be one of the good ones, and you're not entitled to the benefit of the doubt. 
No one is, and it's childish to expect otherwise.</span><br /></span></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-30005498432977331902021-01-09T13:40:00.000-07:002021-01-09T13:40:03.237-07:00What if it wasn't Trumpists?<p> <span style="font-family: Times; font-size: 16px;"> </span><span style="font-family: Times; font-size: 16px;">Some defenders of Donald Trump have been trying to claim that the real troublemakers at the Capitol riot on January 6 of this year might have been Antifa infiltrators, perhaps trying to make Trump look bad. This is a pretty preposterous idea, but let’s take it at face value for the sake of argument. In fact, let’s go all the way and presume that </span><i style="font-family: Times; font-size: 16px;">all</i><span style="font-family: Times; font-size: 16px;"> of the rioters, every single one of them, were there specifically for the purpose of discrediting Trump and his movement, and that all the </span><i style="font-family: Times; font-size: 16px;">real</i><span style="font-family: Times; font-size: 16px;"> Trumpists were perfectly nonviolent and understood that they should under no circumstances give even the appearance of a threat of violence, and so they scrupulously avoided any kind of unlawfulness. Fine. Let’s say that’s the case, and that 100% of the mayhem was deliberately engineered by a coalition of </span><span style="font-family: Times; font-size: 16px;">Antifa, BLM, Democrats and never-Trumpers, </span><span style="font-family: Times; font-size: 16px;">just to embarrass Trump.</span></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-kerning: none;"> That doesn’t exonerate him, and indeed maybe makes it even worse. Why? Because</span> Trump's own actions made it way, way too easy for the rioters to fool us all into believing they were his supporters. He goaded the crowd on with his speech, and whether or not he actually <i>meant</i> for them to force their way into the Capitol and break stuff, anyone in that crowd could be forgiven for believing that’s what he did mean. It was a completely plausible interpretation, and if it was a misinterpretation, it was a completely predictable one. In failing to predict it, Trump essentially gave his enemies a blank cheque, which they predictably would have cashed. </p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px; min-height: 19px;"><span style="font-kerning: none;"></span><br /></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-kerning: none;"> Now, I don’t actually believe that any of the rioters were secretly Antifa. The simplest, most parsimonious explanation is that they were exactly what they appeared to be: a mob of Trump dupes and deplorables. There might also have been a few foreign intelligence agents seizing the opportunity to gain access to the Capitol and plant listening devices or steal laptops, but again, Trump himself bears the blame for creating that opportunity. It never could have happened but for his ill-conceived encouragement. </span></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px; min-height: 19px;"><span style="font-kerning: none;"></span><br /></p>
<p style="font-family: Times; font-size: 16px; font-stretch: normal; line-height: normal; margin: 0px;"><span style="font-kerning: none;"> So it really isn’t a defence of Trump to allege that the riot was carried out by his enemies. It’s true, though, because he is and always has been his own worst enemy.</span></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com9tag:blogger.com,1999:blog-1883551996126668365.post-2588197349436317972020-12-12T14:12:00.000-07:002020-12-12T14:12:20.519-07:00"Do your own research"<p> "Do your own research," they say when you ask for evidence or support for whatever conspiracy theory they're advocating. "I'm not going to do your homework for you," is another one they'll sometimes throw in. </p><p> This is an evasion tactic, of course. If they had good evidence and understood it well enough to be so confident in their conclusions, they'd be able to explain it to you. But it's not <i>just</i> about evasion; it's <i>also</i> about posturing, trying to imply that they're smarter and more informed than you and their time is too important to be wasted on imparting their vast knowledge to you. </p><p> As a general rule, if you make a claim then the onus is on you to provide support for the claim, so telling someone to "do their own research" when you tell them the latest conspiracy theory is just lazy bad form. But the reason this ploy resonates is that "do your own research" <i>is</i> generally good advice. The trouble is that the "research" they've done usually amounts to following whatever rabbit hole the YouTube algorithm generates for them, and those rabbit holes often lead to echo chambers.</p><p> So what should it actually mean to do your own research? Broadly speaking, it means to gather evidence, evaluate its credibility and weighting, and attempt to synthesize it all into a coherent theory that allows you to make reliable inferences. 
The precise methods you use to gather primary evidence may vary from field to field, but in practice most of the evidence you gather will be the reports or testimony or work of other people, and so a good part of your task will be deciding how reliable these sources are.</p><p> And here's where the "do your own research" crowd go horribly wrong, because the source whose credibility it's <i>most </i>important to evaluate is yourself. When they exhort you to do your <i>own</i> research, there's often the implication that it's lazy or stupid to trust the research of other people, as if your <i>own</i> research is inherently better or more reliable. But your opinions and beliefs are just as likely to be wrong as anyone else's, and you should not treat them as the answer key that everyone else's answer should match to be deemed reliable. </p><p> Here's an example I've used before. A doctor recommends surgery, and describes the procedure to you. Upon realizing that she's proposing cutting into you with a knife, you reject her expertise, because even you as a non-expert know that cutting people is bad. This is the wrong way to judge the doctor's expertise, because while it's true that any expert should know that cutting people is bad, the true expert may also know other things (such as how to stitch people back up, and how to cut them so as to make stitching them back up easier) that make it not so bad. </p><p> Instead, the better approach is to say, "Wait, you're going to cut me with a knife? Won't that hurt? Won't that put me at risk of bleeding to death?" and listen -- <i>listen -- </i>to understand the answer. Obviously if the doctor is surprised to learn that cutting people is generally bad, they're probably not a real doctor, but a real doctor is prepared to answer this sort of question honestly and reliably. 
You don't necessarily need to understand every detail of what they're telling you, but if you have a good basic lay-person's grasp of the vocabulary and subject matter, you should be able to tell when they're just making up stuff and bluffing. </p><p> The trouble is, not everyone has a good basic lay-person's grasp of the vocabulary and subject matter, and even if you do, interrogating an expert to decide if they really know what they're talking about takes a lot of effort and time you don't always have. This is why we have shortcuts: credentials and certifications. If someone has a PhD from an accredited university in the subject matter, or a license to practice the profession in question, it's reasonable to assume they are in fact qualified experts. It's not an absolute guarantee, of course, but in general it's reasonable to defer to their judgment in their particular field, and checking out their credentials is usually enough to qualify as having "done your own research" before you adopt the results of their research.</p><p> It's important to be clear here. You <i>do</i> have to trust your own research in one sense: whatever conclusion you adopt, whether it originates with you or some other source, it <i>is</i> inescapably the result of <i>your</i> decision. You're stuck with it; there's no "I was just following orders" absolution. You have a choice what conclusion to adopt, but you cannot choose not to choose; even suspending judgment is a choice (and often the right one). 
And more often than not, the wisest judgment is to adopt the conclusion of the experts.</p><p><br /></p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-43728841695305052372020-12-05T15:59:00.001-07:002020-12-05T17:07:41.616-07:00About that vaccine video...<p> A friend just sent me a video (which I will not share here) asking me for my synopsis of its content, and it occurs to me that many others may receive this video and be seeking answers to the same question, so I thought I'd do a post about it. The video is from someone examining the package of the new AstraZeneca vaccine, and calling our attention to certain words printed on the packaging they think we should be very alarmed about. </p><p> Now, the usual caveat should apply here:<b> I AM NOT A TRAINED SCIENTIST</b>. However, I've learned a lot of the very very basics from a lifetime of being a nerd very interested in science, and I have a strong background in general critical thinking and making sense of stuff, which I suspect is why I have friends who come to me to ask about this kind of thing. As well, my son is currently studying cell biology and it's been fascinating talking with him about how much more staggeringly complex and beautiful and interesting these things are than I ever imagined (and I've always imagined them to be pretty darned cool). What I'm offering here is not an expert explanation, because I'm not an expert. It is simply an educated lay-person's reading of the matter, intended as an alternative to the panicky less-educated lay-person's account given in the video. </p><p> I'm not sharing the video here for a couple of reasons, but mainly because I don't think it needs more bandwidth, and also because I suspect there are multiple similar videos and forwarded emails making the same claims. 
I will attempt to charitably present the concerns it raises, and then explain why they aren't nearly as problematic as it seems from the video.</p><p> The first word the video attends to in the vaccine packaging is "ChAdOx1-S (recombinant)". The narrator seems to think "ChAdOx1-S" is just a meaningless serial-number name for the vaccine itself, and focuses on the word "recombinant" as the frightening bit, which is in a way kind of amusing for reasons I'll get to in a moment. "Recombinant" means more or less what it sounds like: re-combining DNA from two or more different organisms. This is actually not a new process: it happens every time a baby is conceived or a flower is pollinated. Even asexually reproducing bacteria often absorb bits of genetic material from various sources, sometimes incorporating it into their own genome. And naturally-occurring retroviruses splice their own code into the genome of the cells they infect. </p><p> What <i>is</i> new is our ability to do this artificially in a test tube, which we've only been able to do for a few decades. And it's tremendously useful, first for research and later, as we get better at it, for practical and therapeutic applications. If you don't know exactly what a strip of DNA does, you can sometimes figure it out by snipping it out of a cell to see what happens when it's gone, and then splicing it into a cell that normally doesn't have it to compare the results. And then when you understand things better, you can do stuff like take the gene that produces insulin and splice it into some <i>E. coli</i> to produce this important life-saving hormone in industrial quantities without having to harvest it from animals. </p><p> So the ChAdOx1-S (recombinant) vaccine is, presumably, a vaccine made by recombining a sample of genetic material from the pandemic virus with some other genetic material that made up the precursor to the ChAdOx1-S vaccine. 
In other words, the vaccine is the <i>result</i> of a recombinant process, not something that will <i>cause</i> a recombination in your own cells. But if you thought that it was the latter, then the original vaccine is even scarier, because it turns out that "ChAdOx1-S" actually refers to Chimpanzee Adenovirus Oxford 1, evoking a terrifying <a href="https://en.wikipedia.org/wiki/The_Island_of_Doctor_Moreau">Island of Dr. Moreau</a> scenario. So I find it hilarious that the person in the video missed this detail. But of course, it's not actual chimpanzee DNA; it refers to an adenovirus that <i>infects</i> chimpanzees. And with humans and chimps being so very similar, viruses that infect chimps can often infect humans and vice versa.</p><p> A vaccine is often just a deactivated version of a virus, something that resembles the actual virus enough that the immune system learns to recognize it as something to be destroyed, but isn't actually itself infectious. Think of a wanted poster: it has an <i>image</i> of the face of the bad guy, so you know what he looks like, but the wanted poster can't rob your stagecoach. But a wanted poster is more than just a photo: it also contains information that alerts you to why you should beware of the guy in the picture, whom you should call if you see him, maybe a reward or other motivation for doing so, and so on. </p><p> So it's not enough to just present some molecule to the immune system. You have to present it in a way that the immune system will recognize it as a pathogen and start producing antibodies against it. It's like you have to include all the "WANTED" text from the poster, except that with molecules we don't know how to generate all the relevant text from scratch. So a recombinant vaccine is sort of like taking a successful wanted poster you already have for some other virus, and cutting and pasting a photo of the new virus into it.</p><p> That's what I think they've done with ChAdOx1-S (recombinant). 
They've taken a vaccine that seems to work for the chimp adenovirus, and spliced in some part to make it work for the new pandemic virus.</p><p><br /></p><p> The next bit the video gets very alarmed about is something called "MRC-5", which turns out to be a human cell line derived from fetal lung tissue. It sounds like the person in the video is deeply disturbed that aborted human fetal tissue is an ingredient of the vaccine you'd be injected with. It's not. Human cell cultures are a kind of lab rat: they <i>test</i> the vaccine on those cultures, to see how it affects human cells. They do <i>not</i> use it as an ingredient in actually making the vaccine itself.</p><p> Now, you might have moral reservations about using a product that was tested on aborted fetal cells, but there's an important detail I need to point out here. <i>Nobody</i> is getting pregnant to produce fetuses for the purpose of harvesting tissue. These are fetuses who were going to be aborted for other reasons, possibly even naturally as a miscarriage. Using fetal tissue for medical research is no different, morally, from using the tissues of an accident victim. You can regard the abortion itself as an appalling tragedy, and so it might well be, but so is the death of any other organ donor; that doesn't make deriving some good out of their tragic sacrifice inherently immoral, especially if we are appropriately respectful and appreciative.</p><p> (Also, it's interesting to note that the particular fetus MRC-5 is descended from died in 1966. Fetal cells are particularly useful for culturing because they're so early in the development process, and have so much more growth ahead of them. It's usually easier to make them immortal than it is to do so with cells of a mature adult. That said, the first human cell line to be immortalized came from <a href="https://en.wikipedia.org/wiki/Henrietta_Lacks">Henrietta Lacks</a>, a cancer patient who died in 1951. 
Cancer's weird that way.)</p><p><br /></p><p> Finally, the video emphasizes a passage in some of the research documents about the vaccine calling for AI resources to go through the high volume of expected ADR ("Adverse Drug Reaction") reports and make sure every detail is recorded and analyzed. Yeah, at first glance, this sounds scary, like they expect the vaccine to be horribly dangerous and hurt a whole lot of people. But here I want to repeat a theme I brought up <a href="https://tcantine.blogspot.com/2015/03/respect-experts-theyre-not-idiots.html">here</a>: if they know the vaccine is going to have a lot of ADRs, and they still intend to go ahead with it, what are we missing? Either we should assume they're diabolically evil or stupid, or maybe, just maybe, having a lot of ADR reports isn't quite the terror it seems.</p><p> We already know that developing a vaccine is going to take (has taken) a long time, and a very large part of that is safety testing. They <i>know</i> that there are always risks with developing <i>any</i> new therapy. And they know they're going to get a lot of <i>reports</i> of adverse reactions. A report of an adverse reaction, however, is just that: a <i>report.</i> Many, probably most, of those reports will turn out to be something else. Someone gets a shot, and happens to get totally unrelated food poisoning the next day. Someone else has a heart attack. Someone else doesn't yet realize she's pregnant and reports some of her symptoms as a potential adverse reaction. When you're testing a drug, you want <i>all</i> of this data, whether or not it's actually related to the drug, so you can look through it all for patterns to figure out what, if anything, really is due to the drug and what isn't. 
And finding those patterns is an absolutely monumental task, which is why an AI system would be so incredibly useful in sifting through all the data.</p><p><br /></p><p> I am <i>not</i> saying that there is nothing to be wary of with the Astrazeneca vaccine, or indeed any of the new vaccines they're bringing out. There's a lot of pressure to get these vaccines in use very quickly, and it's not unreasonable to fear that corners might have been cut, or there just hasn't been enough time for unknown side effects to become apparent. Of course there are risks; the real question is, as always, are the risks of not being vaccinated greater or less than the risks of being vaccinated?</p><p> What I <i>am</i> saying is that many of the fears people have of this new vaccine are unfounded and based on an extremely incomplete (even more incomplete than mine) understanding of what these words mean. When terrified people urge you to "do your own research", understand that research involves more than just googling; you need to know <i>how</i> to interpret the words you're looking up, and how they are actually used by the experts doing the work. </p><p> A little knowledge is a dangerous thing, especially when you think it's a lot. </p>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-85378363489654714042020-12-02T14:01:00.001-07:002020-12-02T14:01:37.133-07:00Some thoughts on lying, and why I (somewhat) trust the mainstream media I've written elsewhere that lying is the strategy of the stupid. That's not to say that everyone who ever lies is stupid, or that there can never be a time when lying can be done intelligently. Just that, as a general rule, it's better to tell the truth than to lie, and not just for moral reasons. I have two arguments here.<div><br /></div><div> First, I want to suggest that intelligence generally is in some sense about truth. 
That is, if we define intelligence as an overall problem-solving ability based in the acquisition and application of information, the solutions that intelligence comes up with for problems will tend to be better when the information it uses is true. That's not to say that intelligent people are those who know the truth, or that they don't deal in hypotheticals. Quite the contrary, because being able to entertain and explore counterfactuals is itself an important and useful way to discover more truths. Rather, I'm saying that the enterprise of intelligence is largely about evaluating what things are or are likely to be true, and making decisions or choices that take considerations of truth or falsehood into account. An intelligent solution to the problem "Should I bring an umbrella?" will be one that considers (among other things) the likelihood that "it will rain" is a true statement.</div><div> Now, there is a difference between wanting to know the truth yourself and lying, which is wanting someone else to believe a falsehood, so it doesn't necessarily follow from any of this that smart people would not lie. I'm making a softer claim here, namely, that as intelligence consists in large part of habits of truth-seeking, there is necessarily going to be a certain amount of conflict between one's inner dialog of truth-speaking and an outward practice of lying. The habits interfere with each other, and while that doesn't mean an intelligent person <i>cannot</i> lie (after all, intelligence is about solving problems, including the problem of lying), it does mean it's more work.</div><div><br /></div><div> And that leads into my second argument: it really <i>is</i> more work. Any meaningful lie you might tell has <i>some </i>way it might be revealed as a lie. If I say that it is raining, and you look outside and see it's not, you know I'm an unreliable witness. 
But most lies are about things that are a little harder to disprove, and the best lies are the ones that go completely undetected as lies, which is more likely if they are nearly impossible to disprove.</div><div> Recall that we're talking about intelligence as an overall problem-solving ability, and note that confirming a statement as true or disproving it as a lie is itself a problem calling for an exercise of that intelligence. When you craft a lie and decide whether or not to tell it, you will want to consider how easy it is to disprove it, but here you are limited by your <i>own</i> intelligence. Just because you think disproving your lie would be prohibitively difficult doesn't mean someone else might not find it trivially easy. </div><div> The problem is compounded for people who, in addition to not being particularly bright to begin with, wrongly think they are significantly smarter than average, because they will tend to believe that a difficult problem for them (unmasking a lie) would be downright impossible for lesser minds. But it's important to recognize that intelligence isn't a simple linear quantity, and smart people recognize that there's a lot even they don't know; some absolute moron might just happen to know the one crucial fact that shatters an otherwise impenetrable lie. An intelligent person knows that for every way they can imagine their lie being discovered, there are a thousand ways they haven't thought of.</div><div> That's why I say that lying is the strategy of the stupid. The stupid tend to think that it's easy to lie, and of course superficially it is: all you have to do is say something that's not true. But that's the shortcut. Telling a robust, consistent lie that will withstand concerted and intelligent scrutiny? That is ferociously hard.</div><div><br /></div><div> And here it's important to point out that the same superficial shortcut applies to the business of rejecting a falsehood. 
It is <i>also</i> trivially easy to dismiss some claim as a lie or "fake news"; you don't need to consider a shred of evidence. Boom. "Liar!" and you're done. Of course, since you can do this equally well with <i>any</i> claim regardless of its truth or falsehood, it has zero probative value.</div><div><br /></div><div> So why do I tend to (somewhat) trust the mainstream media when they report that, for example, Covid-19 is a pandemic that's killed over a million people worldwide in the past year (1.49 million as of this writing)?</div><div> It's not because I think media companies have our best interests at heart or that they somehow find the idea of lying morally repugnant and would never ever dream of doing such an evil thing. True, I tend to think that the majority of people employed in reporting and publishing the news, or in almost any industry, are probably decent human beings who aren't completely diabolical and might balk at the more obvious sins asked of them, but I'm well aware of how decent human beings can be gradually and subtly corrupted by an unjust system, so I have little doubt that mainstream news sources would lie like crazy if they thought it were in their interests to do so and they could get away with it. </div><div><br /></div><div> But that's just it. It <i>isn't</i> in their interests to be caught flagrantly lying. There are, of course, powerful economic interests behind most every news outlet, and there's definitely a bias in what gets covered and what doesn't, and pretty strong spin in <i>how </i>any particular issue is reported, but when it comes to straight up lies? Those can and usually will be revealed somehow, especially given that there are multiple competing news outlets who would just love to discredit each other. If they could. </div><div> And some "news" outlets do take shortcuts discrediting their competitors, of course. They boast to their viewers that only they can be trusted, that their competitors are just full of lies. 
Such a shortcut really just discredits <i>them</i>, because it is so lazy and so independent of the truth or falsehood of the claim. </div><div> I don't buy the claim that mainstream media is just full of lies, because the overall coherence of the stories is just too damned hard to fake. Yeah, there's going to be lots of stuff in the papers that's wrong, misreported or spun, and <i>sometimes</i> just plain lies. That's so with all sources of information (including and perhaps especially the ones decrying everyone else as "fake news"), and there's no getting around the hard work of evaluating and assessing and integrating all the data into a coherent world view. Just tossing the bulk of the data into the box marked "lies" is a lazy shortcut. </div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com2tag:blogger.com,1999:blog-1883551996126668365.post-49468064090339428202020-11-28T17:20:00.006-07:002020-11-28T17:24:08.970-07:00I'm not going to explain why you should care about other people I don't know how to explain to you that you should care about other people. So I'm not going to try. Instead, I'm going to ask why I should care that you don't care.<div><br /><div> You don't want to have to pay taxes to support someone whose interests you don't care about? Fine. Why should I care that you don't want to pay taxes? Explain it to me. I get that you don't <i>like</i> to pay taxes. I get that you feel you have some kind of moral right not to be taxed, but so what? Why should I care about what you think your rights are? Why should I care about your interests? </div><div><br /></div><div> The fact is, though, I <i>do</i> care about your interests. I want you to be happy and prosperous, with as much opportunity to pursue whatever it is that pleases you as possible, subject only to the limitation that I want this for everyone else, too. I'm not going to try to convince you that <i>you</i> should want these things, just that I do. 
And so given that, can you explain to me why I should care <i>more</i> about your desire not to pay taxes or your desire not to see same-sex couples on TV or whatever else you're all up in arms about than I should care about whatever someone else is all up in arms about?</div></div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com4tag:blogger.com,1999:blog-1883551996126668365.post-29674974002367254262020-10-02T16:21:00.007-06:002020-10-02T16:43:36.338-06:00On the N-word<span> <span> </span></span>Something that one often hears white people complain about is the apparent double standard over who can use the N-word. (I'm not going to spell that word out here, but to make sure that everyone knows what word I'm talking about, I'll say that it derives from "negro", the Spanish/Portuguese word for "black", and involves a lazy vowel shift from a long 'e' to a short 'i', and dropping the final vowel, leaving a lazy 'r' as the final syllable.") <div><span> </span>The double standard they complain of is that Black people are allowed to use it, but white people aren't.
And superficially, you can see why they'd think this was a double standard, and a racist one at that. After all, if the only discriminant on who can do something (whether it be using a word or a water fountain) is the colour of their skin, then gosh darnit that's RACIST! But maybe there's a better way to understand this.
</div><div><br /></div><div><span> </span>Let's look at pronouns, particularly first and second person. These are words which literally change their meaning, depending on who is speaking. When I use the word "I", it actually denotes a different individual from when you use the word. It's the exact same word, but the meaning is different. And if I type the sentence "I am the author of A Blog Of Tom", the sentence is true when I say it but (probably) false when you utter it, unless either you're me or you happen to write your own blog which just happens to have the same title as this one. </div><div><span> </span> No one has trouble with this concept, once they master English or any of the hundreds of other languages that have personal pronouns, or indeed words like "here" and "there" or "now" and "then" or "tomorrow" and "yesterday". These words mean different things depending on where or when they are used or who is using them. </div><div><br /></div><div><span> </span>Well, that's kind of how it is with the N-word. Think of it as a special kind of pronoun. When a Black person uses it, it can have a meaning roughly like "one of us", whereas when a non-Black person uses it, it cannot help but mean "one of them". Of course, unlike basic pronouns, this one has a whole lot of other connotations loaded into it. As "one of us", it is inclusive, hinting at shared understanding and experience; as "one of them" it is inherently exclusive, and implies a sense of disrespect, contempt if not outright hatred. </div><div><br /></div><div><span> </span>So the answer to the question about who can use the N-word is really this: anyone can. It's like what I taught my son about profanity when he was very little: I don't care what words you use, so long as you use them appropriately and correctly. As a white person, I <i>could</i> use the N-word if I wanted to, if the meaning I was trying to express was "those contemptible people". 
But I don't feel that way, so it would be a lie for me to use the word. And if I tried to use it to express the sense "one of us", no one would read it that way, because as a white person I don't get to use the pronoun "us" to talk about a group I don't belong to.</div>Tom Cantinehttp://www.blogger.com/profile/06234109728445439457noreply@blogger.com4