The Blog
Science Skepticism and Science Literacy
2020-12-27

This post originally appeared as a Twitter thread in response to a thread by Dr. Ellie Murray (@EpiEllie). Here's how I understand Dr. Murray's main claim. (Happy to be corrected!)

There exists a popular view of what the difference between scientists and science-skeptical people consists in, and it is essentially wrong. It goes like this: scientists rely for their beliefs on careful observation and analysis by a proper method, one that maximizes the objectivity and reliability of conclusions. Science skeptics don't care for these methods, and also don't use scientifically credible sources for their information.

How is this wrong (or at least too simple)? Firstly, for the vast bulk of their beliefs, scientists rely on the work of others: they read the published peer-reviewed reports rather than make the systematic observations and analyses themselves. Secondly, science skeptics regularly do exactly that! They are perfectly capable of referring to scientific articles. What they are skeptical about is not proper scientific method; they just think that the results support their view, against that of the scientific mainstream.

If science skeptics do care about proper empirical methods and regularly refer to scientific publications, then what is the relevant difference between them and scientists that explains their differing views? Dr. Murray's answer, as I understand it, is, firstly, that due to professional incentives to publish as much as possible, scientists are liable to cut methodological corners and submit a lot of material of poor quality. "Our scientific system is fundamentally broken." Secondly, this results in a huge spread of published conclusions, such that you can likely find some study to support almost any outlier view.
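That last point can be made vivid with a toy simulation (my own sketch, not anything from Dr. Murray's thread, and the numbers are made up for illustration): even when every study in a literature estimates the very same true effect, sampling noise alone guarantees that a large enough literature contains some studies pointing the other way, ready to be cherry-picked.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.3  # every study estimates the same true, positive effect
N_STUDIES = 200    # size of the published literature
SE = 0.18          # standard error of each individual study's estimate

# Each published estimate is the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# How many studies, by chance alone, point in the opposite direction?
contrarian = sum(1 for e in estimates if e < 0)
print(f"{contrarian} of {N_STUDIES} studies report a negative effect")
```

With these numbers roughly one study in twenty lands below zero, so a determined reader scanning the literature for support will find it, even though the pooled evidence clearly favors a positive effect. Add the quality problems Dr. Murray describes and the spread only gets wider.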
The real difference between scientists and science skeptics is that scientists understand the situation I just described: they know that they must read scientific publications in a critical way and—crucially—they have the expertise required to do that. Dr. Murray suggests that fighting skepticism then entails decreasing the number of poor-quality publications, by educating scientists, and making the public better at reading science. The first part she understands better than I do; I want to focus on the last part.

Making the public better at reading science seems to be about instilling "science literacy." The notion of science literacy, and its perceived importance, can be traced back over a hundred years, to writings such as John Dewey's How We Think (1910). Recently (e.g. in the OECD's PISA 2018 framework), the motivation for teaching science literacy has been to enable citizens to participate fruitfully and democratically in a society largely driven and influenced by science. This indeed seems important. Historically, science literacy has been taught mainly by having students emulate the activities of scientists. This could mean knowing what an RCT is, how to interpret a confidence interval, and maybe even about effect sizes and R². But this just won't do for our purposes. If science literacy means understanding general scientific concepts and methods, then it seems obvious that this will never suffice for critically reading actual scientific publications, because doing that requires a deep understanding of the particular domain under study. Writing a PhD thesis usually gives you the equipment needed to start acquiring the domain-specific scientific knowledge required for reading published science critically—for being truly science literate. "Science literacy" as traditionally conceived is by its nature generic, and stems from a narrow and individualistic conception of scientific method.
It's invaluable for distinguishing science from obvious non-science, but clearly insufficient for assessing actual research. If we can't realistically expect non-experts to acquire the ability to assess published research, no matter how much general conceptual understanding of science is taught, where does that leave us? After all, in a democratic society the people ultimately must decide (by way of their elected officials) what strategies to pursue, based on value judgments and our scientific understanding of the situation at hand.

The answer lies in an understanding of what it is about science that makes it our most reliable source of knowledge about the natural world. It is, ultimately, not the RCTs or even formal causal models that ensure the relative reliability of science—it's peer review. I mean peer review in the broadest sense: the critical assessment of the work of other scientists that was the centerpiece of the first half of this post. Scientists have incentives to publish quickly—but they also have incentives to find errors in others' work. Our most reliable process for generating knowledge about the natural world is a collective process, the output of which can be found in high-quality meta-analyses and eventually in textbooks and the scientific canon itself—not in individual publications.

As non-experts, we need to be able to tell science from non-science, but also to know where to find the reliable output of science: not in individual publications or claims by individual experts, but in the output of this collective process as a whole. This view of the scientific process as a collective process has been developed most famously in philosophy by Helen Longino, and more recently in Naomi Oreskes's book Why Trust Science? (2019). The general field of social epistemology is also important. It is crucial, too, that we be able to make a judgment as to the credibility of a particular scientific field.
Tools for making such judgments are developed within meta-science, e.g. by the Meta-Research Innovation Center at Stanford (METRICS) and at Cochrane. The public needs access to more such information. Science skepticism is thus most effectively combated, I think, by making science education more about science's collective method, and about the conditions under which a research field as a whole is credible. Much to be done, then, both in science education and in meta-science.

Understanding the Social Method of Science
2020-05-10

This blog post started out as a thread on Twitter. There have been many complaints during the COVID-19 crisis about "amateur epidemiologists" opining on things they don't understand. Rather than discouraging people from engaging with science, I want to share my views on how to deal with our limitations as non-specialists when we read peer-reviewed scientific reports, preprints, or just interviews with experts in the general media. Approaching science well as an amateur requires understanding that the scientific method is essentially a social method.

I'm a philosopher of science by training. This means that, while I'm not a scientist, science is my object of study. As a consequence, I have to read scientific texts all the time, in addition to philosophical research. I'm always acutely aware of not having specialist knowledge in whatever area of science my attention is currently aimed at, and I've had to teach myself what I can and cannot do with the information I consume. This seems like a great time to share some lessons from being a professional dabbler in science. These lessons are based in my personal experience, but they are also informed by some useful ideas from philosophy about the scientific method and the conditions for knowledge. But first, do we want non-specialists to engage with science at all?
By "engaging with science," I simply mean not just reading passively, but applying concepts and theories in one's own thinking, as one must do to understand new ideas. While "knowing just enough to be dangerous" may be a real thing, surely the positive correlation between knowledge and good reasoning dominates. That is, people trying to understand science is something we ought to welcome in general, not discourage.

I recognize that there are annoying and potentially harmful side effects to this. When an understanding of some field of science is severely impeded by unconscious or hidden motives, this can create an alternative discourse, in which data and technical jargon are used in deceptive ways. Superficially credible but ultimately invalid and misleading arguments begin to circulate, turning possibly well-meaning people over to a counterproductive cause—say, a general resistance to vaccines (although that particular movement appears to be in decline right now). First, very generally, even if some bad is unavoidable when non-experts engage with science, I think the beneficial consequences will outweigh the harmful ones. More apropos of our current topic, however, one of the best antidotes to contra-scientific alternative discourses is a proper understanding of how to approach science as a non-expert. That is to say, being a good amateur with respect to science itself requires some training. That's a challenge, but a worthwhile one.

Science should be regarded as authoritative in many respects, but there are ways to misunderstand this. The knowledge that science produces is a social thing. It grows over time in a community of scientists, not in the minds of individual researchers. That science is a collective enterprise at its core has been a fact ever since the dawn of the scientific revolution and Francis Bacon's insistence that scientists freely share their own, and examine each other's, results.
Such thoughts inspired the formation of the Royal Society as well as the French Encyclopédistes, and by extension the creation in 1739 of the Royal Swedish Academy of Sciences in my home country. The professionalized science of our time depends essentially on the peer review process and the division of labor among researchers that Bacon described in the early 17th century. Despite this, most explicit theories about the scientific method (including Bacon's own theory of induction) focused until quite recently on the individual scientist, theory, or experiment. The many social dimensions of scientific knowledge have been explored by philosophers more recently. For example, that the scientific method is ultimately a social method has been argued by the philosopher of science Helen Longino in her book Science as Social Knowledge (1990, Princeton University Press). Longino claims, in extreme brevity, that the results of science can be understood as knowledge rather than opinion only if they come out of a critical discussion among scientists with shared access to the phenomena, because only then are they objective in a real sense.

Philosophical theories about the social aspect of scientific knowledge have largely come out of the widely recognized fact that how we evaluate data as evidence depends on our preexisting background beliefs. But here we can make do with the relatively straightforward idea that scientists who operate within the same field but come from different contexts and specializations systematically proofread and double-check each other's work. This process doesn't end when some work is accepted for publication in a scientific journal; it goes on as other scientists decide to use the publication in their own work, try to refute it, or (worst of all) ignore it completely.
The suggestion, then, is that science, properly pursued, is a social process that produces more credible results than any method that can be employed by an individual, because over time it tends to filter out results that are due to accidents and personal idiosyncrasies. (Different areas of science can be differently successful at this, as is perhaps suggested by the so-called replication crisis of the last decade.) We can learn something from this as science amateurs, and so here is our first lesson: a dominant view within a science is authoritative to us amateurs, because the process by which that view comes about is more reliable and credible than whatever belief-forming processes are available to us as individuals.

It's important to recognize that the authority of science isn't due to scientific experts constituting some sort of priesthood with exclusive access to the truth by stipulation. Its authority is due to the (relatively) superior reliability of the process that generates scientific knowledge about the world. The idea that having knowledge depends necessarily on the reliability of the process that generated the corresponding belief is another philosophical theory relevant in our context, one that has been extensively discussed by a number of philosophers.

It's equally important to recognize that I'm not saying that whatever is the consensus within a scientific community constitutes scientific knowledge by definition. Everyone can be wrong—and often everyone has been wrong. I'm saying that if there currently exists a consensus, or a majority view, or even a dominant view, in a field of science, then this view is our best bet, even if our confidence in it remains quite low. The amount of disagreement between experts differs between the sciences, and between topics within any science. This complicates things for us amateurs.
How do we know whether the statement of some particular scientist is representative of the collective understanding within that scientific field? Maybe the only way to know this is by being an expert. But more optimistically, just as a scientist may regard the outcome of a new experiment tentatively until more evidence comes in, we can take the same attitude towards a statement by an individual scientist that we hear for the first time. That is to say, we can tentatively accept the claims made by some particular scientist on some occasion, keeping in mind that they may express an outlier opinion in the scientific community. This is particularly relevant right now, when COVID-19 related news reports often consist of statements by individual experts.

But what if there genuinely is no dominant view within the relevant scientific community on some important question—as indeed seems to be the case with respect to many things pertaining to the COVID-19 pandemic as I write this? If we are forced to base a decision on one or other of the available views, then we will have to make do as best we can, relying on the expertise we deem most reliable. (And possibly some decision-theoretic principles.) There is no way around that. But most of us are not in that situation, and our understanding of scientific knowledge implies that we do best to suspend judgment until the scientific community sorts things out. (Suspending judgment is also a hard mental exercise that builds character.)

Our first lesson does not apply in precisely the same way to an expert in the scientific field in question. This has to do with the particular limits of the amateur perspective. As amateurs, we can advance our understanding of scientific theories, models, and evidence in many ways. (YouTube is full of credible open lectures and tutorials on any scientific subject you can imagine.) This is a Good Thing. But there are things we cannot do.
To wit, if I cite a scientific paper in my philosophical work, it's because I take it to express a common enough view in that science. My goal is usually to report what I take to be the state of knowledge within that field at the time of writing. My citation adds no weight to the views expressed in that paper, because I can't read that paper critically. An expert, on the other hand, can have good grounds for rejecting a consensus view in their science. Science is a rational enterprise after all, not only a social one, and the overall scientific process ultimately works only because individual scientists can make substantial contributions to the end result. (A contribution that will still have to pass through an extended peer review process before it can be elevated to the status of scientific knowledge.) But the probability of an amateur finding an error that has escaped the scientific community is very low. In short: to be part of the critical peer review process of science, you need the right training. And the best reason we as amateurs can have for being skeptical of the claims of an individual scientist is that we doubt that they represent the dominant view in that science.

This, then, is our second lesson: as amateurs, we can strive to understand what scientists say—but we normally lack the tools for critically evaluating what they say. (This holds when the scientist is speaking on their area of expertise, of course.)

So, this suggests at least two ways of being a bad amateur epidemiologist. First, we might confuse the claims of a particular scientist with the knowledge that has accumulated within that research community. The cure is to read more broadly and aim for a sense of which views are dominant within the field. This is hard work, and moreover requires knowing what the good, reliable sources are. Second, we might underestimate the amount of knowledge and training that is needed to read science critically.
As amateurs, we likely don't even know how much we don't know, and this can mislead us. The cure for this is to make the safe bet and again home in on what seems to be the dominant view in the field, and base any skepticism on that. (Or to get a relevant degree.) Acknowledging these limitations to the amateur point of view, we can strive to better understand, through science, the world in general and our current situation in particular. And we really should, for the sake of our society and of ourselves as individuals.

Philosophy and Science
2020-04-07

This post was originally a thread on Twitter. A recent conversation led to the issue of the "gap" between the discourses of practicing scientists and philosophers, and also of the value of philosophy for science in general. Here are some semi-random thoughts on that, aimed at non-philosophers. They're not deep or heavy thoughts; I'll keep things straightforward. They will also betray my naturalistic leanings—not shared by all philosophers. I'll start with how I place philosophy relative to science very broadly. Here's a picture of how I don't think that philosophy relates to science in general.
On this sort of view the "special sciences" depend on our basic beliefs about being, experience, knowledge, etc. If your philosophy is weak, it clearly can't carry your science. So no point caring about science until you've solved at least the most important metaphysical problems!
Naturalism with regard to philosophy turns things around. This picture makes three points:
(Also: philosophy is a black hole.) On the naturalistic view, it's expected that some scientists engage with philosophical questions (such as what sorts of things exist within their theoretical domain), and philosophers can't afford to ignore the practices and results of empirical science. Still, many of the questions that appear at the philosophical level have come up repeatedly from antiquity and throughout our intellectual history. Philosophers study that stuff—scientists generally don't, at least not as part of their education. The naturalistic view also suggests a way in which different sciences relate differently to philosophy.
This is not a scale of abstractness; it has to do with the goals of a science, from a focus on utility to a focus on pure understanding of the world. The question, say, of whether our best theories truthfully describe the world is irrelevant to a clinical researcher trying to find an effective treatment, but very relevant to a string theorist who doesn't imagine much in the way of practical applications for string theory. Thus, philosophical questions tend to force themselves on scientists to different degrees depending on their goals, and philosophy may look differently useful(!) by the same measure. The applied sciences do touch many contentious issues in philosophy, most obviously the nature of causation and causal knowledge. (Also ethics.) But, apart from some contributions on method (more below), I think the influence here runs mainly from science to philosophy.

Academic training forms our expectations of which questions are worthwhile. I think different scientific disciplines are differently "primed" for philosophical questions. But this is a matter of personal disposition too, of course. When someone asks about the relevance of philosophy to science, the expectation often seems to be of examples of philosophers making an appearance in day-to-day science. This happens—here are some examples. A lot of collaboration goes on between philosophers and scientists in interdisciplinary projects. An example from my own department is a project about "knowledge resistance," involving e.g. philosophers, psychologists, and social scientists: Knowledge Resistance. One of the preeminent occupations of philosophers is the clarification of concepts. (That's semantics!)
It has been argued that such work has led directly to new ways of formulating scientific questions and designing experiments: Why science needs philosophy. The work on causal inference by Spirtes, Glymour, and Scheines is seminal in that area, and an example of philosophers contributing to the methods actually employed by scientists (rather than to a more abstract understanding of those methods): Causation, Prediction, and Search. Philosophers do make an appearance in day-to-day science, and that's a good sign, I think. But, ultimately, the legitimacy of philosophy doesn't hinge on this. Philosophy is mostly basic research, defensible on the same grounds as other basic research. Its perceived irrelevance to some other context doesn't imply its irrelevance tout court. Philosophy is also utterly unavoidable, as long as we keep trying to understand our world at all.

I've said little about what philosophers do or what they are good at. There are more philosophers, doing more things, than ever. For a sense of what philosophers do, I suggest a visit to PhilPapers, where articles can be browsed by topic. I'm not sure if this is useful to anyone. As for cross-disciplinary communication, I think that I need to understand something of the ordinary work and reasoning of scientists—I'm okay with that being a one-way street if you are. But there may be questions at some point, about the interpretation of theories, or the conditions for knowledge, or the theory-ladenness of observation, or some such—and when that happens, we are here for you.

On the occasion of Francis Bacon's 458th birthday
2019-01-22

This post was originally a thread on Twitter. Today is Francis Bacon's 458th birthday. Let's celebrate with a selection of neat quotes from the great inductivist and proto-empiricist.
These are mostly about the scientific method, what Bacon thought was wrong with the natural philosophy of his time, and his understanding of proper reasoning from evidence.

“From a few examples and particulars […] they flew at once to the most general conclusions […]. But this was not the natural history and experience that was wanted; far from it. And besides, that flying off to the highest generalities ruined all.” (NOB1, CXXV)

“Now my method, though hard to practise, is easy to explain; and it is this. I propose to establish progressive stages to certainty.” (NO, Preface)

“[T]hen […] only, may we hope well of the sciences, when in a just scale of ascent, and by successive steps not interrupted or broken, we rise from particulars to lesser axioms; and then to middle axioms, one above the other; and last of all to the most general.” (NOB1, CIII)
“Nothing duly investigated, nothing verified, nothing counted, weighed, or measured, is to be found in natural history: and what in observation is loose and vague, is in information deceptive and treacherous.” (NOB1, XCVIII)

“[A]ll the truer kind of interpretation of nature is effected by instances and experiments fit and apposite; wherein the sense decides touching the experiment only, and the experiment touching the point in nature and the thing itself.” (NOB1, L)

“And this induction must be used not only to discover axioms, but also in the formation of notions.” (NOB1, CV)

“[This Instauration is] by no means forgetful of the conditions of mortality and humanity, (for it does not suppose that the work can be altogether completed within one generation, but provides for its being taken up by another) […]” (The Great Instauration, Preface)

Bonus quote: “I remember that this was one of the reasons for which you told me one day you regretted the death of Baron Verulam [Bacon], to see him so careful and so liberal in particular experiments.” (Letter from Christiaan Huygens to Descartes, 1642)