This week, Dr Darrin Durant gave a talk on "The Symmetry Train" as part of the HPS seminar series.
In addition to the talk, we look back on an interview with Dr Darrin Durant by the HPS podcast's own Samara Greenwood. First published in April of 2022, it was an early example of Samara's interview prowess which she is now applying to the HPS podcast. The original post can be found on the SHAPS Forum Research Blog here.
Dr Darrin Durant is Senior Lecturer in Science and Technology Studies (STS) in the History & Philosophy of Science program. Darrin has published widely on the relation between experts and citizens in democratic decision-making, disinformation and democracy, climate and energy politics, as well as nuclear waste disposal. In this interview, Darrin kindly sat down with PhD candidate Samara Greenwood to discuss his work.
What drew you into Science and Technology Studies?
It started somewhat accidentally. I actually started off wanting to be a journalist. When I was 13 years old I did an internship at the local paper. After the internship the paper kept me on, first as a sports reporter, then for human-interest stories. So, after high school, I enrolled at the University of Wollongong in an Information Technology and Communication degree. This was a relatively new way of doing journalism studies at the time. Two of the new components were Information Technology Studies and Science and Technology Studies.
By the end of first year, I’d become frustrated with some aspects of the journalism course and so swapped into the STS degree. Even though I gave up journalism, the desire to tell stories stayed with me – that has been a constant through all my career. I value the way storytelling can draw people into hidden lifeworlds and I was deeply interested in telling stories about science and technology in particular.
For those who haven’t come across the field before, could you briefly describe the focus of Science and Technology Studies?
I see Science and Technology Studies as having two interlinked focal points.
One is the contingency of knowledge. Often, we have this expectation that when there is scientific disagreement, something must have gone wrong. We often assume nature has the capability of speaking to us very directly – that if something is true in nature, then that truth will be crystal clear and obvious to us. But that’s just not how science works.
Science and Technology Studies recognises that nature does not speak to us clearly, but rather gives us conflicting signals. That is why it typically takes large, team-based research to sort through all those signals. Of course, even after you’ve sorted through those signals, there’s often still a lot of uncertainty, ambiguity, and areas of ignorance. Because of all that, we end up with disagreements.
What STS does is to investigate all that contingency and ask: How can we understand this contingency? How should we respond to it? And what does it mean for our understanding of both how science works as a practice, and how science feeds into policymaking and public deliberation, when we pursue an idea of science that is very different from traditional or popular understandings of science?
The second focal point of STS is studying the role of the STEM fields (Science, Technology, Engineering, Medicine and Mathematics) in society. For example, what role do STEM fields play in public policy and government discussions? Alternatively, this might involve looking at the role played by STEM fields at a community level; say, ordinary people sitting in a bar weighing up what is the less risky thing to do. Should I wear a mask or not? How likely is it that the floods are going to take my house? In STS, we investigate what role our own political values and judgements, in particular judgements about uncertainty and authority, play in the relation between science and society.
Quoting from a recent book you co-authored, Experts and the Will of the People, you describe a common conception that ‘contemporary STS erodes the cultural importance of scientific expertise and unwittingly supports the rise of populism’. Could you tell us more about this issue and your own position?
I believe this conception arises when false assumptions get tacked on to the idea that scientific knowledge is contingent. In this erroneous way of thinking, because STS shows that scientific knowledge is not absolute, there can be no such thing as a reliable or trustworthy knowledge claim. In other words, knowledge must be absolute to be true and, if not, the contingency of knowledge indicates some kind of problem: lack of validity, incompleteness, bias and so on. However, this conception of knowledge as ‘absolute or nothing’ is a false one.
Reliable knowledge is not somehow ‘downloaded’ from nature; rather, it’s formed through human communities of practice. As a scientific community weighs up the evidence, data and internal differences in a particular domain, they take social bets on what seems to be the best idea right now – which theories best fit the data, which auxiliary assumptions should be changed or retained, and so on.
From that point, they then start refining and testing their own sense of what’s reliable. That constant collective interrogation by the community of practice is what produces reliable knowledge claims. Reliability is something that is generated over time by changing, imperfect humans through collective practice.
In Experts and the Will of the People, we argue that the idea that STS undermines the reliability of science comes from a false conception of nature, a false conception of communities of practice and a false conception of authority claims themselves.
In fact, we argue, humans are able to produce contextually relevant, reliable knowledge claims but that these are continually interrogated and often open to revision.
The COVID-19 pandemic seems to be a very timely case study for many of the issues of interest to STS. Are there any elements of the social response to the pandemic that you have found particularly intriguing and would consider worthy of further research?
One of the issues that has interested me is the non-precautionary stance we’ve seen come out, especially since the Omicron wave. A non-precautionary stance is one that argues that ‘I need to have absolute certainty before I prevent or regulate an activity’. What is often erroneously called a ‘sound science’ approach argues that if there is a lack of absolute certainty, then there are no grounds to regulate against something.
We’ve seen this ‘sound science’ mentality come out with regard to the issue of masks. Is there absolute certainty that masks help to prevent the spread of COVID-19? If not, the argument goes, then there are no grounds to compel someone to put a mask on. Was there absolute certainty that setting a five-kilometre limit would decrease transmission? If not, then there were no grounds for setting a five-kilometre limit. In other words, unless there is absolute certainty, no restrictions should ever be imposed.
What I find interesting about this conversation about COVID-19 is the massive disconnect from the history of the non-precautionary strategy in industry. For example, one famous argument went, if we don’t have absolute certainty that lead in paint is harmful to children in a playground, then this lead shouldn’t be removed. We know what happened with that claim! We no longer put lead in paint.
Regulation of the tobacco industry is another example. The argument used to be made that since there was no absolute certainty that cigarette smoking would kill you, smoking shouldn't be regulated, and the same went for secondhand smoke.
The same argument also gets put forward when it comes to the connection between human industry and global warming. Here we see it used in the interests of primary industries that want to preserve profit and keep imposing a risky activity on people because money can be made from it.
In all these cases, the non-precautionary strategy has been totally discredited but, somehow, there is a massive disconnect between our understanding of this and our thinking about COVID-19 regulations. The arguments employed against COVID-19 regulations bear quite striking similarities to the non-precautionary, 'Merchants of Doubt' strategy – if there's any doubt, don't regulate. We should interrogate the underlying value judgement here, namely, that risking greater transmission was fine, because mitigating against transmission might impact economic activity.
For COVID-19, a claim that asks, ‘Is it certain that mask-wearing will save you? If not, then don’t do anything’ is very similar to claims made by the tobacco people and big oil. Somehow, we’ve let this non-precautionary stance infect our public health discussions. I find that quite interesting and I don’t think it has been well-identified.
Finally, what are you working on now?
I’m working on several overlapping projects, one of which is on disinformation.
The disinformation question is interesting because of the distinction between misinformation and disinformation. With misinformation you just happen to be wrong but you may have been innocently wrong. With disinformation, we are dealing with strategic deception through the intentional spreading of wrong information. What I find interesting is that disinformation and misinformation are often lumped together as the same thing. I believe we should be much more careful to separate them out.
It’s important to recognise that people can be wrong legitimately, in good faith. The harder we come down on them, the more we assume that social debate is all about whether or not one has access to the right information. Therefore, the more stringent we are, the more someone is going to feel their view is being over-regulated. In that case, they will come to feel politically excluded, and that causes even larger problems.
Another important distinction is between impact and influence. For instance, let’s say you stream a piece of misinformation over Twitter and it receives a lot of hits. That is impact, but what influence did it have? Did it change anything in real terms?
Influence can involve a direct movement in social norms, where information provokes a shift in what is considered normal, perhaps by delegitimating what is currently considered normal. We can study impact versus influence by looking at the way information is flowing through our social media channels. I think there is a lot of important work to be done there.
My other projects are in the energy politics domain. I’m writing about why nuclear power is a bad option for Australia. There are multiple reasons why this is the case. First, nuclear is too expensive and takes too long to build to address energy concerns. Second, nuclear power is not as climate-friendly as pro-nuclear groups claim when it comes to the goal of increasing the supply of renewables into electrical grids: each reactor requires a ‘standing reserve’ of equivalent power output in case the reactor goes offline, and baseload power sources interact poorly with distributed power sources in non-modernised electricity grids. This project also involves writing about climate scientists and how they navigate contested and politicised public policy discussions and, looking to the future, asking: how will Australia handle the task of disposing of its own nuclear waste?
Dr Darrin Durant’s most recent book (co-authored with Harry Collins, Robert Evans and Martin Weinel) is Experts and the Will of the People: Society, Populism and Science (Palgrave, 2020). He has also written articles for a more general audience on nuclear power, the role of expertise in a democracy and anti-science. In the HPS program, Darrin coordinates the third-year subject Science and Society (HPSC30023) and the post-graduate subjects Trust, Communication and Expertise (HPSC90012), Science, Controversy and Public Policy (HPSC90013) and Environment and Knowledge (HPSC90010).