
Episode 7 Transcript - Rachael Brown

Welcome back to the HPS Podcast where we discuss all things history, philosophy, and social studies of science. I am your host, Samara Greenwood, and today I am onsite at the Australasian Association of Philosophy Conference, which was held this year in Melbourne, at ACU.

Here, I met up with Dr. Rachael Brown, who is a philosopher of biology and director of the Centre for Philosophy of the Sciences at the Australian National University, or ANU, in Canberra. Rachael was kind enough to agree to meet with me after a very full schedule over the last couple of weeks. She's attended over 50 presentations across three different meetings.

As we met at the conference, there is some background noise, but we've done our best to keep it as non-intrusive as possible. Here I should also mention Rachael runs her own wonderful podcast on philosophy and science called The P-Value. Her podcast can be found on most podcasting services and we have also provided a link in the show notes.

Rachael has expertise across many areas within HPS, but today she will be talking with us about values in science and in particular, the downfall of the value-free ideal.

The traditional stance is that science works best when scientists do not bring personal or social values to their work, particularly when it comes to interpreting data or assessing hypotheses. But this value-free ideal is challenged on two key fronts. First, at a practical level, can science ever really be conducted without values? And second, even if we could achieve this, would we want to? In other words, if used appropriately, can values provide a beneficial component to the scientific process?

Samara Greenwood (01:45):

Hi Rachael. Thanks for joining me on the HPS podcast.

Rachael Brown (01:50):

Thanks for having me.

Samara Greenwood (01:52):

First, I wanted to ask, what was your journey to history and philosophy of science?

Rachael Brown (01:57):

Kind of accidental, kind of not. So, I first studied a history and philosophy of science subject as a year 12 student. We could do a university subject and you could pick all sorts, like biology, maths, whatever you liked. One of the options was history and philosophy of science. In my case, there seemed to be two really good reasons in favour of it. One was that you actually did it at the university. I thought, well, that's pretty good - you get to go to university. But the other reason was that in my family, a lot of people had gone to Melbourne Uni and done history and philosophy of science and they'd liked it. They all did science degrees though, so they had all thought this was a nice side thing to do. So, I did the subject - I did a couple of subjects, one on the history of astronomy and one an introductory subject.

I really enjoyed them. So, I thought, well, when I do my degree, I'll do an arts degree so I can do a major in history and philosophy of science, and a science degree as well. I thought I would ultimately be a scientist; I wouldn't do history and philosophy of science after the degree. But it turned out that the questions I was interested in weren't really questions that were being answered in the zoology department - questions about what methods you might use. So, I ended up doing an honours project in history and philosophy of science, about whether a set of studies showed that chimpanzees have a particular type of cognitive process called a theory of mind. I also had a supervisor in zoology and I did all of those requirements as well. Once I'd done that, I started a master's and I kept going. It wasn't really intended. In fact, it's quite funny - I remember my grandmother, who was the first person who told me about history and philosophy of science, once said it was supposed to be just a side project, not your actual end point. But yes, that's how I fell into history and philosophy of science.

Samara Greenwood (03:53):

And what are the topics that you research most prominently now?

Rachael Brown (03:57):

Well, there's a range. I'm a bit of a magpie. I'm a philosopher of biology and there are a few general domains that I'm interested in. One is still my honours project topic: the methodology of comparative psychology. Another topic I'm interested in is concepts in evolutionary biology, particularly concepts around what's called evolvability, or the potential of lineages to evolve. And then the other questions I'm interested in are more philosophy of mind questions to do with what counts as cognition or not. But to be honest, the thing that most drives me in the projects I pick is probably the scientists or people that I can work with. And so that means I tend to be quite varied depending on who I'm talking to and what I'm talking to them about.

Samara Greenwood (04:46):

So, what's the topic that you are going to be discussing with us today?

Rachael Brown (04:52):

I'm going to talk about the value-free ideal, and I would say my parochial view on it is the value-free ideal and its downfall <laughter>

Samara Greenwood (05:01):

So, this is the topic of values in science. Can you tell us first what the value-free ideal is and then what's wrong with it <laughter>?

Rachael Brown (05:09):

So historically, and I think probably people generally, think that good science is value-free, or we might say objective. The thought is that what scientists say is true should be determined solely by the facts - the evidence they have about the way the world is - and that their social background or the social context shouldn't play any role in their decision making. So, one question is: is that the ideal we want, even if we could achieve it? Is such a view, a view from nowhere, even possible? And then another question is, practically, is it ever possible to get rid of values in science? There has been a big debate, probably over the last 30 years - a big shift in philosophy of science - where people have pushed on the idea, firstly, that a value-free science is what we should be aiming for, and secondly, that it's achievable in the first place.

Samara Greenwood (06:07):

Mm. And so there's a distinction here, I believe, between epistemic values and non-epistemic values in the literature. Could you tell us a little bit about that?

Rachael Brown (06:16):

So we can think about different types of values. One type of value comes in when we are deciding between two hypotheses, or whether we are going to accept one hypothesis over another: whether it's simple or not, for example, or whether it is beautiful or not, say in physics. And the thought is that those are a type of value called epistemic values. Simplicity and beauty are supposed to track truth in some way. And so those are things that we can prefer in hypotheses above and beyond the evidence we have for those hypotheses. Whereas there are other, non-epistemic values. One would be: I prefer hypotheses that postulate blue entities. Well, that would be a non-epistemic value. The thought is that's just purely something esoteric about me.

But similarly, we can think about cases in the history of science where people have had a priori expectations about which hypotheses are going to be true or not, and those expectations have coloured how they've done science. And we generally think about those as bad cases. So, for example, in the history of primatology, people had a tendency to view groups of primates as being male dominated and having certain types of social dynamics that reflected how people thought that males and females would relate. And it turns out that that's an a priori expectation which isn't borne out in the evidence. But it wasn't until people who weren't men, like Jane Goodall, went and did studies that it became apparent that what scientists were saying was value-laden, and value-laden by these kinds of non-epistemic values.

Samara Greenwood (07:52):

Great. And then, are there ways that values can work in positive ways in science?

Rachael Brown (07:59):

Traditionally what people have said is that it's okay to have the epistemic values and not the non-epistemic ones when it comes to theory and hypothesis acceptance, basically. But it might be that what I choose to study can be influenced by non-epistemic values. So, for example, whether I choose to study the naked mole rat or the koala - it looks like that's not a choice settled by epistemic values. That's something about me as a person and my society. The thought is that's okay, or at least some philosophers say that's okay; most would say that's fine. The real issue is when your values start impinging upon what you decide is true or false. And so, about 20 years ago, a philosopher of science called Heather Douglas revived this idea of inductive risk. And the thought is that when we are deciding whether or not to accept a hypothesis, almost always our evidence underdetermines whether the hypothesis is true, right?

We have to take the evidence and say, how likely is it that we have this evidence but the hypothesis turns out to be false? And so that's why we have hypothesis testing and statistics, et cetera. And we have to set a boundary as to where we think we've got enough evidence. And what Heather Douglas pointed out was that that boundary - typically, for example, you might say you have to have a P value of less than 0.05 - is, as she put it, 'arbitrary and set by our values'. And they're not epistemic values, they're non-epistemic values, and they relate to the risk of error. So how likely is it that I'm going to accept a hypothesis in error? And she says, well, that should differ depending on what the outcomes of accepting the hypothesis in error are likely to be.

The case she points to is the use of dioxin, a pollutant. She says, look, there's all this evidence from animals about whether or not it can harm you. And there's this question about, well, how good should our evidence be that it doesn't harm you, given that if we get that wrong we could end up in a really bad situation? And it looks like our evidential bar for the dioxin case should perhaps be higher than it might be for something like 'did the universe start with a big bang?', because that seems to have no social consequences. And so that seems like a case where values not only do influence science, but unavoidably do so. So, this comes back to that very first distinction. It looks like we can't avoid this influence of values, because the world doesn't fix for us what the right standard is going to be, or at least it doesn't fix it entirely. And secondly, it looks like a case where values play an important role in making sure that science plays the role we want it to play in society, which is guiding good decision making and good action.
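[Editor's note: the inductive-risk point above can be made concrete with a small sketch. The Python below is an illustration only; the thresholds and example cases are our assumptions, not figures from the episode. It shows how the same piece of evidence can clear a conventional evidential bar but fail a stricter one chosen because the consequences of error are worse.]

```python
# A minimal sketch of the inductive-risk point: the same evidence (here, a
# fixed p-value) can warrant different verdicts once the significance
# threshold is allowed to reflect the cost of accepting a hypothesis in error.
# The thresholds and cases below are illustrative assumptions only.

def accept_hypothesis(p_value: float, alpha: float) -> bool:
    """Accept the hypothesis only if the evidence clears the chosen bar."""
    return p_value < alpha

# The same evidence in both cases.
p_value = 0.03

# Low-stakes claim (e.g. a cosmological hypothesis): the conventional bar.
low_stakes_alpha = 0.05

# High-stakes claim (e.g. "this pollutant is safe"): a stricter bar, because
# accepting it in error could cause serious harm.
high_stakes_alpha = 0.01

print(accept_hypothesis(p_value, low_stakes_alpha))   # True: the evidence suffices
print(accept_hypothesis(p_value, high_stakes_alpha))  # False: same evidence, higher bar
```

The choice of alpha is exactly the boundary Douglas argues is set by non-epistemic values: nothing in the data alone fixes it.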

Samara Greenwood (10:52):

Hmm. Going back to your earlier example with primatology - it's a subject that I actually study quite a lot. We have this initial period where there is male bias in the way primate behaviour is viewed, and then a lot of scientists who are also engaged in feminism enter primatology and bring some of the learning from there with them. For some commentators, this was seen as an interesting case of bringing political values into science, raising the question of whether that was valid or invalid. How would you see that case?

Rachael Brown (11:27):

That's an interesting case in that I think it shows the importance of different perspectives in science. So, there's a tendency, if you think the value-free ideal is right, to think that a good scientist is one who is detached - extremely emotionally disconnected, right? And the thought is that that then gives us access to this set of mind-independent truths. Now, I think what those cases show is that different types of background can shine a different light on the same set of facts. One worry you might have is: oh, well, it just means then that relativism is true - what we accept just depends on who we are. That's too strong. The way I think of it is like a figure on a stage. In fact, I got this analogy from a colleague, Geoff Brennan, who has sadly passed away, but I think it's a great analogy.

He used it to talk about different approaches to philosophy, but I think it works for science too. You've got a figure on the stage and there are different spotlights on the figure, and one spotlight casts a certain set of shadows and then another spotlight on the same figure casts a different set of shadows. But ultimately, we end up - through a lot of different viewpoints - getting some picture of the figure without the shadow issue. And I think that's how I view this way of thinking about science. There are going to be some perspectives which are more informative than others. One of the big challenges we have is working out when we're in the good case and when we're in the bad case, but the mere fact that someone's values or perspective is influencing how they're viewing a case shouldn't rule out that they can provide useful information.

And if anything, it seems like what the history of science has shown us, especially in the last 50 to a hundred years when we've had a real diversification of who's doing science, is that diverse views help us to triangulate to the truth, right? Having one set of views can't tell us whether we're in the good case or the bad case. Whereas even just having 10 different people look at your data seems to help you decide when you're in the good case or the bad case. So yes, I think I take a perspectival view on this. I think we can be realists; we can think that there are different views and that they're going to shine a different light on the way the world is, but it doesn't follow that we have to hold all of them to be fully true at the same time. We can think of them all as being approximately true, and differentially so. Then the real challenge is picking out from that what the actual truths are.

Samara Greenwood (13:59):

Mm. Yes, that analogy is fabulous. I haven't heard that one before, with the shadows and the sense that these different perspectives are not only shining a different light, but they're shining light into those shadows behind.

Rachael Brown (14:10):

And casting their own shadows, right? Yes. Which is super important to be aware of. Just because we might socially think the feminist view is perhaps a better view, that doesn't mean it's the view. It seems like we have reasons to think that it might be less biased than other views, but it doesn't follow that there aren't going to be some things that it just overlooks or casts in a way that's not quite accurate.

And so that's the tricky part. I think nicely, at least in the case of science, the world pushes back fortunately. So, if we've made big mistakes, it should tell us, right? We can come up with these hypotheses from different perspectives and we can test them. These different perspectives can help us to find out about the way the world is rather than hinder us or bias us in some way.

Samara Greenwood (14:56):

That's fabulous. I was wondering if you have any more examples of ways that values can enter into scientific practice?

Rachael Brown (15:04):

I think it depends on where you draw the lines around scientific practice. So, if we start with what are we going to study, that's a clear place that values influence science. I suppose if we lived in a world where there was no restriction on what we could do, then perhaps you might think ideally, we would do everything, but we don't live in that world, right? That's a complete fantasy.

So, it seems right that there is some influence of values there. I do think there's a challenge, though, which is that if we go purely instrumental, then it looks like a whole pile of science which has been very helpful to us wouldn't have been funded in the first place. So, there's an interesting question there about how we weigh up different kinds of values and whether or not we think science should be done for purely instrumental reasons.

That's one place where values influence science: what do we choose to do? Then there are questions about how we design the experiment. And again, I think values play a role there. Again, this comes back to perspective. There will be things that, from your perspective, seem more relevant than other things. And this is what happened in the primate case, right? There were parameters that they just didn't register as being significant, so they didn't measure them.

Now, I do think a good scientist tries to step back and lets their perspective influence them only so much. But here, again, it comes back to these two questions. If we could have a view from nowhere, would that be the best way to start the experiments? I don't know that it would be, because it turns out to be like having multiple models, right? It seems like the same kind of thing: you want to have multiple approaches to the same thing, and that's a good thing. And then we also have this issue of, well, you can't get out of it anyway.

The last thing, which I think is also interesting, is what scientists should do with their evidence, right? So, there was a really interesting article, I think it was in Science or Nature - it was a comment by some population geneticists about what ethical responsibility people in their field working on human genetic differences have for presenting their data in a way that it can't be misused by white supremacists. And so, their claim was that we have a moral responsibility: we know that our graphs and things like that are being taken by white supremacists and misused and misrepresented, so we have a moral responsibility to present our data in a way that reduces that likelihood.

And I think that, again, is super interesting. Some people would push back and say, no, that's not the responsibility of the scientists - that's putting too much onto the scientist. It's like blaming the motor mower manufacturer for the person who goes on a motor mower rampage. You produce the data, and if it gets misused, it's not your fault. I think that's a mistake, right? I mean, we think in the motor mower case it's a mistake too, right? If we really thought that people were going to do that, there would be restrictions on what sorts of functions the motor mower could have, et cetera.

So it seems like we can't totally absolve ourselves of responsibility there. We think that in society, when people produce things, they have some responsibility when it comes to misuse. Again, though, there are interesting questions about how far that should go.

In bioethics there's a literature on dual-use dilemmas. A nice case, in Canberra, is where some scientists were investigating a way to stop mouse plagues and were manipulating mousepox, which is a disease that mice get. And they discovered a way you could manipulate smallpox to make it worse. Then there was a question about whether they should publish or not. You might think there's also a question about whether they should have done that science at all.

Again, that raises some of these questions about values because it looks like if we are just going into this purely from an objective view from nowhere standpoint, then none of those questions should be a question for scientists. I have a tendency to think that there's a middle ground there. I don't think we should expect scientists to be ethical experts, but they need to be aware of their role as experts in society and people with a particular type of power.

Samara Greenwood (19:20):

What do you think might be of value for the general public when thinking about values in science? How can this be made relevant for them?

Rachael Brown (19:27):

I think it's quite important when the public is thinking about how to interpret scientific evidence and also how to view scientists who engage in public advocacy. So, there's a tendency to think that if a scientist is getting their hands dirty when it comes to advocacy, that they're overstepping the mark. So, we see this in conservation science and in climate science. In part that's because in the background there's this expectation of the value-free ideal.

We saw this during the pandemic as well. Politicians kept saying, 'we just need to read off the science what to do', when the science doesn't actually tell you what to do. It might tell you facts which are useful for decision making, but even then, as I've just pointed out, what counts as the facts depends a bit on what you set your standard of evidence to be.

And so, then there's this question: what role should scientists be playing there? One possibility is to say, well look, scientists should just be reporting the data and their confidence intervals, and then public policymakers make the choices. The challenge there, of course, is that the best people to interpret what the evidence is telling us are scientists, but also, we know that public policymakers and the public are really quite bad at understanding data and claims about confidence intervals. And so it seems like it starts to push on the scientists to do more than just do the science.

Also, in the context of, say, climate change and conservation biology, very often it's very clear what the implications of a particular finding will be for public policy.

Samara Greenwood (21:07):

<Interjection> At this point, Rachael and I needed to pause the conversation to allow a large crowd of historians to pass by. Once their raucous behaviour died down, we continued.

Rachael Brown (21:17):

So, if we think about these cases like climate change, conservation biology, and in fact I just saw a great talk about this on fire ecology from a woman called Aja Watkins - keep an eye out for her, she's great. In those cases when people are doing their work, very often it's very tightly linked to a particular outcome in public policy. So, I might be doing this particular bit of ecological work in order to assess whether this particular development should go ahead or not. Right? And so, then it becomes very, very clear, what the relationship between my evidence and the outcome is. I think this throws up a whole lot of interesting questions.

One possibility is you say, well look, the risk is too great if the scientist is value-laden in a bad way. So, they should just step back and be purely just reporting the facts. There's another sense in which that's just impossible. So, I think these cases are super interesting. I know Heather Douglas has explored this a lot in her recent book, but I think there is this really interesting question that keeps coming up in public debate about what the role of scientists should be in public policy. A lot of that debate is predicated on this idea that a good scientist is one that keeps their hands clean. And it just seems like that's a real misunderstanding of how science works.

I do think that there's a reverse question, which is: what's the onus on a scientist to be involved, right? So, if you look at the stuff to do with COVID-19, there was a question: if you were a scientist working on a vaccine or on vaccine efficacy, did you have an obligation to become a public spokesperson all of a sudden? That seems like too much of an obligation.

Samara Greenwood (22:58):

Going back to the very start, when you made that distinction between the value-free ideal and this idea of completely objective science and the alternative, where do you fit? What do you think is a useful framework for thinking about this?

Rachael Brown (23:12):

I think it's clear that we can't hold onto the value-free ideal, both in terms of the practicalities of science - the value-free ideal is an illusion - but also in terms of ideal science: a value-free science, or an attempt at value-free science, I don't think would be a good science. Now, that's not to say I think that values should hold sway. There are obvious cases where that's gone wrong, right? There are these bad cases where people have deliberately misled, with cases of research misconduct to do with values. And in some of those cases it's not nefarious in the sense of the person wanting fame or doing it for their own personal ends; it's because they think that a certain type of scientific result will have a certain type of policy outcome that they take to be a good outcome. So, I think that there has to be a line somewhere where you say that use of values went beyond what's acceptable. But I think if we're going to work out what that line is and have a nuanced account of the relationship between science and values, it starts by wholeheartedly accepting that values are here to stay and that they're just a part of science like any other part - one we can think about in a more nuanced way.

Samara Greenwood (24:31):

Fabulous. Thank you so much. That seems like a great spot to finish the interview. So, I just wanted to thank you so much for chatting with me today, Rachael. It's been great.

Rachael Brown (24:40):

Thanks for having me.

Samara Greenwood (24:43):

Thank you so much for joining us on The HPS Podcast. A reminder that we have provided links to relevant content in the show notes, as well as a full transcript of the episode. We also run a popular blog where we not only talk about the podcast, but other general HPS news and items. You can sign up at our pretty slick website. And yes, we are still on Twitter and Facebook and Instagram, and we have recently joined Threads as well. Finally, my co-host, Indigo Keel, and I would like to thank the School of Historical and Philosophical Studies at the University of Melbourne for their ongoing support of the podcast. We look forward to having you back again next time.

