Hannah Fry: The Mathematician Who Knows Uncertainty Is Unavoidable

Eric J. Topol, MD; Abraham Verghese, MD; Hannah Fry, MSc, PhD


January 18, 2023

This transcript has been edited for clarity.

Eric J. Topol, MD: Hello. This is Eric Topol with my colleague and co-host, Abraham Verghese, for Medicine and the Machine, our Medscape podcast. This is a big one for me, because I've been following professor Hannah Fry for years now, and to get the chance to actually talk with her, oh my goodness!

Professor Fry is one of the world's leading mathematicians — she's pretty young to be one of the world's leading mathematicians. She's a professor at University College London and has a fellowship at the Royal Academy of Engineering. I first met her through one of her books, Hello World: How to Be Human in the Age of the Machine. I can't think of a more perfect person to have on Medicine and the Machine than you, Hannah. Welcome.

Hannah Fry, MSc, PhD: Thank you. I can assure you, Eric, the pleasure is entirely mine. I have likewise been following your work for a very, very long time.

Topol: What you've accomplished at such a young age is extraordinary. You make mathematics fun and you're an amazing communicator. You have a series on the BBC, podcasts, print articles, and whatnot. How did you break out from being a mathematician to the kind of multidimensional person you are?

Fry: A series of events conspired in my favor. It was all entirely accidental. After I finished my PhD, the very first thing I did outside of academia was a little TEDx Talk. I had an idea and ran with it, The Maths of Love. It was about the tools and techniques I used when I was single to try to optimize my own dating strategies. It was very tongue-in-cheek, very silly and playful. Every year, they take a handful of TEDx Talks and promote them to being proper TED Talks. Mine ended up being one of those.

All of a sudden, from nowhere, this stupid talk with rubbish jokes (I got in a lot of trouble for it) suddenly became one of the most watched TED Talks of that year. As a result, I got a phone call from the BBC, and they said, "Would you like to have a radio show?" No one says no to something like that.

I've always had the attitude that I want to look back on my life in the future and feel as though I always chose the path of least regret. So whenever I'm presented with an opportunity, even if I'm terrified of it, which I often am, even if it doesn't feel comfortable or like something I can achieve, I always know that I would rather have tried and failed than never to have tried at all. So that essentially is the attitude I've adopted and hence haven't slept for 7 years.

Abraham Verghese, MD: That's a great philosophy. Such a pleasure to meet you. I have a confession to make before we start, which is that both my parents are physicists. I have an older brother who's a professor at MIT and a younger one who's a computer scientist at Google. But I had no head for math, or at least that's what I told myself all these years.

Then, in the last couple of days of researching you and your papers and your documentaries and so on, I had this epiphany that maybe I never was shown how to be excited by math. What do you think about the teaching of mathematics? Could someone like me have been saved from this unfortunate label of having no head for math (which brought me to medicine, by the way)?

Fry: I think so. Here's what I've noticed: Whenever you talk to adults about how they feel about mathematics, they're always very strongly one way or the other. They're either, "Oh I love that subject," or, "I wish I could have done it, I really enjoy it," or they say, "I hated math; it was not for me."

You never come across somebody who's ambivalent about the subject, right? You never meet an adult who says, "I can take it or leave it." I believe our experiences end up shaping how we feel about math. But it's kind of an unstable equilibrium.

One day, you are a bit sleepy in math class and you fall a bit behind, and then the next day, you think, I didn't really understand what was going on. Then someone says to you, "Maths is boring," or "Maths is hard," or "No one likes maths." And it adds to this idea in your mind that it's not for you.

It's a self-fulfilling prophecy because the more you don't pay attention in class, the harder you're going to find it, and the more you're going to think that you don't belong. You spiral further down with the feeling that you aren't a maths person.

It just so happened for me that I had the same experience but completely in reverse. My mom has a very interesting idea of what constitutes fun. When I was about 10 or 11, before I was allowed to go out and play during my summer holidays, she made me do a page of a math textbook. Every single day. It was not fun; don't imagine me as a child skipping along to do my math homework.

When I went back to school, I suddenly had this ability. I was one step ahead of everyone. And when you feel like you're doing something well, it feeds into you wanting to do it more and more. Then you start adopting the idea that you're good at maths as part of your personality. I don't think there's anything special about my brain or about the way I work. I just had that little push in the right direction at the point in time where it made the biggest difference. That's the tidal wave of success I've been riding ever since.

Topol: Whether it's the mathematics of love or the joy of data, you're having a big influence on people who would like to follow in your footsteps and love math and see how it intersects with our lives on a routine basis. You did a BBC documentary on artificial intelligence (AI) in medicine. You got into Babylon Health [a "health services provider that combines an artificial intelligence powered platform with virtual clinical operations for patients"] and the lack of data. Do you remember much about that and what your sense has been — it may not have changed that much — about the use of AI to improve medical diagnosis and care?

Fry: I have so many opinions on this. Certainly, I retain my very raised eyebrow about some of the things that Babylon Health is doing. In part, it's about the way the company is run, about where they run their beta testing, and about how they make their money. The way it works in the United Kingdom is that you have these general practitioners across the country, and they get paid per person who is signed up for them. The thing is, you can't join Babylon if you have a chronic condition, are pregnant, or are over 65 years old, I believe.

So essentially, you can only join Babylon if you are a young, healthy person who doesn't need much medical care except in an accident or an emergency. That's fine, except that those are the patients who, when signed up for practitioners across the country, effectively end up paying for those people who do have chronic conditions, are pregnant, or are older. So Babylon was like a sponge that would take in all of the "profitable" patients without actually dealing with any of the expensive healthcare. I had problems with the way it was run and the way it wasn't being regulated.

With AI medicine, the thing I've been thinking about a lot recently is that quite often what happens is that the technology is really good and impressive and is quite legitimately and understandably something people get excited about. But in getting excited about the technology, we lose sight of the actual question we're trying to answer.

Danny Kahneman, the Nobel Prize-winning psychologist, has this great observation. He says that people have a habit of taking a difficult question and swapping it for an easy one without noticing that they've made the substitution. Let's take something like medical imaging — finding tissue abnormalities in mammograms. It's true that we have these exceptional algorithms that are very good, as good as people, at finding these abnormalities within the images. That's fine.

But the hard question you want to answer when it comes to breast cancer care is whose life can be saved with treatment, right? That's the question you ultimately want to answer. Who should receive treatment to save their lives? I believe that question, in the minds of people who are excited about the technology, is swapped for this much easier question, which is: Which images have pixels that indicate abnormalities? Those are fundamentally different questions.

When we mix them up and think that one is a suitable proxy for the other, that is where a lot of problems arise. I believe that is where the reach of AI goes too far, and when we fail to realize that these things have to be about a partnership between human and machine rather than human vs machine.

Verghese: You have been quite candid about your own experience with medicine and your own illness. The late Alvan Feinstein, of Yale, wrote a famous book called Clinical Judgment that Eric and I learned from in our residency years. He was one of the first physicians to apply mathematics to clinical reasoning in medicine. He had a Venn diagram on the cover of the book.

He's thought of as the father of evidence-based medicine, but I think he would be shocked by the direction it's taken, where the focus is entirely on numbers and randomized studies. There was a quote from him that I thought of in the context of reading about your own experience. Feinstein says, "The clinician combines treatment for the patient, as a personal case of disease, with the concern for the patient, as a personal instance of mankind, into the unified mixture that is clinical care."

He cared about extracting data from the individual in front of you. But it became perverted into abstractions about numbers that you can easily generate. And your illness, for example, and your particular requirements and desires, your age and your children, these made you a unique individual. That's exactly what Feinstein meant by evidence-based medicine.

Fry: I totally agree with you. I read a paper today in Nature about colonoscopies and this idea that of course if you look at colonoscopies, they are the best way of assessing the need for potential care. Of course they are, right? Compared with stool samples, they're much, much better.

But when people actually looked at the data for what happens when you offer people colonoscopy or stool samples, actually they end up having worse results because what you don't realize is that people are far more likely to send off a stool sample than they are to undergo a colonoscopy. This goes back to what I said earlier about the question you're asking and making sure that you're asking the right question. Because the question is not: Which is the most effective way to determine whether somebody has abnormalities that are worthy of investigation? The question is: How do you take into account the way people are — the people as people, the people as patients — and integrate that into healthcare decisions to make sure that you're saving as many lives as possible?

When I was 36, I was diagnosed with cervical cancer. And it had gotten into the lymphatic system and we just weren't quite sure whether it was established in the nodes or not. We knew it was in the lymphatic system but not whether the nodes would come back positive. I ended up having radical surgery. They took out all my pelvic lymph nodes as a precautionary measure, and I have lymphedema now as a result of the operation. I then went on to make a documentary about this for the BBC. Of all the programs I've ever made, it's the one I'm most proud of.

This documentary was totally and completely about the point that you're making, which is that people are not numbers, and something you might consider a risk worth taking might not be a risk that I consider worth taking. And if we are not taking the time and the care to sit down with people and tailor their treatment for them as an individual rather than just doing what the data and the numbers and the population view tells us to do, and we're not having those conversations with people, giving them a choice about what's right for them, then I don't believe we can claim that we are giving people truly informed consent.

That, ultimately, was my story and the story of other people I spoke to for the documentary. It's not to say that I would have chosen anything differently. If I relived it, maybe I would have done exactly the same thing. But it was a clash between what was right for the population view, which is what the algorithm said I should have done for me, vs the thing that, as you mentioned, took into account who I was, what was important to me, and how I personally viewed my own risks.

Topol: This is one of the most extraordinary documentaries I've seen about medicine in the first person. You were so candid about what you were confronting, as you mentioned, the lymphedema complication that you suffered. It is extraordinary, and it exemplifies what we're trying to get across about medicine, mathematics, and machine — all these things we're talking about.

You wrote a piece for The New Yorker, "What Statistics Can and Can't Tell Us About Ourselves." That was before you wrote about yourself. But in it, you wrote, "Is human behavior predictable? A mathematical analysis of what it is to be human can take us only so far, and in a world of uncertainty, statistics will never eradicate doubt, but one thing is for sure, it's a very good place to start." That was another impactful piece. Can you tell us more about your thoughts there and maybe integrate it with Hello World?

Fry: The deadline for that piece, which included the bit that you just read out, was the day that I was diagnosed with cancer. I went to the hospital and I got the news, and then I came home and I knew I had to get it in before midnight. I was sitting at the computer thinking, Okay, I'm scheduling crying for later. I'm just going to write this thing. So that just gives you a little insight.

In my head, all of these stories are connected because I think as humans, we don't like the idea of uncertainty. I remember having a great conversation with another mathematician, Vicky Neale, about what 10% means. Of course, I can write you a formal definition of what 10% means, but what does 10% mean when someone says to you, "There's a 10% chance of this happening," especially if it's about you as an individual?

I believe that in our heads, we can't really handle that, so we round it up or down to 100% or 0%. I believe none of us are very good at thinking, Okay, this has a 10% chance. I think we make the same mistake when it comes to the output from algorithms and data. If an algorithm or a statistical analysis says that a particular trend line fits the data, I believe that in our heads, we remove all of the uncertainty, remove all the noise, remove all the messiness, and we think, Oh, well then, it's a yes or a no, right?

I believe that is such a big mistake. It's not an argument, in my head, in favor of more statistics. It's an argument in favor of less, because once you truly acknowledge that we are not very good at this stuff, you have to train yourself to realize that there is irreducible randomness.

Uncertainty is unavoidable. And whatever the numbers say, in pretty much whatever situation you're in, it's only going to take you so far toward the answer. You have to be willing and prepared to have the intellectual humility to step away from what the statistics are telling you and have much more of a human feel around it to go along with it.

There are so many stories and wonderful examples that illustrate this point. I sometimes play this sort of game with audiences, where I tell them that they're the team principal of a racing team, and they have to decide whether they're going to race tomorrow. It's the last race of the season, it's really exciting, except that the engine in their car keeps blowing up. In seven of the last 24 races, the engine in the car has exploded. And they have to decide what they want to do.

I show them some data. I tell them that one of their engineers has a hunch that maybe the track temperature has something to do with it, and I show them some data of the times when the engines have blown up and tell them that tomorrow's going to be really cold. And I take the vote and get them to decide whether they want to race or not. Always, without fail, the audience decides that it wants to race. It's typically about a 75/25 split, but collectively they always choose to race.

At that point, I reveal that the data they're looking at is real, but it has nothing to do with race cars. I've actually shown them the real data from the Challenger disaster. On the night before Challenger launched and, as we all know, tragically exploded soon after take-off, there was an emergency teleconference between the NASA engineers and management and some other contractors of NASA, and they were looking at precisely the data that I have just shown the audience.

This is not a setup that I made up. It's one that's been used for decades to train people about revealing their own biases. But the thing is, on the night of the teleconference, as with all of the simulations that have been run since, and all the times I've done this with audiences, nobody asks about the data they can't see as well as the data they can.

There are two interpretations of that. The first is that we just need more data: whatever data you've got, you just need more, and if you have the right data, then you can always have the answer. But once you accept that uncertainty is unavoidable, once you accept that data will never give you a perfect view of reality, a second interpretation becomes more important. I don't think this is an argument for more data. I think it's an argument that data cannot form the entirety of your decision.

On that teleconference, the engineers were nervous, they didn't want to go ahead, they believed there was something that had to do with the cold temperature. But because they didn't have the data to back up what they were saying, the managers refused to pull the launch. I believe this is a story about how you should be aiming always for a data-informed view of things and not a data-driven one. I think data always have to be a piece of the puzzle, but you have to have intellectual humility about how big that piece will ever be. And you have to leave room for the human element too.

Verghese: This leads me to an interesting experiment you did with presenting data to unvaccinated subjects. Talk about that experiment. It must have been an eye opener and also quite frustrating, I would imagine. We spend a lot of time on this show talking about how we have science, and then we have the public's reaction to science. And they're two quite different things.

Fry: They are two different things. This was a program for the BBC. Do you guys have Big Brother in the States? With Big Brother, you lock people in a house and see what happens. This BBC program was essentially Big Brother, but it involved seven unvaccinated individuals and me for a week, and I had to try to persuade them. Why did I want to do it? I've noticed that with a lot of scientists, there can be a bit of snobbery about people who don't immediately understand things the way scientists do — an arrogance, actually. I'm making an argument in favor of intellectual humility. I think scientists are just as capable as anyone of making massive mistakes and misunderstanding things. The reproducibility crisis is one example.

During COVID, I noticed this hard division between the people who were "right" and the people who were "wrong." I care about society being better and about trying to make it so that science has the biggest possible impact and the best possible value for everybody. That means that you can't just decide to ignore a section of society that is not going with you.

You can't just look down on people and call them idiots.

If you want to ensure that more people are vaccinated, you also can't not listen, right? You have to do the work of listening to people but listening in a way where you're not trying to think of the next thing that you're going to say to them. You have to listen, to really hear what it is they're saying.

In spending this time with these people — many with whom I would happily go to the pub for a drink; I got on well with almost all of them — I found that all of them had valuable contributions to make and interesting things to say. I learned from every single one of them. When we started, it felt like we were poles apart. I am fully vaccinated, my children are vaccinated, I very strongly believe in the vaccination program; I struggle to see how rationally you could not be on that side of the argument. But by the end of it, I realized that we actually agreed on almost everything.

We agreed that the pandemic had been terrible. We agreed that we didn't want people to die. We agreed that we didn't want people to be harmed unnecessarily by medication that they didn't necessarily need to take. We agreed on informed consent. We agreed that lockdowns were horrendous. On all of these things, we agreed.

Ultimately, the only thing we disagreed on was how you count whether an illness occurs after the vaccine or because of the vaccine. We even agreed on the number of people who had illnesses. But it was just whether the vaccine was causal. That was literally it. On every other thing, we agreed.

But it was through other things they were telling me that I realized the arrogance of scientists. I don't know whether this is how it worked in the States, but here, certainly, I think I saw the arrogance of people assuming that everybody would be on board with this vaccine program. And I think we'd made some big mistakes.

For example, there was a woman there who was easily my favorite. She was absolutely brilliant. She's a young pregnant Black woman from Lambeth in London. For starters, she points out that there had been lots of campaigns directly targeting young Black men specifically in Lambeth. And she was like, "Look, what has the government ever done for young Black men in Lambeth? And all of a sudden, you want them to do something for you? Come on, right?"

She pointed out that at these vaccine centers, there were perspex screens between every booth. There were people wearing plastic aprons and masks and white coats. And there was a queue and a person going around with a clipboard. She says that when she went into that environment, it felt like she had just walked into a prison. She was like, "It's very triggering to be in that environment. For something I'm already quite scared about, why would I voluntarily enter into that environment if I don't have to?" It had never occurred to me to view that setting in that kind of hostile way because it didn't feel hostile to me.

Look, I don't know whether I should have done that program. I don't know whether I added anything good. I don't know whether, in the end, I made things worse. But I do believe that scientists, in general, should spend a bit more time listening and a bit less time looking down on people who don't see the world in the same way as they do.

Topol: That gets me to a different effort you took on, which was a series of podcasts with DeepMind. Recently, I had the chance to interview Demis Hassabis. I think of him as a da Vinci of our time. He's brilliant.

AI for life science is a different animal from what we've been talking about in medicine — take AlphaFold, which predicts protein structures from amino acid sequences. What are your thoughts? They're located right there in London, and they're certainly at the leading edge of AI, not so much in medicine as in other areas. What do you think about their efforts?

Fry: I drank the Kool-Aid. I am, as you are, blown away by how impressive Demis is. But not just about how smart he is; he's incredibly smart. He's got good ideas. He's got an incredible team of extremely intelligent scientists around him. And that's very impressive.

But the thing I really am impressed by is that they are doing all of this with that idea of intellectual humility. They are deliberately building things into their models, as much as possible, so their models are wearing their uncertainty with pride. I believe they have psychologists on site who think very carefully about how to set up their system so that it doesn't end up falling into the classic traps of human bias.

I think they know their systems are going to have problems and they are committed to continually hunting for what the problems might be rather than quickly releasing something in beta mode and then worrying about the wider implications of it later.

I'm not saying that they're never going to make a mistake, but I believe they are genuinely committed to thinking about things ethically as much as they can. That is impressive. Just the fact that they publish everything, the fact that they're committed to peer review, illustrates the mindset they have. In a world where people probably wouldn't notice or complain if they didn't publish everything, the fact that they are committed to doing that demonstrates something impressive about Demis's vision.

Verghese: What are you working on looking toward the future? Do you typically react to the world around you? It seems that a lot of your work is this exciting reaction to something that's just happened, whether it be dating or marriage or vaccine deniers. What's on your horizon? Or does it depend on what crosses your radar next?

Fry: The American mathematician Steven Strogatz likes to say that he's intellectually promiscuous. I really like that phrase, so I'm stealing it. I am too.

In part, the types of things that catch my attention are often reactionary because I live in this world, and I'm part of it and want it to be better. But it's interesting that you ask me this question because I have just finished filming a series for Bloomberg that's coming out in February. It's called The Future.

It is about the impact of technology on society, but trying to think about it in a forward-looking way rather than a backward-looking way. So, it's looking at things in the short, medium, and longer term, and how they will shape the world.

It's not a happy, clappy, enthusiastic jolly jaunt off into the future. This is supposed to be a bit like those New Yorker essays you mentioned earlier, Eric. They are these 24-minute-long video essays that are challenging and don't necessarily have all the answers but are at least asking the right questions of the technology that is being developed so we don't end up having to be reactive and think of these questions retrospectively.

Topol: But that's just for February. We anticipate much more from you in the decades ahead. This has been a real treat to have a chance to hear, firsthand, your amazing communication skills. It's extraordinary how you explain things and make it fun and enthralling.

We will continue to follow you and learn from you. Hopefully, you've augmented your US audience here, warming them up for the Bloomberg series. We have to get those BBC productions to be shown in the United States. They should understand that the work you're doing is not just for BBC. It's important for everyone. What a talent. Hannah, thank you for everything you're doing. We'll hope to come back to you in the times ahead to get the next version, the next edition.

Fry: Right back at you, Eric. Thank you for everything you're doing as well. What an absolute treat to join you both on this. Thank you so much.
