Transkript: Macartan Humphreys: What Good Is Social Science in Times of Crisis? Lessons and Answers from Corona (Moderation: Harald Wilkoszewski)
NOTE: This transcript is generated automatically by wit.ai and, for reasons of time, is NOT corrected. We therefore ask your understanding for any errors.
All right, good morning from the WZB Berlin Social Science Center.
My name is Harald Wilkoszewski, I'm the head of communications at this institute,
and I welcome you to today's issue of our webinar series, Unsolvable or Solvable?
Problems and Perspectives of the Social
Sciences on our Current Crises, and there are many of them.
If you have followed the series, you've seen that we touch a whole range of
current challenges, but also social science disciplines. We've talked about
inequalities over the past weeks, about populism.
Last week, Matthias Kuhn gave a presentation on the risk of nuclear war.
And I'm really happy that today Macartan Humphreys is joining us.
He's the Director of the Research Unit Institutions and Political Inequality
at the WZB. Good morning, Macartan.
And he will speak today, as you see already in the title, on how useful social
science is in times of crisis, or during the pandemic. He has a focus on Corona.
We're very much looking forward to your talk. Today, we have again a classical setup.
Macartan will speak for about 35 to 40 minutes, but afterwards,
there will be time to take questions from you, from the audience.
And if you open your chat window, usually on the lower right corner of your
Zoom window, you can type your questions in.
We will collect them. And after the talk, we will read them out and Macartan will answer.
If you have any technical problems, in the back end, there's Lisa Heineck and
Claudia Vogt from my team. They can help you with that. Everyone is unmuted.
We only see my video now, and then only Macartan's during his talk, to keep things focused.
And I think we are ready to go with this. Macartan, the floor is yours.
Great. Thank you. Thank you very much, Harald. So, I'm talking today about work
that was done during and around the corona crisis by social scientists.
And I took this topic because it seemed a moment where, I mean,
there are obviously huge challenges throughout the world.
And there was a question that I think many of us had, which was,
you know, can social science, you know, help in moments like this,
it's a major challenge, major problem.
Obviously, it's primarily a problem to be addressed by health sciences,
other disciplines are primarily focused, focused on this.
And so social science plays second fiddle, in a way, to this.
But the question was: in what ways can it be useful,
or was it, in fact, useful?
So in the talk today, it's not a global survey at all of social science.
It's a discussion that draws a lot on work that was done in and around people
here and in networks as well as a few,
I think, standard studies that were done more broadly in the disciplines of
political science and economics.
Just to give a sense here: there's work being done by people in the IPI group,
there's work being done together with people at Humboldt and, to some extent, the Freie Universität,
and then other work that was done with international colleagues,
including at Wageningen University and US-based institutions.
I've organized the talk a little bit by thinking about how successful was social
science in predicting patterns of COVID mortality, particularly,
which was a question that some of us took up.
How successful was the tracking of what was going on? The idea there is that
a large part of social science is actually about measurement.
And were the measurement activities of social science at this time useful or effective?
How useful was or could social science have been contributing towards solutions here?
Again, playing a second fiddle role, but still?
And then, did it, or could it, play a role in critiquing the kind of
policy research that was going on?
And so there's a big question, which maybe we can get back to at the end,
which is: was this useful, where could we have done better, should we have been doing this at all?
For many, this was in some ways a diversion from existing agendas,
but by many it was also seen as taking up an important challenge at the
time. So those are the major kinds of questions.
I'd say a background remark is I was actually broadly incredibly impressed with
the rapidity and often great thoughtfulness of many of the responses by social scientists.
There were initiatives, for example, at Oxford where they very quickly put together
a policy data tracker, which was tremendously useful for all kinds of analysis
that was done. There's a series of tracking dashboards put together.
I'm going to talk about some of those.
There's a whole series of polls and coordinated surveys that were then put out in the field.
So the IPA and Yale groups organized a number of those.
There were various initiatives, such as special issues by Perspectives on Politics.
And other journals encouraging research and various funding opportunities that
move fairly quickly to concentrate attention on this major problem.
So a lot of people were worried about staying in lane kind of issues.
My sense is that those were fairly well negotiated. There were some tensions
we saw, and I'll come to some of them, between some public health work and some social science work.
And there were obviously lots of instances, if you follow social media,
where people were speaking outside of their area of expertise,
no question about that. But more narrowly, there was, I think,
good work done within people's areas of expertise.
So for this question of predicting, I'll focus a little bit narrowly on the
question of where will the corona burden lie?
So where should we expect to see mortality rates in greatest hotspots,
vulnerabilities, and so on?
There were lots of epi models that were available and that came out very quickly,
some more successful than others.
And there was a question there, which was raised fairly early,
which is, does social science have added value over these models?
Is this really purely an epidemiological phenomenon that we need to understand?
Or does our knowledge of how societies work, how governments react,
how governments respond, give added value for understanding which areas
are more or less likely to be at risk?
And, you know, I think the background problem here,
and I think this is kind of borne out by what we saw to some extent,
is that social scientists really are particularly good at predicting regular,
repeated events after the fact.
So a lot of what we call, you know, prediction is prediction of models that
are trained on historical data, predicting historical events,
perhaps out of sample with respect to the historical events.
But really true, pure prediction is not something that social scientists are
particularly good at or accustomed to.
And so this would be a real challenge to break outside the routine.
And maybe it's too much to ask.
But certainly I saw it as a challenge to our conceptualisation of
how societies work. Can we add value here?
There was a project that we did very early on inside this group here,
the initial part of which consisted of thinking through the lots and lots
of available theory that would speak to this question of why some places would
fare better than others, working through political systems.
There's obviously lots of pure health and demographic aspects that would account
for a large share of this, including disease histories and age profiles and so on.
But the question was: what does theory tell us about how social structures, political
structures, and so on would interact with these features to affect outcomes?
So we, first of all, reviewed a lot of this literature to try and extract the
predictions you might get from this literature.
A certain amount of the literature was related to other crises, but a certain amount
of it was simply about the logics of which sorts of governments are more likely to be
responsive to different sorts of concerns.
And we then assembled a lot of data to try and figure out which of these predictions
at least found plausible support within the patterns and then could we predict
in the future based on this sort of theory.
But I would say first of all, just looking at the theory, a very frustrating
aspect was that lots of the theory was going in lots and lots of different directions.
In general, it was hard to take a single idea, such as democracy or inequality,
and make a clean prediction that it would have an effect one way
or the other, or ultimately show a correlation in one direction or another.
So, to give a quick sense of the patterns we saw there in this predicting side
of things, we looked first at a set of features of state strength,
which is a central variable in political science.
First of all, as many people know, there's an astonishing kind
of unconditional relationship going in exactly the wrong direction, where,
in a way, the best predictor of how poorly
you will do is how high you rank on the pandemic preparedness index,
which is kind of an extraordinary thing.
But with a slightly more sophisticated model that takes account of broad
epidemiological and demographic features of society, that seems to be
something of an artifact.
Once you take account of those things, government effectiveness seemed to matter
early on, in terms of dealing with the first sort of shocks up to June 2020,
but afterwards its importance faded.
So here we see that, over time, the relationship between various measures and corona mortality
really weakened, really sort of disappeared.
But what stood out here was institutional trust. Institutional trust appeared to
matter, and a history of exposure to diseases mattered,
but broad effectiveness seemed to be much less important, or at least the correlation
is much weaker than we were expecting.
The various political arguments, some of which were very sophisticated,
arguments of exactly which types of political systems are more or less likely
to lead to better responses by governments, really didn't have any predictive
value at all, really at any stage of this.
And so that was a fairly frustrating feature of this. Now, of course,
none of this is based on random trials or well-identified patterns.
But the question is, do these theories give us traction on what we would expect to happen?
And the answer there, broadly, was: really, they don't seem to,
which was a somewhat frustrating aspect.
The third broad class of measures that we were looking at were things like political
priorities and then social structures. And the social structures were the features
that had the greatest predictive power.
So, for example, inequality and interpersonal trust stood out again and again
and became stronger and stronger over time: in societies with stronger trust
there was lower mortality; in societies with greater inequality
there was higher mortality.
And this pattern, particularly for trust, has been found in many other studies.
So, broadly, we have some predictive power.
And what we're doing here, when I say predictive power, is predicting, for example, June
2022 outcomes based on pre-corona data.
We have something, but a lot of our theories are somewhat disappointing.
We took the project two steps further.
One step further was that we did a survey of about 100 social scientists and
asked them how they expected in the future, a year down the line,
these measures would relate to outcomes.
And so what this graph is showing is individual responses from
experts, say political science and economics experts, not health experts, on
which of these three aspects is more or less likely to lead to positive or
negative relationships in the data.
And I think two things stand out from this graph. The expectations
of the experts were really all over the map:
a lot of experts expected democracy to make things worse,
a lot of experts expected democracy to make things better.
Almost everyone expected inequality to make things worse, and there were very
mixed views on the effects of fragility.
So if you were relying on experts' guesses, you'd be left very uncertain
about what's likely to happen, and in fact the implied changes of beliefs
that experts should have, given the actual data we observe, are large.
So experts, or social scientists generally,
should be updating their theories of how societies work significantly,
based on the patterns they're seeing. We're not sure if that is the case or not.
The next step where this was taken forward was a project led by Miriam Golden,
Alexandra Scacco, Tara Slough, and others, which tried to move beyond which
variables predict things to which theories we have to predict
and explain COVID mortality.
And there, about 100 entries were sent into a model competition, where people
provided actual models to try to predict into the future which countries,
and then which areas within a set of countries
(India, Mexico, the US),
are going to suffer more heavily.
And what we see here a little bit is the performance of different sorts of models.
So this is predicted cumulative deaths.
The lasso approach, a sort of machine learning approach without human engagement,
does actually relatively well in explaining variation in mortality.
The best social science model does quite a bit better in prediction than the
machine learning model, which is nice.
The typical social science model, the median model, does really very poorly.
So ask the median social scientist to put together a model to predict mortality,
and it will do relatively poorly, is what this suggests.
But the stacking model, which is a model that tries to aggregate the different
models that social scientists have, actually performs relatively well.
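The stacking step can be sketched in a few lines: given each model's out-of-sample predictions, choose convex weights over the models that minimize error on held-out data. Everything below is simulated for illustration; the model behaviors, sample sizes, and noise levels are invented and are not figures from the actual competition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true (log) mortality for 60 countries, plus
# predictions from three submitted "models" of varying quality.
y = rng.normal(size=60)
preds = np.column_stack([
    y + rng.normal(0.0, 0.5, 60),   # a decent model
    y + rng.normal(0.3, 1.0, 60),   # a biased, noisy model
    rng.normal(0.0, 1.0, 60),       # an uninformative model
])

train, test = np.arange(40), np.arange(40, 60)

# Stacking: pick convex weights over models to minimize squared
# error on the training split (a coarse grid search over the
# 2-simplex keeps the sketch dependency-free).
grid = np.linspace(0, 1, 51)
best_w, best_err = None, np.inf
for w1 in grid:
    for w2 in grid:
        if w1 + w2 > 1:
            continue
        w = np.array([w1, w2, 1 - w1 - w2])
        err = np.mean((preds[train] @ w - y[train]) ** 2)
        if err < best_err:
            best_w, best_err = w, err

def mse(p):
    return np.mean((p - y[test]) ** 2)

stacked = mse(preds[test] @ best_w)
singles = [mse(preds[test, j]) for j in range(3)]
print("individual model MSEs:", np.round(singles, 2))
print("stacked MSE:", round(stacked, 2))
```

The point mirrors the talk: the aggregate can perform about as well as the best individual submission, and much better than the median one, without knowing in advance which model is best.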
So, summarising a little bit from this set of things, first of all,
it's not that clear there's all that much demand for social science predictions.
Social science predictions have quite modest traction.
There is value added over epidemiological and machine learning models.
The purely political arguments, surprisingly, did not do very much,
and the theoretical logics that we saw were generally quite thin or quite ambiguous.
The things that came out consistently here and also in the model challenge were
the importance of trust and inequality.
And those are areas to be understood better.
The second area I want to touch on is tracking, where I think there was more success.
A lot of social science work is about measurement.
And so the question here is: did social science effectively help track
the situation, what was going on, in a useful way?
So one of the sets of tools that was marshaled and produced
by social science at this time was a set of dashboards tracking behaviors
and welfare in a series of developing areas.
And these were developed often by researchers who had projects going on in these
areas, whose projects were interrupted in some kind of way, but who had
panels of respondents they could access more or less in real time,
those that had cell phone connectivity, to figure out how things were going during this period.
So here's an example of the Sierra Leone dashboard that was supported here at
the WZB and done with colleagues at Wageningen.
And this was, you can't quite see it here, but this was running from April 2020 up to February 2021.
So almost a year of data, with very regular updating.
This is almost daily updating, I think with weekly reporting in the data,
where you have things like the availability of cassava,
which fell rapidly at the early stages of the lockdowns but improved over time, or prices,
or, for example, increased hand washing, which was very large at the early
stages and then, by the end of the year, was much less in evidence.
So this kind of dashboard kept track of how things were going, and they were used by governments.
In the Sierra Leone case, these were consulted by the Ministry of Finance
in making decisions about how to allocate resources across areas,
how to engage in responses.
We also had one in Uganda, in which the Ugandan government was very interested;
it was briefed many, many times and made use of it.
So, in a sense, social scientists had a capacity to monitor,
to some extent, what was going on, flexibly and rapidly, and it was plausibly useful.
It's hard to know exactly what difference any of these things
made, but it was plausibly useful and effective. But it was certainly spotty.
This really depended on the existence of projects in these areas and researchers
who had often flexible resources who could do this kind of thing.
One of the lessons that I think
was insufficiently appreciated at the time was that the dashboards were
tracking the social situation, but they also had a set
of health measures, such as illnesses and so on.
And what many of them were showing, in sub-Saharan Africa at least, was that
there were big deteriorations in people's living standards, but very little
disease incidence.
And so there was evidence of a huge economic shock
due to the measures that were put in place.
And so we have this piece here, which highlights that Sierra Leone,
for example, took extreme measures to lock down; there were massive economic
costs, and there was very little incidence.
Now, you might say, well, that just goes to show that if you lock down,
you don't get much incidence.
But there wasn't much incidence after the lifting of the measures either.
In Uganda, there was very low incidence, and the schools were closed for two years.
And one thing that we were quite conscious of at this time is,
although we could provide this information,
there was no go-to model for governments to aggregate the information
they have about the health situation and the economic situation to choose optimal policies.
That model was not available to developing country governments.
And that, to me, is a weakness of social science.
But certainly, it was very obvious that this was a problem at this time.
One study we put together here aggregated these sorts of measures across, I think it was 18
different surveys and countries, and found evidence of massive drops across all of these areas.
So I think this was useful in drawing attention specifically to the costs of
the measures that were being taken in this period.
But as I say, it's one part of the equation.
A second use to which this was put, which was policy relevant,
was we were able to generate again, by pooling resources.
And I think this piece of Nature Medicine had seven or eight Vetebay co -authors
and surveys from five, I think,
five or six different locations in which Vetebay researchers were working that
are able to capture vaccine acceptance across a set of dimensions.
And basically, it's a very simple paper with a very simple message.
But the message was that there was really broad willingness in developing countries to use vaccines.
And if you remember, this is the time where there were massive efforts being
put in to get people to use vaccines in OECD countries in the West.
There was a lot of skepticism about whether it makes sense to share vaccines
with the rest of the world.
One of the arguments for that skepticism and that reticence to share vaccines
was a concern that there was no acceptance.
And of course, people can take out anecdotes of all kinds of people who are
suspicious of vaccines and so on.
But the message here was, it seems that the vaccine acceptance is very,
very high at this stage in the pandemic. And so this was, I think, quite useful.
But it was done, again, on the basis of lots of ad hoc opportunities available
to researchers who happened to be working in these areas, who often had telephone
panels and were able to coordinate.
So if you actually look at the set of countries, it's really very spotty.
This isn't the whole list, but it's a very spotty set of countries.
And some countries have got a few samples. Pakistan's got a couple of samples.
Some of the samples are quite odd samples.
Some are nationally representative. Some are much smaller areas.
Mozambique, for example, is two cities.
So this is about the best there was. And in some ways, that's great.
In other ways, it's pretty shocking that we didn't have the infrastructure in
place to be able to get a more consistent, global, representative view of the
economic situation or attitudes at this time.
So I take that as quite a mixed message.
Okay, there's a lot more that was going on in terms of tracking.
And so I think broadly, people were relatively successful in this area,
but it pointed out holes.
Then the last area I'm going to talk about is whether social science contributions
were, or can be, useful here for helping formulate policy.
And so there are three sorts of studies that I'm going to touch on briefly.
One was there's a lot of survey work. Political scientists especially do quite
a lot of survey work and public opinion work.
And some of this is useful for these purposes.
There was quite a lot of experimental work that was looking at what broadly might be called nudges.
This was a sort of a theory that took particular prominence at this time,
and people did quite a bit of work on this at this time.
And then there were a few field experiments led by social scientists,
economists mostly, that really addressed some of the solution issues,
which I think were really very, very impressive
and worth highlighting.
In terms of the survey work, together with Heike Klüver and others at Humboldt,
we did a whole series of measures
focused on the German situation.
So, for example, here: what incentives are likely to matter for taking up the vaccine in Germany?
Here is a study that we did to try and understand German attitudes towards restrictions,
and there was a lot of debate at the time,
over whether these restrictions are consistent with basic rights and so on.
And the basic answer we had here was that the large majority of Germans who
ultimately were vaccinated were very supportive of relatively high levels of restrictions,
especially if those restrictions were placed on people who were not vaccinated.
And the degree of support for restrictions was naturally increasing in the severity of the situation.
So quite a bit of support for government policy at the time,
despite a lot of vocal criticism.
This project also turned attention to German attitudes towards vaccine sharing.
So, as I said already, this was also a time where there was real slowness to
share vaccines internationally, which was regrettable and short -sighted, I think.
But what this survey was able to do was get a sense of whether,
although that might have been the position of many governments,
it was also the position of broader society.
And the answer we got at least suggested that, broadly, Germans were very supportive of sharing vaccines,
even if that meant reduced supply in Germany, and the motivations for doing
so seemed to be broadly non-strategic:
not necessarily that it's going to reduce the risk in Germany or that it's going
to have economic benefits for Germany, but really humanitarian reasons.
So those are surveys. You know, I think it was great that the funding was made
available to be able to do these.
We got a sense these were shared with media and shared with government.
It's very hard to know whether they had much impact; they entered into the general discussion.
But these are just surveys: they give a sense of public opinion,
they might tell us what sorts of actions might find support,
they might tell us what people think would make a difference for them and their
decision making, but they're not really trials to figure out whether particular
policies are effective or not.
So, two sorts of trials were done at the time that push more in this direction.
One, there was a set of studies looking at nudges.
And so there was, for example, this piece that came out here, "Behavioral nudges increase
COVID-19 vaccinations" by Dai et al., focused largely around messaging.
So when you message people, reminding them, for example, go get vaccinated,
you're eligible, or sometimes giving different sorts of incentives,
such as the social incentives and so on, these sorts of ideas.
This seems quite positive, but the record seems to be quite mixed, insofar as
when people have gone to replicate these,
the replications have shown no evidence for the effects of the best intervention
in this case, or of other interventions that people thought plausible.
A very highly publicised nudge idea, based on an interesting kind of theory,
was the vaccine lottery approach, where the interesting idea was that you make it a regret lottery,
which means that people are selected
by lottery to receive a prize, $50,000, which they will receive only if they have been vaccinated.
So the winner is not selected from among people who have been vaccinated.
So people could anticipate: oh, but if I win that and I'm not vaccinated,
then I'll really regret not having been vaccinated, and so on.
An interesting, subtle idea, and the first evidence suggested possibly big
effects of this, but the longer, bigger rollout and application suggested there's
really no evidence that this works at all.
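The regret-lottery logic can be made concrete with a toy expected-value calculation. The winning odds below are invented for illustration; only the $50,000 prize is taken from the talk.

```python
# Toy illustration of the regret-lottery incentive.
# Everyone is entered in the draw, but an unvaccinated winner
# forfeits the prize, so staying unvaccinated carries an expected
# (and, behaviorally, a regret-laden) monetary cost.
p_win = 1 / 100_000        # chance of being drawn (hypothetical)
prize = 50_000             # prize amount mentioned in the talk

expected_loss_if_unvaccinated = p_win * prize
print(f"expected forfeited prize: ${expected_loss_if_unvaccinated:.2f}")
```

In pure expected-value terms the incentive is tiny, which is one reason the anticipated regret, rather than the money itself, was supposed to do the behavioral work, and perhaps one reason the effect failed to show up at scale.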
And I'm pointing to these because I think there's a tendency among social scientists
and policymakers also to kind of like these kinds of interventions which seem
kind of intuitive, simple,
cheap, you know, easy interventions, the kind of cute interventions.
And some of them likely work, but I think the main point I want to make here
is they are quite limited.
The evidence for them replicating is very weak,
and there's a risk that they displace what's kind of a harder type of assessment
of policy effectiveness, which was not sufficiently done by social scientists
or governments during this period,
which is the kind of field experimentation to figure out what kind of policies
would actually make a difference here.
And so here I want to point to two studies that were done, not by us, but by others.
One was looking at the effectiveness of masks. You know, we all spent two to
three years wearing masks and there's this question over how much do they really matter.
In some ways, it's fairly remarkable that so much was done in this area with
what turns out to be a relatively weak evidence base.
More amazing to me was, when you review the Cochrane report on this,
which is a kind of meta-analysis of masks, and which is, I think,
too pessimistic about the use of masks: you see the set of studies
that they have here, the set of studies that they're basing this on.
Most of these studies are these absolutely tiny studies with 26 participants,
61 participants and so on.
And then you have this study here, which was done by Jason Abaluck and a set
of economists at Yale and elsewhere, with 111,000 participants in the study.
So on a totally, totally different scale.
And this is a field experiment done by economists using the kind of method that
economists have been using to understand development interventions.
So naturally it carries a huge amount of weight here.
And it has positive evidence for moderate effects of masks.
But, coupled with these various other noisy estimates all over the place,
you end up with relatively weak overall results.
So when I look at this, I'm, first of all, shocked at how little serious field
experimental evidence there has been of this major question.
I'm struck that the economists
were able to produce evidence of this quality in this period, and hopeful that
the kinds of methods and approaches that they have been using, and that many of
us in political economy and development have been using, can be useful in this
kind of setting and can be extended. One thing I'm also struck by, when you look into it a bit more,
is this wild heterogeneity across these studies in terms of where they're done,
but also what kinds of encouragements are being used to use masks,
what sort of measures are being used.
And so there's nothing here of the kind of lessons I think we've learned in
social science over the last few years about how to coordinate trials to figure out outcomes.
So just the scope for cross -pollination here seems really, really large.
A second study I'll just point to, by Mushfiq Mobarak and Maarten Voors and others,
is looking at vaccine uptake.
And so there's also been lots of nudge work, sort of messaging kind of work on vaccine uptake.
Voors and Mobarak and others took a much simpler approach.
They said, look, we know a lot of the problems with uptake is just lack of availability.
It's just hard to actually, in developing areas, to get access to these things.
So they did a field experiment where they provided what can be called last-mile
delivery of vaccines, just bringing the vaccines to people and reducing
the cost of using them.
So it's not theoretically deep in any kind of way, but it is hard, hard research to do,
and it found evidence of massive uptake, massive effects of this
slightly more costly intervention.
But something that they point out in this analysis is that the cost-benefit figures,
and they do a very serious cost-benefit analysis, are massive here when you
compare it to nudge interventions, which might be done, you know,
really at scale and seem very cheap but have absolutely tiny, tiny impacts.
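The shape of that cost-benefit comparison can be sketched as follows; every cost and effect size below is a hypothetical placeholder, not a figure from the studies discussed.

```python
# Illustrative cost-effectiveness comparison; all numbers are
# made up for the sketch, not taken from the actual papers.
def cost_per_additional_vaccination(cost_per_person: float,
                                    uptake_effect_pp: float) -> float:
    """Program cost per person targeted, divided by the share of
    people it converts into additional vaccinations
    (uptake_effect_pp is the effect in percentage points)."""
    if uptake_effect_pp <= 0:
        return float("inf")
    return cost_per_person / (uptake_effect_pp / 100)

# A very cheap nudge with a tiny effect vs. a costlier last-mile
# delivery intervention with a large effect on uptake.
nudge = cost_per_additional_vaccination(cost_per_person=0.10,
                                        uptake_effect_pp=0.5)
delivery = cost_per_additional_vaccination(cost_per_person=2.00,
                                           uptake_effect_pp=25.0)
print(f"nudge:    ${nudge:.2f} per additional vaccination")
print(f"delivery: ${delivery:.2f} per additional vaccination")
```

Even though the delivery intervention in this sketch costs twenty times more per person, its much larger effect makes it cheaper per vaccination actually induced, which is the structure of the argument being made in the talk.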
Okay, so just a couple of summary remarks there.
There's certainly some weaknesses in what social science has contributed towards
thinking about solutions.
I think there's been a lot of survey dependence and a lot of focus on relying on easy fixes.
But even still, I have to think the survey work has been very helpful.
Broadly, social science methods seem to be performing fairly well,
the methods in terms of understanding how to do a field experiment at scale and
analyse that kind of data.
And there's a striking agility of some, often US-based, social science researchers.
I say often US-based; in these studies there have been Dutch
researchers and others involved also.
But the typical work that I'm seeing around in our group, and that I'm seeing around broadly in Germany,
does not do this harder, shoe-leather type of research that takes
the questions head on and tries to address them at scale.
And so that's something for us to think about and discuss.
The last area I want to address, very briefly, is critiquing.
So one aspect of this time,
and in some ways an uncomfortable one, was that social scientists did a lot more reading
of work in epidemiology and public health.
I think, broadly, they were
diplomatic, let's say, in some of their responses, some of their critiques,
but it also became clear that there were quite different understandings of how
to do research and how to understand data, how to analyze data.
And I think, understandably, there was muted criticism at the time,
but there are maybe lessons here for the future about how to make use of different,
say, traditions for understanding this sort of evidence, this sort of data,
how to put together trials that could be helpful.
So some of what social scientists were noting: a consistent weakness in cost-benefit analysis in some of the research that's been done.
A weird interpretation of null results, repeatedly; the Cochrane reviews do this regularly when there are very imprecise studies, saying this is evidence of no effect, which is, you know, not an interpretation social scientists would take.
A strange sort of focus on baseline balance in experiments, where studies are judged as biased when there's baseline imbalance, which is not consistent with statistical theory; and others.
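The two statistical points just raised, reading an imprecise null result as "no effect" and flagging chance baseline imbalance as bias, can be illustrated with a small simulation. This is a hypothetical sketch for illustration, not analysis from the talk; all numbers (effect sizes, sample sizes) are invented.

```python
# Illustration of two statistical critiques (hypothetical numbers throughout):
# (1) a "null" result from an underpowered study is not evidence of no effect;
# (2) some baseline imbalance is expected by chance under correct randomization.
import math
import random
import statistics

random.seed(1)

def two_sample_z(x, y):
    """Rough z-statistic for a difference in means (large-sample approximation)."""
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    return (statistics.mean(x) - statistics.mean(y)) / se

# (1) True effect of 0.2 SD, but only 25 units per arm: the study is so
# imprecise that most runs come out "not significant" despite the real effect.
runs, nonsig = 2000, 0
for _ in range(runs):
    control = [random.gauss(0.0, 1) for _ in range(25)]
    treated = [random.gauss(0.2, 1) for _ in range(25)]
    if abs(two_sample_z(treated, control)) < 1.96:
        nonsig += 1
print(f"share of 'null' results despite a true effect: {nonsig / runs:.2f}")

# (2) 20 baseline covariates, pure randomization, no bias anywhere: by chance,
# roughly 5% of covariate comparisons still look "imbalanced" at p < .05.
tests, imbalanced = 0, 0
for _ in range(500):
    for _cov in range(20):
        a = [random.gauss(0, 1) for _ in range(50)]
        b = [random.gauss(0, 1) for _ in range(50)]
        tests += 1
        if abs(two_sample_z(a, b)) > 1.96:
            imbalanced += 1
print(f"share of covariates flagged as 'imbalanced': {imbalanced / tests:.2f}")
```

The first share comes out high (the study misses the real effect most of the time), and the second hovers near the nominal 5% false-positive rate, which is why treating either as substantive evidence is a mistake.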
One thing I'd point to is that social scientists have been focused on the importance of contextual heterogeneity for interventions like this.
And so, to be clear, the focus is not on things like what medication works; it's on the social aspects of take-up and use, attitudes, pushback, these sorts of features, which have huge contextual dependence.
Social scientists would focus on trialing solutions across contexts to address this, and that has been largely absent at this time.
So here's a quote from Andrew Gelman, who says it's hard to believe that people are making major decisions based on binary summaries of statistical significance from seriously flawed randomized studies, but that seems to be what's happening.
And you can see that in a set of the systematic reviews that were done on what seem like very basic questions about public health.
Great. So I'm pretty much on time. The last small set of remarks is oriented towards the future.
So some lessons. First of all, there's the question, which we can discuss later, of how much social scientists should be changing their agenda and focusing on these sorts of questions that are outside of their lane.
But assuming that there should be more focus on this, whether or not through more investment specifically in social science health research, here are three areas.
The first is infrastructure. It was painfully obvious that we have nothing like a kind of global barometer that can be used in moments of crisis to figure out the household-level economic impacts or behaviors in the presence of a shock like this.
As I said with our Nature paper: in ways I'm very proud that we were able to put that kind of paper together, with so many samples coordinated across areas, but it's also shocking that the kind of evidence we're using for these decisions comes from such a small, heterogeneous and selected set of sites.
We should have a global infrastructure, global panels that can be drawn on in times like this. What can we do to produce that kind of infrastructure, to be used by social scientists and health researchers?
Second, it also became obvious that the networks between public health researchers and economists and political scientists were really pretty weak.
In some of our work, we were successful in working with public health researchers; for example, the Nature paper had a set of public health researchers on it and engaged with ministries of health and so on.
But these were largely new networks being formed at this time, and there were a lot of difficulties of translation, of how to interpret data and so on.
One thing I should say is that I've been talking today largely about the utility of research, but really just from the researcher's perspective.
We really need to know governments' answers to this, policymakers' answers to this question. I've seen some of it on the developing-country side; I know others here have seen it more from the German government side.
I would love to see more government-side analysis of when social science research has been or would be helpful, and of why it's being underused, if that's the case.
And then last, just to point to a couple of major questions that come out of this that we might want to take up in the future as agendas.
One of them, and there is work in this area, but I think insufficient work, is the health consequences of social divisions.
In all of the work that we looked at on prediction, the social divisions aspects, the trust and inequalities and fragmentation, these features seem to be very, very important, and I think insufficiently understood and insufficiently taken into account in policy formulation.
And the last is the absence of not just predictive models but normative models: policy tools that can take account of the economic consequences of health interventions, including those that work through political responses.
That seems to be an area where social scientists should be able to put their heads together, not just to fight the last battle, but to be able to engage in future discussions when issues like this arise.
Thank you so much, Macartan, for this really comprehensive view of the contributions of social science to the pandemic, or to how to solve the pandemic.
And I think you highlighted that there were contributions, that social science can help in a situation like the pandemic, but also that there's a lot still to be done.
We received a range of questions that were collected in the background, and as we said previously, we will take them one by one now.
The first one looks to the future, as it were, and says there will be another pandemic one day.
You already highlighted a couple of things that need to be done on the science side, but this question asks: are social scientists already better prepared for next time?
Also when it comes to the link to policymakers, and to authorities for data, et cetera. So, how much have we learned already?
That's a really great question. So there are two sorts of answers.
One is, I think, what I was trying to show with the predictive model: I think there is updating of those models, and so there is more attention; also, the features seem to be mattering more. So there's some direct learning from this.
Actually, my first slide had said something like: the pandemic was good for social science, but was social science good for the pandemic?
So there is a whole set of areas, which I haven't even touched on, in which people have seen things like how various communities were marginalised, some of the discrimination that was taking place, and some of the policies.
So there is a whole set of more micro lessons like that, and I think those stand and will be relevant for future pandemics.
In the much bigger picture, the lesson I would take away more is underpreparedness: we don't have some of the infrastructures we should have.
And we need to start thinking about building those infrastructures if we want to be able to address this sort of challenge in the future.
The two I'd point to most are the kind of global data infrastructure, and what would be better network connections between public health and social science researchers.
I'm maybe unsure which way to go with this. I have come away feeling that the field survey work that was done by economists in this area was very strong, and should be a wake-up call for some of what's been going on in public health.
So the conclusion could either be that in the future social scientists should be more proactive in saying, yes, actually, we will do these things, because we thought public health was doing them but they're not.
Or it could be, no, let's combine forces, which is probably a better way of doing it.
All right, thanks for that. I'm combining now two questions; one of them is more of a comment, and the other one, I think, ties in nicely.
The comment says: if psychology is considered a social science discipline, there are a lot of studies showing that national culture traits, collectivism versus individualism, tightness versus looseness, flexibility versus monumentalism, et cetera, are good predictors of COVID-19 mortality.
And now the question: this participant highlighted that other factors, such as inequality and trust, are important predictors of mortality, and asks whether social scientists used these findings for a more problem-oriented view on the matter and/or for informing policymaking.
Yeah. So, that's true, and I'm aware of some of that psychology work; some of it actually looks very good.
I didn't touch on it at all because I was really drawing more on work that I or colleagues have been involved with.
But I saw that psychologists also organized, actually quite early on, to think through what the lessons of psychology are that apply in this domain.
My understanding is that actually quite a bit of that was picked up and was used.
I don't know if the person asking is a psychologist, but the sense I had was that the sort of psychological theories that had more of an impact, or that were obviously picked up, were the kind of behavioral economics nudge-type ideas.
And so I don't know if there was a sense in psychology that there was a displacement happening there.
All right. The next question is about vaccine acceptance.
Anja Oppermann asks if there are any data on general attitudes towards the vaccines. There are, but she's interested in the difference between acceptance of other vaccines, or vaccines in general, and COVID vaccines.
Yeah. So we have some information on this from the Nature paper, which also had information about other vaccines. Maybe two things.
First of all, the acceptance of COVID vaccines was lower than acceptance of other vaccines, but the two were correlated: people who were more accepting of other vaccines were more accepting of COVID vaccines, which is not surprising.
I would say one threat, which I've seen expressed by ministries of health in sub-Saharan Africa, is that the promotion of the COVID vaccine itself has negative consequences for the uptake of other vaccines.
So, I was involved in a project which is going on at the moment. I stepped out of the project, actually, partly because I was uncomfortable with this aspect of it: it was really focused on promoting the COVID vaccine in sub-Saharan Africa right now, so quite a bit after the fact, in some ways too late; it's not prominent in people's thinking.
And there are four ministries of health involved, and they kept on saying, you know, it's the other vaccines that we care about more.
But the donor attention was on the COVID vaccine, partly because of longer-run implications of incubation and risks for developed countries.
So actually that project has now morphed, and they're engaging with the COVID vaccine together with other types of vaccines.
All right. The last question we have concerns, sort of... ah, there's another one coming in directly. That's a good question; well, let's see if you also think it's a good question.
You gave a couple of lessons, and the slide is still up: you talk about infrastructure and you imagine this global barometer for surveys.
And this question asks: who could be the actor to push that? Who would have to spark that? I mean, can we imagine this? I guess yes, but how would we go about it?
Yeah. I mean, you could imagine the WZB saying we're going to invest in this, taking this kind of coordinating role; the WZB has had major data projects that involved coordination across institutions.
You can imagine saying we are going to put in for a major grant with the European Union or the DFG or something else that's going to provide the coordinating mechanism whereby many, many other institutions can then feed into this.
There would be coordinating meetings where you assess the standards: this is the kind of core content that should be in all of these; these are the criteria you would need to meet to enter into this sort of panel; and you'd have the license, if you like, to apply more locally for funding to be part of this kind of initiative.
You know, my sense is that you shouldn't be waiting on, say, a government or the UN to do this kind of thing.
That's something that social scientists can do, and they have effectively coordinated very large efforts like this in the past.
I'd just point out, for example, that in SCRIPTS there was a project that ran a survey in 26 countries, really on a dime, very effectively and to very high standards.
And so this would be a much, much bigger thing; people would have to believe it's important, and you'd want to know that it'll be used. But I think it's something you could imagine.
One last question, we have a couple of minutes, and I personally am also interested in this: what did the pandemic do to the social sciences?
You said it was good for the social sciences, but we also saw a lot of ideology in the discussions, and a lot of pressure to give answers very rapidly.
Especially in Germany, when you followed the discussion, at the very end it was quite a harsh discourse where people fell out over concrete questions and findings; but my impression was more of a sort of: you're in that camp and you're in that camp, and you have to decide which of them you support.
So what was your experience of that? Yes, zooming out a little bit: when I said it was good for social science, I meant it in a more ivory-tower kind of way.
Many, many studies were published; the APSR has got six or seven COVID studies; as I said, Perspectives on Politics and many other journals too.
And there are studies that have been able to use this to understand partisan divisions, or mobilization strategies, or a set of less-in-the-spotlight social scientific questions.
I think it has been useful for many of those; in a way, social science often makes use of shocks of various forms, and this was a very complicated shock, but it has been used very creatively.
What I think you're focusing on more is: has this been good for social science in terms of policy dialogue?
I think that's a great question, and I don't know the answer to it.
I think it's plausible that some reputations have been made and some reputations have been lost, but, as I was saying earlier, I'd love to have an understanding of this from the governmental perspective, and that would be an element that should be part of that.
Great. Thank you so much, Macartan. We're perfectly on time, and I think this was a great session highlighting one of the aspects of our webinar series, the pandemic.
So thanks, Macartan, for your time, for your presentation, and for answering the questions. Thanks to the audience for being a lively audience and posing your questions. Thanks to the back end.
And I shall also pass on greetings from our president, Jutta Allmendinger, who usually takes part in this webinar but right now is in meetings, so she couldn't make it today, which she regrets.
Next week we have Wolfgang Merkel, who will talk about democratic resilience, so we shift towards the discussion about our political systems.
Stay tuned with us: you can check out our website wzb.eu, where there is a link to the blog; all audio files of this webinar series are posted there, so you can listen to them if you like.
And in general, I wish you a nice Friday and a nice weekend, and see you next week. Thank you.