COVID-19, Cognitive Science, And Adaptive Responding: What Can The CogSci Community Do?

[Featured image: two people stand on facing balconies; one tosses a paper airplane across to the other.]

by Ulrike Hahn

At a time when we are all thinking about how best to respond to the present global crisis, it seems timely to think also about how we, as the Cognitive Science community, can be most effective. What kind of science can we do, and how should we go about doing it? This blog post aims to fuel discussion of these issues and offers a starting point for thinking about cognitive science and coronavirus.

Though thoughts turn first to medicine, virology, and epidemiology, the CogSci community has many potential contributions to make. Established cognitive science research topics, ranging from e-learning, e-delivery, and media literacy through risk analysis, risk perception, decision-making, behaviour change, argumentation, and communication, are suddenly in high demand. As cognitive scientists, we also possess key skills: the ability to interface with AI, handle ‘big data’, and engage in computational social science, and, perhaps first and foremost, modelling skills and a capacity for model-based thinking. Finally, the sheer disciplinary breadth of Cognitive Science, from computer science through to anthropology and philosophy, can offer much-needed complementary perspectives and views.

While cognitive scientists around the world consider how their own research skills and ideas may usefully be applied, we should also spend some time rethinking, and looking to adjust, how we go about doing science.

The science process

Dave Lagnado, Steve Lewandowsky, Nick Chater and I wrote a short paper setting out some discussion points on how we need to change the science process so as to move quickly and effectively in response to the COVID-19 crisis. We identified the following initial areas for consideration:

1. Knowledge Creation

The crisis demands rapid responding, but ‘fast’ is at odds with many of the things that make for good science. That holds all the more when people are operating under significant psychological pressure and stress (a point made well in this blog post). So what we need to identify are the parts of the process that can be trimmed without cutting unduly into quality: we need a model of “proper science without the drag”.

Publishing. There are surely aspects of the ‘normal’ review process that can be cut (demands for extensive rewrites that turn our work into the paper our reviewers would have wanted to write, demands for additional experiments, and so on), and this can be achieved simply by moving toward the yes–no decision model that is common in medical research. If we want to keep our current practices for ‘normal science’, we can limit this new approach to crisis-relevant work only. This will also help speed up turnaround times for reviewers and make it more realistic to meet the expedited timelines many journals have already introduced for such research. At the same time, there may be alternative review models (which we did not consider in our piece) that are better suited, and that have already been under discussion for several years.

Post-publication debate. It is not just the publication process that needs speeding up. Equally important are our processes for post-publication critique. At present, the main vehicle is subsequent publications, but more immediate feedback mechanisms and avenues for critique seem desirable, such as post-publication comment facilities. This is all the more important if policy makers base actions on (speedily published) work.

Even better might be if research development, publication, and critique could be interleaved and made transparent across time, including sharing of data and code, with clarity at each point about what has and has not been reviewed. 

Funders. Finally, doing the research requires money. Funding bodies all over the world are rolling out rapid-response programmes. One thing we have not yet seen, though, is what struck us as particularly useful for the behavioural sciences: small, rapid-response grants. These would help a broad range of researchers, across many institutions and career stages, to get involved. That would boost epistemic diversity, likely leading to better outcomes, and would provide greater resilience than concentrating funds, given the nature of the crisis itself.

2. Knowledge integration

Just generating new research, however, will not by itself be an adequate response. In fact, given the known frailties of research, it seems essential to make aggregation a key feature of our response. We need to avoid needlessly reinventing wheels, we need meta-analyses, and we need to manage the likely flood of new research. This means a degree of synthesis that goes well beyond the somewhat haphazard publication of reviews in normal science. It is hard to see how the required aggregation and integration could be achieved without computational tools, which makes this a key area in which cognitive scientists could contribute.
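To make the aggregation point concrete: the computational core of one standard synthesis tool, a fixed-effect meta-analysis, is just inverse-variance weighting of study effect sizes. The sketch below is illustrative only (the study numbers are made up, and this is not from our paper):

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance weighted (fixed-effect) pooling of study effect sizes."""
    weights = [1.0 / se ** 2 for se in std_errors]       # precise studies count more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))            # SE of the pooled estimate
    return pooled, pooled_se

# Three hypothetical studies of the same intervention: (effect size, standard error).
studies = [(0.30, 0.10), (0.15, 0.05), (0.25, 0.08)]
effect, se = fixed_effect_meta(*zip(*studies))
print(f"pooled effect = {effect:.3f}, 95% CI half-width = {1.96 * se:.3f}")
```

Even this toy version shows why tooling matters: once hundreds of crisis-relevant studies appear, pooling, weighting, and flagging heterogeneity by hand is no longer feasible.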

Managing expertise. At the same time, researchers will need to think about expertise. It seems vitally important that researchers not get drawn into pronouncing beyond their expertise in such a high-stakes environment, but who are the experts? The community needs databases that have this information on hand and these need to develop dynamically as people acquire new expertise. Extant ‘expert databases’ (e.g., from learned societies) are too restrictive (and too exclusive) for this challenge.

Breaking down silos. Scientists from different disciplines, and even from different subfields within a discipline, will address similar questions with different approaches. This creates hurdles for integration, not least because differing terminology across areas can leave us unaware of related work. To combat this, we need large-scale virtual forums for exchange, as well as wiki-like summaries that integrate information to be shared across the community.

3. Knowledge dissemination

Developing new tools for knowledge aggregation will help disseminate knowledge, certainly to other researchers. But the nature of the crisis will mean that at least some of this knowledge must be disseminated to policy makers, journalists, or the wider public. Here, too, new tools seem desirable:

Supporting policy makers. For policy support, it would be good to create wiki-style research summaries supporting policy-related conclusions. At the same time, it seems desirable to make use of the wider scientific community, beyond those scientists directly advising government, to help identify weaknesses in proposals for policy change or suggest novel policy ideas. Here, we suggest that creating “open think tanks” would be a productive route to explore.

Adversarial disruption/avoiding politicization. In any outreach beyond the scientific community itself, we need to avoid infusing our political views. Codes of conduct should be developed so that we remain neutral in our dissemination and advice, while sticking strictly to our expertise. Finally, the early days of the crisis have already seen a troubling spread of misinformation; the science that has developed around this topic in recent years should be distilled into guidelines for our scientific community, even as we develop new means of combatting misinformation.

4. Consensus and disagreement

Finally, it seems important that at each of the preceding steps we learn how to build consensus, shelving theoretical debates that matter to us in “normal science” but have little consequence for current action. This means focussing on shared predictions and agreed empirical results.

There will, however, be contexts where legitimate disagreements remain. These cannot be glossed over, and should be made known to policymakers. It would be desirable, though, to find new forms for managing disagreement, such as having other scientists act as mediators to help determine which is the stronger position. More generally, it seems plausible that the most influential authors, or ‘leaders in the field’, will not necessarily be best placed to (individually) provide the most balanced, even-handed assessments of their own areas.

5. Putting this into action

All of the suggestions made in our paper are really just an initial prompt for a wider discussion within the research community. In order to facilitate this, and help kick-start new ways of doing things, we created a Twitter account (@SciBeh) and a place for discussion and debate in three reddit communities:

  • r/BehSciResearch (for discussion of research, ideas, study designs, and post-publication critique)
  • r/BehSciMeta (for discussion about how to reshape the science process)
  • r/BehSciAsk (a query forum for researchers, policy makers and journalists).

Please join: there is not just discussion here, but also the beginnings of projects for decentralized knowledge consolidation (such as aggregators of aggregators) and attempts to span a range of information hierarchies, from raw sources such as articles, preprints, and tweets, through aggregators, to high-level consumables like wikis or summaries for policy makers (see e.g., here).
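To make the idea of an information hierarchy concrete, here is a minimal, purely hypothetical sketch (not SciBeh's actual tooling; all names and items are invented) of how raw sources, topic aggregators, and an ‘aggregator of aggregators’ might be represented:

```python
from dataclasses import dataclass, field

@dataclass
class RawItem:
    kind: str               # e.g. "preprint", "tweet"
    title: str
    reviewed: bool = False  # flag what has and has not been peer reviewed

@dataclass
class Aggregator:
    topic: str
    items: list = field(default_factory=list)

    def add(self, item: RawItem) -> None:
        self.items.append(item)

def consolidate(aggregators):
    """Aggregator of aggregators: merge topic feeds into one summary list."""
    return [(agg.topic, item.title, item.reviewed)
            for agg in aggregators for item in agg.items]

# Two hypothetical topic feeds being merged into one policy-facing overview.
masks = Aggregator("mask-wearing")
masks.add(RawItem("preprint", "Community mask use and transmission"))
misinfo = Aggregator("misinformation")
misinfo.add(RawItem("tweet", "Thread debunking a viral claim"))

for topic, title, reviewed in consolidate([masks, misinfo]):
    print(topic, "|", title, "| reviewed:", reviewed)
```

The point of the sketch is the review flag travelling with each item: whatever the real tooling looks like, clarity about what has and has not been reviewed should survive each step of aggregation.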

We are also constantly on the look-out for other efforts and developments that can be brought together to avoid duplication.

Finally, in whatever way cognitive scientists can find to contribute to mastering this global crisis, we hope the community can rise to the challenge. 


Featured image by Natalia Vélez
