Public opinion is central to Russia’s invasion of Ukraine. President Vladimir Putin’s fantasy of “rescuing” fellow Russians portrays Russia as acting on behalf of those who are overlooked and without a voice. By contrast, Ukraine’s leadership and its Euro-Atlantic backers regularly cite public opinion polls to show that Ukraine wants to be free of Russia and move toward the West. The latest example of the latter is former British Prime Minister Boris Johnson’s opinion editorial in the Washington Post in which he justified Ukraine’s NATO bid by citing a sea change in public opinion in Ukraine: “People used to say that the Ukrainian population was too divided on the subject of NATO membership, and before 2014 you certainly could have made that argument. Look at the numbers now. Support for NATO membership in Ukraine is now stratospheric—83 percent, according to one recent poll.”
The December 2022 New Europe Center/Info Sapiens poll cited by Johnson is not alone in finding overwhelming support for NATO membership. The results of our own survey in October 2022 show how Russia’s invasion of Ukraine has dramatically shifted geopolitical attitudes. Indeed, a country that was once divided about its future orientation now appears more united than ever. Most people see their future as part of the West.
An important question for researchers and policymakers who use survey research from charged and violent environments is: Are these public opinion polls that show dramatic shifts in attitudes reliable? In this memo, we draw on our unique longitudinal survey to highlight three methodological challenges that may affect studies of Ukrainian public opinion in wartime. These challenges lead us to call for caution in interpreting public opinion data, especially on politically sensitive questions and when generalizing to the preferences of Ukraine as a whole. Wartime polls tell us important things—and it is clear that public opinion in Ukraine has shifted as a result of the brutal war—but there are significant challenges related to representativeness and expressed preferences, and, as researchers, we need to communicate the uncertainty these challenges entail.
The Survey
As part of a collaborative research project, we conducted two rounds of public opinion surveys in Ukraine, in December 2019 (funded by the US National Science Foundation) and October 2022 (funded by the Norwegian Research Council). The first wave of the nationwide survey of government-controlled areas was conducted during the ongoing civil war between Russian-backed separatists in eastern Ukraine and the Kyiv government. Few people foresaw the full-scale Russian invasion of Ukraine back then, yet many of the topics covered in the survey became central to the war that began just over two years later.
Both surveys were conducted by an experienced and reputable survey firm, the Kyiv International Institute of Sociology (KIIS). The 2019 survey was conducted face-to-face on people’s doorsteps. In October 2022, we conducted a follow-up survey by telephone: KIIS called all respondents surveyed during the first round, and just under 20 percent of them took part. Attrition is always present in panel surveys, but there is no doubt that it was especially high here due to the Russian invasion and the ensuing displacement across Ukraine—displacement on a scale not experienced in Europe since World War II.
The Pitfalls of Wartime Polling
A representative sample?
Our 2022 survey is similar in many respects to the poll cited by Boris Johnson: it does not include “Ukrainians who have gone abroad,” and it excludes areas under Russian control or with heavy fighting.
What do large movements of people mean for the representativeness of wartime polls conducted in Ukraine? The forced movement of people presents several challenges, but key for surveys about geopolitical attitudes in wartime Ukraine is the risk that surveys do not include people who were oriented toward Russia on the eve of the war. It is possible that Russian-oriented respondents fled to Russian-controlled territory in the east or became refugees, either in neighboring European countries, such as Poland, or in Russia. While the statistics are debated, a significant number of people fled to Russia, either because they wanted to or because they had no choice.
What does it mean that violent areas are not surveyed? In the context of war, fieldwork is often not possible, and where it is, the collection of survey data depends on conflict dynamics and ethical considerations, as researchers and enumerators avoid dangerous areas. Violent areas are, therefore, less likely to be included in wartime polls.
Looking at our longitudinal survey, the main reason for not participating in the follow-up survey is displacement caused by violence. An analysis of attrition between survey rounds reveals that respondents were more likely to drop out in eastern regions such as Luhansk, Kharkiv, and Donetsk, and to a lesser degree in southern regions such as Zaporizhzhia and Kherson. Combined, these regions account for the majority of the violence. Indeed, according to Armed Conflict Location & Event Data Project (ACLED) data, over 59 percent of fatalities from February to October 2022 were in Luhansk, Kharkiv, and Donetsk, while over 24 percent were in Zaporizhzhia and Kherson. Below we map conflict-related violence and attrition rates. The map shows that respondents in the areas that experienced the most conflict-related violence were more likely to drop out of the 2022 survey.
Figure 1. Attrition and Conflict-Related Violence in Ukraine
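For readers interested in the mechanics, the sketch below shows how an attrition-versus-violence comparison of this kind can be assembled. The data frames and column names are illustrative assumptions, not our actual files or variables.

```python
import pandas as pd

# Hypothetical inputs (all names are illustrative assumptions):
#   wave1: one row per 2019 respondent, with columns 'id' and 'oblast'
#   wave2: one row per re-interviewed 2022 respondent, with column 'id'
#   acled: ACLED events from Feb-Oct 2022, with columns 'oblast' and 'fatalities'
def attrition_vs_violence(wave1: pd.DataFrame, wave2: pd.DataFrame,
                          acled: pd.DataFrame) -> pd.DataFrame:
    # Flag panel attrition: True if a 2019 respondent was not re-interviewed in 2022
    wave1 = wave1.assign(attrited=~wave1["id"].isin(wave2["id"]))
    # Attrition rate by oblast
    attrition = wave1.groupby("oblast")["attrited"].mean().rename("attrition_rate")
    # Each oblast's share of nationwide conflict fatalities
    fatalities = acled.groupby("oblast")["fatalities"].sum()
    fatality_share = (fatalities / fatalities.sum()).rename("fatality_share")
    # Combine and inspect the association, e.g., via a rank correlation
    merged = pd.concat([attrition, fatality_share], axis=1).dropna()
    print(merged["attrition_rate"].corr(merged["fatality_share"], method="spearman"))
    return merged
```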
Conclusions drawn from wartime polling are only representative of the areas that researchers can access and the people who currently reside there. This may seem obvious, but it has implications beyond the academy. Researchers should make clear that surveys conducted during the war do not account for the views of people who have remained in, moved to, or will return to territories that are currently Russian-controlled. This is particularly important in the Russia-Ukraine war, as decades of research show an important East-West divide in political opinions in Ukraine. Indeed, respondents in the areas with the highest drop-out rates (Luhansk, Donetsk, and Kherson) overwhelmingly rejected Ukraine joining NATO when asked in 2019 (just 16 percent agreed). For the questions that we are interested in—and which are central to public and policy debate—we need to acknowledge that we have a sample only from areas that were already more likely to favor NATO membership and have been less affected by wartime violence.
Non-responses among respondents in government-controlled Ukraine?
A second challenge is related to non-responses in areas that can be surveyed. There are two types of non-responses that are of concern: unit non-response and item non-response. Unit non-response occurs when respondents can take part in a survey but refuse to do so. If certain types of respondents do not take part in the study, then sampling bias is a serious risk that could undermine the conclusions of research projects.
Unit non-response can lead to bias if there is a relationship between the study’s outcome of interest and the characteristics that lead to the non-response. This is likely to be common in war settings. For example, in an armed conflict, those who are cautious, skeptical, and fearful are less likely to be heard or to want to speak, while those with strong opinions and emotions are more likely to end up in the sample. This is especially likely in telephone surveys. Given wartime conditions, most polling in Ukraine is done through computer-assisted telephone interviews (CATI), not face-to-face interviews. While the sampling is randomized and, in the best designs, spread across multiple mobile carriers, the act of answering the phone and agreeing to participate in a survey of uncertain duration with a stranger is likely to be attractive only to certain people. Those beyond the world of mobile phones, mainly the elderly and impoverished, are voiceless but part of a Ukraine that is victimized by war and its horrors. Especially relevant for survey research in war zones is non-response due to the perceived sensitivity of the research topic: some individuals may fear personal harm for taking part in surveys.
Research on unit non-response is difficult in part due to ethical and privacy concerns for those who choose not to participate in research. Statistics on unit non-response tend to be summary statistics, i.e., the percentage of potential respondents who decline to take part. However, such summary statistics may vary little over time while masking changes in which groups are declining to participate.
Our longitudinal survey provides a unique opportunity to analyze unit non-response because data collected during the first wave can be used to provide additional information about non-response in the second wave. What type of person did not respond to the survey in the second round? When we control for respondent location in 2019, common measures of Russian ethnicity are not statistically related to attrition. However, our measure of individual language preference (answering the survey in the Russian language) is significantly related to attrition when we do not control for respondent location in 2019. In other words, Russian-speaking respondents appear to have dropped out because they were concentrated in the regions hit hardest by the war, not because of their language preference as such. What does this mean for our sample? There is little evidence from our analysis that surveys conducted in 2022 suffer from unit non-response bias for salient identity groups. With the assumption that one can measure Russian ethnicity using common measures (including language practices), we are confident that our 2022 survey is representative of people who live in government-controlled Ukraine.
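A minimal sketch of this type of attrition check is below. The single panel data frame and variable names are illustrative assumptions rather than the actual survey variables.

```python
import statsmodels.formula.api as smf

# Sketch of the attrition models described above, assuming one panel data frame
# with a binary 'attrited' outcome and 2019 covariates; all variable names
# ('russian_language_2019', 'ethnic_russian_2019', 'oblast_2019') are illustrative.
def attrition_models(panel):
    # Without 2019 location controls: interview language predicts attrition
    m1 = smf.logit("attrited ~ russian_language_2019 + ethnic_russian_2019",
                   data=panel).fit(disp=0)
    # With oblast fixed effects: the language effect is absorbed by location
    m2 = smf.logit("attrited ~ russian_language_2019 + ethnic_russian_2019 "
                   "+ C(oblast_2019)", data=panel).fit(disp=0)
    return m1, m2
```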
Who does not respond to sensitive questions?
Item non-response refers to situations when individuals do not provide answers to specific questions by responding that they do not know or refuse to answer. They are known as the “don’t knows.” Related to unit non-response bias, respondents may take part in a survey but avoid answering sensitive questions. While wartime generates a “rally-around-the-flag” effect, it also brings out strategic hedging on sensitive questions among parts of the population. Those opposed to NATO membership may respond “don’t know” because opposition may be perceived as unpatriotic.
To assess the types of people who may be avoiding sensitive questions, we focus on 62 respondents who answered the 2019 survey in Russian but took the 2022 survey in Ukrainian, which amounts to just under 15 percent of the re-surveyed sample. There is a marked change in the language used by participants across the two survey waves. Many respondents who completed the survey in Russian in 2019 were in areas most affected by the war in 2022 and were subsequently not re-surveyed. In the re-surveyed sample, 40 percent fewer respondents completed the survey in Russian in 2022 compared to 2019. We focus on these language-switching respondents because they may feel social pressure to respond in certain ways and, thus, avoid certain questions.
We compare the average number of item non-responses per respondent. To do so, we count the number of times that respondents answered “don’t know” or “refused to answer” across 14 potentially sensitive political questions. On average, respondents who changed their interview language from Russian to Ukrainian between the two waves provided 1.9 non-responses—50 percent more than respondents who did not change their interview language.
This descriptive statistic ignores important variation across questions. The results of a statistical analysis indicate that respondents who changed their interview language from Russian to Ukrainian were more likely to provide a non-response to three questions: (1) where they place their country on a ten-point scale of “towards the West” to “towards Russia”; (2) whether free and fair elections should be safeguarded during the war; and (3) how likely they think it is that Western states will continue to provide military support to the Ukrainian government.
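A minimal sketch of this type of item non-response analysis is below. It assumes a single 2022 data frame with string-coded answers and an indicator for respondents who switched interview language; all variable names are illustrative rather than the actual survey variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative names for the 14 potentially sensitive items and the non-response
# codes; the actual questionnaire and coding differ.
SENSITIVE_QS = [f"q{i}" for i in range(1, 15)]
NONRESPONSE_CODES = {"don't know", "refused"}

def item_nonresponse_means(wave2: pd.DataFrame) -> pd.Series:
    # Count "don't know"/"refused" answers per respondent across the 14 items
    counts = wave2[SENSITIVE_QS].isin(NONRESPONSE_CODES).sum(axis=1)
    # Compare average counts for language switchers vs. everyone else
    return counts.groupby(wave2["switched_language"]).mean()

def per_question_models(wave2: pd.DataFrame) -> dict:
    # Question-by-question logits: does switching interview language predict
    # a non-response to that particular item?
    models = {}
    for q in SENSITIVE_QS:
        df = wave2.assign(nonresponse=wave2[q].isin(NONRESPONSE_CODES).astype(int))
        models[q] = smf.logit("nonresponse ~ switched_language", data=df).fit(disp=0)
    return models
```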
Truthful responses to sensitive questions?
In our surveys, we observe a rise in the number of people declaring that they speak Ukrainian, not Russian. Russian actions and atrocities in this war are likely to have led people to distance themselves from Russia. But there is also the possibility that what we observe is also driven by social desirability bias, and it is difficult for researchers to disentangle genuine changes in preferences from what is known as preference falsification.
War requires patriotic performance, a rallying that induces people to express positions they know are socially and (geo)politically correct. Declaring that they speak Ukrainian, not Russian, may be more socially desirable in government-controlled regions. Also, people may feel the need to project confidence and belief in victory while hiding doubt.
Empirically, preference falsification presents a challenge in addition to the unit and item non-response problems explored above. Like non-response, preference falsification is often driven by social desirability bias. Researchers have long been aware of preference falsification in authoritarian contexts such as Russia and have developed survey instruments to elicit truthful responses. These instruments, many of which rely on an experimental design, allow respondents to provide truthful answers to sensitive questions. In the absence of an experimental set-up, it is not possible to empirically measure preference falsification. However, based on our analysis of item non-response—the fact that people who switched language from Russian to Ukrainian are more likely to avoid answering geopolitically salient and sensitive questions—it is possible that preference falsification is affecting results. Research on sensitive opinions should employ experimental methods to evaluate this. Experimental designs can be challenging, but not impossible, in phone surveys, which tend to be shorter in duration (i.e., include fewer questions) than face-to-face surveys.
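For illustration, the sketch below shows the standard difference-in-means estimator for one widely used instrument of this kind, the list experiment (item count technique). The data frame and column names are illustrative assumptions, not drawn from our surveys.

```python
import pandas as pd

# List experiment (item count technique): a control group counts how many of J
# neutral statements apply to them, while a treatment group receives the same
# list plus the sensitive statement. The gap in mean counts estimates the share
# endorsing the sensitive item without any respondent revealing it individually.
# Column names ('treated', 'item_count') are illustrative assumptions.
def list_experiment_estimate(df: pd.DataFrame) -> float:
    means = df.groupby("treated")["item_count"].mean()
    # Estimated prevalence of the sensitive attitude (treated coded 1, control 0)
    return means.loc[1] - means.loc[0]
```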
What Are the Political Stakes of Conflict Polling in Ukraine?
Wartime surveys in Ukraine are meaningful, and they reveal how ordinary Ukrainians are, justifiably, affected and angered by Russia’s war and its crimes. But conjuring a transcendent Ukrainian general will in wartime from the findings of telephone surveys conducted in government-controlled Ukraine demands skepticism.
Public opinion data—not just in this context but more generally—work in a political culture with a default trust in numbers but often little curiosity about how the data were generated. Summation phrases like “the majority of Ukrainians” or the more directly homogenizing expressions “Ukrainians are united…,” “Ukrainians believe…,” and “Ukrainians want…” are what polling data become in political discourse.
This means that partial country polls are taken to represent all of Ukraine. Regions like Crimea, and to a lesser extent, the Donbas, are simultaneously seen as Ukraine but unseen and unheard in public opinion research from Ukraine.
Ukraine is a very large and diverse country, and the least we can do amidst the massive trauma of Russia’s invasion is to acknowledge and respect its socio-cultural and geographic complexity. While there is strong evidence that Russia’s invasion of Ukraine has shifted public opinion towards the West, researchers have an obligation to convey the difficulties in gathering sensitive survey data in war zones and, thus, temper how data are generalized and represented in public discourse. This requires nuance when discussing the preferences of Ukrainians from all areas, including those in exile or living under Russian control, and greater efforts to communicate uncertainty.
Kit Rickard is Research Associate at the United Nations University World Institute for Development Economics Research (UNU-WIDER).
Gerard Toal is Professor in Government and International Affairs at Virginia Tech in the Washington, DC, metro area.
Kristin M. Bakke is Professor in Political Science and International Relations at University College London and an Associate at the Peace Research Institute Oslo (PRIO), Norway.
John O’Loughlin is Professor of Geography at the University of Colorado Boulder.