Response Rate FAQ

1. For our institution to have confidence in our results, is a minimum response rate required?

This depends, in part, on the size of your institution, how you plan to use your NSSE results, and your specific campus context. In 2019, institutional response rates for NSSE ranged from 5% to 81%, with an average of 28%.

NSSE research suggests that both response rate and number of respondents matter in ensuring that institutional estimates for first-year students and seniors are reliable. A NSSE study (Fosnacht, Sarraf, Howe, & Peck, 2017) found that even relatively low response rates provided reliable institution-level estimates, albeit with greater sampling error and less ability to detect statistically significant differences from comparison institutions.

Depending on institution size, as few as 25 to 75 respondents appeared to provide reliable institution-level estimates for most institutions (Fosnacht et al., 2017). This comports with Pike’s (2012) finding that as few as 50 students could provide dependable group estimates of student engagement. However, institutions analyzing subpopulations of students (for example, using NSSE’s Major Field Report) generally should collect data from as many respondents as possible so that each subgroup is adequately represented.
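
To see why modest respondent counts can still yield usable precision, consider the sampling error of an estimated proportion. The sketch below uses assumed figures (an institution of 2,000 eligible students and a worst-case proportion of 0.5, not NSSE data) to show how the margin of error shrinks as the respondent count grows:

```python
import math

def proportion_se(p: float, n: int, N: int) -> float:
    """Standard error of a proportion estimated from n respondents,
    with the finite population correction for a population of size N."""
    fpc = math.sqrt((N - n) / (N - 1))  # shrinks the SE when n is a large share of N
    return math.sqrt(p * (1 - p) / n) * fpc

# p = 0.5 is the worst case (largest SE); N = 2,000 eligible students is assumed.
for n in (25, 50, 75, 200):
    se = proportion_se(p=0.5, n=n, N=2000)
    print(f"n = {n:>3}: SE = {se:.3f}, 95% margin of error = +/-{1.96 * se:.3f}")
```

At 50 respondents the worst-case margin of error is roughly 14 percentage points; at 200 it falls below 7. Whether that precision is adequate depends on how the results will be used.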

NSSE also recommends that institutions benchmark their response rates against peer institutions with similar enrollments. Institutions with larger enrollments generally see lower response rates (NSSE, 2016), but they enjoy a higher degree of confidence in their estimates because of the sheer number of respondents.

2. Does a low response rate mean our results are biased?

A high response rate is no guarantee of data quality, nor does a low response rate automatically mean your results are biased. For results to be biased in any meaningful way, nonrespondents' level of engagement must differ substantially from that of respondents. In other words, one must take into account both the response rate and the differences between respondents and nonrespondents. Although we might feel more confident with a higher response rate, the NSSE study (Fosnacht et al., 2017) found that survey administrations that collected a minimum number of respondents, even with a low response rate, provided unbiased estimates for the majority of institutions.
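
A standard decomposition from the nonresponse literature makes this concrete: the bias in a respondent mean is approximately the nonresponse rate multiplied by the difference between respondent and nonrespondent means. The following sketch uses invented engagement scores for illustration:

```python
def nonresponse_bias(resp_mean: float, nonresp_mean: float, response_rate: float) -> float:
    """Approximate bias of the respondent mean:
    (1 - response rate) * (respondent mean - nonrespondent mean)."""
    return (1 - response_rate) * (resp_mean - nonresp_mean)

# Invented engagement scores on a 0-60 scale, for illustration only.
print(nonresponse_bias(40.0, 39.0, response_rate=0.20))  # 0.8: low rate, similar groups
print(nonresponse_bias(40.0, 30.0, response_rate=0.60))  # 4.0: higher rate, dissimilar groups
```

Note that the 20% response rate with similar groups yields less bias than the 60% response rate with dissimilar groups.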

Many prominent survey researchers have also questioned the widely held assumption that low response rates are associated with biased results (Groves, 2006; Massey & Tourangeau, 2013; Peytchev, 2013).

For additional information related to this question, see the answer to the final question below about respondent representativeness.

3. While reviewing our NSSE results, should we consider data quality indicators besides response rate? Would another indicator provide a better measure of survey data quality?

Response rate, respondent count, and sampling error are all included in your NSSE reports, providing several indicators of data quality. Results from NSSE's study on response rates (Fosnacht et al., 2017) indicate that respondent count has particular value and may be more useful for determining the reliability of NSSE estimates than other measures.

Other data quality indicators exist, but because response rate is a key consideration for many campus constituents, it cannot be ignored, even when other indicators suggest the data are valid and reliable.

A low response rate will influence how results are received, regardless of how many individuals responded or what the sampling error is. For this reason, maximizing response remains a worthy goal, and, importantly, helps ensure sufficient data for analyzing campus subgroups and running statistical analyses.

Information about respondent representativeness across key student subpopulations is also important to assessing data quality. In addition to using the information provided in NSSE reports, we urge users to conduct representativeness studies by comparing the characteristics of NSSE respondents and nonrespondents.
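
One straightforward way to conduct such a study is a goodness-of-fit test that compares respondent counts by subgroup with the counts expected if respondents mirrored the eligible population. The sketch below is hypothetical; the subgroup labels and counts are assumptions, not NSSE data:

```python
# Hypothetical counts; replace with your institution's records and NSSE respondent file.
from scipy.stats import chisquare

population = {"Full-time": 1600, "Part-time": 400}   # all eligible students
respondents = {"Full-time": 460, "Part-time": 60}    # NSSE respondents

n_resp, n_pop = sum(respondents.values()), sum(population.values())
# Expected respondent counts if respondents mirrored the population
expected = [n_resp * population[g] / n_pop for g in population]
observed = [respondents[g] for g in population]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")  # a small p flags disproportionate representation
```

A disproportionate result on its own is not fatal; as discussed in the final question below, it matters most when engagement also differs across the groups.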

4. Is ours the only institution struggling with a low or declining survey response rate?

Researchers across a number of social science disciplines in the US and abroad have witnessed a steady erosion in survey response rates over time (National Research Council, 2013). Higher education researchers, NSSE included, also have seen a general decline in survey participation.

5. Should we worry that certain campus subpopulations did not participate in the survey in proportion to their overall numbers?

We generally do not find large differences between different types of students on NSSE measures (academic major being an exception), so disproportionate representation should not seriously compromise the accuracy of engagement estimates. For example, if an institution's adult learners were underrepresented among NSSE respondents but the results indicate they interacted with faculty at levels comparable to those of traditional-aged learners, the student-faculty interaction scores are most likely unbiased.

Differences in engagement and response rates do exist, however, between men and women, as well as between full-time and part-time students. NSSE addresses these differences by weighting the results. NSSE also encourages institutions to dig into their own data to discover meaningful differences between types of students and then to evaluate representativeness.

If institutions discover disproportionate representation and differences in engagement according to particular student characteristics, developing weights to address these imbalances may be warranted.
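
One common technique is post-stratification: each respondent in a subgroup receives a weight equal to the subgroup's share of the population divided by its share of respondents. The sketch below uses invented numbers to show the idea:

```python
# Post-stratification sketch; group labels, counts, and means are invented for illustration.
population = {"Men": 900, "Women": 1100}    # all eligible students
respondents = {"Men": 150, "Women": 350}    # NSSE respondents

n_pop, n_resp = sum(population.values()), sum(respondents.values())
# weight = population share / respondent share; underrepresented groups are weighted up
weights = {g: (population[g] / n_pop) / (respondents[g] / n_resp) for g in population}
print(weights)  # {'Men': 1.5, 'Women': 0.785...}

# Weighted mean of an engagement score from per-group respondent means
group_means = {"Men": 38.0, "Women": 42.0}
weighted = sum(weights[g] * respondents[g] * group_means[g] for g in population) / n_resp
print(round(weighted, 1))  # 40.2, versus an unweighted mean of 40.8
```

Weighting pulls the estimate toward the value it would take if respondents mirrored the population; here the underrepresented, lower-scoring group is weighted up, lowering the overall mean slightly.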

References

Fosnacht, K., Sarraf, S., Howe, E., & Peck, L. K. (2017). How important are high response rates for college surveys? The Review of Higher Education, 40(2), 245–265.

Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646–675.

Massey, D. S., & Tourangeau, R. (2013). Where do we go from here? Nonresponse and social measurement. The ANNALS of the American Academy of Political and Social Science, 645(1), 222–236.

National Research Council. (2013). Nonresponse in social science surveys: A research agenda. Washington, DC: The National Academies Press.

National Survey of Student Engagement. (2016). NSSE 2016 U.S. response rates by institutional characteristics. Bloomington, IN: Center for Postsecondary Research.

Peytchev, A. (2013). Consequences of survey nonresponse. The ANNALS of the American Academy of Political and Social Science, 645(1), 88–111.

Pike, G. R. (2012). NSSE benchmarks and institutional outcomes: A note on the importance of considering the intended uses of a measure in validity studies. Research in Higher Education, 54(2), 149–170.