Using evidence to inform institutional improvement efforts has been a goal at NSSE since its inception. To that end, NSSE data and reports provide actionable information about critical dimensions of learning in higher education.
However, we often get questions about ways to analyze and interpret our data and reports, especially with respect to diversity and inclusion. Some common questions include:
How do we identify subgroups of students struggling or excelling in their experiences?
How do we analyze subgroups with very few responses?
How do we better identify the needs and experiences of students from underrepresented or underserved backgrounds?
How do we avoid approaching these data from a deficit perspective?
How do we better share these data and results with others on campus?
The ways these data are analyzed and interpreted are important. We encourage you to be conscious of the ways our work may perpetuate problematic and limited understandings of already marginalized groups. In this guide, we offer a few tips to consider for more inclusive data sharing and analysis. Whether you are preparing reports for internal stakeholders or conducting research to share externally, we hope these tips allow us all to be more attentive to the ways we engage in this work.
Survey data such as NSSE results can be used to broadly assess the experiences of students in a way that is efficient and accessible. Examining your institution’s results overall and drilling down to disciplinary or departmental subgroups can quickly give you an overview of students’ common experiences. There is a danger, however, in relying on the results of the “average” student, who is likely reflective of an institution’s majority populations. An overreliance on examining the experiences of our average students likely hides the experiences of more vulnerable populations.
One of the easiest ways to be more inclusive in analyses is to disaggregate your data, as aggregated data can mask the variation of experiences within your institution. In your NSSE data files, you can disaggregate based on a variety of subgroups including our expanded student identity items:
Identity characteristics (racial/ethnic identification, gender identity, sexual orientation, first-generation status, veteran status, diagnosed disabilities or impairments, etc.)
Student characteristics (transfer status, major or major field, class level, enrollment status, taking courses online, grades, educational aspirations, living situation, etc.)
Engagement characteristics (participation in high-impact practices, student athlete membership, fraternity or sorority membership, time spent studying, participation in co-curricular activities, etc.)
The intersection of these and other characteristics
Be sure to make use of the more inclusive student-reported variables in NSSE 2023 for gender identity, sexual orientation, Greek-letter organizations (US only), and ZIP code (US only). You might also consider incorporating important subgroups specific to your institution in your NSSE population file as a grouping variable, which is then returned to you in your data file. Contact your Project Services Team for more details.
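As a minimal sketch of what disaggregation can look like in practice, the example below uses pandas to compare an overall result with results broken out by one characteristic and by an intersection of two. The file name and column names (first_gen, enrollment_status, engagement_score) are hypothetical stand-ins; substitute the variables that actually appear in your institution's data file.

```python
import pandas as pd

# Hypothetical NSSE-style respondent file; adapt the variable names
# to your institution's actual data file.
df = pd.read_csv("nsse_respondents.csv")

# Overall (aggregate) result: easy to report, but it can mask variation.
print("Overall mean engagement:", round(df["engagement_score"].mean(), 2))

# Disaggregate by a single student characteristic (e.g., first-generation status).
by_first_gen = df.groupby("first_gen")["engagement_score"].agg(["count", "mean"])
print(by_first_gen.round(2))

# Disaggregate by an intersection of characteristics; small cell sizes
# surface here, which is useful to know before interpreting the means.
by_intersection = (
    df.groupby(["first_gen", "enrollment_status"])["engagement_score"]
      .agg(["count", "mean"])
      .round(2)
)
print(by_intersection)
```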
Those interested in disaggregating survey data such as NSSE results typically encounter subpopulations with small numbers of respondents. This could be due to a variety of reasons such as a low response rate, a small population from which to elicit responses, or data collection methods that make subpopulation respondents difficult to contact (e.g., inviting respondents with rarely checked email addresses) or that create difficulties for subpopulations to respond (e.g., low access to technology for an online survey).
Some things to consider when studying small populations:
Think ahead of time about the small populations you want to better understand, so you can be strategic about data collection from these groups.
Consider collecting data from multiple cohorts of your small populations.
Triangulate your findings with other forms of data.
Keep in mind that your small number of participants may capture most or all of your small population.
Reset the expectations of your audience before sharing results to reinforce the notion of using data to start conversations rather than to generalize or make statistical predictions.
People interested in small populations may wonder what to do about groups that have only a handful of responses. Statistically, methods for analyzing small populations are limited, but it is important not to disregard them. One tendency in quantitative data analysis is to maximize counts for the sake of generalizability. This may lead to a dependence on large numbers, but we must not forget that percentage differences, effect sizes, and descriptive analyses are legitimate forms of analysis that provide us with important information. Telling the story of a subpopulation is important, even if that group is small.
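For instance, a percentage difference and a simple effect size can be computed directly even when one group is small. The sketch below illustrates the idea; the file name, the veteran indicator and its 0/1 coding, and the engagement_score column are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("nsse_respondents.csv")

# Hypothetical coding: 1 = student veteran, 0 = not a veteran.
small = df.loc[df["veteran"] == 1, "engagement_score"].dropna()
rest = df.loc[df["veteran"] == 0, "engagement_score"].dropna()

# Report counts alongside means so readers can see how many students
# the numbers represent.
print(f"Veterans: n={len(small)}, mean={small.mean():.1f}")
print(f"Non-veterans: n={len(rest)}, mean={rest.mean():.1f}")

# Percentage difference relative to the comparison group's mean.
pct_diff = 100 * (small.mean() - rest.mean()) / rest.mean()
print(f"Percentage difference: {pct_diff:.1f}%")

# Cohen's d using a pooled standard deviation (one common effect size).
pooled_sd = np.sqrt(
    ((len(small) - 1) * small.var(ddof=1) + (len(rest) - 1) * rest.var(ddof=1))
    / (len(small) + len(rest) - 2)
)
print(f"Cohen's d: {(small.mean() - rest.mean()) / pooled_sd:.2f}")
```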
One way to investigate the experiences of small populations is by exploring experiences based on identity, such as gender identity. Updated in 2023, NSSE offers students the option to “select all that apply” to best describe themselves. From a data perspective, each identity now has its own unique variable (gi_woman, gi_man, gi_agender, etc.), allowing selection of all students who chose that identity and therefore greater customization. Patterns of engagement for these smaller subgroups are notably different, further encouraging us to keep looking within these groups (BrckaLorenz & Hurtado, 2015).
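Because each identity is stored as its own indicator variable, students who selected a given identity can be pulled directly, and a student who selected multiple identities appears in each relevant group rather than being forced into a single category. A minimal sketch, assuming the indicators are coded 1 when selected and that a hypothetical engagement_score column exists:

```python
import pandas as pd

df = pd.read_csv("nsse_respondents.csv")

# Each identity has its own indicator (gi_woman, gi_man, gi_agender, ...),
# so "select all that apply" responses are preserved rather than collapsed.
agender_students = df[df["gi_agender"] == 1]
print(f"Students selecting agender: {len(agender_students)}")

# Students who selected more than one identity are counted in each group,
# which keeps overlap visible.
for col in ["gi_woman", "gi_man", "gi_agender"]:
    subgroup = df[df[col] == 1]
    print(col, len(subgroup), round(subgroup["engagement_score"].mean(), 1))
```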
Although such groups of students are likely smaller at any given institution, the proportion of students who identify with these smaller groups is growing nationwide, and we know these students experience higher education differently. Therefore, it is important that we take the time to understand how they are engaging within our institutions, so that we can ensure our practices are meeting the needs of all students.
How you conceptualize your research is just as important as the methods you choose. Your framework will help you determine what variables to use and how to approach interpretations (Rios-Aguilar, 2014). Many frameworks either do not fully consider the experiences of marginalized groups or approach their experiences from a deficit perspective. Make sure to take some time to thoughtfully select a framework to help answer your research questions.
As an example, when presenting results from your NSSE Snapshot, you might reconsider how you discuss these findings. Bensimon (2007) urged us to think about practitioner knowledge and how it influences student experiences and success. By this, Bensimon argued that we should move away from focusing interventions on students and toward thinking about the ways practitioners could act to facilitate data-informed change. Therefore, when looking at participation in high-impact practices reported in the Snapshot, it is not enough for us to merely point out where students failed to engage. Instead, we should use this information to examine how practitioners can move toward “equity-minded practices” and “contextualized problem-defining and -solving” (p. 447).
In another example, Malcom-Piqueux (2015) describes a person-centered approach to critical quantitative research that identifies groups of people based on similar experiences or outcomes rather than relying on variable-based approaches that explore relationships between variables. A framework like this, which focuses on characterizing differences across groups of students, examines engagement experiences holistically so that inequities in educational involvement can be revealed without making assumptions about student identities. Having a framework in mind before and during data analysis can help researchers and audiences better understand and interpret results as well as address issues of purpose and expectations.
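As a rough stand-in for such person-centered methods (in practice, latent class or latent profile analysis is more typical), the sketch below groups students by their patterns across a few hypothetical engagement indicator columns and only afterward examines the resulting groups. The use of k-means, the column names, and the number of clusters are all assumptions made for illustration, not the method described by Malcom-Piqueux.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("nsse_respondents.csv")

# Hypothetical engagement indicator columns; a person-centered approach
# groups students with similar response patterns across these indicators.
indicators = ["collaborative_learning", "student_faculty", "quality_interactions"]
engagement = df[indicators].dropna()

# Standardize the indicators, then cluster students by response pattern.
# k-means is only a simple illustration of the person-centered idea.
X = StandardScaler().fit_transform(engagement)
labels = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(X)

# Identity variables are not used to form the groups; they can be examined
# afterward to see whether inequities fall along the clusters.
profile = engagement.assign(cluster=labels)
print(profile.groupby("cluster").mean().round(2))
```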
Making comparisons between subgroups is a common strategy for analyzing and presenting data. When looking at engagement patterns for a single group of students, researchers and audiences naturally wonder—“Is that normal, too low, too high, better, or worse than other students?” Unfortunately, sometimes the way this strategy is implemented implicitly positions certain groups as normative. For example, when looking at race and ethnicity, White students’ experiences are often held as the norm to which other groups are compared (Mayhew & Simonoff, 2015), carrying the assumption that White students’ experience is “normal” and implying this should be achieved by other student groups.
Statistical comparisons could actually mask challenges or successes in institutional results. Looking through reports and results for statistically significant differences and seeing none might make one believe that things are going well because no one is engaging more or less than anyone else. But this may hide the fact that all students are engaging less than we would want or that students are all engaging more than we expected. Similarly, if there are significant differences, the higher scoring group may still not be engaging at acceptable levels. Even when the goal of an analysis is to make a comparison, it is important that we examine the results independently—without reference to other results. We often encourage institutions participating in NSSE to choose a normative reference before looking at their data and reports. By determining ahead of time what levels of engagement or student experiences would be considered a success or a challenge for your institution, questions of “Is that good?” can be answered without any comparison.
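One simple way to operationalize a pre-set reference point is to compare each group's result to a benchmark chosen before looking at the data, as in the sketch below. The benchmark value, file name, and column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("nsse_respondents.csv")

# Benchmark chosen in advance: the level of engagement the institution
# would consider a success, independent of how other groups score.
BENCHMARK = 40.0

results = (
    df.groupby("first_gen")["engagement_score"]
      .agg(["count", "mean"])
      .round(1)
)
results["meets_benchmark"] = results["mean"] >= BENCHMARK
print(results)
```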
If comparisons are necessary for your audience or your research questions, think carefully about the comparison or reference groups you are using. As stated previously, using a majority group as the reference for comparison (White, straight, cisgender, etc.) implies that these students are the standard against which all others should be measured. In a regression model, for example, using the majority identity group as the reference so that smaller minority groups are compared against it does not allow one to see how these minority groups compare with one another. Even effect coding, a practice in which groups are coded so that their regression coefficients can be compared to the overall group average (Mayhew & Simonoff, 2015), essentially compares minority groups to the majority because the overall group average is likely reflective of the majority groups on campus.
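For illustration, the sketch below uses statsmodels with hypothetical variable names to contrast conventional dummy coding, in which every group is compared to a single reference category, with effect (sum) coding, in which each group is compared to the unweighted average of the group means. Neither choice avoids the interpretive issues described above, so the coding should be chosen deliberately and explained to your audience.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nsse_respondents.csv").dropna(
    subset=["engagement_score", "race_ethnicity"]
)

# Dummy (treatment) coding: each coefficient compares a group to the
# reference category (by default, the first category alphabetically).
dummy_model = smf.ols("engagement_score ~ C(race_ethnicity)", data=df).fit()
print(dummy_model.params)

# Effect (sum) coding: each coefficient compares a group to the unweighted
# mean of the group means, with one category's coefficient omitted.
effect_model = smf.ols("engagement_score ~ C(race_ethnicity, Sum)", data=df).fit()
print(effect_model.params)
```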
It may be useful to consider doing comparative analyses within marginalized subpopulations, thereby, temporarily at least, setting aside the experiences of majority populations that are likely already well known. Analyses that focus on comparisons between subpopulations of a minority population—for example, biracial students with differing racial/ethnic heritages (BrckaLorenz, Harris, & Nelson Laird, 2017)—can help remind audiences that groups of students are often not monolithic and that, although some students within a minority group may find success, others may still be challenged. Understanding why subpopulations of minority groups have different experiences can help improve the experiences of all students.
Whether using one or a combination of techniques mentioned here, we encourage you to be conscious of the way comparisons and findings are communicated to others. Although quantitative data and results may be thought of as objective, the comparisons we make, particularly in our choices of reference groups, can send powerful messages about our students and our beliefs as researchers and assessment professionals.
In our first tip, we suggested responsible disaggregation of data. However, there are some circumstances in which that approach might actually cause more harm. When sharing data, be cautious about making them identifiable. Those with whom data are shared should not be able to attribute responses to any specific person. Intersectional work can be especially susceptible to identifying individuals. If results for particularly small groups must be shared, masking the identity characteristics of the respondents may be advisable. Although it may be less satisfying and feel impractical to think about the experiences of an anonymous, small group of individuals, understanding that these are the experiences of some of your students, regardless of who they are, can still be useful in starting conversations and making improvements.
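One common safeguard is to suppress any reported cell whose count falls below a minimum before results are shared. The sketch below shows the idea; the threshold, file name, and the first_gen and engagement_score columns are assumptions, and your institution's own reporting standards should govern the actual rule.

```python
import pandas as pd

df = pd.read_csv("nsse_respondents.csv")

# Minimum cell size before a result is reported; the value here is
# arbitrary and should follow your institution's reporting standards.
MIN_N = 10

cells = (
    df.groupby(["gi_agender", "first_gen"])["engagement_score"]
      .agg(["count", "mean"])
      .round(1)
)

# Mask means for cells that are too small to share without risking
# identification, while keeping the row so the suppression is visible.
cells.loc[cells["count"] < MIN_N, "mean"] = None
print(cells)
```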
Depending on your research questions, you may need to use sophisticated statistical methods that require dropping especially small groups of students from the analysis or aggregating data to create larger groups of students. When this is necessary, we encourage you to acknowledge these limitations and be open about how small populations were dropped or aggregated. Acknowledging that these groups either were not included or were combined with others can bring clarity to ambiguous “other” groupings, add context to findings, and generate conversations about how to examine the experiences of students who were not included in the analysis or whose experiences may be hidden by aggregation. Transparency in methodological choices with attention to limitations and future research plans can turn less inclusive analyses into more inclusive conversations.
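When small categories must be combined for an analysis, the combination can at least be documented explicitly so the resulting "other" grouping is not ambiguous. A brief sketch of one way to do this, again with a hypothetical threshold and column name:

```python
import pandas as pd

df = pd.read_csv("nsse_respondents.csv")

# Count respondents per category and flag those below an arbitrary threshold.
counts = df["race_ethnicity"].value_counts()
too_small = counts[counts < 10].index.tolist()

# Record exactly which categories were combined so the limitation can be
# reported alongside the results rather than hidden in an "other" label.
print("Categories combined into 'Other (combined)':", too_small)
df["race_ethnicity_combined"] = df["race_ethnicity"].replace(
    {cat: "Other (combined)" for cat in too_small}
)
print(df["race_ethnicity_combined"].value_counts())
```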