Using evidence to inform institutional improvement efforts has been a goal at NSSE since its inception. This is why NSSE data and reports provide actionable information about critical dimensions of student learning and engagement.
However, we often receive questions about ways to analyze and interpret our data and reports, especially as they relate to diversity and inclusion. Some common questions include:
How do we identify subgroups of students struggling or excelling in their experiences?
How do we analyze subgroups with very few responses?
How do we better identify the needs and experiences of students from underrepresented backgrounds?
How do we avoid approaching the data from a deficit perspective?
How do we better share these data and results with others on campus?
The ways these data are analyzed and interpreted are important. We encourage you to be conscious of the ways our work may perpetuate problematic and limited understandings of already marginalized groups. In this guide we offer a few tips to consider for more inclusive data sharing and analysis. Whether you are preparing reports for internal stakeholders or conducting research to share externally, we hope these tips allow us all to be more attentive to the ways we engage in this work.
Survey data such as NSSE's can be used to broadly assess the experiences of students in a way that is efficient and accessible. Examining your institution’s results overall and drilling down to disciplinary or departmental subgroups can quickly give you an overview of students’ common experiences. There is a danger, however, in relying on the results of the “average” student. An average student is likely reflective of an institution’s majority populations, and an overreliance on examining the experiences of our average students likely hides the experiences of more vulnerable populations.
One of the easiest ways to be more inclusive in analyses is to disaggregate your data as aggregated data can mask the variation of experiences within your institution. In your NSSE data files, you will have the ability to disaggregate based on a variety of subgroups including:
Identity characteristics (racial/ethnic identification, gender identity, sexual orientation, first-generation status, veteran status, diagnosed disabilities or impairments, etc.)
Student characteristics (transfer status, major or major field, class level, enrollment status, taking courses online, grades, educational aspirations, living situation, etc.)
Engagement characteristics (participation in high-impact practices, student athlete membership, fraternity or sorority membership, time spent studying, participation in co-curricular activities, etc.)
The intersection of these and other characteristics
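To make the idea of disaggregation concrete, the sketch below groups hypothetical respondent records by one characteristic and then by an intersection of two. The field names (`first_gen`, `class_level`, `score`) and the values are illustrative only, not actual NSSE variable names.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical respondent records; names and values are illustrative,
# not actual NSSE data-file variables.
respondents = [
    {"first_gen": True,  "class_level": "Senior",   "score": 38.0},
    {"first_gen": True,  "class_level": "Freshman", "score": 30.0},
    {"first_gen": False, "class_level": "Senior",   "score": 42.0},
    {"first_gen": False, "class_level": "Senior",   "score": 35.0},
    {"first_gen": False, "class_level": "Freshman", "score": 31.0},
]

def disaggregate(records, *keys):
    """Group records by one or more characteristics and report
    each group's size and mean engagement score."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in keys)].append(r["score"])
    return {g: (len(s), round(mean(s), 1)) for g, s in groups.items()}

# A single characteristic, then an intersection of characteristics
print(disaggregate(respondents, "first_gen"))
print(disaggregate(respondents, "first_gen", "class_level"))
```

Reporting the group size alongside each mean, as this sketch does, also makes it immediately visible when an intersectional cell has become too small to interpret or share safely.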
You might also consider incorporating important subgroups specific to your institution in your NSSE population file as a grouping variable which is then returned to you in your data file. Contact your Project Services Team for more details.
Those interested in disaggregating survey data such as NSSE's typically encounter subpopulations with small numbers of respondents. This could be due to a variety of reasons such as a low response rate, a small population from which to elicit responses, or data collection methods that make subpopulation respondents difficult to contact (e.g., inviting respondents with rarely-checked email addresses) or create difficulties for subpopulations to respond (e.g., low access to technology for an online survey).
Some things to consider when studying small populations:
Think ahead of time about the small populations you want to better understand, so you can be strategic about data collection from these groups
Consider collecting data from multiple cohorts of your small populations
Triangulate your findings with other forms of data
Keep in mind that it is possible your small number of participants captures most or all of your small population
Reset expectations of your audience before sharing results to reinforce the notion of using data to start conversations, not to generalize or make statistical predictions
People interested in small populations may wonder what to do about groups that only have a handful of responses. Statistically, methods for analyzing small populations are limited, but it’s important to not disregard them. One of the tendencies in quantitative data analysis is to maximize counts for generalizability purposes. This may lead to a dependence on large numbers, but we must not forget that percentage differences, effect sizes, and descriptive analyses are legitimate forms of analysis that provide us with important information. Telling the story of a subpopulation is important, even if that group is small.
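The descriptive measures mentioned above can be computed without large samples. The sketch below, using made-up engagement scores, reports the raw mean difference and Cohen's d (a standardized mean difference with a pooled standard deviation) for a subpopulation of only four respondents; both are interpretable even when significance testing is not appropriate.

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation: a standardized
    mean difference that remains interpretable for small groups."""
    na, nb = len(group_a), len(group_b)
    pooled = sqrt(((na - 1) * stdev(group_a) ** 2 +
                   (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled

# Illustrative scores: a very small subpopulation (n = 4)
# alongside a larger comparison group.
small_group = [22.0, 28.0, 25.0, 21.0]
larger_group = [30.0, 34.0, 29.0, 33.0, 31.0, 35.0, 28.0, 32.0]

d = cohens_d(small_group, larger_group)
print(f"n = {len(small_group)}, "
      f"mean difference = {mean(small_group) - mean(larger_group):.1f}, "
      f"d = {d:.2f}")
```

Presenting the group size, the raw difference, and the effect size together, rather than a p-value alone, supports the conversation-starting use of small-population results described above.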
One illustration of the value of small population analysis is an exploration of the experiences of those students who select “Another gender identity, please specify” when asked for their gender identity. This group is notably smaller than those who select “Man” or “Woman,” but there are many subgroups within the identities students specify. In the 2017 NSSE administration, common write-in identities for those selecting another gender identity included Nonbinary, Gender fluid, Agender, Transgender, Genderqueer, Two spirit, and many more. Patterns of engagement for these smaller subgroups are notably different, further encouraging us to continue looking within (BrckaLorenz & Hurtado, 2015).
Although such groups of students are likely smaller at any given institution, this group overall is growing nationwide, and we know that these students experience higher education differently. Therefore, it is important that we take the time to understand how they are engaging within our institutions, so that we can ensure our practices are meeting the needs of all students.
How you conceptualize your research is just as important as the methods you choose. Your framework will help you determine what variables to use and how to approach interpretations (Rios-Aguilar, 2014). Many frameworks do not fully consider the experiences of marginalized groups or approach their experiences from a deficit perspective. Make sure to take some time to thoughtfully select a framework to help answer your research questions.
As an example, when presenting results from your NSSE Snapshot Report, you might reconsider how you discuss these findings. Bensimon (2007) urged us to think about practitioner knowledge and how it influences student experiences and success. By this, Bensimon argued that we should move away from focusing interventions on students, and think about the ways practitioners could act in order to facilitate data-informed change. Therefore, when looking at participation in high-impact practices on the Snapshot Report, it is not enough for us to merely point out where students failed to engage. Instead, we should use this information to examine how practitioners can move toward “equity-minded practices” and “contextualized problem-defining and -solving” (Bensimon, 2007, p. 447).
In another example, Malcom-Piqueux (2015) takes a person-centered approach to critical quantitative research, identifying groups of people based on similar experiences or outcomes instead of using variable-based approaches that explore relationships between variables. A framework like this, which focuses on characterizing differences across groups of students, examines engagement experiences in a holistic manner so that inequities in educational involvement can be revealed without making assumptions about student identities. Having a framework in mind before and during data analysis can help researchers and audiences better understand and interpret results as well as address issues of purpose and expectations.
Making comparisons between subgroups is a common strategy for analyzing and presenting data. When looking at engagement patterns for a single group of students, it is natural for researchers and audiences to wonder: Is that “normal”? Is that too low? Is that too high? Is that better or worse than other students? Unfortunately, sometimes the way this strategy is implemented implicitly positions certain groups as normative. For example, when looking at race and ethnicity, White students’ experiences are often held as the norm to which other groups are compared (Mayhew & Simonoff, 2015). This approach implies that the experiences of White students are “normal” or what should be achieved by other students.
Statistical comparisons could actually mask challenges or successes in institutional results. Looking through reports and results for statistically significant differences and seeing none might make one believe that things are going well because no one is engaging more or less than anyone else. But this may hide the fact that all students are engaging less than we would want or that students are all engaging more than we expected. Similarly, if there are significant differences, the higher scoring group may still not be engaging at acceptable levels. It is important, even when the goal of an analysis is to make a comparison, that we examine the results independently without reference to other results. We often encourage institutions participating in NSSE to choose a normative reference before looking at their data and reports. By determining ahead of time what levels of engagement or student experiences would be considered a success or challenge for your institution, questions of “is that good?” can be answered without any comparison.
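The benchmark idea above can be sketched in a few lines: fix a normative reference before looking at the data, then evaluate each group against it rather than against each other. The benchmark value and group scores below are entirely hypothetical.

```python
from statistics import mean

# Hypothetical, pre-chosen normative reference: the engagement level
# below which your institution would consider results a concern.
BENCHMARK = 35.0

# Illustrative scores for two subgroups
group_scores = {
    "Group A": [33.0, 34.0, 32.0],
    "Group B": [34.5, 33.5, 34.0],
}

for name, scores in group_scores.items():
    m = mean(scores)
    flag = "below benchmark" if m < BENCHMARK else "at or above benchmark"
    print(f"{name}: mean = {m:.1f} ({flag})")

# Note: the two group means differ by only 1.0, so a between-group
# comparison alone would suggest "no difference, all is well" -- yet
# both groups fall short of the pre-chosen benchmark.
```

This is exactly the masking problem described above: a non-significant between-group difference can coexist with all groups engaging less than the institution would want.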
If comparisons are necessary for your audience or research questions, think carefully about the comparison or reference groups you are using. As stated previously, using a majority group as the reference for comparison (White, straight, cisgender, etc.) implies that these students are the standard against which all others should be measured. In a regression model, for example, using the majority identity group as a reference so that smaller minority groups are compared against them doesn’t allow one to see how these minority groups compare to each other. Even using effect coding, a practice where groups are coded so that their regression coefficients can be compared to the overall group average (Mayhew & Simonoff, 2015), essentially compares minority groups to the majority as the overall group average is likely reflective of the majority groups on campus.
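The difference between the two coding schemes discussed above can be made concrete. In this minimal sketch (the category labels and the choice of reference/holdout group are illustrative), dummy coding zeroes out the reference group so every coefficient is a comparison to that group, while effect coding assigns the holdout group -1 on every column so coefficients instead compare each group to the unweighted grand mean.

```python
def dummy_code(categories, reference):
    """Dummy coding: the reference group is all zeros, so each
    regression coefficient compares a group to the reference."""
    levels = [c for c in sorted(set(categories)) if c != reference]
    return [[1 if c == lvl else 0 for lvl in levels] for c in categories]

def effect_code(categories, holdout):
    """Effect coding: the holdout group is coded -1 on every column,
    so each coefficient compares a group to the unweighted grand mean."""
    levels = [c for c in sorted(set(categories)) if c != holdout]
    return [[-1 if c == holdout else (1 if c == lvl else 0)
             for lvl in levels] for c in categories]

# Illustrative categorical observations
students = ["A", "B", "C", "A", "C"]
print(dummy_code(students, reference="C"))
print(effect_code(students, holdout="C"))
```

Note that neither scheme escapes the concern in the text: if group C is the campus majority, both the reference group and the grand mean are dominated by C, so either coding still positions minority groups relative to the majority.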
It may be useful to consider doing comparative analyses within marginalized subpopulations, thereby setting aside, at least temporarily, the experiences of majority populations that are likely already well known. Analyses that focus on comparisons between subpopulations of a minority population, for example, biracial students with differing racial/ethnic heritages (BrckaLorenz, Harris, & Nelson Laird, 2017), can help remind audiences that groups of students are often not monolithic and although some students within a minority group may find success, others may still be challenged. Understanding why subpopulations of minority groups have different experiences can help improve the experiences of all students.
Whether using one or a combination of techniques mentioned here, we encourage you to be conscious of the way comparisons and findings are communicated to others. Although quantitative data and results may be thought of as objective, the comparisons we make, particularly in our choices of reference groups, can send powerful messages about our students and our beliefs as researchers and assessment professionals.
In our first tip, we argued for disaggregation of data. However, there are some circumstances where that might actually cause more harm. When sharing data with others, be cautious about making data identifiable. When sharing findings with groups on campus, they should not be able to attribute responses to a specific person. Intersectional work can be especially susceptible to identifying individuals. If results of particularly small groups should be shared, it may be best to mask the identity characteristics of the respondents. Although it may be less satisfying and feel impractical to think about the experiences of an anonymous, small group of individuals, understanding that these are the experiences of some of your students, regardless of who they are, can still be useful to starting conversations and making improvements.
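One common way to operationalize this caution is a small-cell suppression rule: before sharing a table, replace any count below a minimum cell size with a masked marker. The sketch below is illustrative only; the threshold of 5 is an assumption, and the right value should come from your institution's disclosure policies.

```python
def suppress_small_cells(counts, threshold=5):
    """Mask counts below a minimum cell size so that very small
    groups cannot be identified in shared reports.
    The default threshold of 5 is illustrative, not a policy."""
    return {group: (n if n >= threshold else f"<{threshold}")
            for group, n in counts.items()}

# Hypothetical respondent counts for a shared table
cell_counts = {"Group A": 142, "Group B": 37, "Group C": 3}
print(suppress_small_cells(cell_counts))
```

Suppression is especially important for intersectional tables, where cross-tabulating several characteristics can quickly produce cells of one or two students.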
Depending on your research questions, you may need to use sophisticated statistical methods that require dropping especially small groups of students from the analyses or aggregating data to create larger groups of students. When this is necessary, we encourage you to acknowledge these limitations and be open about how small populations were dropped or aggregated. Acknowledging how these groups either were not included or how they were combined with others can help bring clarity to ambiguous “other” groupings, can help add context to findings, and can be used to start conversations about how to examine the experiences of students who weren’t included or may have their experiences hidden through aggregation. Transparency in methodological choices with attention to limitations and future research plans can turn less inclusive analyses into more inclusive conversations.
Center for Postsecondary Research Indiana University School of Education 201 N. Rose Avenue Bloomington, IN 47405-1006 Phone: 812.856.5824 Email: nsse@indiana.edu