Psychometric Portfolio
The Psychometric Portfolio is a collection of evidence developed by NSSE to support the uses and interpretations of NSSE results. The studies included in the portfolio address different facets of the survey and its potential uses and interpretations. Studies are organized into six areas based on the sources of validity evidence described in AERA, APA, and NCME’s 2014 Standards for Educational and Psychological Testing: (a) Survey Content, (b) Response Process, (c) Internal Structure, (d) Relations to Other Variables, (e) Consequences of Survey Use, and (f) Other Validity Evidence. While NSSE is not a cognitive test, the principles outlined in the Standards are well suited to validating the use of NSSE survey measures.
Survey Content Evidence
Survey content evidence evaluates the extent to which a survey’s items represent its constructs as understood in the relevant theory and literature; a construct is a descriptor of a concept or characteristic of importance. NSSE constructs include the ten Engagement Indicators, such as Collaborative Learning, which reflects associated behaviors like asking another student for help or explaining course material to another student.
An example of NSSE content evidence is the Conceptual Framework, which grounds NSSE survey items in prominent college student development and engagement literature. The evidence provided in the Conceptual Framework can be used by NSSE users to judge the degree of alignment between Engagement Indicators, for example, and the items representing those constructs. This alignment in turn supports the interpretation of scores presented in NSSE reporting.
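To make the item-to-construct relationship concrete, the sketch below scores a hypothetical Collaborative Learning indicator as a simple mean of its items. The item names and the scoring rule are illustrative assumptions for demonstration, not NSSE's actual variables or scoring algorithm.

```python
import pandas as pd

# Hypothetical item-to-construct mapping for one Engagement Indicator.
# Column names and the simple-mean scoring are illustrative only; they
# are not NSSE's actual variable names or scoring algorithm.
COLLABORATIVE_LEARNING_ITEMS = [
    "asked_student_for_help",    # asked another student for help with course material
    "explained_material",        # explained course material to another student
    "studied_with_others",       # prepared for exams with other students
    "worked_on_group_projects",  # worked with other students on projects
]

def indicator_score(responses: pd.DataFrame, items: list) -> pd.Series:
    """Score an indicator as the mean of its items (1 = Never ... 4 = Very often)."""
    return responses[items].mean(axis=1)

# Three hypothetical respondents on a 1-4 frequency scale.
df = pd.DataFrame(
    [[4, 3, 4, 4], [2, 1, 2, 1], [3, 3, 2, 4]],
    columns=COLLABORATIVE_LEARNING_ITEMS,
)
print(indicator_score(df, COLLABORATIVE_LEARNING_ITEMS))
```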
Response Process Evidence
Response process evidence illustrates how respondents interpret and respond to survey questions. Evidence of response processes typically comes from respondents themselves, using methods such as cognitive interviews and focus groups in which respondents talk through the processes they engage in while answering a particular survey question. This kind of evidence is important for verifying that respondents answer questions as intended, thus allowing their responses to be interpreted as expected.
For the development of the current version of NSSE (released in 2013), NSSE researchers conducted cognitive interviews and focus groups with representative student populations to assess new items as well as those that underwent significant revision. In the cognitive interviews, students were asked to think aloud while responding to survey questions and, as a follow-up, to provide examples of the particular behaviors being explored. Findings from this study contextualized the interpretation of survey responses. For example, 2012 pilot questions about diversity showed similar responses across schools even though the amount of diversity varied across the institutions’ student populations. This section also includes evidence about how students interpret “how often” response options, the hours they devote to different activities, and the accuracy of self-reported grades.
Internal Structure Evidence
Internal structure evidence demonstrates the statistical coherence of survey items as they relate to various survey constructs. Internal structure evidence should reflect the item-construct and construct-construct relationships conceptualized by student engagement theory or prior research. For instance, the NSSE survey is designed to assess ten Engagement Indicators. Internal structure evidence to support the use of such measures includes factor analysis assessing how strongly items relate to their respective indicators rather than to other constructs. Other internal structure evidence includes measures of internal consistency, differential item functioning, and correlations among constructs.
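As one illustration, internal consistency is commonly summarized with Cronbach's alpha. The sketch below computes alpha on simulated item responses; the data, scale, and sample size are assumptions for demonstration rather than NSSE results.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total score)
    """
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses for a four-item indicator on a 1-4 scale,
# driven by a common latent engagement level per respondent.
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
items = np.clip(
    np.round(2.5 + trait[:, None] + rng.normal(scale=0.8, size=(500, 4))),
    1, 4,
)
print(f"alpha = {cronbach_alpha(items):.2f}")
```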
An example of internal structure evidence supporting the NSSE survey is the Construct Validity of NSSE Engagement Indicators report. This two-part analysis first allowed survey items to load freely on all possible constructs; results showed that items generally loaded on the ten Engagement Indicator factors as expected. A confirmatory factor analysis then examined the fit of the hypothesized measurement models to the data. The models showed good fit, indicating that (a) the items measure the Engagement Indicators as expected and (b) the Engagement Indicator scores are valid aggregate scores. Evidence of each Engagement Indicator’s internal consistency is also provided in this section of the portfolio.
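A minimal sketch of the exploratory step follows, using the third-party factor_analyzer package on simulated two-construct data; the item names, loadings, and sample size are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Simulate items for two hypothetical constructs, then let items load
# freely on both factors and check that each item loads mainly on its
# intended construct. Real analyses would use NSSE response files.
rng = np.random.default_rng(1)
n = 1000
latent = rng.normal(size=(n, 2))
true_loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.1, 0.8], [0.0, 0.7]])
X = latent @ true_loadings.T + rng.normal(scale=0.5, size=(n, 4))
df = pd.DataFrame(X, columns=["cl_1", "cl_2", "sf_1", "sf_2"])

fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2))
# A confirmatory follow-up (e.g., with a SEM package) would then test
# the fit of the hypothesized measurement model, as in NSSE's report.
```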
Relations to Other Variables Evidence
One way to conceptualize constructs is in terms of their relationship to other measures. Such evidence can include relating survey constructs to other instruments that measure similar constructs or to outcomes that are theoretically related (or not) to the construct. Additionally, theory may suggest that different groups should perform differently on a measure, and evidence of such differences would support construct interpretation.
NSSE’s study on the predictive validity of first-year retention is an example of this category of evidence. In theory, higher levels of student engagement should result in a greater likelihood of first-year retention. Thus, one measure of NSSE Engagement Indicator validity is whether higher scores correspond to higher probabilities of retention. The study found that all NSSE constructs predicted higher levels of retention. A number of NSSE-related studies have examined how survey constructs relate to other important higher education outcomes.
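A minimal sketch of such a predictive-validity check follows, regressing a simulated retention outcome on a simulated engagement score; the sample, scale, and effect size are arbitrary assumptions, not estimates from NSSE's study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate an engagement indicator score and a retention outcome whose
# probability rises with engagement (an assumed positive relationship).
rng = np.random.default_rng(2)
n = 2000
engagement = rng.normal(50, 10, size=n)
logit = -2.0 + 0.06 * engagement
retained = rng.random(n) < 1 / (1 + np.exp(-logit))

# A positive fitted coefficient indicates that higher engagement scores
# are associated with greater odds of first-year retention.
model = LogisticRegression().fit(engagement.reshape(-1, 1), retained)
print(f"coefficient on engagement: {model.coef_[0][0]:.3f}")
```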
Consequences of Survey Use Evidence
Evidence regarding the consequences of survey use examines the extent to which claims about data use are warranted. Such evidence attempts to indicate whether intended uses of the data, as defined by the conceptual framework, achieve particular benefits or outcomes. Because data can also be used for purposes that extend beyond the intent of the survey, those uses should be assessed as well, along with any unintended consequences of survey use.
NSSE was developed to provide colleges and universities with measures of student engagement and perceptions of the institutional environment to inform improvement of undergraduate education. To that end, NSSE has collected examples of data use from participating institutions for these purposes. In the Consequential Aspect of Validity report, examples of NSSE use for accreditation, accountability, strategic planning, and program assessment are provided. More demonstrations of effective NSSE data use to inform institutional improvement are catalogued in the Lessons from the Field series.
Other Survey Evidence
Some evidence supporting the use and interpretation of NSSE does not fit neatly within the categories above. Concerns about sampling approaches, sample sizes, item non-response, and other administration details are relevant to assessing the validity of interpreting NSSE results. Such studies are included in this section.
An example of such a study is an examination of the impact of response rates on NSSE population estimates for various measures. The study simulated how NSSE construct scores for institutions would differ if smaller percentages of respondents were used for estimation (in 5% increments, from the first 5% to the first 35% of respondents). With few exceptions, the study found estimates for engagement constructs to be reliable under low response rate conditions (5% to 10%), provided a sample size of at least 500 students. For smaller administrations, the response rate required for a reliable estimate was higher, but estimates became increasingly reliable after responses from 50 to 75 students were received.
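A simplified version of that simulation can be sketched as follows; the population, response ordering, and error metric are assumptions for illustration, not the study's actual data or methodology.

```python
import numpy as np
import pandas as pd

# Estimate a construct mean using only the first 5%, 10%, ..., 35% of
# respondents and compare each estimate with the full-sample mean.
rng = np.random.default_rng(3)
population = rng.normal(40, 12, size=10_000)  # hypothetical construct scores
respondents = rng.permutation(population)     # arrival order of responses

full_mean = respondents.mean()
rows = []
for pct in range(5, 40, 5):
    subset = respondents[: int(len(respondents) * pct / 100)]
    rows.append({
        "response_rate_%": pct,
        "n": len(subset),
        "estimate": subset.mean(),
        "abs_error": abs(subset.mean() - full_mean),
    })
print(pd.DataFrame(rows).round(2))
```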