Other Quality Indicators
“Other Quality Indicators” includes procedures, standards, and other evaluations implemented by NSSE to reduce error and bias and to increase the precision and rigor of the data. These studies assess NSSE’s adherence to best practices in survey design and cover various stages of the survey process, including sampling, administration, and reporting.
Are institutions that participate in NSSE different from other baccalaureate-granting colleges and universities?
Self-selection bias arises when participants who choose to enter a study are systematically different from those who do not. It can be assessed by comparing the characteristics of the sample with those of the population.
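One common way to carry out such a comparison is a chi-square goodness-of-fit check of the sample's composition against known population shares. The sketch below is illustrative only; the category names and all counts are hypothetical, not actual NSSE data.

```python
# Hypothetical example: compare the class-level mix of a self-selected
# sample to national population shares. All figures are illustrative.
population_share = {"first-year": 0.28, "sophomore": 0.26,
                    "junior": 0.24, "senior": 0.22}
sample_counts = {"first-year": 310, "sophomore": 240,
                 "junior": 190, "senior": 260}

n = sum(sample_counts.values())

# Chi-square goodness-of-fit statistic: large values suggest the
# self-selected sample departs from the population profile.
chi_sq = sum(
    (sample_counts[k] - n * population_share[k]) ** 2
    / (n * population_share[k])
    for k in population_share
)
print(f"n = {n}, chi-square = {chi_sq:.2f} (df = {len(population_share) - 1})")
```

The statistic would then be compared against a chi-square critical value with (categories − 1) degrees of freedom.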
Does NSSE take necessary steps in their policies, processes, and reporting to reduce the amount of error in the data?
Measurement error refers to the difference between the value recorded by the survey instrument and the true value of the construct being measured; studying it involves investigating the precision and accuracy of the instrument and the potential uncertainty in a measurement. It can be minimized through careful survey design and administration and through sound analysis and reporting. It can be evaluated by examining NSSE’s policies, administrative processes, and reporting procedures.
Do students provide enough answers to the NSSE survey? How many questions do they omit?
Data quality refers to how well the data represent the phenomena being measured. One important aspect of data quality is completeness: a large amount of missing data can introduce estimation bias and inflate standard errors.
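Completeness can be summarized as the share of respondents who answered each item. The sketch below uses hypothetical item names and made-up records, with `None` marking an omitted answer.

```python
# Hypothetical example: per-item completeness for a batch of survey
# records. Item names and values are illustrative, not NSSE items.
records = [
    {"collab_learning": 4, "student_faculty": 3, "hours_studying": None},
    {"collab_learning": 2, "student_faculty": None, "hours_studying": None},
    {"collab_learning": 5, "student_faculty": 4, "hours_studying": 10},
]
items = ["collab_learning", "student_faculty", "hours_studying"]

# Per-item completeness: proportion of respondents who answered the item.
completeness = {
    item: sum(r[item] is not None for r in records) / len(records)
    for item in items
}
print(completeness)
```

Items with low completeness rates are candidates for closer review of wording, placement, or sensitivity.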
Do students who reply using the paper mode of NSSE respond differently than students using the Web mode?
Mode effect refers to the situation in which participant responses differ systematically by survey administration mode. If a mode effect exists, it can make it difficult to compare or combine results obtained from different modes.
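A simple check for a mode effect on a single item is to compare mean responses across modes along with a standard error for the difference. The sketch below uses made-up 1–4 scale responses (not NSSE data) and a Welch-style standard error.

```python
import statistics

# Hypothetical example: mean response to one item by administration mode.
# Scores are illustrative 1-4 scale values, not actual NSSE responses.
web   = [3, 4, 2, 4, 3, 3, 4, 2, 3, 4]
paper = [2, 3, 3, 2, 4, 2, 3, 3, 2, 3]

def mode_gap(a, b):
    """Difference in means and a Welch-style standard error."""
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    return statistics.mean(a) - statistics.mean(b), se

diff, se = mode_gap(web, paper)
print(f"web - paper = {diff:.2f} (SE = {se:.2f})")
```

A gap that is large relative to its standard error would suggest that results from the two modes should not be pooled without adjustment.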
Do students who respond to NSSE differ from those who choose not to respond?
Nonresponse bias arises when people who choose to participate in the survey are systematically different from those who do not. Nonresponse bias could potentially reduce the generalizability of the results.
Related NSSE studies have examined:
- Whether non-respondents differ from respondents (2005)
- Whether more engaged students respond at higher rates
- Additional item response bias (2011)
- Prior Internet use (2011)
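One way to probe nonresponse bias is to compare respondents and non-respondents on a variable known for everyone in the sampling frame, such as enrollment status from institutional records. The sketch below is a hypothetical illustration; the variable and all values are invented.

```python
# Hypothetical example: nonresponse bias check on a frame variable
# (full-time enrollment) known for both respondents and non-respondents.
frame = [
    # (responded_to_survey, full_time) -- illustrative values
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

resp = [ft for r, ft in frame if r]
nonr = [ft for r, ft in frame if not r]

# Compare full-time rates; a large gap signals potential nonresponse bias.
rate_resp = sum(resp) / len(resp)
rate_nonr = sum(nonr) / len(nonr)
print(f"respondents: {rate_resp:.2f}, non-respondents: {rate_nonr:.2f}")
```

Because the frame variable is observed for everyone, this comparison does not depend on any survey answers from non-respondents.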
Do institutions participating in NSSE have enough respondents to adequately represent their population?
Sampling error is an estimate of the margin by which the true score on a given item could differ from the reported score. For additional information about specific population and sample calculations, see the sampling error calculator on the NSSE website.
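A margin of this kind is commonly computed for a proportion with a finite population correction, which shrinks the margin when respondents make up a large share of the institution's population. The sketch below shows the standard textbook formula under illustrative inputs; it is not a reproduction of NSSE's calculator.

```python
import math

# Hypothetical sketch of a 95% margin-of-error calculation for a
# proportion, with finite population correction (FPC). Illustrative only.
def sampling_error(n, N, p=0.5, z=1.96):
    """Margin of error for a proportion: n respondents, population N.

    p = 0.5 is the conservative (worst-case) proportion; z = 1.96 is
    the 95% confidence multiplier.
    """
    se = math.sqrt(p * (1 - p) / n)          # simple random sampling SE
    fpc = math.sqrt((N - n) / (N - 1))       # finite population correction
    return z * se * fpc

# e.g., 400 respondents drawn from a population of 2,500 students
print(f"+/- {sampling_error(400, 2500) * 100:.1f} percentage points")
```

With these inputs the margin is about +/- 4.5 percentage points; without the correction it would be about +/- 4.9.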
Are NSSE scores influenced by a desire to respond in a socially desirable manner?
Social desirability refers to the tendency of respondents to provide answers that they believe are more socially acceptable, even when those answers are not accurate. Social desirability bias is more likely to occur when questions are sensitive.