This document, written in 2001, provides background on the original rationale for creating NSSE and what the designers hoped it would achieve. As a historical document, it provides a perspective on the promise and potential peril perceived in NSSE's early years, which continue to shape our work today.
NSSE Origins and Potential
An Initiative of The Pew Charitable Trusts
This paper describes a new approach to gathering information about collegiate quality on a national basis, using a specially developed survey of good practices in undergraduate education entitled The College Student Report, administered under the auspices of the National Survey of Student Engagement (NSSE) project. The NSSE was conceived in early 1998 and supported by a grant from The Pew Charitable Trusts. The NSSE conducted a successful pilot in 1999 that involved more than 75 selected colleges and universities, and approximately 275 colleges and universities participated in the inaugural launch in the spring of 2000. What follows is a summary of work to date, including a discussion of key design points and pilot administration issues as well as a list of remaining tasks. We hope this information will prompt additional thinking about these issues by the entire academic community.
Established methods for assuring quality in higher education contain few external incentives for individual colleges and universities to engage in meaningful quality improvement. This is especially true in the all-important area of enhancing undergraduate education. In part, this is because the conversation about "quality" has been centered on the wrong things. Institutional accreditation processes, despite their recent emphasis on assessing student learning and development, deal largely with resource and process measures. Government oversight, as manifested in licensure requirements and program review mechanisms, continues to emphasize regulation and procedural compliance. Third-party judgments of "quality" such as media rankings continue to focus on such matters as student selectivity and faculty credentials. None of these gets at the heart of the matter: the investments that institutions make to foster proven instructional practices, and the kinds of activities, experiences, and outcomes that their students achieve as a result.
As one step toward addressing this condition, The Pew Charitable Trusts convened a working group of higher education leaders in February 1998 to discuss this issue and, more particularly, the kinds of college ranking systems employed by publications like U.S. News and World Report. After a thorough discussion, the Pew working group concluded that results of a survey of undergraduate quality, if available, could provide colleges and universities, as well as a range of potential stakeholders, with far more valuable information about institutional quality than established measures of reputation.
This proposed data collection initiative, now known as the National Survey of Student Engagement, is designed to query undergraduates directly about their educational experiences. An extensive research literature relates particular classroom activities and specific faculty and peer practices to high-quality undergraduate student outcomes. For example, we know that level of challenge and time on task are positively related to persistence and subsequent success in college. Another conclusion of this body of research is that the degree to which students are engaged in their studies directly affects the quality of their learning and their overall educational experience. As such, characteristics of student engagement can serve as a proxy for quality. At least as important, calling attention to the presence or absence of such practices can highlight specific things that individual colleges can do something about and provide information that external constituencies will readily understand. If technically sound and broadly representative, a national survey focused on such practices can begin to center current quality debates on the right questions rather than falling back on traditional reputational answers.
Cast in this way, the potential of the NSSE goes well beyond "fixing the rankings." Instead, it offers an alternative tool for gathering information with a wide range of uses and provides an important occasion to re-frame both local and national conversations about collegiate quality. In particular, three possible uses for the data are now envisioned. First, results are expected to be useful to institutions themselves in improving undergraduate education. For example, the data will be especially useful to colleges and universities in gauging the degree to which they foster practices consistent with particular institutional characteristics and commitments, in order to improve their performance. Second, results from The College Student Report should be helpful to a range of external stakeholders of higher education, including accrediting bodies and state oversight agencies. For example, the data could be used as part of an "institutional effectiveness" component of a self-study or to strengthen benchmarking processes. Third, if the results from the NSSE project were made public, they might prove interesting to the media, including news magazines and college guides. Between the two extremes of proprietary, institutionally-owned data and publicly-reported data incorporated into the college rankings of mass-circulation magazines lie many other potential uses for the data. Through substantial discussion in the coming months, the NSSE partners expect that both institutions and stakeholders will weigh in to help clarify where the center of this effort should lie.
The NSSE instrument, The College Student Report, is conceived as a national survey of undergraduate quality that would eventually be administered to representative samples of students at American colleges and universities by an independent (not-for-profit) authority. The field tests were coordinated by Peter Ewell of the National Center for Higher Education Management Systems (NCHEMS) and George Kuh of the Center for Postsecondary Research and School of Education at Indiana University.
To begin the process of developing the survey, The Pew Trusts engaged NCHEMS to coordinate the development of a survey instrument, to convene a series of meetings designed to test its utility and feasibility, to select a strategy for pilot administration, and to determine who should pilot the survey. In the late spring of 1998, NCHEMS project staff convened the Design Team, consisting of Alexander Astin, Gary Barnes, Arthur Chickering, Peter Ewell, John Gardner, George Kuh, Richard Light, and Ted Marchese, with input from C. Robert Pace, to help draft a survey instrument.
The survey consists principally of items known to be related to important college outcomes. NSSE intends to provide information about the extent to which different colleges exhibit characteristics and commitments known to be related to high-quality undergraduate student outcomes. To that end, The College Student Report is relatively short and contains items directly related to institutional contributions to student engagement, important college outcomes, and institutional quality. The Design Team applied three general criteria when selecting items: (1) Is the item arguably related to student outcomes, as shown by research? (2) Is the item useful to prospective students in choosing a college? (3) Is the item straightforward enough for its results to be readily interpretable by a lay audience with a minimum of analysis?
The College Student Report includes items on actual student behavior as well as on students’ perceptions of the extent to which the institution actively encourages high levels of engagement. In general, the questions fall into three broad categories. Institutional actions and requirements include specific items about the curriculum (e.g., how much reading and writing have you done?) and about faculty behavior (e.g., have you worked with a faculty member on a significant scholarly task such as a research project?). Student behavior includes items about how students spend their time inside and outside of the classroom (e.g., have you worked with other students outside of class to prepare class assignments?). Student reactions to college include items that seek students’ perceptions of the quality of their own experiences (e.g., how would you rate the overall quality of your experience here?). This last category also includes questions about self-reported gains in skills that students feel they have developed as a result of attending college (e.g., has college helped you to develop your ability to think critically and analytically?).
Concentrating on only a few carefully chosen items in a special-purpose survey should both focus attention and promote high response rates. The emphasis on student engagement and good practice is intended to shift the focus of current conversations about quality away from resources and inputs and toward outcomes, while remaining specific enough about processes to indicate concretely the kinds of improvements in which colleges should invest. The ability to compare results among peer institutions to identify best practices is also an important feature.
The survey will be administered to students at both public and private four-year colleges and universities. Excluding two-year institutions altogether—at least at first—helps avoid the problem of multiple educational missions. Most students attending four-year institutions intend (eventually) to earn a baccalaureate degree and are not simply engaging in classwork to enhance job skills or to pursue a personal interest. At the same time, baccalaureate-granting institutions share common curricular features at the undergraduate level, including general education and an upper-division major, and all purport to prepare students in similar areas consistent with similar objectives. Moreover, virtually all claim to enhance student abilities in such areas as communication, critical thinking, and higher-order reasoning.
The survey will be administered to freshman- and senior-level students who have attended the institution for at least two terms. We know from research that the experiences of lower-division and upper-division students are quite different at most colleges and that what happens in upper-level courses in a student’s major is especially distinctive. Sampling students at these two points in their academic careers will capture such variations and paint a fair picture of the overall collegiate experience. Deliberately sampling students at different levels will also help adjust for the fact that "survivors" have generally had more successful experiences than dropouts at any given institution.
The survey will be administered to adequate samples at participating institutions. To ensure meaningful and credible results, random samples, typically ranging from 450 to 1,000 students depending on total undergraduate enrollment, are drawn from each institution's pool of freshmen/first-year students and seniors (a rough sketch of this sampling logic appears after these design points). While smaller samples might produce consistent results, sufficient numbers of cases are needed to allow the kinds of disaggregation (e.g., by student level or major) required to make sense of the data and to guide meaningful discussion and improvement at both the local and national levels. Accordingly, the NSSE incorporates "best practices" in survey administration in order to maximize institutional response rates.
The survey will be flexible. Recognizing that institutions also need tailored information to guide improvement, The College Student Report is designed to accommodate alternative sets of questions especially suited to particular types of institutions, as well as the ability for colleges and universities to add questions of their own design. A layered data design permits identification of a common core of questions appropriate for universal distribution and broad comparison while also permitting the addition of tailor-made questions posed by consortia (a simple sketch of this layered design follows these design points). One can imagine a range of different attributes that might be of interest, including attributes related to community involvement, attributes related to the attainment of particular student goals, or "process" measures such as the number of times students use the library or the ease with which students can design their own major.
The survey will be administered by a credible third-party survey organization. The eventual administering organization for the NSSE, a joint venture between the Indiana University Center for Postsecondary Research, the Indiana University Center for Survey Research, and the National Center for Higher Education Management Systems, is not part of the existing accountability structure of colleges and universities. As such, it is in a position to report results to the public with high credibility and to remain free from the direct control of outside stakeholders. A high-visibility National Oversight Board composed of educational researchers and public representatives will ensure the effort’s independence and objectivity.
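To make the sampling design above concrete, here is a minimal sketch of how an institution's sample might be drawn. The enrollment tiers, the even freshman/senior split, and the record fields ('id', 'class_level') are illustrative assumptions; the source specifies only that random samples of roughly 450 to 1,000 students are drawn from the freshman and senior pools.

```python
import random

# Illustrative enrollment tiers (an assumption; the source states only
# that samples range from roughly 450 to 1,000 students).
def sample_size(total_enrollment: int) -> int:
    if total_enrollment < 4000:
        return 450
    if total_enrollment < 12000:
        return 700
    return 1000

def draw_sample(students, total_enrollment, seed=None):
    """Draw separate random samples of first-year students and seniors.

    `students` is a list of dicts with hypothetical keys 'id' and
    'class_level' ('freshman' or 'senior').
    """
    rng = random.Random(seed)
    per_level = sample_size(total_enrollment) // 2  # assumed even split
    sample = []
    for level in ("freshman", "senior"):
        pool = [s for s in students if s["class_level"] == level]
        # Sample without replacement; take the whole pool if it is small.
        sample.extend(rng.sample(pool, min(per_level, len(pool))))
    return sample
```

Sampling the two class levels separately, rather than drawing one campus-wide sample, guarantees enough cases at each level for the kinds of disaggregation described above.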
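Similarly, the layered data design can be pictured as a common core plus optional question sets. The class names and structure below are hypothetical, intended only to illustrate the core-plus-layers idea.

```python
from dataclasses import dataclass, field

@dataclass
class QuestionSet:
    """A named block of survey items (hypothetical structure)."""
    name: str
    items: list[str]

@dataclass
class LayeredSurvey:
    """A common core seen by every respondent, plus optional layers
    contributed by consortia or individual campuses."""
    core: QuestionSet
    optional_sets: list[QuestionSet] = field(default_factory=list)

    def instrument_for(self, selected_layers: list[str]) -> list[str]:
        # Every respondent answers the core; selected layers are appended.
        items = list(self.core.items)
        for qs in self.optional_sets:
            if qs.name in selected_layers:
                items.extend(qs.items)
        return items
```

Under this arrangement, results on the core items remain comparable across all institutions, while each consortium's or campus's added items apply only to its own respondents.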
Many of the items included on the current version of the NSSE are derived from existing student questionnaires including the College Student Experiences Questionnaire (CSEQ), the Cooperative Institutional Research Program (CIRP) Freshman and follow-up surveys, and student and alumni surveys administered by the University of North Carolina system.
The NSSE instrument went through several drafts and revisions by the Design Team and was reviewed by several groups of potential NSSE users: representatives of the press (including U.S. News and World Report), of accrediting agencies (such as the Middle States Association), of state higher education oversight agencies, and of higher education constituency organizations (including the American Council on Education). Representatives of potential institutional participants were also given the opportunity to review and react to The College Student Report.
The Design Team also decided that it was important to collect institutional data in addition to student data. As a result, an institutional data form, to be completed by the administrator designated as the NSSE contact, was developed, and additional background data on each participating institution will be assembled from published sources.
The pilot project in 1999 involved administering the survey in two waves: a "tryout" phase in the spring of 1999 involving a small number of institutions and a larger pilot test in the fall of 1999. The primary objectives of the pilot project as a whole were to test the survey instrument and associated administration procedures from a technical standpoint and to examine the utility of the NSSE as a national approach to collecting data about college quality.
For the spring "tryout" phase, 12 institutions participated, representing a range of institutional types: three research universities (Tulane, Indiana University Bloomington, and the University of North Carolina, Chapel Hill); three public comprehensive universities (Truman State University, Stephen F. Austin State University, and the University of Massachusetts, Boston); and six liberal arts colleges (St. Michael’s College, Millikin University, Eckerd College, Williams College, Wittenberg University, and Puget Sound Christian College). Approximately 56 colleges and universities participated in the fall 1999 pilot.
A series of meetings was held in the fall of 1998 to build support for the NSSE initiative and, more specifically, to plan for the pilot in 1999. These included: (1) an invitational gathering held in Washington, D.C. with representatives of accrediting bodies, state/system higher education agencies, and the press; and (2) two "stakeholder" meetings with potential institutional participants drawn from the Council of Independent Colleges and the Annapolis Group. The purpose of these formal dissemination meetings was to introduce the concepts behind the initiative to a wider institutional audience, to obtain feedback on the draft survey instrument and administration arrangements, and to build constituency support for the project. In addition, project staff engaged in numerous one-on-one conversations with potential stakeholders and participants throughout the summer and fall.
Consistent areas of concern regarding the NSSE emerged at these meetings. They include:
- a concern that the NSSE might create pressure to homogenize curricular practices,
- the need to clarify the NSSE’s purpose and to develop safeguards against the misuse of survey results,
- the recognition that institutions might try to manipulate the results—especially if the survey is used in rankings or other "high-stakes" settings, and
- a concern that The College Student Report is really a "reputation/selectivity" measure in another guise.
These are legitimate concerns and are being closely monitored by the project team. These concerns aside, all constituency groups contacted to date strongly support the NSSE’s basic thrust. Indeed, most feel that The Report represents one of the best available alternatives to productively shift the focus of "quality" discussions in higher education from resource/reputation indicators toward the things that really matter in an undergraduate education.