NSSE's Conceptual Framework (2013)
In 2013, colleges and universities across the US and Canada began administering a substantially revised version of the National Survey of Student Engagement (NSSE), otherwise known as NSSE 2.0. To mark that event, the following updated conceptual framework revisits the instrument’s conceptual and theoretical roots, explains why and how the survey was updated, and briefly reviews the literature for arguably the most vital features of the update, the Engagement Indicators and High-Impact Practices. Appendices cover more detailed ground relating to the NSSE 2.0 development process and, more specifically, the creation of the ten Engagement Indicators. Portions of this conceptual framework have been adapted from earlier related pieces (McCormick, Gonyea, & Kinzie, 2013; McCormick, Kinzie, & Gonyea, 2013).
The Conceptual Lineage of Student Engagement
Student engagement, as reflected by NSSE, is not a unitary construct but rather an umbrella term for a family of ideas rooted in research on college students and how their college experiences affect their learning and development. It includes both the extent to which students participate in educationally effective activities and their perceptions of facets of the institutional environment that support their learning and development (Kuh, 2001; 2009). Central to the conceptualization of student engagement is its focus on activities and experiences that have been empirically linked to desired college outcomes. These influences go back to the 1930s and span the fields of psychology, sociology, cognitive development, and learning theory, as well as a long tradition of college impact research. The concept also incorporates contributions from the field in the form of practical evaluations of the college environment and the quality of student learning, pressure for institutions to assess and be accountable for educational quality, concerns about student persistence and attainment, and the scholarship of teaching and learning.
The historical roots of student engagement can be traced to studies in the 1930s by educational psychologist Ralph Tyler, who explored the relationship between secondary school curriculum requirements and subsequent college success. At The Ohio State University, Tyler was tasked with assisting faculty in improving their teaching and increasing student retention, and as part of this work he designed a number of path-breaking “service studies” including a report on how much time students spent on their academic work and its effects on learning (Merwin, 1969). Joining C. Robert Pace and other noted scholars, Tyler contributed his expertise in educational evaluation and the study of higher education environments to the Social Science Research Council’s Committee on Personality Development in Youth (1957–63), which furthered the study of college outcomes by turning attention to the total college environment. The committee concluded that student outcomes in college do not result exclusively from courses but rather from the full panoply of college life (Pace, 1998). Focusing his research on both student and environmental factors related to college success, Pace went on to develop a number of questionnaires for students to report on the college environment. Pace’s studies of college environments documented the influence of student and academic subcultures, programs, policies, and facilities, among other factors, and how they vary among colleges and universities.
Tyler's early work showing the positive effects of time on task for learning was explored more fully by Pace (1980), who showed that the "quality of effort" students invest in taking advantage of the facilities and opportunities a college provides is a central factor accounting for student success. He argued that because education is both process and product, it is important to measure the quality of the processes; he used the term quality of effort to emphasize the importance of student agency in producing educational outcomes.
Pace's instrument, the College Student Experiences Questionnaire (CSEQ), was created with substantial conceptual backing to operationalize "student effort"—defined as a straightforward measure of facility use so that students "would immediately know whether they had engaged in the activity and about how often" (Pace, 1998, p. 29). The quality of effort construct rested on the assertion that the more a student is meaningfully engaged in an academic task, the more he or she will learn. Pace found that students gained more from their college experience when they invested more time and effort in educationally purposeful tasks such as studying, interacting with peers and faculty about substantive matters, and applying what they were learning to concrete situations. Importantly, he distinguished quality of effort from motivation, initiative, or persistence. Although quality of effort incorporates these elements, it takes place within, and its strength depends on, a specific educational context.
Student engagement is also rooted in the work of Alexander Astin (1984), who articulated a developmental theory for college students centered on the concept of involvement, or "the amount of physical and psychological energy that the student devotes to the academic experience" (p. 297), and who posited that what students gain from the college experience is proportional to their involvement. This involvement can be academic, social, or extracurricular. Astin hypothesized that the more involved the student is, the more successful he or she will be in college. He acknowledged that the concept of involvement resembles that of motivation, but distinguished between the two, arguing that motivation is a psychological state while involvement connotes behavior. These key ideas of time on task, quality of effort, and involvement have all contributed to the conceptualization of student engagement.
Both Pace (1969, 1980) and Astin (1970, 1984) emphasized the important role of the college environment and what the institution does or fails to do in relation to student effort and involvement. Also, in contrast to models of college impact that viewed students as passive subjects, Pace (1964, 1982) conceived of students as active participants in their own learning and held that one of the most important determinants of student success is the student's active participation in taking advantage of a campus's educational resources and opportunities. Pace (1998) characterized his work as an examination of relationships in their "natural setting," between environments and attainment, effort and outcomes, and patterns of college students' activities and institutional influences. Astin (1984) further articulated the vital role of the institution, stating that the "effectiveness of any educational practice is directly related to the capacity of that policy or practice to increase involvement" (p. 298).
Another root in the student engagement family tree is Tinto’s concept of integration, wherein integration refers to the extent to which a student (a) comes to share the attitudes and beliefs of peers and faculty and (b) adheres to the structural rules and requirements of the institution (Pascarella & Terenzini, 1991; Tinto, 1975; 1993). Tinto (1975; 1993) proposed his theory of academic and social integration to explain voluntary student departure from an institution. He defined integration with regard to a student’s social and academic connection to the campus. Social integration refers to a student’s perceptions of interactions with peers, faculty, and staff at the institution as well as involvement in extracurricular activities. Academic integration refers to a student’s academic performance, compliance with explicit standards of the college or university, and identification with academic norms. Tinto’s was one of the first theories that viewed voluntary departure as involving not just the student but also the institution. Described as “interactionist” because it considers both the person and the institution, Tinto’s theory (1986) shifted responsibility for attrition from resting solely with the individual student and his or her personal situation to include institutional influences. Informed by Tinto’s work, the conceptualization of student engagement incorporates a student’s interactions with peers and faculty and the extent to which the student makes use of academic resources and feels supported at the institution.
Pascarella’s (1985) “general causal model for assessing the effects of differential college environments on student learning and cognitive development,” or, more simply, the “general causal model,” expanded on Tinto’s work by incorporating institutional characteristics and quality of student effort, and by linking to other outcomes in addition to retention. Pascarella theorized that students’ precollege traits correlate with institution types and that both of these influence the institutional environment and interactions with agents of socialization, such as faculty members, key administrators, and peers. Pascarella also acknowledged that student background has a direct effect on learning and cognitive development, beyond the intervening variables. By including quality of student effort, Pascarella affirmed Pace’s (1984) notion that students’ active participation in their learning and development is vital to learning outcomes. Pascarella viewed quality of effort as influenced by student background and precollege traits, the institutional environment, and by interactions with agents of socialization. Tinto’s and Pascarella’s emphasis on students’ interactions with their institution and on institutional values, norms, and behaviors provide the basis for the environmental dimensions of student engagement.
In The Impact of College on Students, Feldman and Newcomb (1969) synthesized some four decades of findings from more than 1,500 studies of the influence of college on students. Subsequent reviews by Bowen (1977), Pace (1979), and Pascarella and Terenzini (1991, 2005) synthesized research on college students and collegiate institutions from the mid-1920s to the early 21st century. One unequivocal conclusion wholly consistent with the work of Pace and Astin is that the impact of college on learning and development is largely determined by individuals’ quality of effort and level of involvement in both the curricular and co-curricular offerings on a campus. Rather than being mere passive recipients of college environmental effects, students share responsibility for the impact of their own college experience.
The literature on effective teaching and learning has also contributed to the conceptualization of student engagement. In just seven principles of good practice in undergraduate education, Chickering and Gamson (1987) distilled 50 years of educational research on the teaching and learning activities most likely to benefit learning outcomes: (1) student-faculty contact; (2) cooperation among students; (3) active learning; (4) prompt feedback; (5) time on task; (6) high expectations; and (7) respect for diverse talents and ways of learning. This concise distillation—only four pages of text—has had a notable impact on how educational effectiveness is understood and promoted in higher education. In a footnote, the authors acknowledge the assistance of a virtual "Who's Who in Higher Education Research and Policy," including Alexander Astin, Howard Bowen, Patricia Cross, Kenneth Eble, Russell Edgerton, Jerry Gaff, C. Robert Pace, and Marvin Peterson. Chickering and Gamson's common-sense principles were intended to guide faculty members, administrators, and students, with support from state agencies and trustees, in their efforts to improve teaching and learning. They argued that, while each practice can stand alone, when all are present their effects multiply and exert a powerful force on undergraduate education. They also underscored the responsibility of educators and college and university leaders to foster an environment favorable to good practice, so that students routinely engage in effective educational practices. Multivariate longitudinal analyses of these practices at a diverse group of 18 institutions have shown them to be related to cognitive development and several other positive outcomes, net of a host of control variables (Cruce, Wolniak, Seifert, & Pascarella, 2006).
Similarly, as part of their comprehensive reviews of research on college impact, Pascarella and Terenzini (1991, 2005) identified a range of pedagogical and programmatic interventions—such as peer teaching, note taking, active discussion, integration across courses, and effective teaching practices—that increase students’ engagement in learning and academic work and thereby enhance their learning and development. In How College Affects Students (1991), these authors concluded that “the greater the student’s involvement or engagement in academic work or in the academic experience of college, the greater his or her level of knowledge acquisition and general cognitive development” (p. 616).
From the perspective of involvement, quality of effort, time on task, academic and social integration, as well as the principles for good practice in undergraduate education, student engagement can be seen as encompassing the choices and commitments of students, of individual faculty members, and of entire institutions (or schools and colleges within larger decentralized institutions). Students’ choices include their quality of effort and their involvement in educational experiences and activities both in and outside of class. In choosing courses or course sections, students may consider not just the course content, schedule, and what they know about the instructor, but also the amount and type of work required. Once enrolled in courses, students make decisions about how to allocate their efforts and about whether and how to associate with their fellow students, be it through formal co-curricular activities or informally.
The relevant choices and commitments of faculty and institutions, on the other hand, relate primarily to the principles for good practice in undergraduate education. Faculty members make choices about the learning activities and opportunities in their courses, their expectations of students, the nature and timing of their feedback to students, their formal and informal facilitation of student learning outside of class, and so on. Institutional leaders and staff make choices about establishing norms and allocating resources to support student success. For example, library and student affairs professionals may consider evolving student needs when choosing how to create supportive learning environments and provide programs, speakers, and events that enrich the undergraduate experience. Through their policies and practices, institutional leaders communicate shared norms and standards for students, faculty, and staff with regard to student challenge and support.
The intellectual heritage reviewed above establishes the conceptual underpinnings that undergird student engagement as an agenda for promoting student success. It also links student engagement to the world of practice, thereby connecting to contemporary reform movements such as the scholarship of teaching and learning. If individual effort is critical to learning and development, then shaping college and university experiences and environments in order to promote student involvement is essential.
The Emergence of NSSE
Besides the conceptual lineage of student engagement, it is useful to consider the context in which it emerged during the 1990s as a framework for understanding, diagnosing, and improving the quality and effectiveness of undergraduate education. This is a story of the confluence of two streams, closely intertwined with the development of NSSE: one involving increasing interest in so-called "process indicators," the other related to mounting frustration with the dominant conception of college and university quality in the United States.
National Education Goals and the Use of “Process Indicators”
In 1989, President George H. W. Bush and the governors of the 50 states articulated a set of National Education Goals. The subsequent work of the National Education Goals Panel culminated in the Goals 2000: Educate America Act, signed into law by President Bill Clinton in 1994. The legislation set forth eight goals for American education to achieve by the year 2000. Although most of the goals focused on elementary and secondary education, the goal related to adult literacy and lifelong learning specified that “the proportion of college graduates who demonstrate an advanced ability to think critically, communicate effectively, and solve problems will increase substantially.” The sustained discussion of national goals created the need to monitor progress toward their achievement. As related by Peter Ewell (2010) in his account of NSSE’s origins, “The implied promise to develop the metrics needed to track progress on these elusive qualities… stimulated thinking about how to examine them indirectly by looking at what institutions did to promote them” (p. 86). Ewell and his colleagues at the National Center for Higher Education Management Systems (NCHEMS) produced a series of articles and reports proposing how “indicators of good practice” or “process indicators” might be productively deployed without the long delay and expense required to develop direct assessments of the outcomes set forth in the national goals (although they also endorsed the development of such assessments) (Ewell & Jones, 1993, 1996; National Center for Higher Education Management Systems [NCHEMS], 1994). Ewell and Jones (1993) also articulated the virtue of process measures for contextualizing what is learned from outcomes assessments, noting that “it makes little policy sense to collect outcomes information in the absence of information on key processes that are presumed to contribute to the result” (p. 125). Indeed, citing Astin’s (1991) work on assessment in higher education, they asserted that “information on outcomes alone is virtually uninterpretable in the absence of information about key experiences” (p. 126). They suggested that process indicators related to good practices in undergraduate education have practical relevance, because their linkage to concrete activities offers guidance for interventions to promote improvement. In a report for the National Center for Education Statistics on the feasibility of “good practice” indicators for undergraduate education, the NCHEMS team undertook a comprehensive review of the knowledge base and available information sources (NCHEMS, 1994). In the discussion of available surveys of current students, the Cooperative Institutional Research Program (CIRP) surveys and the CSEQ were identified as bearing on a number of dimensions of “instructional good practice.”
Kuh, Pace, and Vesper (1997) implemented the process indicators approach using CSEQ data from a diverse sample of institutions and students. They created indicators to tap three of Chickering and Gamson’s (1987) seven principles for good practice in undergraduate education (student-faculty contact, cooperation among students, and active learning) and examined their relationship to students’ self-reported learning gains in general education, intellectual skills, and personal and social development. The researchers concluded that CSEQ items could be combined to produce indicators of good practice in undergraduate education and that these indicators showed positive and consistent relationships to self-reported learning outcomes. Although the term “student engagement” did not appear in their article, their conclusions offered proof of concept of the process indicator approach and foreshadowed the development of a survey designed explicitly to provide process measures related to good practice in undergraduate education.
Discontent with the National Discourse on College Quality
The other stream contributing to the emergence of student engagement as a framework for assessing educational quality arose from mounting discontent over the dominant conception of college quality in the national mindset. In the 1980s, U.S. News and World Report began publishing annual lists that purported to identify "America's Best Colleges" through a numeric ranking. Although the rankings received extensive criticism from both inside and outside the academy, they proved popular with the general public (McDonough, Antonio, Walpole, & Perez, 1998). Despite their popularity, the rankings have been subjected to a variety of philosophical and methodological objections (for example, see Gladwell, 2011; Graham & Thompson, 2001; Machung, 1998; Thacker, 2008; and Thompson, 2000). An enduring complaint has been their emphasis on reputation and input measures to the exclusion of any serious treatment of teaching and learning. Indeed, the first issue of the rankings was based solely on a reputation survey sent to college and university presidents, and, when the rankings methodology was later expanded to include other criteria, it was specifically engineered to reproduce the conventional wisdom that the most elite institutions are the best (Thompson, 2000). Had the rankings been no more than an innocent parlor game, their shortcomings would not have raised much concern. But repeated reports of strategic action by institutional personnel to influence their placement raised serious concerns about the rankings' indirect influence on matters of institutional policy and resource allocation (Ehrenberg, 2002).
To be sure, U.S. News was not alone in motivating perverse choices in the pursuit of higher ranking and prestige. Rankings and classifications based on research activity have been another source of status competition that can lead administrators to allocate more resources to schools and departments that bring in high-dollar-value grants and contracts. But U.S. News was the self-proclaimed national arbiter of college quality, and its ranking criteria explicitly rewarded a narrow, wealth- and selectivity-based conception of quality that gave short shrift to teaching and learning. It was within this context that the Pew Charitable Trusts undertook to fund the development and implementation of a survey project focused on process indicators related to educational effectiveness at bachelor’s degree-granting colleges and universities, and subsequently at community colleges. A fundamental design principle was that the survey would be heavily focused on behavioral and environmental factors shown by prior research to be related to desired college outcomes. About two-thirds of the original survey’s questions were drawn or adapted from the CSEQ (Kuh, 2009).
NSSE’s founding director, George Kuh, promoted the concept of student engagement as an important factor in student success and, thus, a more legitimate indicator of educational quality than rankings based on inputs and reputation. He described student engagement as a family of constructs that measure the time and energy students devote to educationally purposeful activities—activities that matter to learning and student success (Kuh, n.d.; 2001). From the outset, then, student engagement was closely tied to the purposes of institutional diagnosis and improvement as well as to the broader purpose of reframing the public understanding of college quality. But it was also explicitly linked to a long tradition of prior theory and research, as previously described. Thus, the concept of student engagement and the university-based research and service project organized around it, NSSE, represent an attempt to bridge the worlds of academic research and professional practice—to apply longstanding conceptual and empirical work on college student learning and development to urgent practical matters of higher education assessment and improvement.
Measuring Student Engagement with NSSE
Although the measurement of student engagement is rooted in a long tradition of survey research in higher education, the NSSE instrument is a relatively recent manifestation of the construct. As the Director of Education for the Pew Charitable Trusts, Russ Edgerton (1997) proposed a grant project to improve higher education focused on the belief that what students learn is affected by how they learn. Edgerton argued for “new pedagogies of engagement” to help students acquire the abilities and skills for the 21st century. Launched in 2000 with support from the Pew Trusts, NSSE reflects both behavioral and perceptual components. The behavioral dimension includes how students use their time in and outside of class (e.g., asking questions, collaborating with peers in learning activities, integrating ideas across courses, reading and writing, interacting with faculty) as well as how faculty members structure learning opportunities and provide feedback to students. Because beliefs and attitudes are antecedents to behavior (Bean & Eaton, 2000), students’ perceptions of the campus environment are a critical piece in assessing their receptivity to learning. The perceptual dimension, thus, includes students’ judgments about their relationships with peers, faculty, and staff; their beliefs that faculty members have high expectations of students; and their understanding of institutional norms surrounding scholarly activities and support for student success. A key criterion in NSSE’s design was that the survey content would be selected based on prior empirical evidence of a relationship to student learning and development—research emerging from the conceptual traditions previously discussed (Ewell, 2010).
Because of its strong emphasis on student behaviors, NSSE differs markedly from other surveys of college students that examine their values and attitudes or their satisfaction with the college experience. The focus on behavior is both concrete and actionable: When results fall short of what is desired, the behavioral measures suggest avenues of intervention. The NSSE instrument may be viewed at nsse.indiana.edu.
NSSE as a Benchmarking Tool
The effort to focus the attention of campus leaders and faculty members on student engagement is ultimately about creating campus environments that are rich with opportunities for engagement. Because the institution has a substantial degree of influence over students’ learning behaviors, perceptions, and environments (Pascarella & Terenzini, 2005), student engagement data provide valuable diagnostic information for institutional leaders, faculty, and others to consider how and where to exert their efforts. For this reason, assessments of student engagement are said to provide actionable information for the institution (Kuh, 2009); extensive evidence of institutions using their NSSE data for this purpose can be found on the NSSE website. To facilitate this process, NSSE was designed from its inception to serve as a benchmarking tool that institutional leaders can use to gauge the effectiveness of their programs by comparing first-year and senior students separately to those at comparison institutions. Assessing the college experience at these points in time has been intentional, as first-year students are laying the foundation for future success and seniors have had the most exposure to an institution and, therefore, are best positioned to reflect on it (NSSE, 2000). As one might assume, these two populations’ engagement results vary (NSSE, 2009), thus providing additional justification for analyzing and reporting by class level.
A benchmarking approach assumes that the unit of analysis is the institution and that the group-level score is reliable. Generalizability studies have shown that NSSE's measures are dependable measurements of group means (Fosnacht & Gonyea, 2012; Pike, 2006a, 2006b). Of course, group scores need not be limited to all respondents at an institution. Institutions can and should drill down into their engagement data by computing group scores for different types of students, such as by sociodemographic characteristics, transfer status, residence, college or major, or participation in special programs such as a learning community or a student-faculty research initiative, as sketched below.
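To make this concrete, the following is a minimal sketch in Python of how an institution might compute such group scores from a respondent-level data file. The file name, column names, and the indicator used (collaborative_learning) are hypothetical illustrations, not NSSE's actual variable names or official code.

    import pandas as pd

    # Hypothetical respondent-level file: one row per student, with an
    # Engagement Indicator score and grouping variables. All names below
    # are illustrative assumptions.
    df = pd.read_csv("nsse_respondents.csv")

    # Report first-year and senior students separately, as NSSE reports do.
    by_level = df.groupby("class_level")["collaborative_learning"].agg(
        ["mean", "sem", "count"]
    )

    # Drill down: group scores for a subpopulation, e.g., seniors by college.
    seniors_by_college = (
        df[df["class_level"] == "Senior"]
        .groupby("major_college")["collaborative_learning"]
        .agg(["mean", "sem", "count"])
        .sort_values("mean", ascending=False)
    )

    print(by_level)
    print(seniors_by_college)

Reporting the standard error and respondent count alongside each group mean helps users judge whether a subgroup is large enough for its score to be dependable, in keeping with the generalizability findings cited above.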
As a survey-based assessment intended to inform educational practice, NSSE confronts the challenge of condensing results from a large number of individual items into readily understood summary measures for use by institutional personnel with varying levels of quantitative skill. For NSSE's first 13 administrations, summary measures combined thematically related items into what were called "Benchmarks of Effective Educational Practice," or "benchmarks" for short. The benchmarks included Level of Academic Challenge, Active and Collaborative Learning, Student-Faculty Interaction, Enriching Educational Experiences, and Supportive Campus Environment. In describing the benchmarks, Kuh (2001) wrote that they "represent educational practices that resonate well with faculty members and administrators" while also being "understandable to people outside the academy like parents of prospective students, accreditors, and so on" (p. 14). Although factor analytic procedures informed their creation, these results were combined with expert judgment to create clusters that would have clear face validity and actionable import for institutional users (Kuh, 2003, 2009). The benchmarks were organized thematically, but they did not necessarily represent unitary constructs in the way that many scales used in educational research do. Although close examination of the constituent elements of some benchmarks made this plain (see McCormick & McClenney, 2012), confusion about the dimensionality of the benchmarks arose among NSSE users and higher education researchers (see LaNasa, Cabrera, & Trangsrud, 2009). As described in the next section, the five benchmarks evolved into Engagement Indicators (EIs) and High-Impact Practices (HIPs) starting with the 2013 administration.
The Introduction of NSSE 2.0
The first few years of the NSSE project witnessed a series of modifications and refinements. Beginning in 2005, however, the project adopted a policy of continuity. This kept the survey largely unchanged until NSSE’s major revision in 2013, enabling institutions to track their results over time. During the interim period, NSSE staff analyzed the survey’s properties and performance, collected input from users about valued items and recommended changes, and carried out research and development to inform the next version of NSSE.
A strong rationale for revising NSSE existed at the time. NSSE staff knew more about what mattered to student success, institutional improvement efforts, and the properties of the NSSE instrument itself. Moreover, as higher education faced increasing demands for assessment data, NSSE needed to reflect current issues and concerns. The development model was akin to the concept of “punctuated equilibrium” from evolutionary biology: a long period of stability, followed by a burst of change. The development effort had four primary goals:
· Develop new measures related to effective teaching and learning,
· Refine the existing measures,
· Improve the clarity and applicability of the survey language, and
· Update the terminology to reflect current educational contexts.
The update process began in NSSE's tenth anniversary year—2009. In contrast to NSSE's initial development—carried out with generous but time-limited startup funding from the Pew Charitable Trusts and a small but dedicated staff—the NSSE 2.0 development effort had an extended timetable, a deep pool of test institutions, and a large research staff owing to the scale of the mature project. The multi-year process involved consulting with campus users and experts from the field, reviewing recent literature, conducting research, gathering ideas from our national and technical advisory boards and other interested partners, analyzing several years of experimental questions, conducting focus groups and cognitive interviews with students, and carrying out two years of pilot testing and analysis followed by extensive psychometric testing. Nearly 80 institutions participated in the development effort, whether by administering pilot instruments or hosting cognitive interviews of students. (See Appendix A for additional details about the development process.) Comparing NSSE 2012 to the revised instrument launched in 2013, about a quarter of NSSE 2013 questions are new and nearly the same proportion are unchanged. Of the remaining half, roughly equal numbers were modified in major and in minor ways. Important changes ensured the appropriateness of items for online students (e.g., using "course" instead of "class" to remove the reference to physical classroom spaces), thus reflecting the changing landscape of higher education. Of course, some items were deleted to keep the overall length of the survey about the same.
NSSE’s Engagement Indicators and High-Impact Practices
A guiding principle in the revision was to maintain NSSE's signature focus on diagnostic and actionable information related to effective educational practice. This resulted in one of the most significant transitions introduced with the updated survey, alluded to previously: the shift from the familiar five NSSE benchmarks to a new set of ten "Engagement Indicators," nested within four broad themes that echo the benchmarks:
Theme 1. Academic Challenge
Higher-Order Learning
Reflective & Integrative Learning
Learning Strategies
Quantitative Reasoning
Theme 2. Learning with Peers
Collaborative Learning
Discussions with Diverse Others
Theme 3. Experiences with Faculty
Student-Faculty Interaction
Effective Teaching Practices
Theme 4. Campus Environment
Quality of Interactions
Supportive Environment
While the NSSE benchmarks had high face validity, were simple to recite, and provided an easily digested overview of the results, many users reported that their value for institutional improvement was limited: They lacked specificity about where to concentrate improvement efforts. The new Engagement Indicators combined high face validity with a more coherent framework and specific measures for the improvement of teaching and learning; in addition, six items related to High-Impact Practices began to be reported separately.
The new Engagement Indicators provided faculty members, department chairs, deans, provosts, and presidents with new opportunities to dig into results and formulate plans to increase the prevalence of effective educational practices. In addition, their strong reliability and validity properties have proved useful for supplemental analyses by institutions and higher education researchers (for details, see Appendix B or NSSE’s online Psychometric Portfolio). With each major update of the NSSE survey comes a chance to explore emergent areas of interest in higher education. For example, the new Engagement Indicators include areas such as quantitative reasoning, effective teaching practice, discussions with diverse others, and learning strategies. The update also introduced a new menu of Topical Modules to permit deeper exploration of topics of wide interest to colleges and universities, such as academic advising, first-year experiences and senior transitions, global learning, and experiences with writing.
The Literature on Engagement Indicators and High-Impact Practices
An extensive literature exists for the various forms of engagement represented on the updated NSSE instrument that institutions began using in the spring of 2013. In the following section, the related literature informing Engagement Indicators (EIs) and High-Impact Practices (HIPs) is presented. As mentioned previously, each EI is grouped within a general thematic area closely tied to the original benchmarks that were discontinued after the 2012 administration. Short definitions and literature summaries related to each EI are included below, clarifying the scope of each construct and how each relates to important personal development and institutional outcomes. Additional information about EIs and HIPs can be found in the growing body of NSSE research results and resources on the NSSE website.
Engagement Indicators
Theme 1: Academic Challenge
Challenging intellectual and creative work is central to student learning and collegiate quality. Colleges and universities promote learning by challenging and supporting students to engage in various forms of deep learning. Four Engagement Indicators make up the Academic Challenge theme: Higher-Order Learning, Reflective & Integrative Learning, Learning Strategies, and Quantitative Reasoning.
Higher-Order Learning. The Higher-Order Learning EI captures how much students' coursework emphasizes challenging cognitive tasks such as application, analysis, judgment, and synthesis. Calling on students to engage in such tasks requires more than mere memorization of facts. According to Lewis and Smith (1993), higher-order learning is a process in which students proactively integrate new knowledge with existing information, connecting and extending that information to seek answers to perplexing issues. Higher-order learning also requires students to decide what to believe and what to do, to initiate new ideas or create new products, to make predictions, and to solve problems. Challenging students to engage in higher-order learning helps them approach learning in a deep way and gain knowledge beyond a surface-level understanding (Marton & Säljö, 1976b, 1997; Nelson Laird, Shoup, & Kuh, 2005b). NSSE (2013) found that students whose courses emphasize higher-order learning are better at applying acquired knowledge to practice, analyzing ideas, reflecting on experiences, and critically evaluating information and new ideas from various sources.
Bloom's Taxonomy orders six categories of the cognitive domain from simple to complex and from concrete to abstract: knowledge, comprehension, application, analysis, synthesis, and evaluation (Krathwohl, 2002). NSSE's Higher-Order Learning EI accords with the taxonomy's more complex and abstract categories, which emphasize how students apply the knowledge they have gained to real-world practice. Higher-order learning is also consistent with the notion of "deep approaches to learning" described by Marton and Säljö (1976a). Students who use deep approaches to learning are more likely to have strong motivation and interest in learning, to focus on understanding the various parts of the learning material, and to connect new knowledge to previous knowledge and experiences (Biggs, 1988; Marton, 1983).
To measure Higher-Order Learning, NSSE asks:
During the current school year, how much has your coursework emphasized the following?
- Applying facts, theories, or methods to practical problems or new situations
- Analyzing an idea, experience, or line of reasoning in depth by examining its parts
- Evaluating a point of view, decision, or information source
- Forming a new idea or understanding from various pieces of information
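To illustrate how a set of items like these can be combined into a single Engagement Indicator score, the following is a minimal sketch that rescales each response to a 0–60 range and averages the rescaled items for each respondent. The 0–60 metric mirrors the convention used in reporting Engagement Indicator scores, but the column names and the rule of scoring only complete responses are simplifying assumptions for illustration, not NSSE's exact scoring algorithm (see Appendix B and the Psychometric Portfolio for the official details).

    import pandas as pd

    # Assumed four-point emphasis response set rescaled to 0-60.
    RESCALE = {"Very little": 0, "Some": 20, "Quite a bit": 40, "Very much": 60}

    # Hypothetical column names for the four Higher-Order Learning items.
    HOL_ITEMS = ["hol_apply", "hol_analyze", "hol_evaluate", "hol_form"]

    def indicator_score(df, items):
        """Per-respondent mean of the rescaled items. Simplifying assumption:
        respondents missing any item receive no score (skipna=False)."""
        rescaled = df[items].apply(lambda col: col.map(RESCALE))
        return rescaled.mean(axis=1, skipna=False)

    df = pd.read_csv("nsse_respondents.csv")
    df["higher_order_learning"] = indicator_score(df, HOL_ITEMS)
    print(df["higher_order_learning"].describe())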
Reflective & Integrative Learning. Personally connecting with course materials requires students to relate their understandings and experiences to the content at hand. Instructors who emphasize reflective and integrative learning try to motivate students to make connections between their learning and the world around them, to reexamine their own beliefs, and to consider issues and ideas from others’ perspectives. Since Marton and Säljö (1976a) distinguished different approaches students use when responding to a learning task, the concept of “deep processing” has been a topic of interest in higher education research, evaluation, and practice. Deep approaches to learning, as opposed to surface approaches to learning, are educational processes that go beyond memorizing information and that focus on connecting with the information’s underlying meaning (Nelson Laird, Shoup, & Kuh, 2005b). Deep learning includes a variety of learning approaches, such as integrating new knowledge with existing knowledge or practical issues, and reflecting on one’s own views while considering views of others.
The abilities to integrate and reflect when encountering new information are crucial for student success both in school and after graduation. Empirical studies have shown that students who take deep approaches to learning attain higher academic achievement (Zeegers, 2004). Deep approaches to learning also increase retention and help students integrate and transfer information faster (Nelson Laird, Shoup, & Kuh, 2005a). Beyond success in college, the habit of reflective and integrative learning can help one become a lifelong learner, a capable professional, and a responsible citizen. With accelerating technological development, an exploding volume of information, rapid globalization, and fast-changing social and economic environments, modern society relies on its members to integrate information from different sources and to reflect on its meaning from various perspectives in order to reach informed conclusions (Bourner, 2003; Huber & Hutchings, 2004).
Higher education organizations have placed great emphasis on programs that encourage reflective and integrative approaches to learning. According to the report Greater Expectations: A New Vision for Learning as a Nation Goes to College (Association of American Colleges and Universities, 2002), many colleges and universities have designed first-year seminars, capstone experiences, learning communities, and similar programs to help students integrate knowledge learned in different courses. Higher education institutions have also developed courses for faculty from a variety of disciplines to help them design teaching strategies that promote reflective learning (Bourner, 2003).
To measure Reflective & Integrative Learning, NSSE asks:
During the current school year, about how often have you done the following?
- Combined ideas from different courses when completing assignments
- Connected your learning to societal problems or issues
- Included diverse perspectives (political, religious, racial/ethnic, gender, etc.) in course discussions or assignments
- Examined the strengths and weaknesses of your own views on a topic or issue
- Tried to better understand someone else’s views by imagining how an issue looks from his or her perspective
- Learned something that changed the way you understand an issue or concept
- Connected ideas from your courses to your prior experiences and knowledge
Learning Strategies. College students enhance their learning and retention by actively engaging with and analyzing course material rather than treating learning as mere absorption. Examples of effective learning strategies include identifying key information in readings, reviewing notes after class, and summarizing course material. Knowledge about the prevalence of effective learning strategies helps colleges and universities target interventions to promote student learning and success. Learning strategies are specific patterns or combinations of activities that learners use to gain knowledge (Vermetten, Lodewijks, & Vermunt, 1999; Vermunt, 1996). Students can use a variety of methods when studying and learning, ranging from taking notes while reading and in class, to summarizing and organizing new information, to creating an environment conducive to studying (Ormrod, 2011). Learning strategies help students regulate and monitor their time, increase concentration, sustain effort, and enhance comprehension (McKeachie, Pintrich, & Lin, 1985). Students' use of learning strategies is closely related to whether they perceive the classroom as emphasizing a mastery or a performance goal orientation (Ames & Archer, 1988).
Effective learning strategies incorporate metacognition, or "thinking about thinking": the ability to reflect on, understand, and control one's own learning. Metacognition can directly affect the effectiveness of students' study, preparation, and classroom time, including how information is learned and retained; consequently, it relates to learning outcomes and success in college. Research suggests that students with greater metacognitive skills earn higher grades on classroom exams (Isaacson & Fujita, 2006), in individual courses (Young & Fry, 2008), and in cumulative grade point average (Everson & Tobias, 1998; Hall, 2001). Students with these skills are also better at accurately predicting test performance and using formative feedback (Ibabe & Jauregizar, 2010). Furthermore, metacognitive skills are effective across a variety of domains (Everson, Tobias, & Laitusis, 1997), types of tasks (Young & Fry, 2008), and levels of student ability (El-Hindi & Childers, 1996). Increasing the effective use of learning strategies is actionable for institutions, too, as these skills can be developed through a variety of instructional strategies (Schraw, 1998). Research has demonstrated success in teaching metacognitive skills to students through online self-assessment programs (Ibabe & Jauregizar, 2010), academic support courses for at-risk students (El-Hindi & Childers, 1996), direct tutoring sessions (DeKonty Applegate, Benson Quinn, & Applegate, 1994), and classroom learning contracts (Chiang, 1998).
To measure Learning Strategies, NSSE asks:
During the current school year, about how often have you done the following?
- Identified key information from reading assignments
- Reviewed your notes after class
- Summarized what you learned in class or from course materials
Quantitative Reasoning. Quantitative reasoning—the ability to use and understand numerical and statistical information in everyday life—is an increasingly important outcome of higher education. All students, regardless of major, should have ample opportunities to develop their ability to reason quantitatively—to evaluate, support, and critique arguments using numerical and statistical information. The quantitative demands of today's society and the modern workforce are great and growing (Dingman & Madison, 2010; Madison, 2009; Madison & Steen, 2008; Steen, 2001). Today's job market demands quantitative skills from college graduates regardless of career (Dingman & Madison, 2011; Rivera-Batiz, 1992; Steen, 2001). In addition, the ability to use and understand quantitative information, also known as quantitative literacy, is increasingly important for effective democratic participation (Steen, 2001). As such, there is a growing consensus that to function in today's society, people need to be able to process and understand quantitative information (Shavelson, 2008).
The concept of quantitative reasoning transcends the mere ability to perform mathematical computations to include a deeper understanding of quantitative data. Quantitative reasoning encompasses the ability to use numerical, statistical, and graphical information in everyday life as well as in the workplace (Steen, 1997, 2001; Wilkins, 2000, 2010). Wilkins (2000) describes a quantitatively literate person as one who possesses “a functional knowledge of mathematical content, the ability to reason mathematically, a recognition of the societal impact and utility of mathematics, and a positive disposition towards mathematics” (p. 406). Similarly, Steen (1997) described quantitative reasoning as a “walking around” knowledge of mathematics or the ability to handle quantitative information that one might encounter in everyday life. A rich understanding of quantitative information is also a necessity in the workplace. Employees at all levels and in all fields must be able to identify problems, analyze and interpret information, and make decisions based on that information (Wilkins, 2000).
Despite these arguments about the increasing importance of quantitative reasoning, the 2003 National Assessment of Adult Literacy (NAAL) found no significant gains in quantitative literacy between 1992 and 2003 at any education level (Kutner et al., 2007) and further found that only about one-third of college graduates demonstrated proficiency. A more recent assessment of adult literacy from the Organisation for Economic Co-operation and Development (OECD, 2013) found that U.S. adults ranked near the bottom in quantitative literacy compared to other developed nations. Findings from the NAAL and OECD point to the need for colleges and universities to enhance students' ability to make sense of, effectively use, and be knowledgeable consumers of quantitative information (Dingman & Madison, 2010, 2011; Taylor, 2008). While a number of colleges and universities have instituted programs designed to ensure that graduates develop these skills regardless of major (Gillman, 2006), findings from the NAAL and OECD suggest an urgent need for all colleges and universities to assess the opportunities they provide for students in all majors to develop facility with quantitative reasoning.
To measure Quantitative Reasoning, NSSE asks:
During the current school year, about how often have you done the following?
- Reached conclusions based on your own analysis of numerical information (numbers, graphs, statistics, etc.)
- Used numerical information to examine a real-world problem or issue (unemployment, climate change, public health, etc.)
- Evaluated what others have concluded from numerical information
Other Measures of Academic Challenge: Reading & Writing. Two other significant content areas represented on NSSE, related to academic challenge but not encompassed within any Engagement Indicator, are targeted by sets of questions about reading and writing. Studies show a connection between the amount of time college students spend reading and their academic performance (Dretzke & Keniston, 1989; Gallik, 1999; Gauder, Giglierano, & Schramm, 2007; Landrum, Gurung, & Spann, 2012), including exam and final course grades (Landrum, Gurung, & Spann, 2012). Not only do academic and recreational reading affect achievement in a particular course, but reading also improves overall reading comprehension (Mokhtari, Reichard, & Gardner, 2009). Reading exposes students to a larger "quantity of words, concepts, and types of syntactic structure" (p. 68) and positively influences writing skills (National Endowment for the Arts, 2007). Beyond academic performance, time spent reading benefits other areas of engagement: students across a variety of demographic groups who read more are also more likely to engage in volunteer work (National Endowment for the Arts, 2007).
Across academic disciplines, the use of writing in coursework positively impacts higher-level thinking skills, active learning, and material comprehension (Olmstead & Ruediger, 2013). Both Writing in the Discipline and Writing Across the Curriculum programs have been shown to positively impact undergraduate education (Bok, 2006), allowing students to construct aspects of their own learning (Baxter Magolda, 2006). Whether students are writing analytical essays, reports, or comparisons of ideas, the act of writing allows meaning-making and new connections between concepts to take place (Greene, 1993). Rosinski and Peeples (2012) agree that writing about complex, real-world problems increases students’ engagement with the material, particularly when writing projects are undertaken in collaboration with other students.
As Kuh (2009) notes, when students are exposed to increased frequency of writing in their courses, along with receiving instructor responses to this writing, they experience increased levels of comprehension. In addition, such students more ably navigate ambiguity and complexity. Light (2001) found that the amount of required writing is one of the most influential factors in student engagement with course material. In fact, the amount of required writing relates more strongly to engagement than does any other course characteristic, including level of academic challenge.
To measure experiences with writing and reading, NSSE asks:
During the current school year, about how many papers, reports, or other writing tasks of the following lengths have you been assigned? (Include those not yet completed.)
- Up to 5 pages
- Between 6 and 10 pages
- 11 pages or more
Of the time you spend preparing for class in a typical 7-day week, about how much is on assigned reading?
Theme 2: Learning with Peers
Learning and development occur in many different ways as students interact with other students, whether exploring course material together or conversing with those from different backgrounds. Two Engagement Indicators make up the Learning with Peers theme: Collaborative Learning and Discussions with Diverse Others.
Collaborative Learning. Collaborating with peers in solving problems or mastering difficult material deepens understanding and prepares students to deal with the messy, unscripted problems they encounter during and after college. Working on group projects, asking others for help with difficult material or explaining it to others, and working through course material in preparation for exams all represent collaborative learning activities. More broadly, collaborative learning is a process in which two or more students participate in a specific intellectual activity. Students work together to gain a better understanding of course material, to solve problems and look for solutions to questions, or to work on group projects (Goodsell, Maher, & Tinto, 1992). In contrast to learning alone, collaborative learning allows students to capitalize on one another's resources and skills. This learning approach, first used in elementary and secondary schools, can be traced back to John Dewey's (2004) idea of "learning in experience" and Lev Vygotskii's (1962) "zone of proximal development." From Dewey's point of view, "education is not an affair of 'telling' and being told, but an active and constructive process" (p. 20). He also claimed that the aim of education is helping the student "recall and judge those things which make him an effective competent member of the group in which he is associated with others" (p. 33). Similarly, Vygotskii believed that "with assistance, every child can do more than he can by himself," when the assistance is provided by a person who is more knowledgeable about the subject of interest, and who is able to engage the student.
Since the late 1980s, collaborative learning has become a well-accepted pedagogy in higher education (Cabrera et al., 2002). As stated by Pascarella and Terenzini (1991), "A large part of the impact of college is determined by the extent and content of one's interactions with major agents of socialization on campus, namely, faculty members and student peers" (p. 620). In addition, Astin (1993) claims that peers are "the single most potent source of influence" affecting the development of college students (p. 398). Collaborative learning can happen anywhere. For example, during class, instructors may separate students into small groups in which they discuss the course material and deepen their understanding by explaining it to others and receiving feedback. Collaborative learning can also be incorporated into out-of-class activities, such as co-curricular activities, field trips, community service, and so on (Cabrera et al., 2002; Chickering & Gamson, 1987; Goodsell et al., 1992). According to Cabrera et al. (2002), collaborative learning—relative to other predictor variables like high school GPA, parents' educational level, and hours spent studying—is the best predictor of students' personal development, understanding of science and technology, appreciation for art, and analytical skills.
To measure Collaborative Learning, NSSE asks:
During the current school year, about how often have you done the following?
- Asked another student to help you understand course material
- Explained course material to one or more students
- Prepared for exams by discussing or working through course material with other students
- Worked with other students on course projects or assignments
Discussions with Diverse Others. Colleges and universities afford students new opportunities to interact with and learn from others who have different backgrounds and life experiences. Interactions across difference, both in and outside the classroom, confer educational benefits and prepare students for personal and civic participation in a diverse and interdependent world. Institutions are uniquely situated to shape students' lifelong identities by providing exposure to peers with diverse ideas, experiences, and backgrounds frequently not represented in students' home communities. Newcomb's (1943) research on Bennington College students demonstrated that peers shape students' beliefs and attitudes during college and that these beliefs persist long after graduation (Alwin, Cohen, & Newcomb, 1991; Newcomb, Koenig, Flacks, & Warwick, 1967). More recent research confirms the substantial impact of peers on students' identity development (Astin, 1993).
While experiencing diversity in college impacts students’ personal development, these experiences also improve learning outcomes. The presence of an African American student in a small group discussion among nonminority students has been experimentally demonstrated to improve the participants’ complex thinking (Antonio et al., 2004). Other studies have found that diversity experiences help promote a variety of positive academic outcomes (Bowman, 2010, 2013; Denson & Chang, 2009; Gurin, Dey, Hurtado, & Gurin, 2002; Loes, Pascarella, & Umbach, 2012; Nelson Laird, 2005). However, the strength of the impact of diversity experiences on cognitive outcomes appears to vary by their type and frequency. Bowman’s (2010) meta-analysis on the influences of diverse interactions on learning outcomes found that interpersonal interactions had a stronger relationship with cognitive development than diversity coursework and workshops. Furthermore, students who frequently engage in diverse interactions benefit substantially, while students who engage with diverse peers only infrequently receive little or no benefit (Bowman, 2013).
Interacting with diverse peers also apparently promotes various democratic outcomes that prepare students for life after college. Many studies indicate that diverse interactions appear to reduce racial bias (Denson, 2009), which improves students’ interactions with others in an increasingly diverse workplace and civil society. Additionally, students who experience diversity have higher levels of civic engagement and are more likely to engage in activities like volunteering and holding a leadership role (Bowman, 2011). In both of Bowman’s meta-analyses (2010, 2011), interpersonal interactions had the greatest impact on the outcomes studied, while other diversity experiences such as diversity coursework, co-curricular diversity, and intergroup dialogue programs were associated with positive, but smaller, effects.
To measure Discussions with Diverse Others, NSSE asks:
During the current school year, about how often have you had discussions with people from the following groups?
- People from a race or ethnicity other than your own
- People from an economic background other than your own
- People with religious beliefs other than your own
- People with political views other than your own
Theme 3: Experiences with Faculty
Students learn firsthand how experts think about and solve problems by interacting with faculty members in and outside of instructional settings. As a result, faculty become role models, mentors, and guides for lifelong learning. In addition, effective teaching requires that faculty deliver course material and provide feedback in student-centered ways. Two Engagement Indicators make up the Experiences with Faculty theme: Student-Faculty Interaction and Effective Teaching Practices.
Student-Faculty Interaction. Interactions with faculty can positively influence the cognitive growth, development, and persistence of college students. Through their formal and informal roles as teachers, advisors, and mentors, faculty members model intellectual work, promote mastery of knowledge and skills, and help students make connections between their studies and their future plans. How often students meaningfully interact with their faculty impacts their college experience in various ways (Kuh & Hu, 2001). Both the quantity and quality of students’ interactions with faculty are essential not only to their experiences while in college but also to their lives beyond it (Hathaway, Nagda, & Gregerman, 2002). For example, it is widely accepted that student-faculty interaction has a positive influence on college student cognitive growth, personal development, satisfaction with college, and retention (Pascarella & Terenzini, 2005) as well as on student learning (Astin, 1993; Cabrera, Nora, Terenzini, Pascarella, & Hagedorn, 1999; Kuh et al., 1997; Pike, 1991; Terenzini & Pascarella, 1980; Volkwein, 1991; Volkwein & Carbone, 1994). Terenzini and Pascarella (1980) showed that the frequency of contact between students and faculty is positively related to student learning outcomes, even after controlling for students’ background characteristics. Other studies focused on research universities suggest that student-faculty interaction is helpful in predicting GPA for all racial and gender groups, with the magnitude of the effect varying by specific group (Kim & Sax, 2009).
In spite of the generally positive relationship between student-faculty interaction and higher education outcomes, not all kinds of interaction have the same impact on student outcomes (Cox & Orehovec, 2007; Pascarella & Terenzini, 1977, 1991). In particular, interacting with faculty is beneficial when the interaction is intellectually or substantively focused, while purely social exchanges have only limited effects (Cox & Orehovec, 2007; Pascarella & Terenzini, 1991). Pascarella and Terenzini (1977) also found that interacting with faculty about intellectual or course-related matters, career plans, and academic programs is positively related to first-year retention, while discussing personal problems, campus issues, or socializing informally does not have a significant effect on attrition.
To measure Student-Faculty Interaction, NSSE asks:
During the current school year, about how often have you done the following?
- Talked about career plans with a faculty member
- Worked with a faculty member on activities other than coursework (committees, student groups, etc.)
- Discussed course topics, ideas, or concepts with a faculty member outside of class
- Discussed your academic performance with a faculty member
Effective Teaching Practices. Student learning is heavily dependent on effective teaching. Organized instruction, clear explanations, illustrative examples, and effective feedback on student work all represent aspects of teaching effectiveness that promote student comprehension and learning. Faculty whose courses are well organized, who teach with clarity, and who provide prompt and formative feedback have a positive impact on the learning and development of their students. Teaching clarity refers to teaching methods through which “faculty demonstrate a level of transparency in their approach to instruction and goal setting in an effort to help students better understand expectations and comprehend subject matter” (BrckaLorenz, Ribera, Kinzie, & Cole, 2012, p. 150). Simonds (1995) distinguishes two specific kinds of teaching clarity essential to a successful student learning experience: content clarity relates to whether the teacher can help students understand or acquire substantive knowledge; process clarity refers to behaviors such as communicating expectations and requirements to students.
The Wabash National Study of Liberal Arts Education found that students’ experiences with various effective teaching practices were positively associated with critical thinking, psychological well-being, leadership, openness to diversity, and academic motivation (Blaich & Wise, 2008). Other leading experts have also concluded that faculty’s teaching clarity, organization, and feedback impact student learning and development (Pascarella & Terenzini, 2005) and are important characteristics of effective teaching (Ginsberg, 2007; Hativa, Barak, & Simhi, 2001). Studies have shown that teaching clarity has a positive relationship with various educational outcomes such as student achievement and satisfaction (Chesebro & McCroskey, 2001; Hativa, 1998; Pascarella & Terenzini, 2005). Based on a comprehensive review of higher education research, Pascarella (2006) concluded that teaching clarity is moderately correlated with grades and final examination performance. Others have also found teaching clarity to be positively correlated with GPA and the likelihood of persisting (Lambert, Rocconi, Ribera, Miller, & Dong, 2012).
To measure Effective Teaching Practices, NSSE asks:
During the current school year, to what extent have your instructors done the following?
- Clearly explained course goals and requirements
- Taught course sessions in an organized way
- Used examples or illustrations to explain difficult points
- Provided feedback on a draft or work in progress
- Provided prompt and detailed feedback on tests or completed assignments
Theme 4: Campus Environment
Students benefit and are more satisfied in supportive settings that cultivate positive relationships among students, faculty, and staff. Two Engagement Indicators make up the Campus Environment theme: Quality of Interactions and Supportive Environment.
Quality of Interactions. College environments characterized by positive interpersonal relations promote student learning and success. Students who enjoy supportive relationships with peers, advisors, faculty, and staff are better able to find assistance when needed as well as to learn from and with those around them. Students have countless ways to interact with these important campus populations, and these interactions can occur formally, in the classroom or academic setting, and informally, in social or nonacademic spaces. Both formal and informal relationships are necessary to enhance the student experience, and student characteristics, interests, and attributes influence the frequency and quality of interactions with others (Cole, 2007; Kim & Sax, 2009).
The assessment of quality often relies on satisfaction reports and surveys. Feedback from students helps departments and institutions highlight experiences, identify areas for potential improvement, and plan for organizational change (Kuh, Kinzie, Schuh, Whitt, & Associates, 2010; Welsh, Alexander, & Dey, 2001). These efforts can pay dividends, as research has shown high-quality interactions to be related to academic achievement, social development, and critical thinking (Umbach & Wawrzynski, 2005; Whitt, Edison, Pascarella, Nora, & Terenzini, 1999). More generally, the extent and content of one’s interactions with major agents of socialization, such as faculty and other students, is largely responsible for a college’s impact (Pascarella & Terenzini, 1991). According to Kuh, Kinzie, Buckley, Bridges, and Hayek (2006), this view is consistent with a social networks perspective in which college students’ relationships with faculty, staff, peers, family, friends, and mentors contribute to student satisfaction, retention, and gains from college (Astin, 1977; Kuh et al., 1991; Pascarella & Terenzini, 1991, 2005; Tinto, 1975, 1993).
To measure Quality of Interactions, NSSE asks:
Indicate the quality of your interactions with the following people at your institution:
- Students
- Academic advisors
- Faculty
- Student services staff (career services, student activities, housing, etc.)
- Other administrative staff and offices (registrar, financial aid, etc.)
Supportive Environment. Institutions that are committed to student success provide support and involvement across a variety of domains, including cognitive, social, and physical. These commitments foster higher levels of student performance and satisfaction. NSSE’s Supportive Environment EI summarizes students’ perceptions of how much an institution emphasizes services and activities that support their learning and development. The college environment includes both physical space where individuals interact and the perceptions held by the individuals who interact within that setting (Astin, 1968). Positive interactions within the environment stimulate engagement and further participation within the environment (Astin, 1968; Kuh & Hall, 1993), as well as satisfaction (Kuh, 1993). A supportive campus environment includes positive cognitive, social, and physical domains for students (Flowers & Pascarella, 2003; Pascarella & Terenzini, 2005).
Supportive environments promote student retention, satisfaction, engagement, and involvement (Kuh, 1993; Kuh & Hall, 1993). Based on the work of Astin (1977), Kuh et al. (1991), and Pascarella and Terenzini (1995), Kuh et al. (2006, p. 14) concluded, “student perceptions of the institutional environment and dominant norms and values influence how students think and spend their time. Taken together, these properties influence student satisfaction and the extent to which students take part in educationally purposeful activities.” Complementing the institution’s mission with programs focused on academic and social support can help students adjust to college by providing clear paths to college success (Kuh, Kinzie, Schuh, & Whitt, 2005). To this end, supportive environments encompass such things as well-designed student orientation programs, learning communities, tutoring services, learning skills workshops, cultural centers, and senior capstone projects. According to Kuh et al. (2005, p. 241), high-performing colleges “provide resources to those who need them when they need them and create the conditions that encourage students to take advantage of these resources.” They demonstrate that students perform better and are more satisfied at supportive schools and highlight the importance of school cultures that encourage students to use available resources.
To measure Supportive Environment, NSSE asks:
How much does your institution emphasize the following?
- Providing support to help students succeed academically
- Using learning support services (tutoring services, writing center, etc.)
- Encouraging contact among students from different backgrounds (social, racial/ethnic, religious, etc.)
- Providing opportunities to be involved socially
- Providing support for your overall well-being (recreation, health care, counseling, etc.)
- Helping you manage your nonacademic responsibilities (work, family, etc.)
- Attending campus activities and events (performing arts, athletic events, etc.)
- Attending events that address important social, economic, or political issues
High-Impact Practices
Because of their positive effects on student learning and retention, special undergraduate opportunities such as service-learning, learning communities, research with faculty, internships, study abroad, and culminating senior experiences are called high-impact practices (Kuh, 2008). High-impact practices (HIPs) share several traits: They demand considerable time and effort, provide learning opportunities outside of the classroom, require meaningful interactions with faculty and students, encourage interactions with diverse others, and provide frequent and meaningful feedback. Participation in these practices can be life-changing.
Kuh (2008) found that while participation in HIPs has positive relationships with deep approaches to learning and student-reported gains on a variety of outcomes for all types of students, historically underserved students seem to benefit even more than their majority peers. Unfortunately, such students are less likely to participate in HIPs in the first place, particularly first-generation and African American students. It is important for institutions to provide opportunities to participate in a variety of HIP experiences, but faculty are also important in creating an atmosphere that values HIP participation. Each HIP reflected on the NSSE instrument is reviewed in the following section: service-learning, learning communities, research with faculty, internships, study abroad, and culminating senior experiences.
To measure High-Impact Practices, NSSE asks:
Which of the following have you done or do you plan to do before you graduate?
- Participate in a learning community or some other formal program where groups of students take two or more classes together
- Participate in an internship, co-op, field experience, student teaching, or clinical placement
- Participate in a study abroad program
- Work with a faculty member on a research project
- Complete a culminating senior experience (capstone course, senior project or thesis, comprehensive exam, portfolio, etc.)
In addition, to measure service-learning, NSSE asks: About how many of your courses at this institution have included a community-based project (service-learning)?
Service-Learning. Service-learning is generally defined as an educational method that intentionally connects community service to classroom learning, offering a unique opportunity for students to get involved with their communities (Corporation for National and Community Service, 2007; Gray, Ondaatje, & Zakaras, 1999). Such connection between community and classroom learning distinguishes service-learning courses from traditional, nonservice courses and other community service and volunteering activities.
Many empirical studies have suggested that service-learning programs benefit a variety of personal and academic outcomes. Based on a large-scale, longitudinal study of college students, Astin, Vogelgesang, Ikeda, and Yee (2000) found service-learning participation to be positively correlated with GPA, writing skills, critical thinking skills, commitment to activism, promoting racial understanding, choice of a service career, and plans to participate in service after college. Other studies have demonstrated that service-learning participation is associated with civic and cognitive gains as well (Astin & Sax, 1998; Batchelder & Root, 1994; Giles & Eyler, 1994; Markus, Howard, & King, 1993). Astin and Sax (1998) also found that participating in service-learning during the undergraduate years substantially enhances a student’s life skills development. Gray et al. (1999) examined students’ beliefs about the influence of a service-learning course, relative to a traditional, nonservice course, on their development in four areas: (1) civic responsibility, (2) life skills, (3) academic development, and (4) professional development. They found a strong positive correlation between service-learning course participation and expressions of civic responsibility, including an increased likelihood of volunteering and actively helping to address societal problems. A weaker correlation was found with life skills development, such as understanding people with a background different from one’s own. In a comprehensive review of the literature encompassing studies from 1993 to 2000, Eyler, Giles, Stenson, and Gray (2001) concluded that service-learning benefits student personal and interpersonal development; reduces stereotypes and facilitates cultural and racial understanding; and increases social responsibility, citizenship skills, commitment to service, and post-graduation involvement in community service. They also found that participation is positively related to academic learning (e.g., critical thinking and problem analysis), students’ ability to apply their knowledge to “the real world,” and career development. Results related to more formal measures of course performance, such as GPA, were mixed, however, as were findings on students’ cognitive moral development.
Learning Community. Learning community (LC) programs are becoming more widely used in higher education (Chapman, Ramondt, & Smiley, 2005; Heaney & Fisher, 2011; Janusik & Wolvin, 2007; Stebleton & Nownes, 2011). The definitions of LC vary little across the literature. According to Janusik and Wolvin (2007), most scholars define an LC as a group of students connected by some common goal or theme (Chapman et al., 2005; Heaney & Fisher, 2011; Hotchkiss, Moore, & Pitts, 2006; Howles, 2009; Stebleton & Nownes, 2011). Most LCs involve 30 to 100 students, while smaller LCs can include 8 to 15 students (Howles, 2009; Janusik & Wolvin, 2007). LC students participate in curricular and co-curricular activities designed by faculty and staff to be consistent with the desired outcomes. Frequently, but not always, the programs are housed in a special residential facility to enable a living-learning environment (Stebleton & Nownes, 2011).
LCs are intended to provide college students with opportunities for intellectual and creative development through inquiry. Students experience their education through active learning environments that include research, increased faculty guidance and interaction, and more hands-on experiences in and outside the classroom. These types of learning experiences enable deeper learning and more knowledge retention than a learning environment in which a professor simply tells students what they need to know (Chapman et al., 2005; Heaney & Fisher, 2011; Hotchkiss et al., 2006; Janusik & Wolvin, 2007; Stebleton & Nownes, 2011).
While some LC outcomes are directly linked to deep learning, as mentioned above, many of the outcomes lend themselves to reducing student attrition. For example, students who participate in LCs report a stronger sense of connection to the university and faculty, greater confidence in their ability to work as part of a team, a belief that they can take direction better than their peers, a view of themselves as co-creators of knowledge and collaborators in research, more comfort writing research papers, more formative feedback on their learning process, gains in critical thinking and problem solving through collaborative learning, and a lower frequency of academic probation (Chapman et al., 2005; Heaney & Fisher, 2011; Hotchkiss et al., 2006; Janusik & Wolvin, 2007).
Research with Faculty. Studies have demonstrated that participating in undergraduate research brings about an array of intellectual, academic, professional, and personal benefits to students. Exposure to research in the early years of college has also been linked to higher retention rates. The Council on Undergraduate Research (n.d.) defines undergraduate research as “an inquiry or investigation conducted by an undergraduate student that makes an original intellectual or creative contribution to the discipline” (para. 3). Undergraduate research can be defined broadly to include scientific inquiry, creative activity, and scholarship (Kinkead, 2003). According to the National Conferences on Undergraduate Research (n.d.), “Its central premise is the formation of a collaborative enterprise between student and faculty member—most often one mentor and one burgeoning scholar but sometimes (particularly in the social and natural sciences) a team of either or both” (para. 6). In a comprehensive review of the nature and forms of undergraduate research in the United States and Great Britain, Healy and Jenkins (2009) note that elements of research and inquiry include learning about current research in the discipline, developing research skills and techniques, undertaking research and inquiry, and engaging in research discussion.
Unlike other high-impact practices that are often characterized by some highly recognizable and observable features, structures, and organization, the elements of undergraduate research can be found in many individual courses as part of regular class assignments. However, as a high-impact practice, NSSE defines undergraduate research as being done with a faculty member outside of course or program requirements. This description of undergraduate research is consistent with Healy and Jenkins’s (2009) view on the form of undergraduate research in the United States—“often outside the formal curriculum; for example, in summer enrichment programmes” (p. 17).
Research studies have reported numerous benefits of involvement in undergraduate research across professional, academic, and cognitive areas. Kardash (2000) found that students who participated in an undergraduate research experience reported an increase in such specific research skills as “observing and collecting data, understanding the importance of controls, interpreting data, orally communicating the results of research projects, and thinking independently” (p. 196). Importantly, these self-ratings were validated by the students’ faculty research mentors. Lopatto (2004) discusses how undergraduate students participating in a summer research program reported learning gains, such as increased understanding of the research process, scientific problems, and lab techniques, as well as gains in personal development like tolerance for obstacles and working independently. Seymour, Hunter, Laursen, and DeAntoni (2003) also reported that participation in a research experience led to the following gains: increased personal and professional self-confidence; development of professional collegiality with faculty mentors and peers; gains in the application of knowledge and skills; increased knowledge and understanding of science and research; enhanced communication skills; and gains in understanding, clarification, and refinement of future career and postgraduation plans.
Internship. Internships, field experiences, and clinical assignments help students excel in their future careers by providing real-world experience, self-management skills, and self-understanding. Participants command higher salaries (Gault, Redington, & Schlager, 2000), find their first job more quickly (Henry, 1979; Knouse, Tanner, & Harris, 1999), and experience greater success (Gault, Redington, & Schlager, 2000) and job stability (Richards, 1984) in their early careers. In a 2011 position statement, the National Association of Colleges and Employers defined internships as a form of experiential learning that integrates knowledge and theory learned in the classroom with practical application and skills development in a professional setting.
Career preparation experiences appear to increase students’ time-management skills and self-discipline (Kane, Healy, & Henson, 1992; Taylor, 1988) as well as their critical thinking and communication skills (Maskooki, Rama, & Raghunandan, 1998; Raymond, McNabb, & Matthaei, 1993). These benefits may help explain why internship participants have higher grades than nonparticipants (Knouse, Tanner, & Harris, 1999). Participants are also better able to understand themselves, particularly in ways that are relevant to their intended vocations. Researchers refer to this phenomenon as “self-concept crystallization.” This positive relationship between career preparation experiences and self-knowledge is one of the more robust research findings in this area (Brooks, Cornelius, Greenfield, & Joseph, 1995; Pedro, 1984; Taylor, 1988). Career preparation participants not only gain more knowledge about themselves but also gain valuable knowledge about their intended vocation and how they may (or may not) fit into it. In this view, career preparation experiences function as “anticipatory socialization” experiences, allowing students to determine the fit between themselves and their intended vocation (Greenhaus, Callanan, & Godshalk, 2000; Richards, 1984).
Study Abroad. Study abroad programs serve as an enriching component of higher education, providing undergraduate students everyday experiences with people of diverse cultures (Gonyea, 2008; Salisbury, 2009; Vande Berg, Connor-Linton, & Paige, 2009). Study abroad, defined as all educational programs that take place outside the geographical boundaries of the country of origin (Kitsantas, 2004), entails students from one home institution taking a semester (or sometimes a year) of coursework at a different institution in another country (Council on International Educational Exchange, 2006). The study abroad experience can vary but, regardless of its design, scholars and students, as well as parents and future employers, see its many benefits (Salisbury, Umbach, Paulsen, & Pascarella, 2009). While away, students are exposed to local cultures, enriching their educational experience. The benefits include an opportunity to travel and experience new cultures (Norris & Steinberg, 2008) as well as to use language skills with native speakers, which is associated with detectable gains in language skills (Hadis, 2005). Yet another advantage is that study abroad is associated with positive growth in students (Hadis, 2005). For instance, students who study abroad tend to have an increased knowledge of international affairs and nuanced cultural issues (Williams, 2009).
Maturation is another gain associated with study abroad (Hadis, 2005). In addition to developing competencies in a given foreign language, researchers have found that participation in such programs can also develop interpersonal and intrapersonal skills. There is evidence that participation in study abroad contributes to persistence as well as to quicker progress toward graduation (Salisbury, Paulsen, & Pascarella, 2010). One other benefit of study abroad has to do with gains after college: Employers value the competencies and experiences gained abroad that can be applied toward a profession (Trooboff, Berg, & Rayman, 2008).
Culminating Senior Experience. Cuseo (1998) defines culminating senior experience as a learning opportunity with the goals of bringing integration and closure to the undergraduate experience; providing students with an opportunity to reflect on the meaning of their college experience; and facilitating student transition to postcollege life. Students who participate in a culminating senior experience are more likely to collaborate with other students, interact with faculty, perceive their institution as a place of learning and support, and engage in higher-order learning (Kuh, 2008; NSSE, 2011).
Culminating senior experiences encompass various activities, most notably capstone courses, senior projects or theses, and comprehensive examinations. Senior capstone courses are usually taught by one faculty member with the aim of instilling a deep understanding of a specific discipline, while often taking the student outside of the classroom (Henscheid, 2000). Typically, a senior’s capstone work will be displayed publicly, as part of a presentation, exhibit, or poster session (Bachand et al., 2006). For faculty, evaluating the work in a senior capstone course can be a comprehensive means to assess student learning at the department level (Bertheide, 2007).
Completing a thesis presents an (often first-time) opportunity for students to write intensively about a current issue in their field using an academic style (Reynolds & Thompson, 2011). In addition, writing a thesis allows students to receive one-on-one feedback from either a faculty member or peer. Coupled with a faculty-mentored research project, writing a thesis has been shown to increase student research abilities and critical thinking skills (Lopatto, 2003).
Final comprehensive exams can be traditional tests or may be graded group projects (Sims & Falkenberg, 2013). For example, at the Kelley School of Business at Indiana University (2014), students enroll in the Integrative Core (I-Core), a block of four classes that culminates in a final “rite of passage” case-study examination in which small groups of students solve a real-life case (para. 7). One of the distinctions of a comprehensive exam is that it is static; unlike many of the other manifestations of culminating senior experiences, comprehensive exams lack the self-guidance and personal direction represented in capstone projects or a student thesis (Schermer & Gray, 2012).
Reflections on NSSE
Measured against strict scholarly standards of theory construction, student engagement is untidy as a theoretical construct. It lacks precision and parsimony; encompasses behaviors, perceptions, and environmental factors; and merges related yet distinct theoretical traditions with a collection of research-informed good practices. But student engagement was not conceptualized to advance theory or even to generate testable propositions—although it can be used for those purposes. Rather, the focus on student engagement emerged from the concerns of practice: asserting a new definition of college quality sharply focused on teaching and learning while providing colleges and universities with measures of process and institutional environment that can inform the improvement of undergraduate education. Because student engagement was explicitly built on a solid foundation of research findings, it represents a signal example of bringing research to bear on pressing concerns of practice. Student engagement integrates what has been learned about quality of student effort, student involvement, and principles for good practice in undergraduate education into a broad framework for assessing quality and guiding its improvement. In this regard, it represents precisely what some leading scholars have argued has been lacking in higher education research (Altbach, 1998; Keller, 1985; Terenzini, 1996).
As a tool to inform institutional improvement initiatives, NSSE has from the outset documented how institutions use results to guide improvement efforts. The emphasis of student engagement on actual behavior and effective educational practice, rather than on abstract values or feelings of satisfaction, offers educators the ability to assess quality concretely and to do so in a way that focuses attention on a range of initiatives including accreditation self-studies, benchmarking and strategic planning, faculty and staff development, general education reform, retention efforts, state system performance reviews, and more.
In advocating assessment of the college environment, Pace (1980) and Astin (1991) sought to influence changes in institutional practice, and this purpose endures in the contemporary application of their ideas. When NSSE emerged in the early 21st century, the project sought to enrich the national discourse about college quality by shifting the conversation away from reputation, resources, and student preparation and toward students’ experience, specifically, the activities bearing on teaching and learning and empirically linked to desired outcomes. To foster this shift, the project asserted the practical aim of providing administrators and faculty with tools for examining the prevalence of effective educational practices on their campuses. The survey results provide participating institutions with actionable diagnostic information about student behaviors and institutional factors that can be influenced in practice and that an array of educators can address. The primary goal of NSSE, then, is to inform and foster improvement in undergraduate education.
References
Altbach, P. G. (1998). Research, policy, and administration in higher education: The Review at twenty. The Review of Higher Education, 21(3), 205–207.
Alwin, D. F., Cohen, R. L., & Newcomb, T. M. (1991). Political attitudes over the life span: The Bennington women after fifty years. Madison, WI: University of Wisconsin Press.
Ames, C., & Archer, J. (1988). Achievement goals in the classroom: Students’ learning strategies and motivation processes. Journal of Educational Psychology, 80(3), 260.
Association of American Colleges and Universities (AAC&U). (2002). Greater expectations: A new vision for learning as a nation goes to college (National Panel Report). Washington, DC: Author.
Astin, A. W. (1968). The college environment. Washington, DC: American Council on Education.
Astin, A. W. (1970). The methodology of research on college impact. Sociology of Education, 43, 223–254.
Astin, A. W. (1977). Four critical years: Effects of college on beliefs, attitudes, and knowledge. San Francisco, CA: Jossey-Bass.
Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25, 297–308.
Astin, A. W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. San Francisco, CA: Jossey-Bass.
Astin, A. W. (1993). What matters in college? Four critical years revisited. San Francisco, CA: Jossey-Bass.
Astin, A. W., & Sax, L. J. (1998). How undergraduates are affected by service participation. Journal of College Student Development, 39(3), 251–263.
Astin, A. W., Vogelgesang, L. J., Ikeda, E. K., & Yee, J. A. (2000). How service-learning affects students. Los Angeles, CA: Higher Education Research Institute.
Bachand, D. J., Huntley, D., Hedberg, M., Dorne, C., Boye-Beaman, J., & Thorns, M. (2006). A monitoring report to the Higher Learning Commission on program assessment, general education assessment, and diversity. Retrieved from www.svsu.edu/emplibrary/HLC%20Final%20Report%202006%20-Web.pdf
Batchelder, T. H., & Root, S. (1994). Effects of an undergraduate program to integrate academic learning and service: Cognitive, prosocial cognitive, and identity outcomes. Journal of Adolescence, 17, 341–355.
Baxter Magolda, M. B. (2006). Intellectual development in the college years. Change: The Magazine of Higher Learning, 38(3), 50–54.
Bean, J. P., & Eaton, S. (2000). A psychological model of college student retention. In J. Braxton (Ed.), Rethinking the departure puzzle: New theory and research on college student retention (pp. 48–61). Nashville, TN: Vanderbilt University Press.
Bertheide, C. (2007). Doing less work, collecting better data: Using capstone courses to assess learning. Peer Review, 9(2), 27–30.
Biggs, J. (1988). Approaches to learning and to essay writing. In R. R. Schmeck (Ed.), Learning strategies and learning styles (pp. 185–228). New York, NY: Springer.
Blaich, C., & Wise, K. (2008). Overview of findings from the first year of the Wabash National Study of Liberal Arts Education. Unpublished manuscript. Retrieved from www.liberalarts.wabash.edu
Bok, D. (2006). Our underachieving colleges: A candid look at how students learn and why they should be learning more. Princeton, NJ: Princeton University Press.
Bourner, T. (2003). Assessing reflective learning. Education Training, 45(5), 267–272.
Bowen, H. R. (1977). Investment in learning: The individual and social value of American higher education. San Francisco, CA: Jossey-Bass.
Bowman, N. A. (2010). College diversity experiences and cognitive development: A meta-analysis. Review of Educational Research, 80(1), 4–33.
Bowman, N. A. (2011). Promoting participation in a diverse democracy: A meta-analysis of college diversity experiences and civic engagement. Review of Educational Research, 81(1), 29–68.
Bowman, N. A. (2013). How much diversity is enough? The curvilinear relationship between college diversity interactions and first-year student outcomes. Research in Higher Education, 54(8), 874–894.
Boyer, E. L. (1987). College: The undergraduate experience in America. New York, NY: Harper & Row.
BrckaLorenz, A., Ribera, T., Kinzie, J., & Cole, E. (2012). Examining effective faculty practice: Teaching clarity and student engagement. To Improve the Academy, 31, 149–160.
Brooks, L., Cornelius, A., Greenfield, E., & Joseph, R. (1995). The relation of career-related work or internship experiences to the career development of college seniors. Journal of Vocational Behavior, 46, 332–349.
Cabrera, A. F., Crissman, J. L., Bernal, E. M., Nora, A., Terenzini, P. T., & Pascarella, E. T. (2002). Collaborative learning: Its impact on college students’ development and diversity. Journal of College Student Development, 43(1), 20–34.
Cabrera, A. F., Nora, A., Terenzini, P. T., Pascarella, E., & Hagedorn, L. S. (1999). Campus racial climate and the adjustment of students to college: A comparison between White students and African American students. The Journal of Higher Education, 70(2), 134–160.
Chapman, C., Ramondt, L., & Smiley, G. (2005). Strong community, deep learning: Exploring the link. Innovations in Education and Teaching International, 42(3).
Chesebro, J. L., & McCroskey, J. C. (2001). The relationship of teacher clarity and immediacy with student state receiver apprehension, affect, and cognitive learning. Communication Education, 50(1), 59–68.
Chiang, L. H. (1998, October). Enhancing metacognitive skills through learning contracts. Paper presented at the annual meeting of the Mid-Western Educational Research Association, Chicago, IL.
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, March, 3–7.
Cole, D. (2007). Do interracial interactions matter? An examination of student-faculty contact and intellectual self-concept. The Journal of Higher Education, 78(3), 249–281.
Corporation for National and Community Service. (2007). The impact of service-learning: A review of current research. Retrieved from www.nationalservice.gov
Council on International Educational Exchange. (2006). Our view: A research agenda for study abroad. Portland, ME: Author.
Council on Undergraduate Research. (n.d.). About CUR. Retrieved from http://www.cur.org/about.html
Cox, B. E., & Orehovec, E. (2007). Faculty-student interaction outside the classroom: A typology from a residential college. The Review of Higher Education, 30(4), 343–362.
Cruce, T. M., Wolniak, G. C., Seifert, T. A., & Pascarella, E. T. (2006). Impacts of good practices on cognitive development, learning orientations, and graduate degree plans during the first year of college. Journal of College Student Development, 47(4), 365–383.
Cuseo, J. B. (1998). Objectives and benefits of senior year programs. In J. N. Gardner, G. Van der Veer, & Associates (Eds.), The senior year experience: Facilitating integration, reflection, closure, and transition (pp. 21–36). San Francisco, CA: Jossey-Bass.
DeKonty Applegate, M., Benson Quinn, K., & Applegate, J. T. (1994). Using metacognitive strategies to enhance achievement for at-risk liberal arts college students. Journal of Reading, 38, 32–40.
Denson, N. (2009). Do curricular and co-curricular diversity activities influence racial bias? A meta-analysis. Review of Educational Research, 79(2), 805–838.
Denson, N., & Chang, M. J. (2009). Racial diversity matters: The impact of diversity-related student engagement and institutional context. American Educational Research Journal, 46(2), 322–353.
Dewey, J. (2004). Democracy and education. Mineola, NY: Dover Publications.
Dingman, S. W., & Madison, B. L. (2010). Quantitative reasoning in the contemporary world, 1: The course and its challenges. Numeracy, 3(2), Article 4.
Dingman, S. W., & Madison, B. L. (2011). Twenty-first-century quantitative education: Beyond content. Peer Review, 13(3), 15–18.
Dretzke, B. J., & Keniston, A. H. (1989). The relation between college students’ reading strategies, attitudes, and course performance. Chicago, IL: Midwestern Psychological Association.
Edgerton, R. (1997). Education white paper (unpublished manuscript). Retrieved from www.faculty.umb.edu/john_saltmarsh/resources/Edgerton%20Higher%20Education%20White%20Paper.rtf
Ehrenberg, R. (2002). Tuition rising: Why college costs so much. Cambridge, MA: Harvard University Press.
El-Hindi, A. E., & Childers, K. D. (1996, January). Exploring metacognitive awareness and perceived attributions for academic success and failure: A study of at-risk college students. Paper presented at the annual meeting of the Southwest Educational Research Association, New Orleans, LA.
Everson, H. T., & Tobias, S. (1998). The ability to estimate knowledge and performance in college: A metacognitive analysis. Instructional Science, 26, 65–79.
Everson, H. T., Tobias, S., & Laitusis, V. (1997, March). Do metacognitive skills and learning strategies transfer across domains? Paper presented at the annual convention of the American Educational Research Association, Chicago, IL.
Ewell, P. T. (2010). The U.S. National Survey of Student Engagement (NSSE). In D. D. Dill & M. Beerkens (Eds.), Public policy for academic quality: Analyses of innovative policy instruments. New York, NY: Springer.
Ewell, P. T., & Jones, D. P. (1993). Actions matter: The case for indirect measures in assessing higher education’s progress on the national educational goals. Journal of General Education, 42(2), 123–148.
Ewell, P. T., & Jones, D. P. (1996). Indicators of “good practice” in undergraduate education: A handbook for development and implementation. Boulder, CO: National Center for Higher Education Management Systems.
Eyler, J. S., Giles, D. E., Stenson, C. M., & Gray, C. J. (2001). At a glance: What we know about the effects of service-learning on college students, faculty, institutions, and communities, 1993–2000 (3rd ed.). Funded by the Corporation for National Service, Learn and Serve America National Service-Learning Clearinghouse.
Feldman, K., & Newcomb, T. (1969). The impact of college on students (Vols. 1–2). San Francisco, CA: Jossey-Bass.
Flowers, L. A., & Pascarella, E. T. (2003). Cognitive effects of college: Differences between African American and Caucasian students. Research in Higher Education, 44(1), 21–49.
Fosnacht, K., & Gonyea, R. M. (2012). The dependability of the NSSE 2012 pilot: A generalizability study. Paper presented at the annual meeting of the Association for Institutional Research, New Orleans, LA.
Gallik, J. D. (1999). Do they read for pleasure? Recreational reading habits of college students. Journal of Adolescent and Adult Literacy, 42, 480–488.
Gauder, H., Giglierano, J., & Schramm, C. H. (2007). Porch reads: Encouraging recreational reading among college students. College and Undergraduate Libraries, 14(2), 1–24.
Gault, J., Redington, J., & Schlager, T. (2000). Undergraduate business internships and career success: Are they related? Journal of Marketing Education, 22, 45–53.
Giles, D. E., & Eyler, J. (1994). The theoretical roots of service-learning in John Dewey: Toward a theory of service-learning. Michigan Journal of Community Service Learning, 1(1), 77–85.
Gillman, R. (Ed.). (2006). Current practices in quantitative literacy. Washington, DC: Mathematics Association of America.
Ginsberg, S. M. (2007). Teacher transparency: What students can see from faculty communication. Journal of Cognitive Affective Learning, 4(1), 13–24.
Gladwell, M. (2011, February 14). The order of things. The New Yorker, 87(1), 68–75.
Gonyea, R. M. (2008). The impact of study abroad on senior year engagement. Paper presented at the annual meeting of the Association for the Study of Higher Education, Jacksonville, FL.
Goodsell, A. S., Maher, M., & Tinto, V. (1992). Collaborative learning: A sourcebook for higher education (Vol. 1). University Park, PA: National Center on Postsecondary Teaching, Learning, and Assessment.
Graham, A., & Thompson, N. (2001). Broken ranks. Washington Monthly, 33, 9–13.
Gray, M. J., Ondaatje, E. H., & Zakaras, L. (1999). Combining service and learning in higher education: Summary report. Santa Monica, CA: RAND.
Greene, S. (1993). The role of task in the development of academic thinking through reading and writing in a college history course. Research in the Teaching of English, 27(1), 46–75.
Gurin, P., Dey, E. L., Hurtado, S., & Gurin, G. (2002). Diversity and higher education: Theory and impact on educational outcomes. Harvard Educational Review, 72(3), 330–367.
Hadis, B. F. (2005). Why are they better students when they come back? Determinants of academic focusing gains in the study abroad experience. Frontiers: The Interdisciplinary Journal of Study Abroad, 11, 57–70.
Hall, C. W. (2001). A measure of executive processing skills in college students. College Student Journal, 35, 442–451.
Hathaway, R. S., Nagda, B., & Gregerman, S. (2002). The relationship of undergraduate research participation to graduate and professional education pursuit: An empirical study. Journal of College Student Development, 43(5), 614–631.
Hativa, N. (1998). Lack of clarity in university teaching: A case study. Higher Education, 36(3), 353–381.
Hativa, N., Barak, R., & Simhi, E. (2001). Exemplary university teachers: Knowledge and beliefs regarding effective teaching dimensions and strategies. The Journal of Higher Education, 72(6), 699–729.
Healy, M., & Jenkins, A. (2009). Developing undergraduate research and inquiry. York, UK: HE Academy. Retrieved from www.heacademy.ac.uk/assets/documents/resources/publications/DevelopingUndergraduate_Final.pdf
Heaney, A., & Fisher, R. (2011). Supporting conditionally admitted students: A case study of assessing persistence in a learning community. Journal of the Scholarship of Teaching and Learning, 11(1).
Henry, N. (1979, May–June). Are internships worthwhile? Public Administration Review, 245–247.
Henscheid, J. M. (2000). Professing the disciplines: An analysis of senior seminars and capstone courses (Vol. 30). Columbia, SC: National Resource Center for the First-Year Experience and Students in Transition, University of South Carolina.
Hotchkiss, J. L., Moore, R. E., & Pitts, M. M. (2006). Freshman learning communities, college performance, and retention. Education Economics, 14(2).
Howles, T. (2009). A study of attrition and the use of student learning communities in the computer science introductory programming sequence. Computer Science Education, 19(1).
Huber, M. T., & Hutchings, P. (2004). Integrative learning: Mapping the terrain. Washington, DC: Association of American Colleges and Universities.
Ibabe, I., & Jauregizar, J. (2010). Online self-assessment with feedback and metacognitive knowledge. Higher Education, 59, 243–258.
Indiana University. (2014). Curriculum. Bloomington, IN: Kelley School of Business. Retrieved from https://kelley.iu.edu/Ugrad/Academics/Curriculum/page39062.html
Isaacson, R. M., & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning: Academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning, 6, 39–55.
Janusik, L. A., & Wolvin, A. D. (2007). The communication research team as learning community. Education, 128(2).
Kane, S. T., Healy, C. C., & Henson, J. (1992). College students and their part-time jobs: Job congruency, satisfaction, and quality. Journal of Employment Counseling, 29, 138–144.
Kardash, C. M. (2000). Evaluation of an undergraduate research experience: Perceptions of undergraduate interns and their faculty mentors. Journal of Educational Psychology, 92(1), 191–201.
Keller, G. (1985). Trees without fruit: The problem with research about higher education. Change: The Magazine of Higher Learning, 17(1), 7–10.
Kim, Y. K., & Sax, L. J. (2009). Student-faculty interaction in research universities: Differences by student gender, race, social class, and first-generation status. Research in Higher Education, 50(5), 437–459.
Kinkead, J. (2003). Learning through inquiry: An overview of undergraduate research. New Directions for Teaching and Learning, 2003(93), 5–17.
Kitsantas, A. (2004). Studying abroad: The role of college students’ goals on the development of cross-cultural skills and global understanding. College Student Journal, 38(3), 441.
Knouse, K., Tanner, J. T., & Harris, E. W. (1999). The relation of college internships, college performance, and subsequent job opportunity. The Journal of Employment Counseling, 36(1), 35–43.
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212–218.
Kuh, G. D. (n.d.). The National Survey of Student Engagement: Conceptual framework and overview of psychometric properties. Center for Postsecondary Research, Indiana University, Bloomington. Retrieved from http://nsse.iub.edu/pdf/conceptual_framework_2003.pdf
Kuh, G. D. (1993). In their own words: What students learn outside the classroom. American Educational Research Journal, 30(2), 277–304.
Kuh, G. D. (2001). Assessing what really matters to student learning: Inside the National Survey of Student Engagement. Change: The Magazine of Higher Learning, 33(3), 10–17.
Kuh, G. D. (2003). What we’re learning about student engagement from NSSE. Change: The Magazine of Higher Learning, 35(2), 24–32.
Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: Association of American Colleges and Universities.
Kuh, G. D. (2009). The National Survey of Student Engagement: Conceptual and empirical foundations. In R. M. Gonyea & G. D. Kuh (Eds.), Using NSSE in institutional research. New Directions for Institutional Research, 2009(141), 5–20.
Kuh, G. D., & Hall, J. E. (1993). Cultural perspectives in student affairs work. In G. D. Kuh (Ed.), Cultural perspectives in student affairs work (pp. 1–20). Washington, DC: American College Personnel Association.
Kuh, G. D., & Hu, S. (2001). The effects of student-faculty interaction in the 1990s. The Review of Higher Education, 24(3), 309–332.
Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (2006). What matters to student success: A review of the literature (Report commissioned for the National Symposium on Postsecondary Student Success: Spearheading a Dialog on Student Success). Washington, DC: National Postsecondary Education Cooperative.
Kuh, G. D., Kinzie, J., Schuh, J. H., & Whitt, E. J. (2005). Assessing conditions to enhance educational effectiveness: The inventory for student engagement and success. San Francisco, CA: Jossey-Bass.
Kuh, G. D., Kinzie, J., Schuh, J. H., Whitt, E. J., & Associates. (2010). Student success in college: Creating conditions that matter. San Francisco, CA: Jossey-Bass.
Kuh, G. D., Pace, C. R., & Vesper, N. (1997). The development of process indicators to estimate student gains associated with good practices in undergraduate education. Research in Higher Education, 38(4), 435–454.
Kuh, G. D., Schuh, J. H., Whitt, E. J., Andreas, R., Lyons, J., Strange, C. C., Krehbiel, L. E., & MacKay, K. A. (1991). Involving colleges: Successful approaches to fostering student learning and development outside the classroom. San Francisco, CA: Jossey-Bass.
Kutner, M., Greenburg, E., Jin, Y., Boyle B., Hsu, Y., & Dunleavy, E. (2007). Literacy in everyday life: Results from the 2003 National Assessment of Adult Literacy (NCES 2007-480). Washington, DC: U.S. Department of Education, National Center for Education Statistics.
Lambert, A. D., Rocconi, L. M., Ribera, A. K., Miller, A. L., & Dong, Y. (2012, June). Faculty lend a helping hand to student success: Measuring student-faculty interactions. Paper presented at the annual meeting of the Association for Institutional Research, New Orleans, LA.
LaNasa, S. M., Cabrera, A. F., & Transgrud, H. (2009). The construct validity of student engagement: A confirmatory factor analysis approach. Research in Higher Education, 50, 315–332.
Landrum, R. E., Gurung, R. A. R., & Spann, N. (2012). Assessments of textbook usage and the relationships to student course performance. College Teaching, 60, 17–24.
Lewis, A., & Smith, D. (1993). Defining higher order thinking. Theory into Practice, 32(3), 131–137.
Light, R. (2001). Making the most of college: Students speak their minds. Boston, MA: Harvard University Press.
Loes, C., Pascarella, E., & Umbach, P. (2012). Effects of diversity experiences on critical thinking skills: Who benefits? Journal of Higher Education, 83(1), 1–25.
Lopatto, D. (2003). The essential features of undergraduate research. Council on Undergraduate Research Quarterly, 24, 139–142.
Lopatto, D. (2004). Survey of undergraduate research experiences (SURE): First findings. Cell Biology Education, 3(4), 270–277.
Machung, A. (1998). Playing the rankings game. Change: The Magazine of Higher Learning, 30(4), 12–16.
Madison, B. L. (2009). All the more reason for QR across the curriculum. Numeracy, 2(1), Article 1.
Markus, G., Howard, J., & King, D. (1993). Integrating community service and classroom instruction enhances learning: Results from an experiment. Educational Evaluation and Policy Analysis, 15(4), 410–419.
Marton, F. (1983). Beyond individual differences. Educational Psychology, 3, 289–303.
Marton, F., & Säljö, R. (1976a). On qualitative differences in learning. I: Outcome and process. British Journal of Educational Psychology, 46, 4–11.
Marton, F., & Säljö, R. (1976b). On qualitative differences in learning. II: Outcome as a function of the learner’s conception of the task. British Journal of Educational Psychology, 46, 115–127.
Marton, F., & Säljö, R. (1984). Approaches to learning. In F. Marton, D. J. Hounsell, & N. J. Entwistle (Eds.), The experience of learning (pp. 39–58). Edinburgh, UK: Scottish Academic.
Maskooki, K., Rama, D. V., & Raghunandan, K. (1998). Internships in undergraduate finance programs. Financial Practice and Education, 8, 74–82.
McCormick, A. C., Gonyea, R. M., & Kinzie, J. (2013). Refreshing engagement: NSSE at 13. Change: The Magazine of Higher Learning, 45(3), 6–15.
McCormick, A. C., Kinzie, J., & Gonyea, R. M. (2013). Student engagement: Bridging research and practice to improve the quality of undergraduate education. In M. B. Paulsen (Ed.), Higher education: Handbook of theory and research (Vol. 28, pp. 47–92). Dordrecht, The Netherlands: Springer.
McCormick, A. C., & McClenney, K. (2012). Will these trees ever bear fruit? A response to the special issue on student engagement. The Review of Higher Education, 35(2), 307–333.
McDonough, P. M., Antonio, A. L., Walpole, M., & Perez, L. X. (1998). College rankings: Democratized knowledge for whom? Research in Higher Education, 39(5), 513–537.
McKeachie, W. J., Pintrich, P. R., & Lin, Y. G. (1985). Teaching learning strategies. Educational Psychologist, 20(3), 153–160.
Merwin, J. C. (1969). Historical review of changing concepts of evaluation. In R. W. Tyler (Ed.), Educational evaluation: New roles, new means (The 68th Yearbook of the National Society for the Study of Education, Part II). Chicago, IL: University of Chicago Press.
Mokhtari, K., Reichard, C. A., & Gardner, A. (2009). The impact of internet and television use on the reading habits and practices of college students. Journal of Adolescent and Adult Literacy, 52(7), 609–619.
National Center for Higher Education Management Systems (NCHEMS). (1994). A preliminary study of the feasibility and utility for national policy of instructional “good practice” indicators in undergraduate education. Washington, DC: U.S. Department of Education, National Center for Education Statistics.
National Endowment for the Arts. (2007). To read or not to read: A question of national consequence (Research Report #47). Washington, DC: Author.
National Survey of Student Engagement. (2000). The NSSE 2000 report: National benchmarks of effective educational practice. Bloomington, IN: Indiana University Center for Postsecondary Research.
National Survey of Student Engagement. (2009). Validity: 2009 known-groups validation (NSSE Psychometric Portfolio Report). Bloomington, IN: Indiana University Center for Postsecondary Research.
National Survey of Student Engagement. (2011). Fostering student engagement campuswide—Annual results 2011. Bloomington, IN: Indiana University Center for Postsecondary Research.
National Survey of Student Engagement. (2013). A fresh look at student engagement—Annual results 2013. Bloomington, IN: Indiana University Center for Postsecondary Research.
Nelson Laird, T. (2005). College students’ experiences with diversity and their effects on academic self-confidence, social agency, and disposition toward critical thinking. Research in Higher Education, 46(4), 365–387.
Nelson Laird, T., Shoup, R., & Kuh, G. D. (2005a). Deep learning and college outcomes: Do fields of study differ? Presentation at the annual forum of the Association for Institutional Research, Chicago, IL.
Nelson Laird, T., Shoup, R., & Kuh, G. D. (2005b). Measuring deep approaches to learning using the National Survey of Student Engagement. Presentation at the annual forum of the Association for Institutional Research, Chicago, IL.
Newcomb, T. M. (1943). Personality and social change: Attitude formation in a student community. New York, NY: Dryden Press.
Newcomb, T. M., Koenig, K. E., Flacks, R., & Warwick, D. P. (1967). Persistence and change: Bennington College and its students after 25 years. New York, NY: Wiley.
Norris, E., & Steinberg, M. (2008). Does language matter? The impact of language of instruction on study abroad outcomes. Frontiers: The Interdisciplinary Journal of Study Abroad, 17, 107–131.
Olmsted, J., & Ruediger, S. (2013). Using reflection papers in principles of macroeconomics classes. Journal of Economics and Economic Education Research, 14(1), 85–96.
Organization for Economic Cooperation and Development (OECD). (2013). OECD skills outlook 2013: First results from the survey of adult skills. Paris, France: OECD Publishing.
Ormrod, J. E. (2011). Human learning (6th ed.). Upper Saddle River, NJ: Pearson.
Pace, C. R. (1964). The influence of academic and student subcultures in college and university environments. Los Angeles, CA: University of California Los Angeles.
Pace, C. R. (1969). An evaluation of higher education: Plans and perspectives. Journal of Higher Education, 40(9), 673–681.
Pace, C. R. (1979). Measuring outcomes of college: Fifty years of findings and recommendations for the future. San Francisco, CA: Jossey-Bass.
Pace, C. R. (1980). Measuring the quality of student effort. Current Issues in Higher Education, 2, 10–16.
Pace, C. R. (1982). Achievement and the quality of student effort. Washington, DC: National Commission on Excellence in Education.
Pace, C. R. (1984). Student effort: A new key to assessing quality (Report No. 1). Los Angeles, CA: University of California, Higher Education Research Institute.
Pace, C. R. (1998). Recollections and reflections. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 13, pp. 1–34). New York, NY: Agathon.
Pascarella, E. T. (1985). College environmental influences on learning and cognitive development: A critical review and synthesis. In J. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 1, pp. 1–64). New York, NY: Agathon.
Pascarella, E. T. (2006). How college affects students: Ten directions for future research. Journal of College Student Development, 47(5), 508–520.
Pascarella, E. T., & Terenzini, P. T. (1977). Patterns of student-faculty informal interaction beyond the classroom and voluntary freshman attrition. Journal of Higher Education, 48(5), 540–552.
Pascarella, E. T., & Terenzini, P. T. (1991). How college affects students: Vol. 1. Findings and insights from twenty years of research. San Francisco, CA: Jossey-Bass.
Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students: Vol. 2. A third decade of research. San Francisco, CA: Jossey-Bass.
Pedro, J. D. (1984). Induction into the workplace: The impact of internships. Journal of Vocational Behavior, 25, 80–95.
Pike, G. (1991). The effects of background, coursework, and involvement on students’ grades and satisfaction. Research in Higher Education, 32(1), 16–30.
Pike, G. R. (2006a). The convergent and discriminant validity of NSSE scalelet scores. Journal of College Student Development, 47(5), 551–564.
Pike, G. R. (2006b). The dependability of NSSE scalelets for college- and department-level assessment. Research in Higher Education, 47(2), 177–195.
Raymond, M. A., McNabb, D. E., & Matthaei, C. F. (1993). Preparing graduates for the workforce: The role of business education. Journal of Education for Business, 68, 202–206.
Reynolds, J. A., & Thompson, R. J. (2011). Want to improve undergraduate thesis writing? Engage students and their faculty readers in scientific peer review. CBE-Life Sciences Education, 10(2), 209–215.
Richards, E. W. (1984). Undergraduate preparation and early career outcomes: A study of recent college graduates. Journal of Vocational Behavior, 24, 279–304.
Rivera-Batiz, F. L. (1992). Quantitative literacy and the likelihood of employment among young adults in the United States. The Journal of Human Resources, 27(2), 313–328.
Rosinski, P., & Peeples, T. (2012). Forging rhetorical subjects: Problem-based learning in the writing classroom. Composition Studies, 40(2), 9–32.
Salisbury, M. H., Paulsen, M. B., & Pascarella, E. T. (2010). To see the world or stay at home: Applying an integrated student choice model to explore the gender gap in the intent to study abroad. Research in Higher Education, 51(7), 615–640.
Salisbury, M. H., Umbach, P. D., Paulsen, M. B., & Pascarella, E. T. (2009). Going global: Understanding the choice process of the intent to study abroad. Research in Higher Education, 50(2), 119–143.
Schermer, T., & Gray, S. (2012). The senior capstone: Transformative experiences in the liberal arts (Final report to The Teagle Foundation). Retrieved from http://www.teaglefoundation.org/Teagle/media/GlobalMediaLibrary/documents/resources/The_Senior_Capstone.pdf?ext=.pdf
Schraw, G. (1998). Promoting general metacognitive awareness. Instructional Science, 26, 113–125.
Seymour, E., Hunter, A., Laursen, S., & DeAntoni, T. (2003). Establishing the benefits of research experiences for undergraduates: First findings from a three-year study. Science Education, 88, 493–534.
Shavelson, R. J. (2008). Reflections on quantitative reasoning: An assessment perspective. In B. L. Madison & L. A. Steen (Eds.), Calculation vs. context: Quantitative literacy and its implications for teacher education. Racine, WI: Mathematical Association of America.
Simonds, C. J. (1995). Classroom understanding: An expanded notion of teacher clarity. Communication Research Reports, 14(3), 279–290.
Sims, L., & Falkenberg, T. (2013). Developing competencies for education for sustainable development: A case study of Canadian faculties of education. International Journal of Higher Education, 2(4).
Stebleton, M., & Nownes, N. (2011). Writing and the world of work: An integrative learning community model at a two-year institution. Journal of College Reading and Learning, 41(2).
Steen, L. A. (Ed.). (1997). Why numbers count: Quantitative literacy for tomorrow’s America. New York, NY: College Entrance Examination Board.
Steen, L. A. (Ed.). (2001). Mathematics and democracy: The case for quantitative literacy. Princeton, NJ: Woodrow Wilson National Fellowship Foundation.
Study Group on the Conditions of Excellence in American Higher Education. (1984). Involvement in learning: Realizing the potential of American higher education (Final report of the Study Group on the Conditions of Excellence in American Higher Education). Washington, DC: National Institute of Education.
Taylor, C. (2008). Preparing students for the business of the real (and highly quantitative) world. In B. L. Madison & L. A. Steen (Eds.), Calculation vs. context: Quantitative literacy and its implications for teacher education. Racine, WI: Mathematical Association of America.
Taylor, S. M. (1988). Effects of college internships on individual participants. Journal of Applied Psychology, 73, 393–401.
Terenzini, P. T. (1996). Rediscovering roots: Public policy and higher education research. The Review of Higher Education, 20(1), 5–13.
Terenzini, P. T., & Pascarella, E. T. (1980). Student/faculty relationships and freshman year educational outcomes: A further investigation. Journal of College Student Personnel, 21, 521–528.
Thacker, L. (2008). Pulling rank. New England Journal of Higher Education, 22(4), 15–16.
Thompson, N. (2000, September). Playing with numbers: How U.S. News mismeasures higher education and what we can do about it. Washington Monthly, 32(9), 16–23.
Tinto, V. (1975). Dropouts from higher education: A theoretical synthesis of the recent literature. Review of Educational Research, 45, 89–125.
Tinto, V. (1986). Theories of student departure revisited. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 2, pp. 359–384). New York, NY: Agathon.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago, IL: University of Chicago Press.
Trooboff, S., Vande Berg, M., & Rayman, J. (2008). Employer attitudes toward study abroad. Frontiers: The Interdisciplinary Journal of Study Abroad, 15, 17–33.
Umbach, P. D., & Wawrzynski, M. R. (2005). Faculty do matter: The role of college faculty in student learning and engagement. Research in Higher Education, 46(2), 153–184.
Vande Berg, M., Connor-Linton, J., & Paige, R. (2009). The Georgetown Consortium Project: Interventions for student learning abroad. Frontiers: The Interdisciplinary Journal of Study Abroad, 18, 1–75.
Vermetten, Y. J., Lodewijks, H. G., & Vermunt, J. D. (1999). Consistency and variability of learning strategies in different university courses. Higher Education, 37(1), 1–21.
Vermunt, J. D. (1996). Metacognitive, cognitive and affective aspects of learning styles and strategies: A phenomenographic analysis. Higher Education, 31(1), 25–50.
Volkwein, J. (1991, November). Improved measures of academic and social integration and their association with measures of student growth. Paper presented at the meeting of the Association for the Study of Higher Education, Boston, MA.
Volkwein, J., & Carbone, D. (1994). The impact of departmental research and teaching climates on undergraduate growth and satisfaction. Journal of Higher Education, 65(2), 147–167.
Vygotsky, L. S. (1962). Thought and language. Cambridge, MA: MIT Press.
Welsh, J. F., Alexander, S., & Dey, S. (2001). Continuous quality measurement: Restructuring assessment for a new technological and organizational environment. Assessment and Evaluation in Higher Education, 26(5), 391–401.
Whitt, E. J., Edison, M., Pascarella, E. T., Nora, A., & Terenzini, P. T. (1999). Interactions with peers and objective and self-reported cognitive outcomes across three years of college. Journal of College Student Development, 40(1), 61–78.
Wilkins, J. L. M. (2000). Preparing for the 21st century: The status of quantitative literacy in the United States. School Science and Mathematics, 100(8), 406–418.
Wilkins, J. L. M. (2010). Modeling quantitative literacy. Educational and Psychological Measurement, 70(2), 1–24.
Williams, T. (2009). The reflective model of intercultural competency: A multidimensional, qualitative approach to study abroad assessment. Frontiers: The Interdisciplinary Journal of Study Abroad, 18, 289–306.
Young, A., & Fry, J. (2008). Metacognitive awareness and academic achievement in college students. Journal of the Scholarship of Teaching and Learning, 8(2), 1–10.
Zeegers, P. (2004). Student learning in higher education: A path analysis of academic achievement in science. Higher Education Research and Development, 23(1), 35–56.
Appendix A
NSSE 2.0 Development Process Details
The development process for the NSSE 2.0 instrument, first administered in 2013, was officially launched in 2009 with the establishment of five internal working groups, each with a specific task: 1) benchmark review, 2) new concept exploration, 3) individual survey item review, 4) institutional user feedback collection, and 5) first-year- and senior-specific item development. NSSE leadership asked these groups to develop recommendations that would substantively improve the core instrument while leaving many existing items relatively unchanged, so as to preserve reliable longitudinal analyses. Working groups also understood that participating institutions would eventually be able to append additional item sets (NSSE’s Topical Modules) on yet-to-be-identified areas to the core survey. Although all working groups began meeting in 2009, initial efforts focused primarily on reviewing the quality of the existing instrument against key criteria. A consensus emerged that the development process should ensure that all NSSE items 1) are appropriate for all types of students; 2) reflect the current higher education landscape; 3) have strong validity and reliability properties; 4) are valued by colleges and universities; 5) are actionable by institutions; 6) have good response variation (see the illustrative sketch following this paragraph); 7) have effective response options; and 8) have potential use in future scales. These criteria informed the development of the first and second NSSE 2.0 pilot instruments as well as the administration of experimental item sets used to psychometrically test existing questions and new content areas (e.g., quantitative reasoning, teaching clarity). Two years of development work ensued, encompassing quantitative and qualitative item testing (the latter through cognitive interviews and focus groups) and consultation with external experts, before NSSE administered its first pilot instrument in spring 2011.
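To make the response-variation criterion concrete, here is a minimal sketch of how such a screen could be run on pilot data. It is illustrative only, not NSSE’s actual procedure; the data frame, item names, flagging threshold, and the 1–4 response coding are all assumptions.

```python
# Illustrative screen for the "good response variation" criterion.
# Assumes a pandas DataFrame of item responses coded 1 ("Never") to 4 ("Very often").
import pandas as pd

def screen_response_variation(items: pd.DataFrame, max_modal_share: float = 0.80) -> pd.DataFrame:
    """Flag items whose most common response dominates the distribution."""
    rows = []
    for col in items.columns:
        shares = items[col].value_counts(normalize=True, dropna=True)
        rows.append({
            "item": col,
            "modal_share": shares.max(),   # proportion choosing the most common option
            "sd": items[col].std(),        # spread across the response codes
            "low_variation": shares.max() > max_modal_share,
        })
    return pd.DataFrame(rows).sort_values("modal_share", ascending=False)

# Hypothetical usage on a pilot file:
# report = screen_response_variation(pilot[["item01", "item02", "item03"]])
```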
Nineteen colleges and universities participated in the first pilot administration, yielding more than 21,000 first-year and senior respondents and an average institutional response rate of 32%. The resulting data were analyzed for two purposes: to examine the quality of the new and modified items and to explore new groupings of items as potential benchmarks or scales. Individual item properties were evaluated and compared with statistics from the same or similar questions on the standard NSSE and from other experimental questions, and analysts compared each institution’s pilot results with its most recent standard NSSE results. Questions were also tested through cognitive interviews with nearly 40 students at seven campuses, focus groups with students at five campuses, phone interviews with students about specific questions, and write-in responses from students taking the pilot. Recommendations to reword or remove items were based on these item-level analyses as well as on feedback from outside sources, such as institutional users and advisory board members.
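The following sketch illustrates the kind of item-level comparison described above: descriptive statistics for pilot items set against the same items on the standard survey. The frame names, the shared response coding, and the particular statistics shown are assumptions, not a reproduction of NSSE’s analysis.

```python
# Illustrative pilot-versus-standard comparison of item descriptives.
import pandas as pd

def compare_item_stats(pilot: pd.DataFrame, standard: pd.DataFrame) -> pd.DataFrame:
    """Side-by-side descriptives for items asked on both administrations."""
    shared = [c for c in pilot.columns if c in standard.columns]
    out = pd.DataFrame({
        "pilot_mean": pilot[shared].mean(),
        "standard_mean": standard[shared].mean(),
        "pilot_sd": pilot[shared].std(),
        "standard_sd": standard[shared].std(),
        "pilot_missing": pilot[shared].isna().mean(),  # per-item nonresponse share
    })
    out["mean_diff"] = out["pilot_mean"] - out["standard_mean"]
    return out

# Hypothetical usage:
# summary = compare_item_stats(pilot_2011, standard_2010)
```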
The 2012 pilot instrument extended many of the changes made for the 2011 pilot, with adjustments based on quantitative analyses and on qualitative results from cognitive testing and focus groups, and it tested several new questions in areas such as learning strategies and interactions with faculty. Responses were collected from more than 50,000 students at 55 institutions, with an average response rate of 28%. These pilot data were likewise analyzed to examine the quality of new and modified items individually and to explore the properties of multi-item measures of engagement for institutional reporting. In this final round of survey testing, staff collected qualitative information from 12 campuses through 120 individual cognitive interviews, 10 focus groups (79 students in total), phone interviews, and pilot write-in responses, and gathered feedback from outside sources such as institutional users, assessment experts, and national advisory board members. These results informed decisions about survey question wording and led the team to edit, reframe, or delete items based on students’ interpretations of item clarity and validity.
Appendix B
Engagement Indicator Development Process
The NSSE 2.0 instrument’s ten major scales, called Engagement Indicators, replaced NSSE’s original five benchmark measures and resulted from extensive quantitative and qualitative psychometric testing after both pilot administrations. Unlike the original benchmarks, the Engagement Indicators were designed to be unidimensional constructs. Testing drew on the following methodologies: exploratory factor analysis, confirmatory factor analysis, known-groups validity, internal consistency reliability, generalizability theory, concurrent and predictive validity, cognitive interviews, focus groups, and item response theory. The process was also informed by reviews of individual item descriptive statistics and by analyses confirming that the measures performed well for students taking their courses online. For more detailed psychometric studies related to the Engagement Indicators, see NSSE’s Psychometric Portfolio (e.g., construct validity, predictive validity, cognitive interviews/focus groups, reliability).
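As an illustration of two of the methods named above, the sketch below computes internal consistency reliability (Cronbach’s alpha) and fits a one-factor exploratory model as a rough unidimensionality check. This is a generic example, not NSSE’s code; the item set, the coding, and the use of scikit-learn are assumptions.

```python
# Illustrative reliability and unidimensionality checks for a candidate scale.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items assumed to form one scale."""
    complete = items.dropna()                        # listwise deletion for simplicity
    k = complete.shape[1]
    item_variances = complete.var(axis=0, ddof=1)
    total_variance = complete.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def one_factor_loadings(items: pd.DataFrame):
    """Loadings from a one-factor model; all should be substantial for a unidimensional scale."""
    fa = FactorAnalysis(n_components=1, random_state=0)
    fa.fit(items.dropna())
    return fa.components_.T                          # one loading per item

# Hypothetical usage on the items proposed for one Engagement Indicator:
# cols = ["item01", "item02", "item03", "item04"]
# alpha = cronbach_alpha(responses[cols])
# loads = one_factor_loadings(responses[cols])
```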