
Standard 2: Assessment System and Unit Evaluation


Assessment System

The Unit’s Assessment System, developed with input from the professional community, was initially constructed as a “plan” in 2000 based on the conceptual framework and professional and state standards.  The system was built in four phases:

  • Phase one involved the development of transition points to monitor candidate performance as candidates moved through their programs of study.
  • Phase two involved the development of a comprehensive and integrated set of evaluation measures at the program and unit levels to monitor candidate performance as well as to manage and improve unit and program operations.
  • Phase three involved the creation of a system of data collection, analysis, and dissemination for the purpose of improving candidate performance, strengthening program delivery, and guiding decision-making at all levels.
  • Phase four involved the analysis of sources of bias in assessment procedures as well as work to ensure ongoing consistency, accuracy, and fairness in data collection and analysis.

The unit has worked diligently to create a strong foundation for the assessment system.  Over the years, the unit has matured and become more sophisticated in understanding and determining what data are needed and valued for ongoing improvement.  We have pockets of excellence within the assessment system as well as areas where development continues.  The unit recognizes the complexity of assessment at the candidate, program, and unit levels and understands that a robust assessment system at an institution this large will take years to plan, develop, implement, and refine.  When the institution recently undertook its 2007 self-study for regional accreditation with the Higher Learning Commission of the North Central Association, the College of Education played a major role in the assessment and leadership efforts.

As called for in the 2000 Assessment Plan, a standing Assessment Committee was constituted to oversee assessment within the unit and to coordinate assessment practices among programs.  The Director of Assessment and Accreditation, a full-time position in the unit, brings proposed policy and practice changes to the Assessment Committee.  These changes may be initiated at the program level, at the department level, or by committees such as the Teacher Education Council, which consists of faculty from the Arts and Sciences as well as Professional Education.  Because the Assessment Committee is representative of initial and advanced programs within the departmental structure, feedback is obtained from all faculty and staff members within the unit.  Once proposed changes are passed by the Assessment Committee, they are brought to the NCATE Steering Committee.  From there, the Dean consults with the Dean’s Advisory Committee (made up of chairs and center directors) for final discussion prior to implementation.  In this way, all changes in the assessment process are thoroughly reviewed and evaluated by departmental representatives and chairs.  The Unit Operations Survey also serves as a formal structure for obtaining ongoing data from candidates on the Unit Assessment System.

The Director of Assessment and Accreditation also oversees the efforts of graduate students assigned to data-collection tasks, assists with the writing of assessment reports, and is responsible for disseminating data to the unit.  In 2006, a process was developed through which department and program representatives can provide feedback to the Director of Assessment and Accreditation and the Assessment Committee on how data are used or to identify gaps in data collection.  The unit has provided ample resources for assessment, including full reassigned time for the Assessment Director, 25 percent reassigned time for graduate coordinators to focus on assessment and related activities, and reassigned time for a faculty member to analyze and report follow-up data.  In addition, a data architect and two to four graduate assistantships per semester have been allocated to assessment and to the development and refinement of the unit data system.

Unit Coherence

The assessment system was established based on the unit’s conceptual framework as well as state and professional standards.  Coherence is demonstrated through an extensive alignment system beginning with course outcomes and ending with performance assessments, employer surveys, and follow-up studies with candidates and cooperating teachers.  A review of the syllabi illustrates how course outcomes and assessments are aligned with the conceptual framework and with professional and state standards.  Follow-up studies, employer surveys, cooperating teacher surveys, and performance assessments are also aligned with the conceptual framework and use the INTASC Principles, NBPTS Propositions, CACREP Standards, or other appropriate professional standards as the foundation for assessment.

Key Assessments

The Key Assessments Inventory provides a blueprint of the key assessments employed, the level of analysis (unit or program), and a timetable outlining when assessments are administered and data are disseminated.  Programs differ in their approach to assessment; however, key assessments are used across programs to provide unit data.  When possible, data are disseminated at the unit level but disaggregated to provide data at the program level (a minimal sketch of this disaggregation follows the list below).  Unit data are reported on the College of Education website for access by candidates, school partners, and other public constituents.  To avoid unhealthy comparisons, program data are disseminated in hard copy at the program level.

  • Scores on the Praxis I Pre-Professional Skills Test (PPST) are collected for each prospective candidate prior to admission to programs within the unit.  Data on program completers are accumulated and disseminated annually.  Data from the Praxis II – Principles of Learning and Teaching (PLT) and Praxis II – Content Tests are also collected, analyzed, and disseminated annually within the unit.
  • Over the past five years, the unit has collected data using three versions of a performance-based instrument employed as a summative assessment in student teaching.  The original version of the instrument addressed nine of the 10 INTASC Principles.  A newer version, instituted in 2002, includes all 10 INTASC Principles as items.  Finally, because reliability and validity analyses of that instrument suggested a ceiling effect, a new 11-item performance-based instrument was developed; it has been employed since 2005.  Both cooperating teachers and university supervisors complete the performance-based instrument on candidates during student teaching, allowing a comparative analysis.  Performance-based data are collected each semester and disseminated in the fall (even years).
  • Programs at the advanced level or for other school professionals have identified major assessments within their transition points.  Because programs at the graduate level differ substantially from one another, data are not aggregated at the unit level.  All programs conduct systematic assessments of content knowledge and of performance in field or clinical experiences when appropriate.  In addition, all programs conduct follow-up studies, and most graduate programs have advisory boards that provide informative feedback on candidate performance.
  • A Self-Report Instrument, organized around the INTASC Principles, is collected every semester as teacher candidates complete their student teaching experience.  Originally the instrument was administered as a mail survey; however, over the past four semesters, it has been collected at a required professional development day (targeting student teachers and graduate interns), ensuring close to 100 percent participation.  The data are collected, entered into the data system, and analyzed annually; however, dissemination occurs every other year (odd years).  Data are also disaggregated for programs with 10 or more completers.  In addition, candidates are mailed the self-report instrument at two- and five-year intervals as a long-term follow-up.
  • A Cooperating Teacher Questionnaire, also designed to sample INTASC knowledge, skills, and dispositions, is distributed each semester and analyzed annually.  The return rate is typically quite high and the data collected provide important feedback to the unit.  These data are disseminated at the unit and program levels every other academic year (odd years).
  • Two versions of an Employer Survey have been used over the past five years.  One survey was developed based on the INTASC Principles; the other was based on the conceptual framework.  Data from employer surveys are collected in the spring semester and disseminated in the fall semester (odd years).
  • In the past year, a Unit Operations Survey was developed and piloted twice.  This tool is designed to assess how effectively unit activities support the acquisition of desired knowledge, skills, and dispositions within our learning environment, programs, support offices, field and clinical experiences, and other unit operations.  As a result of the successful pilot process, data on unit operations will be collected, analyzed, and disseminated in the fall (odd years).
  • During the past two annual cycles (starting in 2006), the Office of Clinical Experiences has organized data on two aspects of candidate performance.  First, we have looked systematically at success rates in student teaching and graduate capstone practica (2004-2007).  Reasons for leaving student teaching have been examined with an eye toward identifying outcome issues (e.g., problems with discipline) and process issues (e.g., advisement problems) within transition points that may affect success in capstone experiences.  A second, more recent effort is more properly tied to Standard 3 (Clinical and Field Experiences) and Standard 4 (Diversity): specifically, we have analyzed data related to the diversity of placements to ensure that all candidates experience a diverse field or clinical experience.
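
To make the dissemination logic above concrete, the following is a minimal sketch of unit-level aggregation and program-level disaggregation, including suppression of programs with fewer than 10 completers (as with the Self-Report Instrument).  The column names, program names, and scores are hypothetical, not the unit’s actual schema.

    import pandas as pd

    # Hypothetical completer records; "plt" stands in for a Praxis II PLT score.
    completers = pd.DataFrame({
        "program": ["Elementary Ed"] * 12 + ["Secondary Math"] * 4,
        "plt": [165, 170, 172, 168, 175, 180, 169, 171, 174, 166, 173, 177,
                181, 179, 176, 182],
    })

    # Unit-level summary, suitable for public reporting on the website.
    unit_summary = completers["plt"].agg(["count", "mean", "min", "max"])

    # Program-level disaggregation, reported only for programs with
    # 10 or more completers.
    by_program = completers.groupby("program")["plt"].agg(["count", "mean"])
    reportable = by_program[by_program["count"] >= 10]

    print(unit_summary)
    print(reportable)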

Transition Points

Every program in the unit has established clear transition points centered on the general components of admission, prior to clinical experience, exit from clinical experience, program completion, and follow-up.  The unit has worked to “live” its transition points and actually use the decision points as benchmarks for candidates to move through the program.  Individual candidates are tracked through the transition points at the program level; a minimal sketch of such a decision point appears below.  For example, if a candidate does not meet the requirements to enter the clinical experience component, he or she is not placed and is either required to continue working toward the established criteria through remediation or counseled out of the profession.  Candidates at both the initial and advanced levels perform well on major assessments, as described in Standard One.
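
The sketch below illustrates what a transition-point gate of this kind might look like in code.  The criteria names and thresholds (GPA, PPST passage, dispositions) are hypothetical examples, not the unit’s actual requirements.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        gpa: float              # hypothetical criterion
        passed_ppst: bool       # hypothetical criterion
        dispositions_met: bool  # hypothetical criterion

    def clinical_entry_decision(c: Candidate) -> str:
        """Gate at the pre-clinical transition point: place the candidate
        only if every criterion is met; otherwise remediate or counsel out."""
        if c.gpa >= 2.75 and c.passed_ppst and c.dispositions_met:
            return "place in clinical experience"
        return "do not place: remediate or counsel out of the profession"

    print(clinical_entry_decision(Candidate("A. Lee", 3.10, True, True)))
    print(clinical_entry_decision(Candidate("B. Ray", 2.50, True, True)))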

Assessment Procedures

The unit is committed to ensuring that assessment procedures are fair, accurate, consistent, and free of bias.  The process by which assessment instruments are developed at the unit level itself provides partial evidence of their validity: no tool is employed until departmental representatives have approved its items as reflecting practices associated with the standards set by NCATE, INTASC, or other accrediting agencies or professional organizations.

Internal consistency reliability is reported for all feedback instruments (a) on a unit-wide basis and (b) at the programmatic level whenever multiple items are used to assess a construct.  For direct observation instruments (primarily the performance-based instrument described above), both reliability and validity are evaluated by correlating cooperating teacher and university supervisor ratings.  In addition, an overall report on bias, reliability, and validity (the Reliability and Validity Report) was produced and disseminated in fall 2007.
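
As an illustration, the sketch below computes two statistics of the kind described: an internal consistency coefficient and a rater correlation.  The report does not name the specific coefficients the unit uses; Cronbach’s alpha and a Pearson correlation are common choices and are assumed here, and the rating data are simulated.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal consistency for a (respondents x items) score matrix."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    rng = np.random.default_rng(0)

    # Simulated cooperating-teacher ratings on an 11-item instrument
    # (1-5 scale), with a shared candidate-ability component so items
    # correlate, as real ratings would.
    ability = rng.normal(0, 1, size=(120, 1))
    coop_items = np.clip(np.rint(3 + ability + rng.normal(0, 0.7, (120, 11))), 1, 5)

    alpha = cronbach_alpha(coop_items)

    # Correlate cooperating-teacher and university-supervisor totals on
    # the same candidates (supervisor totals simulated with added noise).
    coop_total = coop_items.sum(axis=1)
    sup_total = coop_total + rng.normal(0, 3, size=coop_total.shape)
    r = np.corrcoef(coop_total, sup_total)[0, 1]

    print(f"Cronbach's alpha: {alpha:.2f}; rater correlation: {r:.2f}")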

Results suggest that the major assessments used within the unit are reasonably reliable, though the reliability of some scales likely reflects the finding that the instruments are somewhat univocal (i.e., they measure a single underlying dimension rather than several distinct constructs).  However, given the requirements of accrediting agencies, the configuration of institutional standards implied by our conceptual model, and national and professional standards, a reasonable level of validity has been documented.

Unit Operations

The unit uses several assessment and evaluation instruments to manage and improve its operations and programs.  At the institutional level, the unit uses the National Survey of Student Engagement (NSSE), which is administered every other year.  The instrument measures operations in the following domains: Level of Academic Challenge, Active and Collaborative Learning, Student-Faculty Interaction, Enriching Educational Experiences, and Supportive Campus Environment.  Data are disaggregated to the college level and disseminated within the unit.  Data on enrollment trends and projected job growth are also collected at the institutional level through extensive work with the National Center for Higher Education Management Systems (NCHEMS).  A new Graduating Senior Survey has also been implemented at the institutional level and disaggregated to the college level, providing important feedback to guide decision-making within the unit.

Because the NSSE instrument did not map cleanly onto our conceptualization of unit operations, a Unit Operations Survey was developed and piloted during the 2006-2007 academic year.  The instrument, approved for piloting and administration via the process described above, is a 28-item quadrant-analysis scale on which candidates rate both the importance of operations and the quality of services in the unit.  An initial report was developed during summer 2007 and disseminated in fall 2007.  Because of low returns, the decision was made to collect further data via random sampling of courses taken by (a) seniors and (b) graduate students in capstone methods courses.  The domains measured by the Unit Operations Survey include: Interactions Within the Unit, Advisement Experiences, Support Offices, Assessment of Candidate Performance (Assessment System), and Learning/Information Needs.
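
The quadrant logic works roughly as follows: each operation’s mean importance rating is crossed with its mean quality rating, and items fall into one of four action quadrants.  The sketch below illustrates this with hypothetical items, ratings, and scale midpoint; none of these values come from the actual survey.

    # Hypothetical (mean importance, mean quality) ratings on a 1-6 scale.
    items = {
        "Advisement experiences": (4.6, 3.1),
        "Support offices":        (3.2, 4.0),
        "Field placements":       (4.8, 4.4),
        "Information needs":      (2.9, 2.7),
    }

    SPLIT = 3.5  # assumed midpoint of the rating scale

    for name, (importance, quality) in items.items():
        if importance >= SPLIT and quality < SPLIT:
            quadrant = "concentrate here (important, low quality)"
        elif importance >= SPLIT:
            quadrant = "keep up the good work"
        elif quality >= SPLIT:
            quadrant = "possible overkill"
        else:
            quadrant = "lower priority"
        print(f"{name}: {quadrant}")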

The Evaluation, Promotion, and Tenure Procedures provide a strong foundation within the Unit Assessment System for faculty evaluation and professional development.  All faculty develop Professional Development Plans (PDPs) that are systematically reviewed by colleagues and administrators.  Faculty members also develop a Professional Development Report (PDR) that is submitted for review and comment.  Finally, the student complaint process, monitored and implemented by the associate deans in the colleges, provides oversight of issues and disputes that arise as candidates progress through the transition points.

 
