Within the context of UW System and campus missions, and mindful of North Central Assessment guidelines, departments were asked to develop plans that addressed the following issues:
1) student learning outcome goals for the program matrixed with required courses
2) assessment instruments/measures for each goal
3) evaluation criteria and standards for success for each measure
4) feedback mechanisms for program improvement
5) implementation timetable.
1. Student Learning Outcome Goals
Goals have been written for every undergraduate major and master's degree program. The goals are of three types:
a) knowledge of subject matter goals,
b) competency or skill goals and
c) affective goals.
In developing goals, faculty were asked to consider what their students should know, be able to do, and value upon completing an undergraduate major or graduate program.
In addition to the goals, programs were requested to list the required courses in which each goal is addressed.
2. Assessment Instruments/Measures
Programs were asked to list in their plans the instruments/measures that would be the primary sources of their assessment data. The need for multiple measures was stressed, as well as the desirability of a good match between the type of goal and the measures. The departments have responded with a variety of measures including: standardized and locally-prepared exams, portfolios, essays, oral presentations, capstone experiences, interviews and surveys.
3. Evaluation Criteria and Standards of Success
Faculty were encouraged to think about the analysis of the assessment data so that analysis would be structured. For example, many departments have goals to the effect that their majors be able to communicate effectively in written and oral forms using the concepts and special terms of the discipline. Faculty recognize that developing evaluation criteria for analyzing writing and speaking is particularly important for the integrity of the process and also for providing feedback to students as well as to the programs.
The evaluation criteria are used to determine a Standard of Success, which is a measure of the similarity of the actual and expected student learning outcomes. In their plans faculty either established a process for determining a Standard of Success which will be applied during the implementation phase, or gave explicit Standards of Success for a particular goal, such as an average score at the 55th percentile on a standardized exam or an 80% average score on the evaluation criteria used for judging writing or speaking effectiveness. Some based their Standard of Success on two or more defined levels of achievement of a goal by setting expected percentages of the majors that would perform at the different levels. Since the number of majors in various programs is vastly different, some departments will use random sampling of their majors while others will use all majors and average the results over several years.
See the 2002 addendum for more information about statistical analyses.
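The two kinds of explicit Standards of Success described above (a required average score, and expected percentages of majors at two or more achievement levels) can be sketched as simple checks. The function names, the 0-100 rubric scale, and the sample cohort below are illustrative assumptions, not part of any department's actual plan:

```python
# Hypothetical sketch of Standard of Success checks, assuming rubric
# scores on a 0-100 scale. Names and thresholds are illustrative only.

def meets_average_standard(scores, threshold=80.0):
    """True if the cohort's average score meets the threshold
    (e.g., an 80% average on writing evaluation criteria)."""
    return sum(scores) / len(scores) >= threshold

def meets_level_standard(scores, levels):
    """Check expected percentages at two or more achievement levels.

    `levels` maps a minimum score to the fraction of majors expected
    to score at or above it, e.g. {90: 0.25, 70: 0.80} means at least
    25% of majors at 90+ and at least 80% at 70+.
    """
    n = len(scores)
    return all(
        sum(1 for s in scores if s >= cutoff) / n >= expected
        for cutoff, expected in levels.items()
    )

# Illustrative cohort of eight majors' rubric scores.
cohort = [92, 85, 78, 88, 71, 95, 66, 83]
print(meets_average_standard(cohort))                      # average is 82.25, so True
print(meets_level_standard(cohort, {90: 0.25, 70: 0.80}))  # 2/8 at 90+, 7/8 at 70+, so True
```

A small program averaging such results over several years, or a large one applying the same checks to a random sample of majors, would use the identical logic on a different list of scores.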
4. Feedback Mechanisms for Program Improvement
The best evidence that improvement will occur is the structure of the program plans as outlined in the preceding paragraphs. The courses in the program are tightly linked to the goals, so the course(s) needing reform are readily identifiable from the analysis of the assessment data. The instruments and measures are appropriate to the types of goals and will provide meaningful data. The analysis of the data will be thorough and reliable because of the use of evaluation criteria and standards of success. Finally, the program plans contain timetables giving the semester when each step will be implemented, including the steps outlining the actions to be taken to improve the curriculum and instruction.
5. Implementation Timetable
Each department/program plan contains a timetable showing when implementation of the various aspects of assessment will occur. For example, the timetable shows when the instruments, measures and evaluation criteria for particular goals will be developed, when data collection will begin, when analysis and program improvement will occur, etc.
The FSCASL invites Educational Programs that meet the following criteria to request its approval for the exclusion of statistical analysis from their assessment plan:
· No more than an average of 10 majors graduating from the program in any given year over a period of 4 years
· Goals and objectives that are best measured by qualitative tools
The FSCASL offers these alternatives to small Programs, but does expect that some Programs will choose the original plan with statistical analyses. There is still an expectation that multiple methods of assessment will be used, but the frequency of the use of the tools may be decreased.
The assessment instruments the FSCASL recommends for this level of measuring student performance include, but are not limited to, the following:
a. Capstone projects
b. Exit interviews
c. Focus groups and group discussion
d. Direct observations -- by video, oral reports, student teaching
e. Classroom-based Assessment -- portfolios (regular or electronic),
journaling
f. Informal alumni surveys over the telephone
g. Student suggestion boxes