Discussion Summary

Larisa M. Naples (naples+@pitt.edu)
Wed, 02 Apr 1997 16:58:36 +0000


Well, that's all the time we have, folks!

As you prepare to head out to the conference, I would like to leave 
you with a brief summary of the discussion topics which came up in the 
last few weeks, to serve as a jumping-off point for the break-out 
group discussion.

Our discussion began with Janet posting a list of three suggested 
topics for discussion, based on her own ideas, combined with some 
issues raised by Mavis Green.  The three proposed topics were:
1)  What should the goals of evaluation/assessment in this area be?
2)  What are the major issues facing evaluators in this area?
3)  How can we best handle these issues?

I will summarize the results of each of these threads in turn...

1)  Suggested goals for evaluations in this area included:

- Evaluating the effects of technology in use on the learning 
environment

- Determining whether the activities of the project address the
stated goals

- Identifying any generalizable, disseminable products or new wisdom


2)  Issues facing evaluators in this area included:

- the "moving target" issue, regarding the difficulty in evaluating 
the effect of something which is constantly changing, and which is not 
sufficiently reliable for a sufficient amount of time for us to get 
past evaluating the technology and on to evaluating the effects of its 
use

- the pressure to quantify impacts which are often qualitative in 
nature

- the problem of having evaluation data skewed because only the most
technically literate and technically enthusiastic people respond to
evaluator queries

- the issue of suspicion of any "official-looking" organization which
is examining community activities, and the resulting silence of
potential data sources

- the slow, and sometimes nonexistent, translation of research
findings into practice


3)  Suggested ways of coping with raised issues included:

- with respect to the quantification issue, the idea of using rating 
scales to focus the evaluation process

- with respect to the data-skewed-by-audience issue, the idea of
emphasizing *why* things occurred as they did rather than focusing
exclusively on how much was accomplished

- with respect to the moving-target issue, the idea of assessing what 
happens to the process of education when the technology is present and 
working, when the technology is present and not working, and when no 
technology is present at all


In the last few days of the on-line discussion, one new thread was 
added to the conversation.  This involved identifying measures of both 
success and problems in the application of technology to education.  
The first post on this new topic, by Laurie Maak, pointed out the need 
to decide whether the measures of success should be limited to
outcomes or products, or whether success could also be measured by
the quality of the implementation process.  The on-line discussion of
assessment and evaluation ended with a rather late but interesting
counter-post on this issue, by Chris Hasegawa, regarding when the
"process" *becomes* the "product" to be evaluated.

This may be a good place to begin our break-out group discussion,
which might focus more on how to address the various goals and issues
that were brought out in this on-line discussion.

Hope everybody has a great time at the conference.  See you there!

Larisa M. Naples
Co-Moderator, Assessment and Evaluation Discussion Group