
WHAT IS EVALUATION?

As human beings we naturally ask questions about how useful and how valuable our activities are. We can think of evaluation as a considerable sharpening of this natural activity of checking on our ongoing work. A more formal definition is to think of evaluation as ‘providing information to make decisions about the product or process’.

Evaluation is not equivalent to research, although it employs research techniques to generate the necessary information, and uses similar criteria of reliability and validity to judge the quality of the evidence. Evaluation also tends to be broader than research, as it usually requires information about a range of situations, products and processes. The main difference, however, is that evaluation also involves making judgements about the value of what is being evaluated.

Evaluation in an educational setting is the process whereby we seek evidence that the learning experiences we have designed for students are effective. As we will discuss later, we evaluate educational activities for two overlapping reasons:

  • to obtain information that can inform the ongoing design and development process (often referred to as formative evaluation);
  • to decide whether an innovation is worth retaining (often referred to as summative evaluation).

These forms of evaluation often meld together, and each can be difficult to undertake properly.

Clarity is the key to successful evaluation because it determines what kind of flashlight you use, who holds it and where it is pointed. It is particularly important to be clear about the following issues:

  • What are the purposes of the evaluation? Who is the evaluation for? Who should participate and how?
  • How can you unpack your own assumptions (about teaching and learning) so that you can check how these affect the evaluation?
  • What specific outcomes are you aiming for? What audience/s do you wish the evaluation to reach and inform?

Let us consider each of these in turn, although they are really quite intertwined.


Who is the Evaluation for?

What do we mean by describing an educational experience as effective? From whose point of view is it effective?

The decision about effectiveness might be made from several different angles. Evaluation, in general, is the process of finding out how effective or useful an activity is. Obviously, the judgement of how valuable something is depends on the perspectives and vested interests of the various stakeholders, and final decisions about effectiveness can vary quite markedly.

It is very important to ask who the evaluation is for. There are many stakeholders in the planning of university offerings, and each may seek different information. In Table 1.2 we list a range of possible stakeholders and some of the interests they might have in an educational activity, whether this is an innovation in the curriculum or the continuation of existing practice. Each participant in an evaluation study in this project should scan Table 1.2 to see which stakeholders, other than teachers and students, need to be considered, and what implications that has for the information you will seek to gather.

Table 1.2. Various possible stakeholders in the types of evaluation studies addressed in the handbook, with examples of the vested interest of each.

  • Teachers: professional satisfaction; keeping a job.
  • Students: learning something perceived to have value; getting qualifications that can lead to employment.
  • Subject and course coordinators: ensuring that students’ learning meets quality assurance standards.
  • Faculty deans: capacity to provide for increasing numbers of students; meeting professional standards of the discipline area.
  • Members of the university’s chancellery: links to the university’s strategic mission; cost-effectiveness, especially in the provision of technology.
  • Funding bodies: assurance that the product is congruent with the grant application.
  • Employers: a focus on graduate capabilities rather than on all the intervening experiences.
  • Professional accrediting bodies: standards relating to the skills and knowledge graduates require in particular professions in the 21st century.

Please note that this handbook places student learning at the centre of the evaluation enterprise, and we will focus on evaluation questions and strategies from that point of view.


How can you Unpack your own Assumptions about Teaching and Learning?

Evaluation of the educational impact of CFL is a complex field: different evaluators employ different paradigms and hence ask different questions when designing their evaluations. Moreover, whenever a measurement or observation is made, the situation being evaluated is intrinsically altered (Keeves, 1988), an issue rarely addressed in conventional evaluation. One must therefore ask to what extent the outcome of an evaluation is due to the evaluation design selected. In examining evaluation studies, there is a need to describe the context and to clarify the educational rationale that the evaluators have explicitly or implicitly adopted.

Reeves (1997) has mapped the dominant paradigms used in evaluation studies, and describes the models researchers use within these paradigms. The paradigms are briefly summarised in Table 1.3, together with a commentary. It is important that the members of each evaluation study spend time discussing their own paradigms, clarifying their own positions and explicitly examining the assumptions underlying any models and associated methods they adopt. The role of the mentor is very important in this respect.

Constructivist-interpretive evaluation is conducted in a naturalistic way (avoiding manipulation of the environment), with data produced largely through qualitative methods (sacrificing wide generalisability for richness and deeper understanding). These characteristics can be contrasted with the experimental approach, which manipulates the environment and collects quantitative data on the assumption that everything of interest is measurable. Current practice, however, favours the Eclectic-Mixed Methods-Pragmatic Paradigm, which takes a mixed approach to data production and analysis, obtaining both qualitative and quantitative information in the evaluation process.

The Constructivist-Interpretive-Qualitative approach requires a paradigm shift for many academics, whose fields of study are implicitly grounded in an objective, experimental view of the world.


Table 1.3. Brief summary of the dominant evaluation paradigms.

Positivist-Quantitative Paradigm

Assumptions:
  • Problems can be defined a priori.
  • The complexity of social situations can be reduced to a string of clearly operationalised variables.
  • There is a reliance on controlled experimentation.
  • Events can be explained in terms of cause and effect.
  • There is one ‘right’ interpretation.

Comment: There can be value in seeking to quantify measures. However, people and the complexity of social interactions cannot be reduced to clearly defined variables, and it is often impossible to produce matched groups of people. We would advise participants not to adopt only quantitative strategies.

Constructivist-Interpretive-Qualitative Paradigm

Assumptions:
  • There is a focus on exploring the dynamics of interactions, with the emphasis on the world as a socially constructed reality involving multiple perspectives.
  • The perceptions and values of all the participants in a situation are needed in order to explore the various possible interpretations.

Comment: This paradigm has greatly enriched our understanding of social situations. The main problem with its qualitative nature is that it does not necessarily focus on the areas that need change: descriptions are made, but often without any form of judgement attached. This is at odds with the attempt to find appropriate ways to improve situations, which may be the purpose of the evaluation.

Critical Theory-Postmodern Paradigm

Assumptions:
  • Critical theory aims to transcend the positivism of the traditional approach and the relativism of the interpretive approach by placing the process of critical reflection at the centre of the research process.
  • The focus is on changing the world, not only describing it.
  • The concept of praxis is important: praxis is action informed by theoretical ideas and by reflection on existing practice. Theory and reflection feed into the formulation of new practice.

Comment: Action inquiry has strong links to critical theory. In both the Constructivist-Interpretive-Qualitative and the Critical Theory-Postmodern approaches, understanding the dynamics and multiple perspectives of those involved is important. Qualitative strategies are used in both; the distinction lies in the purpose to which the evaluation will be put.

Eclectic-Mixed Methods-Pragmatic Paradigm

Assumptions:
  • This approach is more capable of handling the complexity of modern society and technology.
  • The focus is on practical problems rather than on issues of reality and theories of society.
  • It acknowledges the weakness of current evaluation tools.

Comment: Complex evaluation plans can result. The strength of this approach is its acknowledgment of the current state of the art of evaluation: there are no ‘right’ approaches, and maintaining an open approach is essential. We recommend that participants look favourably at this pragmatic way of proceeding.

What Specific Outcomes are you Aiming for?

It is important to focus on what is achievable in a project with such a short timeframe. Evaluation is an ongoing activity, and we hope that this period of focused evaluation activity will lead to new ways of approaching the design, development and implementation of, and reflection on, all teaching and learning activities. But, in 2000, what do you hope to achieve?

To answer this question, you will need to devise an evaluation plan, considering the scope of the evaluation, the questions you want answered and how you want to report your results. The learner-centred framework for evaluation proposed here (see §2) will provide guidance in structuring your evaluation plan.

The evaluation plan will vary depending on the nature of your project. Some possibilities might be:

  • Evidence of how some specific strategies or materials work with a given group of students. This might well be a subset of a full subject, e.g. just how students use threaded discussions and chat sessions, or how students use a simulation exercise.
  • Evidence of how some specific strategies or materials work across diverse groups of students, for example, the process of implementing offshore teaching using existing resources already developed. The use of similar resources with both full-time, young students and part-time, mature age students could also be the focus of the evaluation.
  • Evidence of how some specific strategies or materials help students to learn specific concepts or procedures.

Consideration of the nature of your project will help you determine the scope of your evaluation study. This may produce a relatively long list of initial questions. Planning an evaluation is then largely a matter of refining this list using criteria such as what is of most interest, what is feasible (in terms of the paradigm of inquiry and methodology) and what is practically possible.
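To make this refinement step concrete, here is a minimal sketch, in Python, of one way to filter and rank candidate questions against such criteria. Everything in it — the question texts, the criterion names, the scores and the equal weighting — is invented for illustration and is not part of the handbook's method; treat it as one possible way to keep the winnowing process explicit.

    # Hypothetical sketch of the question-refinement step: rate each
    # candidate evaluation question against simple criteria, drop the
    # impractical ones, and rank the remainder. All questions, scores
    # and weights below are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class CandidateQuestion:
        text: str
        interest: int      # how much the question matters to stakeholders (1-5)
        feasibility: int   # fit with the chosen paradigm and methodology (1-5)
        practicality: int  # time, access to students, resources (1-5)

        def score(self) -> float:
            # Equal weighting is an assumption; adjust to your own priorities.
            return (self.interest + self.feasibility + self.practicality) / 3

    candidates = [
        CandidateQuestion("How do students use the threaded discussions?", 5, 4, 4),
        CandidateQuestion("Does the simulation help students grasp concept X?", 4, 3, 3),
        CandidateQuestion("Do outcomes differ for mature-age students?", 3, 2, 1),
    ]

    # Keep only questions that are practically answerable, then rank the rest.
    shortlist = sorted(
        (q for q in candidates if q.practicality >= 3),
        key=lambda q: q.score(),
        reverse=True,
    )
    for q in shortlist:
        print(f"{q.score():.2f}  {q.text}")

In practice the numbers matter less than the conversation they force: making the criteria explicit is what moves a long wish-list towards a feasible evaluation plan.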

The Flashlight Evaluation Handbook (Ehrmann, 1999b) emphasises the importance of carefully designing the questions you ask:

“The process of laying the foundations for asking a good question is one of the most important and time-consuming aspects of any evaluation. The more you learn about your own perspectives on education, and those of the stakeholders in the evaluation, the more such values you will consider.”

Another relevant quote from Ehrmann (1999a) is:

“the quest for useful information about technology begins with an exacting search for the right questions”.

Besides defining the scope of the evaluation study and the evaluation questions, you need to think about how you intend to report your evaluation. We are encouraging participants to undertake a process leading to the writing of a formal paper, but you may wish to choose other appropriate reporting mechanisms. For example:

  • The involvement of some students in evaluation might become a negotiated assessment task.
  • The strategic value of your work within your own university should be considered: to which university committee might this work be relevant?
