Prognostic, criterion-referenced, norm-referenced, ipsative

 



 

1)      Prognostic test

Prognostic tests serve as a means of estimating and predicting a student’s future career. A prognostic test combines aspects drawn from the assessment of learning processes and the assessment of learning achievements, and tries to formulate a diagnosis of the student’s future development.

In prognostic studies the focus of interest is what may happen in the future. It is natural, therefore, that most prognostic studies have outcomes that are the time to a specific event, such as death. However, some prognostic studies with dichotomous outcomes may inappropriately ignore the time element.

 

2)      Norm-referenced test

The term norm has two meanings. One is the established or approved set of behaviour or conduct to be followed or displayed by all members of a family, society, or organisation; it is the established custom of the society, which most people follow without question. The other meaning of the term is the relevant one here: the average performance of a group.

For example, a group of students is tested for awareness of environmental pollution through a written test. The test consists of 50 objective-type questions of one mark each, with no negative marking, so the full mark of the test is 50. After the test is conducted, it is marked by the examiner. There are 150 students in the group. The marks of all students are added, and the total is divided by 150 to find the average performance of the group. Suppose this is found to be 30. Then 30 is the average obtained by the whole group, in which some students achieve 49 out of 50 while others achieve as little as 12 out of 50. This mark of 30, i.e., the average of the group, is said to be the norm of the group.

Now, the evaluation of all 150 students is done with this 30 (the norm) as the point of reference. All students who have scored above 30 are considered above average, all those who have scored below 30 are considered below average, and all those who have scored exactly 30 are considered average. There is no pass or fail in this type of evaluation, as no pass mark is set for the test. This type of evaluation is called norm-referenced evaluation, and the test is called a norm-referenced test.
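The arithmetic of norm-referenced evaluation can be sketched in a few lines of Python. The score list below is hypothetical, chosen so that the group average works out to 30 as in the example:

```python
# Norm-referenced evaluation: the norm is the average score of the group.
# Hypothetical marks out of 50 for a small group of students.
scores = [49, 30, 12, 35, 28, 30, 41, 22, 30, 23]

# The norm: total marks divided by the number of students.
norm = sum(scores) / len(scores)

def classify(score, norm):
    """Place one student's score relative to the group norm."""
    if score > norm:
        return "above average"
    if score < norm:
        return "below average"
    return "average"

print(norm)                # 30.0 for this group
print(classify(49, norm))  # above average
print(classify(12, norm))  # below average
```

Note that every student's label depends on the group: with a different set of scores, the same mark of 30 could fall above or below the norm.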

 

3)      Criterion-referenced test

The type of evaluation in which the performance of the testees is evaluated with reference to some predetermined criteria is called criterion-referenced evaluation. No weightage is given to the norm, or average performance of the group, in this evaluation. All decisions, such as pass or fail, distinction, excellent, etc., are taken with reference to criteria set out in advance. In the above example, if criteria are set before the test, with reference to which the performance of each student will be evaluated, it becomes criterion-referenced evaluation. Suppose the following criteria are finalized for this test:

Pass mark: 40%

Distinction: 80%

In the test discussed above, all students who get 20 or more marks (40%) are declared pass, and all those who score less than 20 are declared fail. All those who get 40 or more (80%) are awarded distinction. If a prize is given to those who score at least 90%, then only students who get 45 or more will get the prize. As all decisions are taken on the basis of criteria fixed in advance, this evaluation is called criterion-referenced evaluation.
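Because the cut-offs are fixed in advance and do not depend on the group’s performance, they can be applied mechanically. A minimal Python sketch using the marks from the example (the function name is illustrative):

```python
# Criterion-referenced evaluation: each score is judged against
# predetermined cut-offs, not against the group average.
FULL_MARK = 50
PASS_PCT = 40         # pass mark: 40% of full marks
DISTINCTION_PCT = 80  # distinction: 80% of full marks

def grade(score, full_mark=FULL_MARK):
    """Return the result for one score under the fixed criteria."""
    pct = 100 * score / full_mark
    if pct >= DISTINCTION_PCT:
        return "distinction"
    if pct >= PASS_PCT:
        return "pass"
    return "fail"

print(grade(45))  # distinction (90%)
print(grade(20))  # pass (exactly 40%)
print(grade(19))  # fail (38%)
```

Unlike the norm-referenced case, `grade` takes a single score; how the rest of the group performs is irrelevant to the result.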

Construction of criterion-referenced test

Step 1: Identifying the purpose of the test

First of all, the objectives of the test are finalized. The test developer must be aware of the purpose for which the test is being prepared, and along with it should know the following aspects of the test:

·         Content area of the test from where the items will be developed

·         Level of students or examinees for whom test is being prepared

·         Difficulty level of the test items

·         Type of the test (objective, subjective, or mixed)

·          Criteria for qualifying the test

Keeping these points in mind, the test developer starts work on constructing the criterion-referenced test and then moves to the second step.

Step 2: Planning the test

Here the test developer works on,

(I)                  Content analysis: This involves the selection of content, i.e., the testing areas and their peripherals. The developer also decides the key areas of the content from which more questions are to be developed.

(II)                Types of items: Decisions regarding the type of items are taken at this stage. In the case of subjective items, they may be essay type, short-answer type, or very-short-answer type. In the case of objective items, they may be multiple-choice, fill-in-the-blanks, true-or-false, sentence-completion, one-word-answer, etc. If the test is of mixed type, questions are developed accordingly, but what is planned at this stage is the proportion of objective and subjective items in terms of marks.

(III)              No. of items: This is the total number of questions of each type included in the test.

(IV)              Weightage: It is very important to decide the weightage of each type of item and each content area. This depends upon the level of the students being tested: as we move from lower to higher levels, the percentage of knowledge-domain items decreases and that of higher-order abilities such as understanding, application, and skill increases. The test developer also decides the weightage of each content area included in the test, considering its relevance.

(V)                Duration of the test: Here the difficulty level of the test items and the duration of the test are decided.

(VI)              Mechanical aspects: This includes the quality of paper and ink, diagrams, typesetting, font size, and printing of the test papers.

(VII)            Development of a key for objective scoring: To bring objectivity to the evaluation process, it is essential to achieve interpersonal agreement among the examiners with regard to the meaning of the test items and their scoring. For this purpose, an answer key is prepared for each paper and given to all examiners, who are expected to score the test following this key.

(VIII)          Instructions for the test: The test developer also prepares instructions for administration, scoring, and the evaluation procedure in a ‘test manual’. It sets out the whole procedure of testing and acts as a guide to the individuals involved at all stages. This manual is strictly applied to bring objectivity to the test.

Step 3: Preparing blueprint of the test

The blueprint serves as a guideline or frame of reference for the person constructing the test. It is a specification chart which shows the details of the test items to be prepared: all the content areas, and the number and type of questions from those areas. It also reflects the objectives to be tested. The blueprint describes the weightage given to different content areas, item types, objectives, and all other details of the test.
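As a sketch, such a blueprint can be recorded as a simple table mapping each content area to the number of items of each type and the marks per item. The content areas and figures below are hypothetical, used only to show how the planned weightage can be checked against the intended total marks:

```python
# A blueprint as a specification chart: for each content area,
# (number of items, marks per item) for every item type.
# Content areas and figures are hypothetical.
blueprint = {
    "Air pollution":   {"objective": (10, 1), "short answer": (2, 5)},
    "Water pollution": {"objective": (10, 1), "short answer": (2, 5)},
}

def total_marks(blueprint):
    """Sum the marks across all content areas and item types."""
    return sum(count * marks
               for area in blueprint.values()
               for count, marks in area.values())

print(total_marks(blueprint))  # 40: (10*1 + 2*5) per area, two areas
```

A check like this makes it easy to verify that the weightage assigned to areas and item types adds up to the full mark before any items are written.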

Step 4 : Construction of test items

According to the blueprint, questions covering all the content areas, objectives and all types of items are constructed. Questions may be objective or subjective type as set in the blueprint.

Step 5 : Selecting the items for the test

Selection is done through a process known as ‘try-out’. The steps include:

(I)                  Sampling of subjects: Depending on the size of the population for which the test is being prepared, a workable sample of around 150 subjects is selected at random. On this sample the prepared items are tested for their functionality, workability, and effectiveness.

(II)                Pre-try-out: A preliminary try-out. Here the prepared items are administered to a sample of around 10 subjects. The answer sheets are checked and evaluated, and then discussed with the candidates to identify any problems they faced during the test, such as language difficulty or ambiguous wording. The items having these problems are rewritten or rephrased to remove the difficulties and ambiguities. At the end of the pre-try-out, the initial draft of the test is ready.

(III)              Proper try-out: Here the initial draft of the test is administered to around 50 candidates. Answer sheets are scored and item analysis is done: the difficulty value and discrimination power of each item are calculated. Items that fall within the acceptable range of difficulty value and discrimination power are selected for the test, and the others are rejected.
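The text does not spell out how these two indices are computed. One common convention (assumed here, not stated in the source) takes the difficulty value as the proportion of candidates answering the item correctly, and the discrimination power as the difference in that proportion between the top-scoring and bottom-scoring 27% of candidates:

```python
def difficulty(correct_flags):
    """Difficulty value: proportion of candidates who answered the item correctly."""
    return sum(correct_flags) / len(correct_flags)

def discrimination(total_scores, correct_flags, frac=0.27):
    """Discrimination power: proportion correct in the upper group
    minus proportion correct in the lower group (27% split by convention)."""
    n = max(1, round(frac * len(total_scores)))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = order[:n], order[-n:]
    p_upper = sum(correct_flags[i] for i in upper) / n
    p_lower = sum(correct_flags[i] for i in lower) / n
    return p_upper - p_lower

# Ten candidates: total test scores, and whether each got one item right.
totals = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
flags  = [0,  0,  0,  1,  0,  1,  1,  1,  1,  1]
print(difficulty(flags))              # 0.6
print(discrimination(totals, flags))  # 1.0: high scorers got this item right
```

An item answered correctly mainly by high scorers (discrimination near 1) separates strong from weak candidates well; an item everyone or no one answers correctly discriminates poorly.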

(IV)              Final try-out: The final try-out is done on a large sample; the sample size may be a hundred or more, depending upon the population size. After administration and scoring of the test, its reliability and validity are measured. If it proves to be reliable and valid, the test gets the green signal.

Step 6 : Evaluating the test and preparing final draft of the paper

For establishing quality, a test manual is prepared which gives the test’s norms, scoring key, reliability, and validity. The final draft of the test paper is prepared, and instructions for examinees as well as for test administrators are finalized. Item analysis is performed to check the workability of the items for the test. The required changes are made, and the final draft of the paper is ready for printing.

 

| Criterion-referenced test | Norm-referenced test |
| --- | --- |
| Evaluates an individual’s performance in a given situation with respect to specific characteristics expected in the performance | Compares the individual’s performance with those of other persons taking the same test |
| The main objective is to measure the effectiveness of a program or instruction | The main objective is to measure individual differences |
| Provides specific information on individual levels of performance with respect to objectives | Aims to classify and grade learners in various categories |
| The score of an individual can be interpreted on its own | The meaning of any particular score can be determined only by comparing it with other scores achieved by students taking the test |
| The purpose is not to classify and rank learners but to ensure development | Often used for selection purposes |
| Test results are used to evaluate student performance relative to specific anticipated performance levels | Test results are used for making comparative decisions regarding individuals |
| The test constructor is not concerned with maximizing the variability of test scores | Specifically constructed to maximize the variability of test scores, as the purpose is to discriminate between individuals by comparison |
| E.g., driving test, citizenship test | E.g., IQ test, classroom teacher-made test |

 

4)      Ipsative assessment

This is assessment against the student’s own previous standards. It can measure how well a particular task has been undertaken against the student’s average attainment, against their best work, or against their most recent piece of work. Ipsative assessment tends to correlate with effort, to promote effort-based attributions of success, and to enhance motivation to learn.

