The Indian Institutes of Management (IIMs) deserve credit for giving a paradigm shift to the Common Admission Test (CAT), which was earlier just a "speed test". In its early years, CAT allowed very little time relative to the number of questions a candidate had to attempt. A speed test cannot be considered an ideal way to assess a candidate.
The IIMs describe the three-stage scoring mechanism used in the CAT scoring process on their website. However, this disclosure has limited value, as there is no information about metrics such as assessment validation studies. Validity studies indicate whether the test parameters used are justified for the admission purpose. One globally accepted way to conduct a validity study is to find the correlation between a student's assessment test scores and his grade point average (GPA) in the first year at an academic institution. An assessment can be considered valid for a particular course if a candidate who scores high on the test also performs well in college. Once this basic validation is done, further studies could examine whether admission test scores correlate with the candidate's long-term success in a management career.
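The basic validity check described above is simply the Pearson correlation between test scores and first-year GPA. A minimal sketch follows; the scores and GPAs are hypothetical, for illustration only.

```python
# Criterion validity sketch: Pearson correlation between admission-test
# scores and first-year GPA. All data below is made up, not real CAT data.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical test percentiles and first-year GPAs for ten students
scores = [99.1, 97.5, 95.0, 92.3, 90.8, 88.4, 85.0, 82.7, 80.1, 78.9]
gpas   = [3.7, 3.9, 3.4, 3.6, 3.1, 3.3, 2.9, 3.0, 2.7, 2.8]
print(round(pearson_r(scores, gpas), 2))
```

A value near 1 would mean the test ranks candidates much as their first-year performance does; a value near 0 would mean the test tells the college little.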
Criterion validity exercises are conducted for all high-stakes international exams such as the GMAT, GRE and SAT, and also by corporations across India to validate their hiring assessments. For example, SAT (Scholastic Aptitude Test) scores, used by US universities for admission to bachelor's programmes, show a correlation of 0.35 with a student's first-year GPA.
Similarly, GRE (Graduate Record Examination) validity reports show a correlation of around 0.40 with first-year GPA. Several companies in India conduct validation studies to check whether the test used for hiring is indeed the right one. For instance, AMCAT (Aspiring Minds Computer Adaptive Test) scores show a correlation of 0.35-0.6 with the success of software programmers in information technology services companies. With three years of data in hand, there is enough with CAT to examine the assessment's criterion validity, that is, the correlation of a student's CAT score with his/her first-year GPA. Ideally, all colleges using CAT for admission should compute this correlation, and a meta-analysis should be done across these studies to check whether the average correlation is significant. If it is not, CAT could be rejecting good candidates while accepting bad ones! These figures are, however, yet to be disclosed by the IIMs.
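The meta-analysis suggested above could be done by pooling each college's correlation with a standard Fisher z-transform, weighting by cohort size. A sketch, with hypothetical per-college figures:

```python
# Meta-analysis sketch: pool per-college validity correlations via the
# Fisher z-transform, weighted by n - 3 (the inverse variance of z).
# The (correlation, cohort size) pairs below are hypothetical.
from math import atanh, tanh

def pooled_correlation(studies):
    """studies: list of (r, n) pairs; returns the pooled correlation."""
    num = sum((n - 3) * atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return tanh(num / den)

# Hypothetical validity studies from four colleges
studies = [(0.32, 180), (0.41, 250), (0.28, 120), (0.37, 300)]
print(round(pooled_correlation(studies), 3))
```

Larger cohorts pull the pooled estimate more strongly, which is why the weighting matters when colleges differ greatly in batch size.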
Metrics such as reliability and standard error of measurement are also crucial in determining the accuracy of a test. It has been three years, and there is still no reporting on CAT's reliability measures. Statistics such as Cronbach's alpha (a standard measure of reliability) could be used to estimate it. The CAT-IIM website claims to use 'item response theory' (IRT) in designing its assessments; in that case, the standard error of measurement becomes a key metric. The higher the standard error, the lower the credibility of the test.
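Cronbach's alpha, mentioned above, is straightforward to compute from an item-response matrix: it compares the sum of per-item score variances against the variance of candidates' total scores. A sketch with made-up data:

```python
# Cronbach's alpha sketch: rows are candidates, columns are test items.
# alpha = (k / (k-1)) * (1 - sum of item variances / variance of totals).
# The item scores below are hypothetical.
def cronbach_alpha(rows):
    """rows: list of per-candidate item-score lists (all the same length)."""
    k = len(rows[0])                # number of items
    cols = list(zip(*rows))         # per-item score columns

    def var(xs):                    # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(var(c) for c in cols)
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical scores for five candidates on four items
responses = [
    [3, 4, 3, 3],
    [5, 4, 4, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
]
print(round(cronbach_alpha(responses), 2))
```

Values near 1 indicate that the items measure a common underlying trait consistently; testing bodies typically expect alpha well above 0.7 for a high-stakes exam.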
To establish the credibility of CAT as the right test for India’s premier management colleges, one needs data-driven measures of its reliability and validity. Unfortunately, this is missing.
CTO and COO, Aspiring Minds