The methodology adopted by the National Institutional Ranking Framework (NIRF) is broadly divided into five clusters, and a specific weightage is assigned to each of these clusters. Some of these clusters are traditional, and there is a need to broaden their dimensions to include the experiential and practical systems that various colleges and educational institutions already follow.
—
In this era, when education has become a market commodity, university rankings have become one of the most important factors shaping the opinions of students and their parents.
A survey conducted by 'THE Student Pulse', a research resource of the Times Higher Education consultancy team, found that a university's ranking was the second most-researched factor by prospective international students when choosing where to study, with 34 percent of respondents saying it was important to them, after tuition cost and ahead of courses offered.
Ranking educational institutions can be a daunting task, especially in a country like ours, with a humongous number of institutions that differ significantly from one another.
With such a large number of institutions spread across the country, it is not possible for every person to have access to credible information about each of them. Rankings, therefore, become the most practical way to gauge the quality of education in colleges and universities.
However, rankings have taken an obsessive form because of extreme competitiveness and the sheer number of institutions offering the same courses and curricula.
But the factors on which a ranking is awarded to an institution vary from organisation to organisation. Some take only theoretical research into consideration, whereas others note the opinions that faculty, alumni and admitted students hold of the institution.
Irrespective of the inconsistencies in how rankings are formulated, a university ranking is necessary to inculcate a competitive spirit and to give students and their parents an opportunity to assess institutions objectively before picking one.
Seemingly that task has been taken up by the National Institutional Ranking Framework (NIRF).
The NIRF was adopted by the Government of India to rank the country's higher education institutions. The framework was approved by the then Ministry of Human Resource Development (now the Ministry of Education) and launched by the minister on September 29, 2015.
Institutions are ranked in 11 different categories depending on the field of activity: overall, university, college, research, engineering, management, pharmacy, law, medical, dental and architecture. The framework uses several parameters for ranking purposes, such as resources, research and stakeholder perception.
These parameters are grouped into five clusters, and a specific weight is assigned to each cluster depending on the type of institution. About 3,500 institutions voluntarily participated in the first phase of the ranking.
Broadly divided into five, the parameters cover: "teaching, learning and resources"; "research and professional practices"; "graduation outcomes"; "outreach and inclusivity"; and "perception".
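For concreteness, the weighted combination works roughly as sketched below. This is a minimal illustration, not the official computation: the cluster weights shown approximate those published for the overall category and vary by institution type, and the institution's scores are hypothetical.

```python
# Minimal sketch of a cluster-weighted NIRF-style score. The weights below
# approximate the published weights for the overall category (0.30, 0.30,
# 0.20, 0.10, 0.10); actual weights vary by institution type, and the
# cluster scores used here are hypothetical.

CLUSTER_WEIGHTS = {
    "teaching_learning_resources": 0.30,
    "research_professional_practices": 0.30,
    "graduation_outcomes": 0.20,
    "outreach_inclusivity": 0.10,
    "perception": 0.10,
}

def overall_score(cluster_scores: dict[str, float]) -> float:
    """Combine per-cluster scores (each out of 100) into a weighted total."""
    return sum(CLUSTER_WEIGHTS[c] * s for c, s in cluster_scores.items())

# A hypothetical institution that scores well on research but poorly on
# perception is still compressed into a single aggregate number.
example = {
    "teaching_learning_resources": 72.0,
    "research_professional_practices": 81.0,
    "graduation_outcomes": 65.0,
    "outreach_inclusivity": 58.0,
    "perception": 40.0,
}
print(round(overall_score(example), 2))  # 68.7
```

The point of the sketch is only that everything an institution does is reduced to one weighted number, which is why the choice and breadth of the clusters matter so much.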
However, the soundness of these parameters and the methodology adopted to scrutinise the collected data have been the subject of much criticism since the framework's inception.
The five parameters are restrictive pigeonholes. There is a need to elaborate and widen the categories involved, so as to formulate a ranking system that takes more factors into consideration and helps teachers, students, parents and recruiting companies form an educated opinion.
Neglect of qualitative research
One of the parameters, "research and professional practices", puts a lot of emphasis on publication metrics. The scores here depend on the number of publications and citations in reputed journals, which is problematic for three reasons.
First, the metric fails to acknowledge the impact of qualitative research, which, being more time- and resource-intensive, yields fewer publications over the same period.
Second, most journals selected for obtaining publication metrics are indexed in either Scopus or Web of Science. The problem is that these databases themselves are not fully reliable: in 2021, as many as 50 journals were delisted by Web of Science.
Third, citations are highly susceptible to manipulation. Many institutions indulge in the unsavoury practice of self-citation, as the Journal Citation Reports have documented.
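To see why self-citation distorts the metric, consider the simple ratio examined when screening journals: the share of a journal's citations that come from the journal itself. The sketch below is illustrative only; the flagging threshold is an assumption, not an actual Journal Citation Reports criterion.

```python
# Illustrative sketch of a journal self-citation check. The 30 percent
# flagging threshold is an assumption for illustration, not an actual
# Journal Citation Reports criterion.

def self_citation_rate(self_cites: int, total_cites: int) -> float:
    """Fraction of a journal's received citations that cite the journal itself."""
    return self_cites / total_cites if total_cites else 0.0

# Hypothetical journal: 620 of its 1,000 citations come from its own articles.
rate = self_citation_rate(620, 1_000)
if rate > 0.30:  # illustrative threshold
    print(f"Flag for review: self-citation rate is {rate:.0%}")
```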
No wonder there have been huge fluctuations in the rankings of institutions within a short period of one year.
Take the case of Jamia Millia Islamia, which improved its ranking by a whopping 71 places from 83 in 2016 to 12 in 2017.
Conversely, Guru Gobind Singh University saw its ranking drop 60 places, from 22 to 82, during the same period.
It is hard to understand how any institution can rise or fall by such a drastic number of places within a single year.
An incomplete and traditional method
The NIRF ranking system relies on a limited set of parameters, such as research output, faculty–student ratio and perception.
By focusing predominantly on these parameters, the ranking fails to provide a comprehensive and holistic assessment of all colleges.
Take the case of law universities in India: important aspects such as practical training, alumni outcomes, industry collaborations and the relevance of the curriculum do not receive adequate consideration.
Most National Law Universities (NLUs) offer industry-relevant and socially relevant courses that private or traditional colleges do not. It is no surprise that they have the best placement results in the country.
The ranking also does not explicitly recognise or incentivise innovative teaching methods, pedagogical approaches or instructional technologies that enhance the quality of education.
Institutions that prioritise innovative teaching practices and invest in faculty development to improve teaching effectiveness may not receive due recognition in the rankings. This incomplete assessment results in inaccurate rankings that do not reflect the true quality of education law colleges provide.
Lack of emphasis on quality of teaching
A major drawback of the NIRF ranking is that it does not emphasise teaching quality and student learning outcomes. While research is important, the primary function of any educational institution is to provide quality education and equip students with the skills and knowledge necessary for a better career.
By not considering teaching effectiveness adequately, the ranking fails to recognise that some colleges excel in delivering high-quality education and producing well-rounded graduates.
The NIRF ranking lacks specific mechanisms to directly assess teaching quality, such as classroom observations, student evaluations and feedback from alumni.
These evaluation methods provide valuable insights into the effectiveness of teaching practices and their impact on student learning outcomes. The absence of such mechanisms can result in an incomplete assessment of teaching quality.
Misses the practical dimension of teaching
Like other disciplines, legal education requires the development of certain practical skills such as legal writing, advocacy, negotiation and client counselling.
But the NIRF ranking's limited focus on practical training, such as moot court competitions, internships and clinical legal education, leads to the undervaluation of law colleges that prioritise such exposure.
Consequently, law colleges that excel in providing hands-on experiential learning opportunities may receive lower rankings, despite their effectiveness in preparing students for real-world legal practice.
Non-transparent and subjective
The specific weightage assigned to each parameter and the calculation methodology used by the NIRF ranking system are not always transparent. This lack of transparency raises questions about the objectivity and fairness of the rankings.
The accuracy and reliability of data used for ranking purposes are essential for the credibility of any ranking system. However, there have been concerns regarding the consistency and accuracy of the data provided by institutions for the NIRF ranking.
Several institutions have been found to have boosted their rankings by fudging and manipulating critical data. The NIRF has a limited capacity to scrutinise the vast amounts of data submitted by colleges within a limited period of time.
The perception parameter
The NIRF also relies on a perception parameter, which includes surveys of the opinions of academics and employers and may be heavily subjective in nature. Perceptions of teaching quality can vary among respondents, and their opinions may not accurately reflect the actual teaching effectiveness of law colleges.
The parameter lacks a standardised framework or set criteria for respondents to evaluate institutions. This absence of a common benchmark leads to inconsistencies in evaluations and perceptions across different respondents. It becomes challenging to ensure that the perceptions captured by the parameter are objective and reflect the true quality of the institutions.
While the NIRF ranking framework is a positive initiative towards improving the quality of education and creating a competitive spirit among educational institutions, it fails to capture the true picture.
The subjective nature of certain parameters, the neglect of indicators such as teaching quality and practical training, the lack of transparency in methodology and data, and the absence of feedback mechanisms all raise questions of objectivity and reliability.
It is important to remember that rankings should not be the sole determinant in making decisions about higher education. Prospective students and stakeholders should consider rankings as one aspect among many, and also evaluate other factors such as faculty quality, curriculum, practical training opportunities, industry collaborations and alumni outcomes to make well-informed decisions.
Ultimately, the aim should be to promote continuous improvement in the quality of higher education and empower individuals to make choices that align with their educational and career aspirations.