Far from clarifying the distinctions among America’s best colleges, the new ranking of the top 50 American colleges, published by The Atlantic Monthly, serves a better purpose: exposing the limitations of college ranking systems and the influence their underlying criteria have on the results. According to the magazine, the goal of its new rankings is to show just how meaningless rankings can be.
The Atlantic Monthly’s special college issue provides valuable insight into the rankings already published by Fiske, the Princeton Review and U.S. News & World Report. Each gives a slightly different rank order to the top schools. In The Atlantic Monthly’s rankings, MIT, Princeton and the California Institute of Technology placed first, second and third, while Harvard placed fifth.
Unlike U.S. News & World Report, which ranks schools based on wide-ranging evaluations from peer assessment scores to alumni giving rates, The Atlantic Monthly’s tabulation was determined by only three factors: admission rate, SAT scores and high school class rank. Three simple factors cannot capture the quality of learning or the varied facets of a successful college experience. But it is not necessarily true that the more complex and arguably more subjective U.S. News & World Report methodology yields rankings that are any more appropriate. The immense diversity of colleges and the many intangible features of a successful college experience are not readily quantified, no matter how many factors are ostensibly “weighed” by the rankings’ authors. The lesson that The Atlantic Monthly wisely drives home is that when reading any college ranking, prospective students must strive to understand for themselves the characteristics that make each college unique when deciding which is the best fit for their needs.
While less helpful for drawing comparisons between schools, the rankings can provide a useful compendium of information about individual colleges that might be difficult to deduce from glossy institutional brochures. But even if the rankings are useful sources of information, it is meaningless to rank all colleges together. It would be more informative for comparative purposes to group similar colleges together. Liberal arts colleges are so different from engineering and science colleges that comparing the two types using the same criteria is quite inadequate. Creating separate groups for state universities and private institutions would also generate a meaningful distinction.
For better or worse, applicants and alumni donors use rankings as indicators of college quality, giving colleges a powerful incentive to improve along the criteria the rankings select. But the rankings only lead to improvements for students and colleges if their criteria are meaningful. Often they are not, to the detriment of educational quality. It is unfortunate when colleges pump resources into football programs at the expense of educational quality in an effort to boost alumni giving rates, a ranking criterion for U.S. News & World Report. An excessive focus on standardized test scores such as the SAT to boost selectivity rank in college admissions diminishes the importance of qualities in applicants, such as leadership, that are difficult to quantify. To improve faculty resources scores, a college may choose to heavily favor a history of prolific publications over a dedication to teaching in its decisions to grant tenure. Rather than focusing on measures of strength based on ranking criteria, colleges must dedicate their energies to the genuine improvement of educational quality.
A more accurate indicator of educational quality, as suggested by The Atlantic, may be the National Survey of Student Engagement (NSSE), distributed to first-years and seniors across the country. It questions students directly about their academic satisfaction, extracurricular involvement and engagement in campus life. These questions specifically address issues such as study abroad, interaction with professors and campus diversity. More than 730 colleges are represented in the current NSSE database. Last year, small portions of NSSE data were published on the U.S. News website under the title “Seniors Have Their Say.”
The most prestigious schools, which perform well under the current ranking system, have to date refused to release NSSE data to the public. Harvard should take the lead in releasing this information to its prospective students. Doing so would put pressure on other schools to follow suit, and if students came to rely on NSSE surveys to learn about school quality, colleges would have an incentive to improve in more meaningful ways.