The Times Higher Education World University Rankings 2010-11 were developed in concert with our new rankings data provider, Thomson Reuters, with input from more than 50 leading figures in the sector from 15 countries across every continent, and through 10 months of extensive consultation, to represent the most accurate picture of global higher education. This ranking of the top universities across the globe employs 13 separate performance indicators designed to capture the full range of university activities, from teaching to research to knowledge transfer. These 13 elements are brought together into five headline categories:
- Teaching: the learning environment (worth 30 per cent of the overall ranking score)
- Research: volume, income and reputation (worth 30 per cent)
- Citations: research influence (worth 32.5 per cent)
- Industry income: innovation (worth 2.5 per cent)
- International mix: staff and students (worth 5 per cent)
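To make the arithmetic of the weighting scheme concrete, the short sketch below combines five category scores into an overall figure using the published weights. The category scores shown are invented for illustration, and the sketch deliberately ignores the Z-scoring step described later in the methodology; it is not THE's or Thomson Reuters' actual calculation code.

```python
# Published 2010-11 headline weights, expressed as fractions of the overall score.
WEIGHTS = {
    "teaching": 0.30,            # the learning environment
    "research": 0.30,            # volume, income and reputation
    "citations": 0.325,          # research influence
    "industry_income": 0.025,    # innovation
    "international_mix": 0.05,   # staff and students
}

def overall_score(category_scores: dict) -> float:
    """Weighted sum of the five headline category scores (each on a 0-100 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # the weights cover 100 per cent
    return sum(WEIGHTS[name] * category_scores[name] for name in WEIGHTS)

# Hypothetical institution, purely for illustration.
example = {
    "teaching": 75.0,
    "research": 82.0,
    "citations": 90.0,
    "industry_income": 60.0,
    "international_mix": 70.0,
}
print(overall_score(example))  # roughly 81.35
```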

RANKINGS: THE METHODOLOGY

WEIGHTING SCHEME FOR RANKING SCORES: THE QS WORLD UNIVERSITY RANKINGS AND QS.COM ASIAN UNIVERSITY RANKINGS

Criteria | The QS World University Rankings indicator (weight) | QS.com Asian University Rankings indicator (weight)
Research quality | Global academic peer review (40%); Citations per faculty (20%) | Asian academic peer review, by academics with knowledge of research in Asian institutions (30%); Papers per faculty (15%); Citations per paper (15%)
Teaching quality | Student-faculty ratio (20%) | Student-faculty ratio (20%)
Graduate employability | Global employer review (10%) | Asian employer review, by employers with experience of recruiting from Asian institutions (10%)
Internationalisation | International faculty (5%); International students (5%) | International faculty (2.5%); International students (2.5%); Inbound exchange students (2.5%); Outbound exchange students (2.5%)

Teaching: the learning environment (30%)

This broad category employs five separate indicators designed to provide a clear sense of the teaching and learning environment of each institution, from both the student and the academic perspective.

The flagship indicator for this category uses the results of a reputational survey on teaching. Thomson Reuters carried out its Academic Reputation Survey, a worldwide poll of experienced scholars that is statistically representative of global higher education's geographical and subject mix, in spring 2010. It examined the perceived prestige of institutions in both research and teaching, and there were 13,388 responses. The results of the survey with regard to teaching make up 50 per cent of the score in the broad teaching environment category, and 15 per cent of the overall ranking score.

The category also measures the number of undergraduates admitted by an institution, scaled against the number of academic staff. Essentially a form of staff-to-student ratio, this measure is employed as a proxy for teaching quality, suggesting that where there is a low ratio of students to staff, the former will get the personal attention they require from the institution's faculty. As this measure serves as only a crude proxy, and our consultation exposed some concerns about its use, it receives a relatively low weighting: it is worth 15 per cent of the teaching category and just 4.5 per cent of the overall ranking score, which contrasts with the 20 per cent weighting the measure was given in our previous rankings.

The teaching category also examines the ratio of PhD to bachelor's degrees awarded by each institution. We believe that institutions with a high density of research students are more knowledge-intensive, and that the presence of an active postgraduate community is a marker of a research-led teaching environment valued by undergraduates and postgraduates alike; undergraduates also tend to value working in a rich environment that includes postgraduates. The PhD-to-bachelor's ratio receives a 7.5 per cent weighting in its category and is worth 2.25 per cent of the overall ranking score.

The category also uses data on the number of PhDs awarded by an institution, scaled against its size as measured by the number of academic staff. As well as giving a sense of how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students suggests teaching at the highest level, attractive to graduates and good at developing them. Worth 20 per cent of the teaching environment category, this indicator makes up 6 per cent of the overall score.

The final indicator in this category is a simple measure of institutional income scaled against academic staff numbers, adjusted for purchasing-power parity so that all nations compete on a level playing field. This figure indicates the general status of an institution and gives a broad sense of the infrastructure and facilities available to students and staff. It is worth 7.5 per cent of the category and 2.25 per cent overall.
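Each teaching indicator's weight is quoted both as a share of the 30 per cent category and as a slice of the overall score; the two figures are related by simple multiplication. A quick sketch of that arithmetic follows (the indicator labels are paraphrases for illustration, not official field names):

```python
TEACHING_CATEGORY_WEIGHT = 0.30  # teaching is worth 30% of the overall score

# Each indicator's share of the teaching category, as quoted above.
teaching_indicators = {
    "reputational_survey_teaching": 0.50,       # -> 15.0% overall
    "phds_awarded_per_staff": 0.20,             # -> 6.0% overall
    "undergraduates_admitted_per_staff": 0.15,  # -> 4.5% overall
    "phd_to_bachelors_ratio": 0.075,            # -> 2.25% overall
    "institutional_income_per_staff": 0.075,    # -> 2.25% overall
}

# The shares sum to 1.0, so together they account for the full category weight.
assert abs(sum(teaching_indicators.values()) - 1.0) < 1e-9

for name, share in teaching_indicators.items():
    print(f"{name}: {share * TEACHING_CATEGORY_WEIGHT * 100:.2f}% of the overall score")
```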

Research: volume, income and reputation (30%)

As with the teaching category, the most prominent indicator in research (volume, income and reputation) is based on the results of our reputational survey. Consultation with our expert advisers suggested that confidence in this indicator was higher than in the teaching reputational survey, as academics are likely to be more knowledgeable about the reputation of research departments in their specialist fields. For this reason it is given a higher weighting: it is worth 65 per cent of this category and 19.5 per cent of the overall score, although the weight placed on reputation has been dramatically scaled back after lengthy consultation.

Some 17.5 per cent of this category, and 5.25 per cent of the overall ranking, is determined by a university's research income, scaled against staff numbers and normalised for purchasing-power parity. This is a controversial measure, as it can be influenced by national policy and economic circumstances. But research income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure.

The research environment category also includes a simple measure of research volume scaled against staff numbers: we count the number of papers published in the academic journals indexed by Thomson Reuters per staff member, giving an idea of an institution's ability to get papers published in quality peer-reviewed journals. This indicator is worth 15 per cent of the category and 4.5 per cent of the overall score.

Some 2.5 per cent of this category, worth just 0.75 per cent overall, is a measure of public research income against an institution's total research income. This has a low weighting to reflect concerns about the comparability of self-reported data between countries.

Citations: research influence (32.5%)

A university's research influence, as measured by the number of times its published work is cited by academics, is the largest of the broad rankings categories, worth just under a third of the overall score. This weighting reflects the relatively high level of confidence the global academic community has in the indicator as a proxy for research quality. The use of citations to indicate quality is controversial; nevertheless, there is clear evidence of a strong correlation between citation counts and research performance, and citations are set to be used, for example, in distributing more than £1.5 billion a year in UK research funding under the forthcoming research excellence framework.

The data are drawn from the 12,000 academic journals indexed by Thomson Reuters' Web of Science database, with citations aggregated over a five-year period from 2004 to 2008 (there has been insufficient time for the accumulation of such data for articles published in 2009 and 2010). The figures are collected for every university. Unlike the approach employed by the old rankings system, all the citations impact data are normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally very high citation counts will no longer gain an unfair advantage.

Industry income: innovation (2.5%)

This category is designed to cover an institution's knowledge-transfer activity. Just a single indicator determines it: a simple figure giving an institution's research income from industry, scaled against the number of academic staff. It suggests the extent to which users are prepared to pay for research, and a university's ability to attract funding in the commercial marketplace, both significant indicators of quality. However, because the figures provided by institutions for this indicator were patchy, it has been given a relatively low weighting for the 2010-11 tables: it is worth just 2.5 per cent of the overall ranking score.

International mix: staff and students (5%)

Our final category looks at diversity on campus, a sign of how global an institution is in its outlook. The ability of a university to attract the very best staff from across the world is key to global success: the market for academic and administrative jobs is international in scope, and this indicator suggests global competitiveness. So in this category we give a 60 per cent weighting to the ratio of international to domestic staff, making up 3 per cent of the overall score.

The other indicator in this category is based on the ratio of international to domestic students. Again, this is a sign of an institution's global competitiveness and its commitment to globalisation. But as with the staff indicator it is a relatively crude proxy, and our consultation revealed concerns about the inability to gauge the quality of students and the problems caused by geography and tuition-fee regimes. Because geographical considerations can influence performance, the weighting has been reduced from the 5 per cent used under our old rankings system: the measure receives a 40 per cent weighting and is worth 2 per cent of the final score.

Citation impact: it's all relative

Citations are widely recognised as a strong indicator of the significance and relevance, that is, the impact, of a piece of research. However, citation data must be used with care, as citation rates can vary between subjects and time periods. For example, papers in the life sciences tend to be cited more frequently than those published in the social sciences.

The rankings this year use normalised citation impact, where the citations to each paper are compared with the average number of citations received by all papers published in the same field and year. So a paper with a relative citation impact of 2.0 is cited twice as frequently as the average for similar papers. The data were extracted from the Thomson Reuters resource known as Web of Science, the largest and most comprehensive database of research citations available. Its authoritative and multidisciplinary content covers more than 11,600 of the highest-impact journals worldwide. The benchmarking exercise is carried out at an exact level across 251 subject areas for each year in the period 2004 to 2008.

There are occasions where a groundbreaking academic paper is so influential as to drive citation counts to extreme levels, receiving thousands of citations. An institution that contributes to one of these papers will receive a significant and noticeable boost to its citation impact, and this reflects such institutions' contribution to globally significant research projects. However, for institutions that produce few papers, the relative citation impact may be significantly influenced by one or two highly cited papers and therefore does not accurately reflect their typical performance. For this reason, institutions publishing fewer than 50 papers a year have been excluded from the rankings.

To calculate the overall ranking score, "Z-scores" were created for all datasets. This standardises the different data types on a common scale and allows fair comparisons between the different types of data, which is essential when combining diverse information into a single ranking. Each data point is given a score based on its distance from the average (mean) of the entire dataset, where the scale is the standard deviation of the dataset. The Z-score is then turned into a "cumulative probability score" to give the final totals. A cumulative probability score indicates, for any real value, the probability that a normally distributed random value will fall below that point. For example, if University X has a score of 98, then a random institution from the same distribution of data will fall below this university 98 per cent of the time.
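As a concrete illustration of the two calculations described in this box, the sketch below normalises a citation count against its field-and-year average, then converts an indicator value into a Z-score and a cumulative probability score. The function names, sample scores and worked figures are assumptions for illustration, not Thomson Reuters' code or data.

```python
import math
from statistics import mean, pstdev

def relative_citation_impact(citations: float, field_year_average: float) -> float:
    """Citations to a paper divided by the average citations received by all
    papers published in the same subject area and year."""
    return citations / field_year_average

def z_score(value: float, dataset: list) -> float:
    """Distance of a value from the dataset mean, measured in standard deviations."""
    return (value - mean(dataset)) / pstdev(dataset)

def cumulative_probability(z: float) -> float:
    """Probability that a normally distributed random value falls below z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A paper cited 24 times in a field and year whose average is 12 citations
# has a relative citation impact of 2.0: cited twice as often as its peers.
print(relative_citation_impact(24, 12))  # 2.0

# Hypothetical indicator scores for a handful of institutions.
scores = [55.0, 61.0, 64.0, 70.0, 78.0, 83.0, 90.0]
z = z_score(90.0, scores)
print(round(cumulative_probability(z), 2))  # about 0.94: a random value from this
# distribution falls below a score of 90 roughly 94 per cent of the time
```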

The overall top 200 ranking and the six tables showing the top 50 institutions by subject were based on criteria and weightings that were carefully selected after extensive consultation. All of them draw on the exceptionally rich data set behind the THE rankings. We recognise that different people have different interests and priorities, so to allow everyone to make the most of our data and gain a personalised view of global higher education, the tables on this site can be fully manipulated and sorted. With this feature, users may rank institutions by their performance in any one of the five broad headline categories to create bespoke tables, or make regional comparisons via our area analyses. The rankings are designed to be a useful and rigorous tool for the global higher education community.

Data sign-off

A very important principle of the new Times Higher Education World University Rankings, in the first year of a brand new system, is that every university we list has actively cooperated with the exercise and signed off its data. Each institution listed in these rankings opted in and verified its institutional data, and we are delighted that the vast majority of universities around the world have embraced the exercise and actively participated. Unfortunately, after repeated invitations by email and telephone from Thomson Reuters to participate in the Global Institutional Profiles Project and the World University Rankings, some institutions did not respond and some declined to participate; they therefore could not be included. Where institutions did not provide data in a particular area (which occurred in only some very low-weighted areas), the column has been left blank.

Exclusions

Universities were excluded from the World University Rankings tables if they do not teach undergraduates, if they teach only a single narrow subject, or if their research output amounts to less than 50 articles per year.
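The exclusion rules above boil down to a simple eligibility test. The sketch below is one hypothetical way to express them; the field names and the example institution are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    teaches_undergraduates: bool
    single_narrow_subject: bool
    articles_per_year: int  # indexed research output per year

def eligible_for_ranking(inst: Institution) -> bool:
    """Apply the published exclusion rules: an institution must teach
    undergraduates, span more than a single narrow subject and publish
    at least 50 articles a year."""
    return (inst.teaches_undergraduates
            and not inst.single_narrow_subject
            and inst.articles_per_year >= 50)

# A hypothetical graduate-only institute would be excluded despite strong output.
print(eligible_for_ranking(Institution("Example Graduate Institute", False, False, 420)))  # False
```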

Reputational surveys

A worldwide Academic Reputation Survey was carried out during spring 2010. Some 13,388 responses were gathered across all regions and subject areas. The results make up a total of 34.5 per cent of the overall ranking score (15 per cent for teaching and 19.5 per cent for research).