Methodology behind the Guardian University Guide 2022
We use nine measures of performance, covering all stages of the student life cycle, to put together a league table for 54 subjects. We regard each provider of a subject as a department and ask each provider to tell us which of their students count within each department. Our intention is to indicate how likely each department is to deliver a positive all-round experience to future students, and to assess this we look at how past students in the department have fared. We quantify the resources and staff contact dedicated to past students, and we look at the standards of entry and the likelihood that students will be supported to continue their studies, before considering how likely students are to be satisfied, to exceed expectations of success and to have positive outcomes after completing the course. Bringing these measures together, we get an overall score for each department and rank departments against it.
For comparability, the data we use focuses on full-time first-degree students. For those prospective undergraduates who do not know which subject they wish to study, but who still want to know where institutions rank in relation to one another, the Guardian scores have been averaged for each institution across all subjects to generate an institution-level table.
What’s changed for 2022?
The structure and methodology of the rankings have remained broadly constant since 2008, but there have been some seismic changes to the data used in this year’s guide, necessitating some adjustments to our methodology.
The National Student Survey
Although analysis by the OfS concluded that there was no tangible effect of the pandemic on the survey results of 2020, it is clear that the 2021 results were hugely affected, with almost every provider and almost every question showing a decline in levels of satisfaction.
While we could have disregarded these results as anomalous – after all they reflect a set of circumstances that we would hope will not be experienced by prospective undergraduates – we felt that institutions that scored well despite the sector-wide drop in satisfaction were displaying a resilience that could benefit future cohorts. Therefore we opted to use the results, aggregated with those of 2020.
Our aggregation rules required that 2021 results were available and that the total number of relevant respondents across the two years was 23 or more. We paid additional attention to departments that had few respondents in 2021, or results for 2021 but none for 2020, and excluded the results if there was any indication of a potentially unfair representation.
Before the results were even known there was a major policy shift behind the NSS, resulting in a review of the survey and the suspension of the obligation for providers to promote the survey to their final-year students. Together with the major disruption to results, this led us to reduce the weighting of the NSS metrics from 25% (split 10% ‘the teaching on my course’, 10% ‘assessment & feedback’ and 5% ‘overall satisfaction’) to 20% (8%:8%:4%).
Graduate Outcomes
The 2021 edition of the University Guide was the first publication to represent the career prospects of the 2017/18 graduating cohort, based on the new Graduate Outcomes survey. We would normally have expected to use results for the 2018/19 cohort in the 2022 edition, but two factors have prevented this.
Firstly, the delayed availability of the survey results was not compatible with our publication timescales. Secondly, the bulk of graduates surveyed will have been referring to their occupation in September 2020. With the pandemic profoundly affecting employment, in ways that varied across regions and industries, we felt that this data could not reliably represent how well a department prepares its students for the world of work.
Continuation rates
Continuation rates were introduced for the 2019 edition of the University Guide and have had a lower weighting than other metrics. Since its introduction, the metric has proved a reliable indicator of how providers manage the risk of students dropping out during their first year, and with the reduction in weighting afforded to NSS results, the continuation rate was the obvious metric to pick up the slack. For all non-medical subjects the weighting has increased from 10% to 15%.
For the medical subjects of Medicine, Dentistry & Veterinary science, the continuation metric was previously displayed but not weighted. Value added scores had a 5% weighting, but the metric was not well suited to these subjects because they tend not to classify degree awards. This 5% weighting has been transferred to the continuation metric.
The continuation metric is not perfect for the medical subjects either, as the vast majority of students starting these courses complete their first year. This leaves a very tight distribution of scores near the 100% mark and raises the prospect that small variations caused by one or two students’ departures result in a very negative score.
To counter this we have introduced minimum standard deviations for each metric, and these affect other subjects too. To illustrate, a provider with a career prospects score for nursing of 90% will still be viewed negatively, as this sits well below the sector mean of 97.8% (standard deviation 1.7%). However, rather than treating this as approximately 4.6 standard deviations beneath the mean, a minimum standard deviation of 5% restricts the judgement to roughly 1.6 standard deviations.
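The effect of this floor can be sketched as follows; the function name is ours, and the figures follow the nursing example above.

```python
# Illustrative sketch of the minimum-standard-deviation floor described above.

def standardised_score(value, mean, sd, min_sd=5.0):
    """Return a z-score, flooring the observed standard deviation at min_sd
    (all figures in percentage points)."""
    return (value - mean) / max(sd, min_sd)

# Career prospects for nursing: sector mean 97.8%, observed SD 1.7%.
raw = (90.0 - 97.8) / 1.7                       # about -4.6 standard deviations
floored = standardised_score(90.0, 97.8, 1.7)   # about -1.6 with the 5% floor
```

Metrics whose observed standard deviation already exceeds the floor are unaffected.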
Adjustments have also been made to the standardisation process to curtail the effects of disparities between the UCAS tariff allocated to qualifications awarded in the different UK nations.
In particular, the category of Scottish Highers/Advanced Highers carries an average tariff that is 52 points higher for young undergraduates than the average tariff of all qualifications. This means that the average student who took Scottish Highers or Advanced Highers as their highest qualification on entry to HE is likely to have a tariff over 40% higher than students who entered with a different type of qualification. Available data does not differentiate between the standard and advanced variants, which is likely to be the source of the higher tariffs.
The effect of this higher tariff has been increasing in recent years, and the adjustment in methodology restricts further advantage and begins to counteract the benefit. For each department, we observe the proportion of entrants who had Scottish Highers/Advanced Highers as their highest qualification on entry. The sector average against which the department’s average tariff is standardised is then raised by 52 points, multiplied by this proportion and by a discount factor. The discount factor was set to a third to limit disruption, and in anticipation of the unpredictable ways in which the pandemic will affect the average tariff scores of the different UK nations.
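A minimal sketch of this adjustment; the function and variable names are our own, and only the 52-point gap and the one-third discount factor come from the text.

```python
TARIFF_GAP = 52          # average tariff advantage of Scottish Highers/Advanced Highers
DISCOUNT_FACTOR = 1 / 3  # set to a third to limit disruption

def adjusted_sector_mean(sector_mean_tariff, prop_scottish_highers):
    """Raise the sector mean used for standardisation in line with the
    department's share of Scottish Higher/Advanced Higher entrants."""
    return sector_mean_tariff + prop_scottish_highers * TARIFF_GAP * DISCOUNT_FACTOR
```

For example, a department where 60% of entrants hold these qualifications would be standardised against a sector mean raised by 0.6 × 52 / 3, roughly 10.4 points.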
Although the 54 subjects we produce rankings for have not changed, the building blocks that underpin them have. This year has seen the transition from JACS codes to HECoS codes.
What are the metrics?
Average entry tariff
This measure seeks to approximate the aptitude of the fellow students that a prospective student can anticipate, and reports the observed average grades of students joining the department – not the advertised conditions of admission to the course. Average tariffs are determined by taking the total tariff points of first-year, first-degree, full-time entrants who were aged under 21 at the start of their course, provided the qualifications they entered with could all be expressed using the tariff system devised by UCAS. There must be more than seven students in any meaningful average, and only students entering year 1 of a course (not a foundation year) with certain types of qualification are included. Departments dominated by mature entrants are not considered appropriate for this statistic, because the age filter would capture and represent the entry tariff of only a minority of students.
This metric contributes 15% to the total score of a department, and refers to those who entered the department in 2019/20.
Student-staff ratios
Student-staff ratios seek to approximate the level of staff contact that a student can expect by dividing the volume of students taking modules in a subject by the volume of staff available to teach it. A low ratio is thus treated positively – it indicates that more staff contact can be anticipated.
Staff and students are reported on a ‘full time equivalent’ basis and research-only staff are excluded from the staff volume. Students on placement or on a course that is franchised to another provider have their volume discounted accordingly.
At least 28 students and three staff (both FTE) must be present in an SSR calculation using 2019/20 data alone. Smaller departments that had at least seven student and two staff FTE in 2019/20, and at least 30 student FTE in total across 2018/19 and 2019/20, have a two-year average calculated.
This metric contributes 15% to the total score of a department. It is released at HESA cost centre level, and we map each cost centre to one or more of our subjects.
Expenditure per student
In order to approximate the level of resources that a student could expect to have dedicated to their provision, we look at the total expenditure in each subject area and divide it by the volume of students taking the subject. We exclude academic staff costs, as the benefits of high staff volumes are already captured by the student-staff ratios, but recognise that many costs of delivery are centralised: we therefore add the amount each provider has spent per student on academic services, such as libraries and computing facilities, over the past two years.
This metric is expressed as points out of 10 and contributes 5% to the total score of a department.
Continuation
Taking a degree-level course is a positive experience for most students, but it is not suited to everybody and some students struggle and discontinue their studies. Providers can do a lot to support their students – they might promote engagement with studies and with the broader higher education experience – and this measure captures how successful each department is in achieving this. We look at the proportion of students who continue their studies beyond the first year and measure the extent to which this exceeds expectations based on entry qualifications.
To achieve this, we take all first-year students on full-time first-degree courses that are scheduled to take longer than a year to complete and look ahead to 1 December in the following academic year to observe the proportion who are still active in higher education. This proportion is viewed positively, regardless of whether the student has switched course, transferred to a different provider, or been required to repeat their first year – only those who become inactive in the UK’s HE system are counted negatively.
To take the effect of entry qualifications into account we create an index score for each student who has a positive outcome, using their expectation of continuation up to a maximum of 97%. To calculate the score there must have been 35 entrants in the most recent cohort and 65 across the last 2 or 3 years.
This index score, aggregated across the last 2 or 3 years, contributes 15% to the total score of non-medical departments. However, it is the percentage score – also averaged over 2 or 3 years – that is displayed.
The National Student Survey asks final-year students the extent to which they agree with 27 positive statements about their academic experience of the course and the support that they received. Responses are on a 5-point Likert scale (1: definitely disagree to 5: definitely agree), and we take the responses from full-time first-degree students registered at the provider to produce two statistics: a satisfaction rate and an average response. The satisfaction rate looks across the questions concerned and reports the proportion of responses that were ‘definitely agree’ or ‘mostly agree’, while the average response gives the average Likert score between 1 and 5 observed in the responses to those questions.
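The two statistics can be sketched as follows; this is a hypothetical helper working on individual scores, whereas the published figures are built from aggregated response counts.

```python
def nss_statistics(likert_responses):
    """likert_responses: individual scores from 1 (definitely disagree)
    to 5 (definitely agree) across the questions concerned."""
    agree = sum(1 for r in likert_responses if r >= 4)  # 'mostly' or 'definitely' agree
    satisfaction_rate = agree / len(likert_responses)
    average_response = sum(likert_responses) / len(likert_responses)
    return satisfaction_rate, average_response
```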
To assess the teaching quality that a student can expect to experience we took responses from the 2020 and 2021 NSS surveys and aggregated them for the following questions:
Staff are good at explaining things
Staff have made the subject interesting
The course is intellectually stimulating
My course has challenged me to achieve my best work
The overall satisfaction rate for each provider is displayed, and the average response is used with an 8% weighting.
To assess the likelihood that a student will be satisfied with assessment procedures and the feedback they receive we took responses from the 2020 and 2021 NSS surveys and aggregated them for the following questions:
The criteria used in marking have been clear in advance
Marking and assessment has been fair
Feedback on my work has been timely
I have received helpful comments on my work
The overall satisfaction rate for each provider is displayed, and the average response is used with an 8% weighting.
To assess the overall satisfaction of students with their courses we aggregated responses from the 2020 and 2021 NSS surveys for the question “overall, I am satisfied with the quality of the course”.
The overall satisfaction rate for each provider is displayed, and the average response is used with a 4% weighting.
Data was released at the CAH (common aggregation hierarchy) levels of aggregation, and we used details of how these map to HECoS (Higher Education Classification of Subjects) to weight and aggregate results for each of our 54 subjects, prioritising results from the most granular level.
Value added
In order to assess the extent to which each department supports its students towards achieving good grades, we use value added scores to track students from enrolment to graduation. A student’s chances of getting a good classification of degree (a 1st or a 2:1) are already affected by the qualifications that they start with, so our scores take this into account and report the extent to which a student exceeded expectations.
Each full-time student is given a probability of achieving a 1st or 2:1, based on the qualifications that they enter with or, if their entry qualifications cannot be classified, the total percentage of good degrees expected for students in their department. If they earn a good degree, they score points that reflect how difficult it was to do so (specifically, the reciprocal of their probability of getting a 1st or 2:1). Otherwise they score zero. Students taking integrated masters courses are always regarded as having a positive outcome.
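A sketch of the per-student scoring rule described above (the function and parameter names are illustrative):

```python
def value_added_points(p_good_degree, got_good_degree, integrated_masters=False):
    """Score a single student: the reciprocal of their probability of a 1st
    or 2:1 if the outcome is positive, zero otherwise."""
    if integrated_masters or got_good_degree:  # integrated masters always count as positive
        return 1.0 / p_good_degree
    return 0.0
```

A student given only a 50% chance of a good degree who achieves one scores 2 points, while a student given a 90% chance scores about 1.1, reflecting the lower difficulty.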
At least 30 students must be in a subject for a meaningful value added score to be calculated using the most recent year of data alone. If there are more than 15 students in the most recent year and the total number across two years reaches 30, then a two-year average is calculated.
This metric is expressed as points out of 10 and contributes 15% to the total score of a department.
Career prospects
Using results from the Graduate Outcomes survey for the graduating cohort of 2017/18, we seek to assess the extent to which students have taken a positive first step in the 15 months after graduation, in the hope that similar patterns will repeat for future cohorts. We treat as positive those students who enter graduate-level occupations (approximated by SOC groups 1-3: professional, managerial and technical occupations) and those who go on to further study at a professional or HE level.
Students report one or more activities, and give more detail for each. If students are self-employed or working for an employer, we treat them as positive if the occupation is in SOC groups 1-3; if they have either finished a course or are presently taking one, we look at its level and treat them positively where appropriate. Students who have no activity that is regarded positively, but who either reported that they were unable to work or only partially completed the survey, leaving details of an activity incomplete, are excluded from the metric.
The metric refers only to students who graduated from full-time first-degree courses and we only use results if more than 20 students in a department responded. If between 20 and 22.5 responded we use the result but round or obscure the exact figure for data protection reasons.
We avoid averaging results across years for this metric, because the national economic environment that leavers find themselves in can have such a big effect on employment. This year, the profound differences between DLHE and the Graduate Outcomes survey mean that we are in any case not mixing results across years.
This metric is worth 15% of the total score in all the non-medical subjects.
How do we use the metric results?
First of all, we determine whether a department has enough data to support a ranking. Often individual metrics are missing, and we seek to keep the department in the rankings where we can. An institution can only be included in the table if the weightings of any missing indicators add up to 40% or less, and if the institution’s relevant department teaches at least 35 full-time first-degree students. There must also be at least 25 students (FTE) in the relevant cost centre.
For those institutions that qualify for inclusion in the subject table, each score is standardised against the mean and standard deviation of the scores achieved by all qualifying institutions, producing standardised scores (S-scores). The standardised score for student-staff ratios is negated, to reflect that low ratios are regarded as better. We cap certain S-scores – extremely high NSS, expenditure and SSR figures – at three standard deviations, to prevent a valid but extreme value from exerting an influence that far exceeds that of all other measures.
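The standardisation, negation and capping can be sketched as follows. This is a simplified illustration under our own naming, and it omits the further adjustments (minimum standard deviations, tariff corrections) that the guide also applies.

```python
from statistics import mean, stdev

def s_scores(values, negate=False, cap=3.0):
    """Standardise one metric across qualifying departments. negate=True is
    used for student-staff ratios, where a low value is good; extremely high
    scores are capped at three standard deviations."""
    mu, sigma = mean(values), stdev(values)
    scores = []
    for v in values:
        z = (v - mu) / sigma
        if negate:
            z = -z
        scores.append(min(z, cap))  # cap only the high end, as described above
    return scores
```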
For metrics in subjects where there are very few datapoints we refer to the distribution of scores observed for a higher aggregation of subjects (CAH1). As mentioned earlier, we also set a minimum standard deviation for each metric and make adjustments to the mean tariff that is referenced by departments with students who entered with Scottish Highers or Advanced Highers.
Although nothing is displayed for a missing indicator, we need to plug the gap it leaves in the total score. Our substitution first looks for the corresponding standardised score in the previous year and then, if nothing is available, looks at whether the missing metric is correlated with general performance in that subject. If it is, the department’s performance in the other metrics is used – effectively assuming that it would have performed as well in the missing metric as it did in everything else. If not, the average score achieved by other providers of the subject is used.
Using the weighting attached to each metric, the standardised scores are weighted and totalled to give an overall score (rescaled to 100) against which the departments are ranked.
The institutional ranking
The institutional table ranks institutions according to their performance in the subject tables, but considers two other factors when calculating overall performance.
First, the number of students in a department influences the extent to which that department’s total standardised score contributes to the institution’s overall score. And second, the number of institutions included in the subject table determines the extent to which a department can affect the institutional table.
The number of full-time undergraduates in each subject is expressed as a percentage of the total number of full-time undergraduates counted in subjects for which the institution is included within the subject table. For each subject, the number of institutions included within the table is counted and the natural logarithm of this value is calculated. The total S-Score for each subject – which can be negative or positive – is multiplied by these two values, and the results are summed for all subjects, to give an overall S-score for each institution. Institutions are ranked according to this overall S-score, though the value displayed in the published table is a scaled version of this, that gives the top university 100 points and all the others a smaller (but positive) points tally.
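The aggregation described above can be sketched as follows; the field names are our own.

```python
from math import log

def institution_s_score(subject_results):
    """subject_results: one entry per subject in which the institution
    qualifies, holding its student count, its total departmental S-score,
    and the number of institutions in that subject's table."""
    total_students = sum(s["students"] for s in subject_results)
    score = 0.0
    for s in subject_results:
        share = s["students"] / total_students     # weight by student numbers
        breadth = log(s["institutions_in_table"])  # natural log of table size
        score += s["s_score"] * share * breadth
    return score
```

Note that a strong department in a crowded subject table (large logarithm) moves the institution further than the same performance in a sparsely contested subject.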
Each institution has overall versions of each of the indicators displayed next to its overall score out of 100, but these are crude institutional averages that are otherwise disconnected from the tables and take no account of subject mix. These institutional averages therefore cannot be used to recreate the overall score or ranking position.
The indicators for value added and for expenditure per student are treated slightly differently, because they need to be converted into points out of 10 before being displayed. These indicators therefore do read from the subject-level tables, again using student numbers to create a weighted average.
Institutions that appear in fewer than eight subject tables are not included in the main ranking of universities.
The courses that we list under each department in each subject group are drawn from the KIS database, to which institutions provide regular updates describing the courses that students will be able to apply for in future years.
We have associated each full-time course with one or more subject groups, based on the subject data associated with the courses. We gave institutions the freedom to adjust these associations and to change details of the courses. We include courses that are not at degree level, even though such provision is excluded from the data used to generate scores and rankings. Because of publication timing, this data will be updated in September.