The Conversation
Milad Haghani, Associate Professor and Principal Fellow in Urban Risk and Resilience, The University of Melbourne

How do scientists estimate crowd sizes at public events – and why are they often disputed?

Last Sunday, tens of thousands marched across the Sydney Harbour Bridge in support of Gaza. But exactly how many people were there depends on whom you ask.

Police put it at about 90,000. Organisers claimed up to 300,000. Other reports and expert estimates landed somewhere in between.

Why are these accounts so different, and how hard is it, really, to estimate the size of a crowd?

Why people care about crowd sizes

It’s far from the first time crowd numbers have been a flashpoint.

The most infamous modern example is US President Donald Trump’s 2017 inauguration, where aerial photos and transit data clashed with claims from White House officials and Trump himself, sparking controversy.

In New Zealand last year, the Hīkoi march to parliament triggered a similar debate as vastly different estimates circulated.

Crowd size matters for several reasons – from symbolic significance to safety implications.

It can convey the level of support for a social or political cause, or signal the scale and significance of a spiritual gathering or street party. Regimes and revolutions alike can use crowd sizes as a propaganda tool.

That’s why there are often strong incentives to inflate – or deflate – the numbers.

But crowd estimates are also important for safety reasons. Underestimating can leave infrastructure and logistics unprepared, sometimes leading to catastrophic crowd crushes. Overestimating can result in unnecessary restrictions, closures, or even cancellations.

How are crowd sizes estimated?

There’s no single way to count crowds. Experts choose from a toolbox of methods, each suited to different settings and each with its own blind spots.

Manual visual estimation

The oldest method is also the simplest: estimate the density (people per square metre) in a few sample patches of the crowd (often inferred from aerial images), then multiply by the total area occupied. In theory, straightforward; in practice, riddled with problems.

Human observers (even experienced ones) struggle to distinguish between, say, two, three or four people per square metre. Crowd density is rarely a round number, yet observers tend to read whole numbers from the scene.

Plus, crowd density is rarely uniform: people bunch near focal points and leave gaps elsewhere. So, extrapolating from a sample can lead to very misleading estimates.

Errors also creep in from misjudging the physical size of the sampled area or overlooking how much of the total space is actually usable. These misjudgements can lead to completely different counts.
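The arithmetic behind this method is simple, which is exactly why small misjudgements compound. A minimal sketch, with all figures purely hypothetical, shows how misreading density by just one person per square metre swings the total:

```python
# Manual crowd estimation: density x area. All numbers are hypothetical.
area_m2 = 30_000          # total space occupied by the crowd
usable_fraction = 0.8     # share of that space people can actually stand in
sampled_density = 2.5     # people per square metre, judged from aerial patches

estimate = area_m2 * usable_fraction * sampled_density
print(f"Estimate: {estimate:,.0f} people")  # Estimate: 60,000 people

# Misjudging density by one person per square metre either way:
low = area_m2 * usable_fraction * (sampled_density - 1)
high = area_m2 * usable_fraction * (sampled_density + 1)
print(f"Range: {low:,.0f} to {high:,.0f}")  # Range: 36,000 to 84,000
```

The same multiplication that makes the method quick also means every input error is scaled up by the full area of the event.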

Computer vision

CCTV, aerial photos and drone imagery allow automated counting using image processing techniques. These range from texture-based methods that work best for low- to medium-density crowds, to object detection models that locate individual heads or bodies.

These can be quite accurate in open spaces with clear sight lines. But shadows, poor lighting, poor weather conditions, obstacles and occlusion in dense gatherings can compromise their accuracy.

Wireless sensing

Crowd sizes can also be inferred using unique wifi or Bluetooth signals from smartphones, or mobile tower activity (how many phones were making calls, texts and using data in an area). These methods work well for large, dispersed, or moving crowds, and can be particularly useful where aerial imagery is impractical.

But they depend on people carrying devices, having them switched on, and having location functions enabled.
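In practice, the raw device count has to be scaled up by an assumed detection rate, and that assumption dominates the result. A hedged sketch, with hypothetical figures:

```python
# Wireless sensing sketch: scale unique device detections up to a crowd
# estimate using an assumed detection rate. All figures are hypothetical.
unique_devices = 42_000   # unique wifi/Bluetooth devices observed
detection_rate = 0.6      # assumed fraction of attendees whose phone is
                          # present, switched on, and detectable

crowd_estimate = unique_devices / detection_rate
print(f"Estimated crowd: {crowd_estimate:,.0f}")  # roughly 70,000

# The detection rate itself is uncertain, so report a range of scenarios:
for rate in (0.5, 0.6, 0.7):
    print(f"rate={rate}: {unique_devices / rate:,.0f}")
```

Shifting the assumed rate from 0.5 to 0.7 moves the estimate from about 84,000 down to about 60,000, which is why these counts are best calibrated against another method.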

Artificial intelligence (AI) and deep learning

Modern crowd counting systems often use AI, especially a type called convolutional neural networks. These systems create “density maps” from images, showing where people are and how tightly packed they are.

They can also correct for perspective – for example, recognising that people farther from the camera look smaller – and adjust for changes in density across the scene. These AI models need training on the right kind of data.
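The key property of these density maps is that the values in the map sum to the estimated head count. A minimal sketch using synthetic data in place of a real network's output:

```python
import numpy as np

# Density-map counting sketch: a CNN-style counter outputs a per-cell
# "density map" whose values sum to the estimated head count. Here the
# map is faked with synthetic values rather than produced by a network.
density_map = np.zeros((60, 80))        # one cell per image region

# Simulate a tightly packed patch near a focal point...
density_map[20:30, 30:50] = 0.5
# ...and sparser coverage elsewhere in the frame.
density_map[40:50, 10:70] += 0.05

estimated_count = density_map.sum()
print(f"Estimated people in frame: {estimated_count:.0f}")  # 130
```

Because the count is a sum over the whole map, the model can handle uneven density directly, rather than assuming one uniform figure for the entire scene.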

None of the methods are foolproof

The most accurate systems combine methods – for example, by calibrating wifi data with computer vision at critical points. This can significantly reduce error, compared to wifi alone.

But several factors further complicate things.

Setting can change everything. Open spaces are easier to measure than narrow streets. Static crowds are simpler than moving marches. Dense, uniform gatherings are easier to estimate than patchy ones.

Timing plays a big role. Numbers change as people arrive, leave, or move between spaces. Two counts of the same event just half an hour apart can diverge significantly.

Technical limits can skew counts: shadows, poor lighting, perspective distortion, bad weather, and people holding large banners, flags or umbrellas that hide heads from view.

Psychological bias can affect human observers. We naturally tend to focus on the most animated and tightly packed parts of a crowd. This “crowd emotion amplification effect” makes the gathering feel larger and more charged than it really is. People tend to overestimate the size or emotional intensity of the crowd, especially when they are part of it.

The bottom line on crowd sizes

There’s rarely a single, correct crowd size estimate; at best, we should expect a range.

Discrepancies are not necessarily a sign of bad faith. They often reflect the limits of the data and the methods used. The most reliable counts come from matching the method to the event’s setting and being transparent about how the figure was reached.

In the end, crowd size estimation is part science, part art. Knowing its limits should help us treat estimates with healthy scepticism and recognise that differences in reporting are not necessarily a sign of dishonesty.

Next time you’re in a stadium, try guessing the attendance before the official number flashes up on the scoreboard. Chances are, your estimate will be off. That gap is a reminder that crowd size controversies are as much about human perception as they are about motives.

And if you’re a regular, try doing it every time; over time, your guesses will likely get more accurate. That’s what “training a model” looks like – in this case, the model being your own brain.


Ruggiero Lovreglio receives funding from Royal Society Te Apārangi (NZ) and National Institute of Standards and Technology (USA).

Milad Haghani does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
