
Welcome to Tech Talk, a weekly column about the things we use and how they work. We try to keep it simple here so everyone can understand how and why the gadget in your hand does what it does.
Things may become a little technical at times, as that's the nature of technology — it can be complex and intricate. Together we can break it all down and make it accessible, though!

You might not care how any of this stuff happens, and that's OK, too. Your tech gadgets are personal and should be fun. You never know, though; you might just learn something...
How does Night Mode work?

Ever wondered how your smartphone, a tiny device you carry in your pocket, can capture stunningly clear photos of a starry night sky or a dimly lit city street? It's a question that's probably crossed your mind while scrolling through your gallery. The magic behind those impressive night shots isn't a single feature, but a symphony of hardware and software working in perfect harmony.
It's a far cry from the blurry, noisy images that were once the hallmark of low-light photography on older phones. Today's "night sight" and "astrophotography" photos are a testament to how far mobile technology has come.

A great night shot begins with the most fundamental component: the camera sensor. Think of the sensor as the camera's eye, capturing light and converting it into an electrical signal. In low-light situations, the challenge is to gather as much light as possible.
A camera sensor can also see a lot more than your eye can, picking up light and color too faint for you to notice. That's why photos of the Northern Lights often look better than the real thing: the camera processes all that information we couldn't see into something we can.
This is why a larger sensor can be important. A bigger sensor has more surface area, allowing it to collect even more photons (light particles) and produce a cleaner, less noisy image.
Alongside a larger sensor, a wider aperture helps, too. The aperture is the opening in the lens that lets light in. A wider aperture, represented by a smaller f-number (e.g., f/1.8 vs. f/2.2), allows more light to reach the sensor in a given amount of time. This combination of a large sensor and a wide aperture is the hardware foundation of a good night camera.
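If you like numbers, here's a quick back-of-the-envelope sketch in Python. The relationship it uses (light gathered scales roughly with sensor area and with one over the f-number squared) is a standard photographic approximation; the sensor sizes are made-up illustrative values, not the specs of any real phone.

```python
# Rough model of the two hardware factors above. Light gathered per unit
# time scales (approximately) with the sensor's surface area and with the
# aperture area, which is proportional to 1 / f_number**2.
# The sensor areas below are illustrative, not real phone specs.

def relative_light(sensor_area_mm2: float, f_number: float) -> float:
    """Rough relative light-gathering score: bigger is better."""
    return sensor_area_mm2 / (f_number ** 2)

older_phone = relative_light(sensor_area_mm2=17.0, f_number=2.2)
newer_phone = relative_light(sensor_area_mm2=44.0, f_number=1.8)

# Aperture alone: f/1.8 vs. f/2.2, the example from the text.
print(f"f/1.8 gathers {(2.2 / 1.8) ** 2:.2f}x the light of f/2.2")
# Sensor and aperture together compound the advantage.
print(f"bigger sensor + wider aperture: {newer_phone / older_phone:.1f}x more light")
```

Run it and you'll see the wider aperture alone gathers about 1.5 times the light, and pairing it with a bigger sensor pushes the advantage to nearly four times.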
Now for the software magic

But hardware is only a small part of the story. The real secret sauce is the software, which uses a technique called computational photography. This is the brain behind the brawn, processing and enhancing the raw data captured by the sensor. The most important technique in this arsenal is image stacking.
When you press the shutter button in night sight mode, your phone doesn't just take one picture. It takes a rapid-fire series of photos, often a dozen or more, in a second or two. These individual photos are all slightly different: some a little underexposed, others slightly overexposed. They all contain a bit of noise, too, but the noise is random and different in each frame. That's the key: because the noise never lands in the same place twice, combining enough frames lets the real scene reinforce itself while the noise averages away.
After capturing these multiple images, the phone's processor gets to work. It aligns all the frames, compensating for any slight hand movement while you held the phone. Then, it uses a sophisticated algorithm to "stack" them on top of each other.
By averaging the pixel values of all the aligned images, the random noise that appeared in each individual photo is effectively canceled out, leaving behind a much cleaner, sharper base image. This is a lot like how a student might get a more accurate grade by averaging the scores from multiple tests instead of just one.
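To see why averaging works so well, here's a minimal Python sketch of the stacking step. It assumes the frames have already been perfectly aligned (real phones do that alignment first, as described above) and models each frame as the true scene plus random sensor noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "true" scene: a smooth gradient we want to recover.
true_scene = np.linspace(0.0, 1.0, 100)
noise_std = 0.2        # per-frame noise level
num_frames = 15        # a typical night-mode burst

# Capture a burst: the same scene, but with different random noise each time.
frames = [true_scene + rng.normal(0.0, noise_std, true_scene.shape)
          for _ in range(num_frames)]

# "Stack" by averaging pixel values. The scene reinforces itself;
# the random noise cancels out.
stacked = np.mean(frames, axis=0)

print(f"noise in a single frame:  {np.std(frames[0] - true_scene):.3f}")
print(f"noise after stacking {num_frames}: {np.std(stacked - true_scene):.3f}")
```

Averaging N frames cuts random noise by roughly the square root of N, so a 15-frame burst leaves only about a quarter of the single-shot noise.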
Off to you, AI

The process doesn't stop there. Once the noise is reduced, the software performs further processing to enhance the final image. It "intelligently" brightens the darker areas of the photo without blowing out the highlights. It also adjusts the contrast and color balance to make the image look more natural and vibrant. This is where different smartphone brands develop their unique "look."
Google's Pixel 9, for example, is known for its strong emphasis on realistic colors and contrast, while other manufacturers like Samsung might opt for a more vibrant, saturated look with the Galaxy S25.
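To give a flavor of what "brightening the darks without blowing out the highlights" means in practice, here's a tiny sketch using a simple gamma-style tone curve. Real phones use far more sophisticated, scene-aware tone mapping, so treat this as the general shape of the idea rather than what any particular brand actually ships.

```python
import numpy as np

def lift_shadows(pixels: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Gamma-style tone curve: lifts dark values a lot, bright values
    only a little, and never pushes anything past 1.0 (pure white)."""
    return np.clip(pixels, 0.0, 1.0) ** (1.0 / gamma)

samples = np.array([0.02, 0.10, 0.50, 0.95])   # deep shadow -> highlight
for before, after in zip(samples, lift_shadows(samples)):
    print(f"{before:.2f} -> {after:.2f}")
# 0.02 -> 0.17   (deep shadow lifted dramatically)
# 0.95 -> 0.98   (highlight barely touched, so it doesn't blow out)
```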

AI also helps the software identify objects in the scene, like faces or landscapes, and apply specific optimizations to make them look their best.
This is why every "night mode" photo you take will look a little bit different. The source data may be the same, but the software has some wiggle room in how it processes it. It may slightly shift the colors, sharpen some areas more than others, and even render the edges of objects slightly differently. The end goal is always an image tuned to look the way we expect a good photo to look.
The next time you marvel at a stunning night shot from your phone, remember what's happening under the hood. It's not just a lucky shot. It's a complex, multi-step process that starts with a capable sensor and lens, captures a burst of images, uses powerful software to stack and average them, and then intelligently enhances the final product.
It's a perfect example of how computational photography has democratized a once-specialized area of photography, allowing everyone to capture beautiful images even when the sun has gone down.
We've come a long way

The journey from grainy, unusable night photos to the clear, vibrant images we see today has been a rapid one. Early smartphone cameras were simply not equipped for low-light conditions. Their small sensors and narrow apertures meant they couldn't gather enough light. To compensate, the camera would increase its sensitivity (a setting called ISO), which made the image brighter but introduced a massive amount of noise, turning the photo into a pixelated mess.
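A tiny sketch makes the problem with cranking ISO concrete. Treating ISO as a simple gain multiplier (a rough simplification of what really happens on a sensor), you can see that amplification makes the image brighter but does nothing for the signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of one dim pixel: a tiny true signal plus random sensor noise.
true_signal = 4.0                       # very little light hit the pixel
readings = true_signal + rng.normal(0.0, 3.0, 1000)

# Raising ISO is (roughly) multiplying every reading by a gain factor.
iso_gain = 16.0
amplified = readings * iso_gain

# Brighter, yes -- but the noise is amplified by exactly the same factor.
print(f"SNR before gain: {true_signal / readings.std():.2f}")
print(f"SNR after gain:  {true_signal * iso_gain / amplified.std():.2f}")
# Both lines print the same number.
```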
The breakthrough came with the realization that instead of trying to capture a perfect image in a single shot, it was better to use software to combine multiple imperfect shots. This idea, which had been used in professional astrophotography for years, was adapted for the mobile world.
Google's introduction of Night Sight on the Pixel 3 in 2018 was a landmark moment, popularizing the concept and pushing other manufacturers to develop their own versions. Apple, Samsung, and the rest followed with night modes of their own, all built on the same computational photography principle paired with specialized hardware and custom software.
These features have fundamentally changed how we think about mobile photography. They've given us the ability to capture moments that were once impossible without a bulky DSLR camera and a tripod.
Tips for Taking Better Night Photos with Your Phone

Even with all this technology, there are a few simple things you can do to get the best possible night shots.
Hold Still: This is the most important tip. Since the phone is taking multiple photos over a few seconds, any movement can cause blur. Find a stable surface to rest your phone on, or simply hold it as still as you possibly can.
Tap to Focus: Before you press the shutter, tap on your subject to tell the camera where to focus and how to expose. This helps the phone's algorithm work with the right part of the scene.
Clean Your Lens: A smudged lens does the most damage in low light. A quick wipe with a soft cloth can improve sharpness and reduce glare from light sources.
Don't Zoom: Digital zoom is a recipe for a blurry, noisy mess. It's always better to get closer to your subject or crop the photo later.
Night Mode king

Google is known for excellent low-light and nighttime photography, and the Pixel 9 Pro continues that trend, thanks to the company's mature computational photography.