This week, Samsung LSI announced a new camera sensor that seemingly pushes the limits of resolution within a mobile phone. The new S5KHP1, or simply HP1, raises the resolution above 200 megapixels, almost doubling that of the sensors deployed in today’s phones.

The new sensor is interesting because it marks the implementation of a new binning mechanism. Beyond the currently deployed Quad-Bayer (4:1 pixel binning) and “Nonapixel” (9:1 binning) schemes, the new “ChameleonCell” mechanism is able to employ both 4:1 binning in a 2x2 structure and 16:1 binning in a 4x4 structure.

We’ve been familiar with Samsung’s 108MP sensors for a while now, as they’ve seen adoption for over two years in both Samsung Mobile phones and Xiaomi devices, albeit in slightly different sensor configurations. The most familiar implementations are likely the HM1 and HM3 sensors in the S20 Ultra and S21 Ultra series, which use the “Nonapixel” 9:1 pixel binning technique to aggregate 9 pixels into 1 for regular 12MP captures in most scenarios.

One problem with the 9:1 scheme was that the two resolution modes are effectively a 3x magnification apart, and for devices such as the S21 Ultra this was mostly redundant, as the phone had a dedicated 3x telephoto module that achieved a similar pixel spatial resolution at a larger pixel size.

8K video recording was one use case where the native 108MP resolution made sense; however, even here the native resolution is far above the ~33MP required for 8K video, meaning the phone had to suffer a very large field-of-view crop, as it didn’t support supersampling from 108MP down to 33MP.
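As a rough sketch of that penalty, assuming a 1:1 readout of a 7680 x 4320 8K frame from the HM3’s 12000 x 9000 array (figures taken from the table below):

```python
# Rough 8K crop penalty for a 108MP sensor that reads 7680 x 4320 pixels 1:1
# instead of supersampling the full 12000 x 9000 array (HM3 / S21 Ultra figures
# assumed from the table below).
native_w, native_h = 12000, 9000
video_w, video_h = 7680, 4320

print(f"8K frame: {video_w * video_h / 1e6:.1f} MP")        # ~33.2 MP
crop = native_w / video_w                                   # horizontal FoV crop
print(f"FoV crop: {crop:.2f}x")                             # ~1.56x
print(f"Effective focal length: {24.17 * crop:.1f} mm (from 24.17 mm eq.)")
```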

Sensor Solution Comparisons

| Sensor | 35mm eq. FL | FoV (H / V / D) | Aperture / Airy Disk | Resolution | Pixel Pitch | Pixel Angular Res. | Sensor Size |
|---|---|---|---|---|---|---|---|
| HP1 (Theoretical) | 24.17mm | 71.2° / 56.5° / 83.7° | ~f/1.9 / 1.15µm | 201.3M native (16384 x 12288); 50.3M 2x2 bin (8192 x 6144); 12.6M 4x4 bin (4096 x 3072) | 0.64µm / 1.28µm / 2.56µm | 15.2″ / 30.4″ / 60.9″ | 1/1.22" (10.48mm x 7.86mm, 82.46mm²) |
| HM3 (S21 Ultra) | 24.17mm | 71.2° / 56.5° / 83.7° | f/1.8 / 1.09µm | 108.0M native (12000 x 9000); 12.0M 3x3 bin (4000 x 3000) | 0.8µm / 2.4µm | 21.4″ / 64.1″ | 1/1.33" (9.60mm x 7.20mm, 69.12mm²) |
| GN2 (Mi 11 Ultra) | 23.01mm | 73.9° / 58.9° / 86.5° | f/1.95 / 1.18µm | 49.9M native (8160 x 6120); 12.5M 2x2 bin (4080 x 3060) | 1.4µm / 2.8µm | 32.6″ / 65.2″ | 1/1.12" (11.42mm x 8.56mm, 97.88mm²) |
| S21U 3x Telephoto | 70.04mm (4:3) | 27.77° / 21.01° / 34.34° (4:3) | f/2.4 / 1.46µm | 10.87M native (3976 x 2736); 9.99M 4:3 crop (3648 x 2736); 12M scaled (4000 x 3000) | 1.22µm | 27.4″ | 1/2.72" (4.85mm x 3.33mm, 16.19mm²) |
| S21U 10x Telephoto | 238.16mm (4:3) | 8.31° / 6.24° / 10.38° (4:3) | f/4.9 / 5.97µm | 10.87M native (3976 x 2736); 9.99M 4:3 crop (3648 x 2736); 12M scaled (4000 x 3000) | 1.22µm | 8.21″ | 1/2.72" (4.85mm x 3.33mm, 16.19mm²) |

The new HP1 sensor has two binning modes: 4:1 and 16:1. The 4:1 mode effectively turns the 201MP native resolution into 50MP captures, and cropping into a 12.5MP view frame then results in a 2x magnification, more in line with what we’re used to from Quad-Bayer sensors. In fact, the results here should be pretty much in line with Quad-Bayer sensors, as the native colour filter of the HP1 is still only 12.5MP, meaning a single R/G/B filter site covers 16 native pixels.
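A minimal sketch of that arithmetic, using the resolutions listed in the table above:

```python
# ChameleonCell resolution arithmetic for the HP1 (resolutions per the table above).
native_w, native_h = 16384, 12288          # 201.3M native

bin2 = (native_w // 2, native_h // 2)      # 2x2 bin: 8192 x 6144  (~50.3M)
bin4 = (native_w // 4, native_h // 4)      # 4x4 bin: 4096 x 3072  (~12.6M)

# Cropping a 12.5M (4096 x 3072) window out of the 2x2-binned frame halves the
# field of view on each axis, i.e. an effective 2x magnification.
print(bin2[0] / 4096)                                        # 2.0
print(f"{bin2[0] * bin2[1] / 1e6:.1f} MP / {bin4[0] * bin4[1] / 1e6:.1f} MP")
```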

A 4:1 / 2x2 binning mode is more useful: the quality is generally still excellent, and it allows mobile vendors to support high-quality 2x magnification capture modes without the need for an extra camera module. It’s something a lot of devices already use, but it had been missing from Samsung’s own 108MP 3x3 binning sensors because of their structural compromise. Samsung LSI even states that the HP1 will be able to achieve 8K video recording with much less field-of-view loss, thanks to the lesser cropping requirement.
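For the 8K point specifically, here is a rough comparison of the horizontal crop needed for a 7680-pixel-wide readout, assuming 1:1 reads with no supersampling in either case:

```python
# Approximate 8K (7680 px wide) FoV crop, assuming 1:1 readout without supersampling.
crops = {
    "HM3, native 108MP read (12000 px wide)": 12000 / 7680,    # ~1.56x
    "HP1, 2x2-binned 50MP read (8192 px wide)": 8192 / 7680,   # ~1.07x
}
for mode, crop in crops.items():
    print(f"{mode}: {crop:.2f}x crop")
```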

200MP - Potentially pointless?

This leads us to the actual native resolution of the sensor, the 201MP mode. Here, the native pixel pitch is only 0.64µm, which is minuscule and the smallest we’ve seen in the industry. What’s also quite odd is that the colour resolution is spatially 4x lower per axis due to the 2.56µm colour filter, so the demosaicing algorithm has to do more work than in the usual Quad-Bayer or Nonapixel implementations we’ve seen to date.
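A small sketch of that relationship between the photosite grid and the colour filter grid, using the pitches from the table above:

```python
# Ratio between the HP1's native photosite grid and its colour filter grid.
pixel_pitch_um = 0.64     # native pixel pitch
cfa_pitch_um = 2.56       # colour filter cell pitch (one R/G/B site per 4x4 block)

pixels_per_colour_sample = (cfa_pitch_um / pixel_pitch_um) ** 2
print(pixels_per_colour_sample)          # 16 native pixels share one colour sample
print(f"{201.3 / pixels_per_colour_sample:.1f} M colour samples across the sensor")
```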


Source: JEOL

At such small pixel pitches we run into a different problem: the diffraction limit. Usually, such large sensors are employed as the “wide” angle module, so they typically have apertures between f/1.6 and f/1.9 – the HP1 is a 1/1.22” optical format sensor that’s 19% larger than the HM3 in the S21 Ultra, so an f/1.9 aperture is perhaps more realistic. The maximum-intensity centre diameter of the Airy disk at f/1.9 would be 1.15µm, and generally we tend to say that the diffraction limit where spatial resolution gets noticeably degraded lies at twice that size – around 2.3µm, well beyond the 0.64µm pixel size of the sensor.
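For reference, a sketch of the usual first-minimum Airy disk diameter, d ≈ 2.44·λ·N, assuming green light at λ ≈ 0.5µm (the exact figures quoted above depend on the wavelength and diameter convention used):

```python
# Airy disk size vs pixel pitch, assuming green light (wavelength ~0.5 µm).
# First-minimum diameter: d = 2.44 * wavelength * f-number. The exact figures
# quoted in the article depend on the wavelength and diameter convention assumed.
WAVELENGTH_UM = 0.5

def airy_diameter_um(f_number: float) -> float:
    return 2.44 * WAVELENGTH_UM * f_number

for name, f_number, pixel_um in [("HP1", 1.9, 0.64),
                                 ("HM3", 1.8, 0.80),
                                 ("S21U 10x", 4.9, 1.22)]:
    d = airy_diameter_um(f_number)
    print(f"{name}: f/{f_number} -> ~{d:.1f} µm Airy disk vs {pixel_um} µm pixel "
          f"({d / pixel_um:.1f}x)")
```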

The 200MP mode might have some slight benefits and be able to resolve things better than the 50MP mode; however, I very much doubt we’ll see many advantages over current 108MP sensors. In that sense, it appears to me to be a pretty pointless mode.


Top: 3.76µm native pixels at f/6.4 (7.8µm Airy disk - 2.07x ratio)
Bottom: 1.22µm native pixels at f/4.9 (5.9µm Airy disk - 4.83x ratio)

That being said, we do actually see camera implementations on the market whose optics are well past the diffraction limit relative to their sensor pixel size. For example, above I took two 1:1-pixel crops – one from a dedicated camera and lens, and one from the Galaxy S21 Ultra’s periscope module. In theory, we should be seeing somewhat similar resolution, glass quality aside; however, it’s evident that the S21U is showcasing far lower actual spatial resolution. One big reason here is again the diffraction limit: the S21U’s periscope, with its 1.22µm pixels and f/4.9 aperture, has an Airy disk of 5.9µm, around 4.8x the size of the pixel, meaning the incoming light physically cannot be resolved at much more than a quarter of the sensor’s linear resolution.
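As a back-of-the-envelope check on that claim (a sketch, assuming resolvable linear detail scales inversely with the disk-to-pixel ratio):

```python
# Back-of-the-envelope diffraction penalty for the S21 Ultra periscope.
airy_disk_um = 5.9     # at f/4.9, per the caption above
pixel_um = 1.22        # periscope pixel pitch

ratio = airy_disk_um / pixel_um     # ~4.8x: the disk spans roughly five pixels
print(f"{ratio:.2f}x disk-to-pixel ratio")
print(f"~{1 / ratio:.0%} of the sensor's linear resolution is actually resolvable")
```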

For the HP1 to take advantage of its 200MP mode, it would need very large-aperture optics to avoid diffraction, very high-quality glass to actually resolve the details, and scenes that aren’t too demanding in dynamic range, due to the extremely low full-well capacity of the small pixels.

To alleviate the dynamic range concerns, Samsung does state that the sensor includes its newest technologies: better deep trench isolation (“ISOCELL 3.0”) should increase the full-well capacity of the pixels, and the sensor also supports dual gain converters (Smart-ISO Pro) as well as staggered HDR capture.

The one more modern feature the sensor lacks is full-sensor dual-pixel autofocus, with Samsung noting that it instead uses “Double Super PD”, with double the dedicated PD sites of existing Super PD implementations such as the HM3’s.

Overall, the HP1 seems interesting; however, I can’t shake the feeling that the 200MP mode will have very little practical benefit. In theory, a device would be able to cover the magnification range from 1x to ~5x with quite reasonable quality and perhaps avoid a dedicated camera module for that focal-length range, though we’ll have to see how vendors design their camera systems around the sensor. The new 2x2 binning mode, however, is welcome, simply because it’s a lot more versatile than the 3x3 mode in current 108MP sensors, and should bring large real-world benefits to the camera experience even if the native 200MP mode doesn’t pan out as advertised.

Comments

  • GeoffreyA - Friday, September 24, 2021 - link

    I agree with this sentiment. A camera should capture a picture as the eye would see it directly.
  • Prime957 - Friday, September 24, 2021 - link

    What if:
    Sensor -> A.I/Computation -> Pixel
    Correlates to:
    Eye -> Nerves/Synapses/Unknown -> Brain
    I'm pretty sure our brains are not "seeing" exactly what our eyes are "seeing". I've heard somewhere that technically the image at the back of our eyes is upside down and mirrored, or something like that. Isn't it crazy how nature and technology can go hand in hand at times?
  • mode_13h - Saturday, September 25, 2021 - link

    Your brain is doing a lot more than just flipping the image. It's also doing spatio-temporal interpolation across your blindspot, and a similar sort of resampling to compensate for the vastly different densities of photoreceptors in your fovea vs. periphery. Not to mention the relatively lower density of color-sensing cones. In order to support this fancy processing, your eyes are continually performing micro-movements, not unlike how Intel's XeSS jitters the camera position to facilitate superresolution.

    To put it another way, the image you *think* you see exists only in your brain. It's not the raw signal from the photoreceptors in your retina, that's for sure!
  • GeoffreyA - Saturday, September 25, 2021 - link

    Excellent description. It's interesting how this muxing brings it all together out of bits and pieces. There are disturbances of the visual system where a person can't see motion, and even stranger effects. Also, in line with the lower density of colour cones, isn't colour really "an extra," if we may call it that? The outlines and non-colour elements seem to carry the mass of the visuals. (Cell animation, black-and-white films, anyone?)
  • GeoffreyA - Saturday, September 25, 2021 - link

    Prime957, you're quite right. It seems technology often ends up copying nature, unwittingly or otherwise. Perhaps it's because nature has already got the best designs.
  • Foeketijn - Sunday, September 26, 2021 - link

    You are so right about that one. We see a tree with leaves, but if you replaced those leaves with balloons or whatever, we would still see leaves.
    We can only see color in the center of our view, but you think you can see what color your whole view is.
  • mode_13h - Saturday, September 25, 2021 - link

    LOL. As Prime957 said, you're not seeing an image in the way it actually hits your retina. The computational processing by your brain is vastly more intense than what porina is talking about.
  • GeoffreyA - Saturday, September 25, 2021 - link

    Yes, you're right. It just didn't strike me when I made that comment. I suppose what I'm looking for is pictures coming out like they used to on film, whereas something looks off and plastic in digital today (even in movies, for that matter).
  • boozed - Saturday, September 25, 2021 - link

    I'm guessing what you see as "plastic" is either a lack of grain or overzealous noise reduction?

    There were a lot of different film stock options available to photographers of that era and each one had its own specific (and different) response to the incoming light. By contrast, the image processing pipelines in modern digital cameras are generally designed to record images that are as neutral as possible, so the raw images that come out of them will look pretty flat until adjustments are applied. It's then up to the photographer or editor to "interpret" the scene (which is another philosophical debate...)
  • GeoffreyA - Saturday, September 25, 2021 - link

    It's not just a plastic appearance one sometimes finds. I find that tolerable, despite liking grain. Rather, something looks off in many movies shot on digital cameras (and, to a lesser extent, still pictures). This will be controversial, but "fake" is how I describe it to myself. And it's not all of them: some, shot digitally, look fantastic. Blade Runner 2049 is one example.
