Cameras and Lenses (ciechanow.ski)
2132 points by mariuz on Dec 10, 2020 | 212 comments



Although I knew most of the technical aspects described, having immersed myself in photography several years ago, I cannot properly express how GOOD this article is. The sliders and diagrams are brilliant for helping a layperson understand how it all works. Kudos!


You'd like their other work:

https://ciechanow.ski/lights-and-shadows/

https://ciechanow.ski/gears/

https://ciechanow.ski/tesseract/

Etc.

Check out the whole blog! It's amazing.

These are some of the most intuitive explanations coupled with the slickest animation / demo work I've ever seen. All of that put together into crystal clear educational material is such a rare gem and requires incredible talent.

Impressive as hell.

I hope someone is paying them to do this full time. I'd pay for this.


And the articles render so fast and so smoothly on my laptop. In fact they render more smoothly and load faster than 99% of far less technically advanced articles out there. Wow!


Heck, they render fast and smooth on my 3 year old iPad.


I also like his floating-point number explorer: https://float.exposed/0x3fb999999999999a


The final words on the Tesseract page are lovely!

> I find it very inspiring that while we can’t physically experience a four dimensional space, with just a bit of ingenuity we can easily simulate how a tesseract and its shadow would look in our day-to-day world.

> You may find math’s indifference to the limitations of our human perception quite cruel, but I think it’s liberating. Reflecting on higher dimensions is transcendent – it removes the shackles of the physical world and allows us to explore the realms we’ll never encounter.


Regarding the lights and shadows article, how the hell are they getting raytraced-quality lighting, with no noise, with changes to the scene in real time?? Are these all pre-rendered PNGs?


Probably closed-form solutions to those simple lighting situations.


I was wondering why it was lighting up as a visited link. Turns out it's the same blog that posted something a few years ago so special it took me an entire Christmas to read!


Added the blog to my Thunderbird RSS-Feeds :-)


You're right. Similarly, I cannot express how good this site really is in just a few words. Having been an enthusiastic amateur photographer for decades as well as having worked in electronic imaging for about as long, this is one of the best practical illustrations of the subject that I've come across.

I'd highly recommend it to anyone who is seriously interested in the underlying 'mechanics' of imaging.

(Incidentally, to anyone who has read my many anti-JavaScript raves: this is exactly how JavaScript ought to be used.)


+100 because I came here to post exactly what you did. I just bookmarked this to share with my tween who's getting interested in photography and starting to ask things like "Why that camera?" and "Why this lens?"


I agree, it's pretty basic but for people that don't know any of this it's an excellent explanation.


I wouldn't say it's basic at all. It's an excellent distillation of the physics of cameras.

My one minor nit-pick would be introducing the wave nature of light so early (to motivate Snell's law and total internal reflection) while not discussing diffraction until much later, and only in passing.

Snell's law can be motivated from a purely classical particle-based model as:

1. Light travels slower through media other than vacuum. (This is mentioned, but hand-waved as boundary conditions, which probably isn't meaningful to the intended audience.)

2. Light takes the shortest (in time) path between two points. (Light... finds a way.)

Either way, the audience might find the obvious "why" for the first point more satisfactorily addressed with a short sidebar on permittivity and permeability of materials (their electrical properties, effectively the capacitance and inductance "density").

That can also lead to an interesting discussion of why conductors are usually not optically transparent and insulators are.
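Point 2, the least-time principle, can even be checked numerically. A brute-force sketch (speeds and geometry here are arbitrary, chosen just for the demo): minimize the travel time over the boundary crossing point and the resulting angles satisfy Snell's law.

```python
import numpy as np

# Fermat's principle sketch: light travels from A (in medium 1) to B (in
# medium 2), crossing the boundary y = 0 at some point x. Find the x that
# minimizes total travel time, then check it satisfies Snell's law.
v1, v2 = 1.0, 0.75           # speeds in the two media (arbitrary units)
A = (0.0, 1.0)               # start point, 1 unit above the boundary
B = (1.0, -1.0)              # end point, 1 unit below it

xs = np.linspace(0.0, 1.0, 1_000_001)  # candidate crossing points
t = np.hypot(xs - A[0], A[1]) / v1 + np.hypot(B[0] - xs, B[1]) / v2
x = xs[np.argmin(t)]         # fastest crossing point

# sines of the angles measured from the boundary normal on each side
sin1 = x / np.hypot(x, A[1])
sin2 = (B[0] - x) / np.hypot(B[0] - x, B[1])

print(sin1 / sin2, v1 / v2)  # the two ratios agree: Snell's law
```

No wave physics anywhere in that code; the refraction angle falls out of the timing alone.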


I agree. I think the motivation for refraction was the weakest point: they introduced waves, and even showed that the waves have to be continuous at the boundary... and then... back to rays!

It just needs a diagram showing the waves refracting, and the geometry of the wave fronts. (Pretty annoying that they never explained refraction like that in school either - Snell's law was just a given when it's actually really trivial to derive from first principles.)


The only thing better than a pinhole camera is a pinhole camera with multiple holes.

Let me explain:

When you have two holes (apertures), their images would likely overlap (depending on the arrangement).

Imagine you took two photos a few millimetres apart, one to the left and one to the right, then superimposed them. Some areas would look OK (where both views saw the same distant object) and some would be less OK (objects that are closer).

If you have a DSLR, take the lens off and put a sheet with holes in front. Just try it.

Now as you add more holes you get more and more overlapping images on the same area of the sensor and eventually end up with a blur.

This blur is the same blur you see with a single large hole. It's just that with a large hole you had infinitely many overlaps!

Even cooler, if you happen to arrange the holes in a specific pattern you could capture images with different combinations of perspectives from different holes and you may even undo the overlaps. This is called coded aperture imaging:

https://www.paulcarlisle.net/codedaperture/

This doesn't just solve the biggest problem of a single hole (limited light), but also captures depth information, which you can use for 3D reconstruction, refocusing, etc.

One final bit, with a warning of a deep rabbit hole:

That "infinitely many overlaps" I was talking about happens with lenses too and is essentially a convolution where you convolve the image with itself (actually many different perspectives of itself if I am correct). Which is just the Fourier transform.


>That "infinitely many overlaps" I was talking about happens with lenses too and is essentially a convolution where you convolve the image with itself (actually many different perspectives of itself if I am correct). Which is just the Fourier transform.

That statement is a bit muddled, let me unpack it. The infinitely many overlaps thing can be expressed mathematically as the convolution of the image with a function that's one where the aperture is open, and zero where it's not. The thing about the Fourier transform is actually related to a different phenomenon. When the slit is really small, you start getting diffraction effects. The diffraction bands are approximately the Fourier transform of the slit function. However that is not significant unless the slit is extraordinarily tiny.
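The convolution part can be made concrete with a toy example. A sketch, assuming a tiny image and a two-pinhole aperture mask (the `convolve2d` here is a minimal hand-rolled version, not the SciPy one):

```python
import numpy as np

# Sketch of overlap-as-convolution: a pinhole-array image is approximately
# the sharp image convolved with the aperture function, which is 1 where a
# hole is open and 0 elsewhere.
def convolve2d(img, kernel):
    """Plain 'full' 2-D convolution, sufficient for this toy demo."""
    ih, iw = img.shape
    kh, kw = kernel.shape
    out = np.zeros((ih + kh - 1, iw + kw - 1))
    for i in range(kh):
        for j in range(kw):
            # shift the image by (i, j) and weight it by the kernel value
            out[i:i + ih, j:j + iw] += kernel[i, j] * img
    return out

sharp = np.zeros((9, 9))
sharp[4, 4] = 1.0                      # a single bright point in the scene

aperture = np.zeros((3, 3))
aperture[0, 0] = aperture[2, 2] = 1.0  # two open pinholes

blurred = convolve2d(sharp, aperture)
# the point now appears once per hole: two copies, i.e. the overlap
print(int(blurred.sum()), int((blurred > 0).sum()))
```

Add more ones to `aperture` and the copies pile up, which is exactly the "more holes, more overlap, eventually a blur" progression described above.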


Indeed! We like to think of light as photons, traveling in rays in straight lines. That's geometric optics and works great for most purposes in photography.

Physical optics takes into account the wave nature of light. This becomes important when the size of the lens becomes small (eg pinholes) ... there's diffraction around edges and pixels receive contributions from many points in space.

Geometric optics lets you model using ray-tracing, reflections, and Snell's law refraction.

Physical optics uses tools such as Fourier transforms, convolutions, and sinc functions.

Understanding a simple lens system? Geometric optics is your friend. Building an astronomical telescope? Check into physical optics.


Thanks for weighing in. My kid and I are really enjoying The Cuckoo's Egg!


I read that a few times when I was younger. It's such a great story, and I've always been meaning to revisit it as an adult. When my kids get a little older I'm sure we'll enjoy it.


> The diffraction bands are approximately the Fourier transform of the slit function.

Is this correct from a physics standpoint? Feynman had a lot to say about light in his book (QED https://www.amazon.com/QED-Strange-Theory-Light-Matter/dp/06...) and never framed it this way. At one point he remarked that the explanations in the book were related to diffraction, and the explanation there was very different from the Fourier transform.


In short, yes. An ideal image source incident on a positive optical lens produces its spatial Fourier transform at the lens's focal plane. This is easiest to see with a laser backlighting a transparency, since the light is collimated and monochromatic. The transparency produces diffraction at its edges, which causes the effect. Actually, you'd also see the spatial Fourier transform at infinity if you took away the lens. The upshot is that you can do cool spatial-frequency filtering at the focal plane, then convert the result back into an image with another lens. Laser systems that require high precision use such a setup to remove high-frequency components and pass just the collimated light.
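The slit/Fourier relationship is easy to check numerically in the far-field (Fraunhofer) approximation. A sketch with an arbitrary sampling grid and slit width: the far-field intensity of a 1-D slit is the squared magnitude of the aperture's FFT, the classic sinc² pattern.

```python
import numpy as np

# Fraunhofer sketch: the far-field diffraction pattern of an aperture is
# (up to scaling) the squared magnitude of its spatial Fourier transform.
n = 4096
aperture = np.zeros(n)
aperture[:64] = 1.0                      # a 1-D slit, 64 samples wide

field = np.fft.fft(aperture)             # far-field amplitude
intensity = np.abs(field) ** 2
intensity /= intensity.max()             # normalize the central peak to 1

# For a slit this is the sinc^2 pattern; the first zero sits at n / 64
print(intensity[0])        # the bright central maximum
print(intensity[n // 64])  # ~0, the first dark fringe
```

Widen the slit and the fringes tighten, which is why diffraction only matters for very small apertures.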


Pinhole lenses create visible diffraction effects pretty quickly. In fact, even normal lenses cause diffraction at small apertures, around f/11 to f/22.


I suspect the GP comment is referring in context to the use of the Fourier transform to efficiently implement convolution.


The pattern of holes technique in that link (MURA) is also used in UV lithography for IC fabrication (example: https://youtu.be/8eT1jaHmlx8?t=1543).


> captures depth information and you can use it for 3d reconstruction, refocusing etc

That’s crazy cool! Got any good links for reading about how to do those things that way; 3d reconstruct and refocus?


Here's a paper on depth: http://groups.csail.mit.edu/graphics/CodedAperture/

Stanford has a class on computational imaging: http://stanford.edu/class/ee367/

I think these slides should give you a pointer to enough papers to fill a lifetime: https://web.stanford.edu/class/ee367/slides/lecture9.pdf

Georgia Tech also has a class, that's on youtube and Udacity: https://www.udacity.com/course/computational-photography--ud...


Thank you :)


Let us know if you make something ;)


Does Bartosz have a Patreon or something? I feel like I'm deeply indebted to him for the privilege of having read this work. Really, one of the most comprehensive and fantastic explanatory pieces I've ever read.

I cannot imagine the effort that goes into writing and coding a post like this.


Looking at the page source, I'm doubly impressed: there seems to be no use of third-party libraries. I personally happily use three.js, but still, wow.

https://ciechanow.ski/js/base.js https://ciechanow.ski/js/lenses.js


I'm sure there's a simple explanation if I look at the code, but how he did the z-order for the fins on that aperture is breaking my brain.


I'm speechless. Even on a mid-range Android phone every animation is smooth; it's like dark magic to me.


FYI, there are digital sensors which don't use Bayer filters. Some are classic sensors minus the Bayer filter (the Leica Monochrom, for example); others are based on different tech, like Foveon sensors [0].

Many people also debayer their cameras themselves, especially in astrophotography [1].

[0] https://en.wikipedia.org/wiki/Foveon_X3_sensor

[1] https://stargazerslounge.com/topic/166334-debayering-a-dslrs...


Fuji also use what they call an X-Trans sensor.

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor


I picked up a Fuji last Christmas, and it’s one of the best purchases I’ve made in the last decade. Their X-line takes incredible pictures, and is so approachable. If anyone has been considering more serious photography, or is having a child and wants to capture more unique shots of them, I can’t recommend them highly enough.


I'm a big fan, I'm using my XT3 as a webcam right now too!


Which is still pretty much a Bayer-style sensor, just with a different filter layout. You still have to demosaic to get a usable file, and you need a well-tuned RAW converter to do so.


Well, it's still a CFA (color filter array), but definitely not Bayer, which is a specific class of arrangements.


Bryce Bayer was my father. Hopefully I have a few more decades before I'm "debayered".


While the mod is interesting and a way to get into monochrome photography, it should be noted that it has disadvantages. Mainly, you end up removing the microlenses, which seems to cancel out any gains, unfortunately.

source: I've done this mod on a couple cameras and wouldn't recommend it for performance gains alone. It's a fun project, not much more.


In RawTherapee you can choose which demosaicing algorithm to use.

http://rawpedia.rawtherapee.com/Demosaicing


That debayering sensor mod looks really interesting :) Nice way to quadruple the effective resolution of your camera, if you don't care about colors.


I've seen this kind of essay described as an "explorable explanation" - I absolutely love them. There's a directory of other examples of this kind of thing here: https://explorabl.es/


I obviously haven't finished reading the entire post, but I wanted to simply appreciate the hard work that has gone into this. I absolutely love interactive articles. Great job!


Lately, an area of interest of mine in digital photography has been scene-referred vs. output-referred image formats and subjective perception. The author (justifiably) walks around this topic in his camera sensor overview, but the data captured by a sensor from photons hitting it doesn't really contain a readily viewable image.

Data values captured by a modern camera far exceed the ranges that can be reproduced by output media such as paper or screens, meaning that the only way to obtain an image we can communicate to someone else (or our future selves) as a static artifact[0] is by throwing out data and conforming values to ranges that fit the output color space, converting scene-referred ("raw") data to output-referred.

This is where subjective perception comes in: how we perceive colors and shapes in a given scene depends a lot on what we had seen prior to this scene, our general mood, and various other aspects of our mind state. It’s only by taking control of processing scene-referred data that we can use the full range of captured values to try to convey most convincingly, within the constraints of the output space, our subjective perception of the scene around the time we pressed the shutter trigger.

(Naturally, further down this rabbit hole come the questions about e.g. what our conscious perception—but not the camera—was blind to in the scene, and eventually about the nature of reality itself, at which point one may feel compelled to give up and go into painting instead.)

[0] This would be quite niche, but I wonder if we could develop tools for exploring raw data at viewing stage, allowing the audience to effortlessly adjust their perception of the scene (even if within ranges specified by the photographer). Such exploration would require significant computing powers, but we’ll probably be there in a couple of years.
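The scene-to-output squeeze described above can be sketched with a global Reinhard-style operator. This is one common textbook choice, not what any particular camera does; the gamma value and sample values are illustrative.

```python
import numpy as np

# Minimal scene-referred -> output-referred sketch: a global Reinhard-style
# operator x / (1 + x) squeezes unbounded linear scene values into [0, 1),
# then a gamma curve approximates the display's expected encoding.
def tone_map(scene_linear, gamma=2.2):
    display = scene_linear / (1.0 + scene_linear)  # compress dynamic range
    return display ** (1.0 / gamma)                # encode for the display

scene = np.array([0.0, 0.18, 1.0, 16.0, 1000.0])  # "raw"-ish linear values
out = tone_map(scene)
print(out)  # everything lands in [0, 1); highlights are compressed, not clipped
```

Note how the 16.0 and 1000.0 values both survive, just squashed close together: that compression is exactly the "throwing out data" step, and different choices of curve are different subjective renderings of the same scene-referred data.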


If I understand correctly, you're looking for a simplified RAW image editor? Many digital cameras allow storing RAW images alongside JPEG. The viewer can then load the RAW images into any (web-based) image viewer/editor that supports RAW format and have full control over tone mapping.

The tool interface needs to be simplified to make it a better fit for the use-case you present but I don't see computing power as a bottleneck.

Of course they'd still be limited by the dynamic range of the camera. This can also be resolved by calculating an irradiance map from multiple RAW images taken with different exposure times.


I think there is a fundamental difference between editor (designed to produce a deliverable) and viewer (designed for immediate experience) software. One of the things essential to the latter but not really to the former is immediacy, hence I suspect that the computing power commonly available today makes it impossible for now (but likely not for much longer).


(Edit: “not really to the former”)

Apart from performance, another crucial thing is that the viewer must not have to think about technical aspects (like exposure, color profile, etc.), as you noted, so the GUI would have to be radically different.

I am envisioning producers bundling N processing profiles with their “digital negative”, and software that somehow allows the user to fluidly explore the perception of the scene by interpolating inside an (N-1)-dimensional space bounded by parameters in those profiles with really low latency.


To clarify, the alternative to taking control of processing scene-referred data is to use camera’s JPEG rendering of its own idea of how the scene should look.


If I'm conversing with a Japanese photographer in Japanese, what word do I use for "that Japanese word that American photographers use for a special kind of blur?"

It can't be "bokeh" because that's just an everyday word that means blur; for instance "mae no shashin ga boketeru kara, keshimashou ka" (the previous picture is blurry, so shall I erase it?)

Also, since that term came into use in around 1997, if I could have a conversation with an English-speaking photographer via time telephone to 1990, what would be the wording used?


I've seen a few of these kinds of interactive articles on hackernews but this is on a whole new level. Wow. One of the best explanations of just about anything I have ever seen.


It's truly amazing to me how someone can be this talented. The text, the animations, the style. It's all high quality.

I'm in awe.


In terms of sharpness, each lens is different, and the sharpness varies depending on the focal length (zoom), the aperture and the sensor.

A great way to visualize this is to take a look at the sharpness of Canon's "nifty fifty" lens:

https://www.imaging-resource.com/lenses/canon/ef-50mm-f1.8-s...

Click into the heat map https://www.imaging-resource.com/lenses/canon/ef-50mm-f1.8-s...

Move the aperture from 1.8 to 2.8, and note how the sharpness increases, starting in the center of the lens.

Then at about f/16, diffraction kicks in and things start getting less sharp again.

Things get even wilder when you introduce variable zoom and different camera bodies.

It's a great way to visualize the good/fast/cheap pick-two tradeoffs of lenses.



"By default all animations are enabled, but if you find them distracting, or if you want to save power, you can globally pause all the following demonstrations."

I love this writer / developer


This is beautiful.

I've been working with computer vision since university, but nowadays work underwater. I guess I should have a pretty good understanding of this stuff, but I still occasionally get caught out by optics.

This is the best possible pre-reading for anyone wanting to jump into the field.


I'm always impressed by the amount of work and quality this guy puts into his posts, especially the interactive widgets.


What an incredible piece of work. I can only dream of having the patience, skill and drive to make something so beautiful and informative.


For a markedly less engineering-oriented introduction, see https://en.wikibooks.org/wiki/Modern_Photography/The_camera

One interesting thing I learned recently is how they make cheap plastic lenses, and the fact that plastic lenses were invented by an Australian. They are created by choosing a polymer with good optical properties and then thermoforming it by carefully applying air pressure to raise a rounded shape to a dome after clamping the edges, then holding the air pressure constant until it has cooled. Because nothing actually touches the lens portion, optical properties are preserved.


Do you mean, like creating a bubble of plastic fed from below where surface tension holds it in a shape like a bubble? And then letting it harden while in that shape? Does the air pressure you talk about mean blowing a stream down onto the lens?

And then, no further polishing is necessary? I'm fascinated to learn more if you have a source to read more from!


You can go up or down (probably sideways if you can account for gravity, eg. through rotary motion). Up is more common since you only need significant equipment on one side and access requirements mean that keeping the bulk of it 'down' is easier. No polishing is necessary. However, I recall reading that some polymers can be polished if you want, but IIRC it depends heavily on the polymer, the abrasive, intermediate fluids in use and result desired. Re: Source, I raided libgen for thermoforming over the last few months, visited about 10 equipment factories and studied their mechanisms and control systems, but actually learned this tidbit from 1000 Australian Inventions or some such book which I bought for my daughter before our family returned to China... to keep inventing ;)


This works for some lens shapes. For most though, molding is necessary.


I’ve been an amateur photographer for over 10 years now, and I’ve never seen such an easy-to-understand but still mathematically in-depth explanation of depth of field until now. Bravo. This is excellent.


It may not seem interesting at first, but I am intrigued by how they managed to draw the overlapping blades of the iris.

> In real camera lenses an adjustable aperture is often constructed from a set of overlapping blades that constitute an iris.

If you naively create 6 blades and rotate them around the pivot (the tiny screw), they will never overlap EACH OTHER. One will end up at the very top, and one at the very bottom.

If you know how this was created, please let me know.


This is interesting. I don't know much about html canvas, but it looks like this is handled in the author's custom js [0] in the line that starts with: "} else if (mode === "blades") {"

[0] https://ciechanow.ski/js/lenses.js


Do you mean, how it was drawn using canvas?

I'm guessing they drew the overlapped part of the last blade twice.


Looks like so, with a clever rotation, clipping, and drawing twice (had a cursory look at 'lenses.js', look for 'draw_blade')


Not sure I get your question; do you want to know how it works in real life? Have a look here: https://www.youtube.com/watch?v=cR7cibDvYyo


I think the question is how this is drawn on a canvas. It's not trivial: if you draw 6 shapes successively, one will be on top and another at the bottom; you would not see the tip of one blade at the top and another part of the same blade at the bottom. Nicely spotted, interesting question!


Great article. One thing that's always stumped me as an amateur photographer is how an object is "in focus" when the rays from that object converge to the smallest point, thus hitting the smallest number of pixels. It would seem to me you would want to collect the maximum amount of information and thus use more of the sensor. How does converging onto less of the sensor give you a sharper photo?


Ideally, the mapping from object to image is injective. What you're proposing will lead to "hash collisions" aka blurriness, since each point in the object will bleed colors into neighboring points in the image.

The entire object doesn't get collapsed to a single point. Rather, a single point of the object radiates light in all directions. A lens then captures a fraction of that radiation and collapses it back down to a single point. Then we iterate over each point in the scene with a "for each".


The amount of total light is the same, but the smaller the points of focus, the easier it is to distinguish image elements from each other because each point overlaps less with its neighbors.

Here’s a metaphor. Let’s say you have 10 glasses, with 10 different amounts of water in them. When you hit them with a spoon, each glass makes a different musical note: 10 different notes.

If you pour the water from each glass individually into 10 new glasses, you preserve those 10 notes.

If you combine every two glasses into one, now you have 5 glasses, and only 5 notes. It’s the same total amount of water but you lost information.


If you take up a larger part of the sensor, then the distance between any two details must be larger so that the areas their light covers on the sensor don't overlap. If they overlap, you have no clue where the information is coming from.

Look up the circle of confusion, the Rayleigh criterion and the Sparrow criterion for the technical details of resolution.
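The circle-of-confusion idea can be sketched directly from the thin-lens equation. All numbers here are hypothetical (a 50 mm lens at f/1.8 focused at 2 m); the blur diameter follows from similar triangles between the aperture and the sensor plane.

```python
# Thin-lens sketch of the blur ("circle of confusion") for a point that is
# not at the focused distance. Hypothetical lens: 50 mm, f/1.8, focus at 2 m.
f = 0.050                  # focal length (m)
N = 1.8                    # f-number
A = f / N                  # aperture diameter (m)

def image_dist(s):
    """Thin-lens equation, 1/f = 1/s + 1/s', solved for s'."""
    return s * f / (s - f)

def coc(subject, focused_at):
    """Blur-circle diameter on the sensor for a point at `subject` metres
    when the lens is focused at `focused_at` metres (similar triangles)."""
    sensor = image_dist(focused_at)   # where the sensor sits
    converge = image_dist(subject)    # where this point's rays converge
    return A * abs(sensor - converge) / converge

print(coc(2.0, 2.0))         # 0.0: in focus, rays converge to a point
print(coc(1.0, 2.0) * 1e3)   # millimetres of blur for a subject 1 m closer
```

An in-focus point lands as a point; everything else lands as a disc whose diameter grows with the focus mismatch, which is exactly why smaller convergence means sharper.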


It's more that the rays from a point on an object converge to the smallest point when in focus? The rays from the object as a whole are spread out over an object-shaped area on the image plane, unless the object is (effectively) at infinity, when they will converge to a point...


How is this the first time I have gained an intuitive feeling for refraction? Instead I've been taught to simply accept Snell's law. It feels very powerful that a single hypothesis - and a reasonable one at that (namely that the electrical field is continuous at the junction between materials) is enough to explain refraction!


Refraction goes beyond electrical fields. An intuitive feeling for refraction can be obtained from:

- water: when waves go from, say, deeper to shallower water, their speed changes. If they are moving at an angle, refraction is seen. This can be explained by drawing wavefronts. The wavefront doesn't hit the shallower water all at the same time, and so does not slow down all at once, but gradually.

- wheels: when a rolling axle with two wheels goes from pavement to grass at an angle, its direction will bend, due to the change in speed, which is experienced by one wheel before the other.


Brilliant work.

It really helps build an intuitive foundation for these phenomena.

With such an intuitive foundation, one has the confidence to look into the math behind the curtain; even if the math gets complex, you will still feel confident.

IMO, this kind of explanation is the correct way to start teaching anything.


The first blog post of his that I read was Lights and Shadows (https://ciechanow.ski/lights-and-shadows/), and wow his blog is impressive. I love it!


> Alternatively, we could increase the sensitivity of the sensor which is described using the ISO rating.

Umm well, not really.

The sensitivity of a digital CCD or CMOS sensor never changes; it has one 'native ISO' equivalent sensitivity and that's all. It creates the same output signal in response to a photon every time.

Varying the pseudo-ISO simply varies the gain of the downstream post-readout amplifier. That's where noise is introduced.

Sometimes it's better to take a deliberately underexposed photo and tweak regions of it in post to achieve the desired exposure instead of introducing global noise from in-camera amplification.


> Sometimes it's better to take a deliberately underexposed photo and tweak regions of it in post to achieve the desired exposure instead of introducing global noise from in-camera amplification.

That is definitely not my experience with Canons in low-light situations. Here [1] is an example shot at ISO 100 but pushed by 5 EV to match the brightness of the same scene shot at ISO 3200. The noise is much more tolerable in the latter.

[1] https://imgur.com/a/Ca9ccbK

P.S.: it's raw, no in-camera JPEG denoising in place.


Perhaps what we're seeing here is that quantization errors for the non-amplified signal become larger than the noise introduced by the amplification?
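That hypothesis can be sketched with a toy model in which quantization is the only noise source (real sensors add read noise, shot noise, etc., and the ADC here is an idealized 12-bit converter; the gains are illustrative):

```python
import numpy as np

# Toy model: a dim scene through an ideal 12-bit ADC, comparing analog
# gain before the ADC (high ISO) with a digital push after it (low ISO,
# +5 EV in post). Quantization is the only "noise" modeled here.
signal = np.linspace(0.0, 100.0, 1000)     # a dim scene, in ADC-ish units

def capture(sig, gain, bits=12):
    """Apply analog gain, then quantize with an ideal ADC."""
    return np.round(np.clip(sig * gain, 0.0, 2.0**bits - 1))

low_pushed = capture(signal, gain=1.0) * 32.0  # ISO 100, +5 EV push in post
high = capture(signal, gain=32.0)              # ISO 3200 in camera

# The post-pushed image has far fewer distinct tonal levels to work with:
print(len(np.unique(low_pushed)), len(np.unique(high)))
```

The pushed low-ISO capture is stuck with the coarse levels the ADC recorded, while amplifying before the ADC preserves finer gradations of the dim signal, consistent with the imgur comparison above.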


Only some camera sensors are "ISO-less" (ISO-invariant).


There are also "dual native ISO" sensors. Still different amplification levels, but in this case applied before the ADC.

See https://www.youtube.com/watch?v=g8hHFt3ChZ8


"Dual native ISO" sensors don't perform different amplification before the read out amplifier or anything like that.

With an image sensor you get to decide between high sensitivity and high capacity (how much light it can detect before saturating). So a highly sensitive sensor can handle less light before clipping (which makes sense).

This trade-off can be tweaked by changing the size of the photodiode (bigger diode -> more sensitive), but also electrically - simply add capacitance in parallel to the photodiode. This requires more energy, i.e. more photons, to change the voltage over the photodiode, which is what the readout measures.

Dual native ISO sensors simply add another transistor (as a switch) in series to extra capacitance; this allows them to switch between low-capacitance (high sensitivity, but saturates faster) and high-capacitance (lower sensitivity, can handle more light).

Edit: I haven't watched the whole video, but his core explanation seems to be that dual ISO means the sensor has a PGA (programmable gain amplifier); almost all sensors use that approach, and that's what the ISO setting in them controls.
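The capacitance trade-off described above reduces to Q = CV. A back-of-the-envelope sketch, with made-up component values:

```python
# Sketch of the sense-node capacitance trade-off (V = Q / C). The voltage
# step per collected electron ("conversion gain") is q / C, so switching in
# extra capacitance lowers sensitivity but raises capacity before clipping.
# All component values below are made up for illustration.
q = 1.602e-19                  # electron charge, coulombs

def conversion_gain_uV(C):
    """Voltage step per collected electron, in microvolts."""
    return q / C * 1e6

def full_well_electrons(C, v_swing=1.0):
    """Electrons collected before the node swings `v_swing` volts."""
    return v_swing * C / q

C_low = 1.6e-15                # low-capacitance mode: sensitive, clips early
C_high = C_low + 6.4e-15       # extra capacitor switched in by the transistor

print(conversion_gain_uV(C_low), conversion_gain_uV(C_high))    # uV per e-
print(full_well_electrons(C_low), full_well_electrons(C_high))  # capacity
```

Same photodiode, two conversion gains: that switchable capacitor is the whole "dual native ISO" trick in this model.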


Maybe you should watch that video…


I have a question for the more knowledgeable:

Color filters on the sensor split the light into three wavelength ranges: red, green and blue. The photosites then measure the intensity of light, which means the sensor only knows that an incoming photon is "roughly green" but doesn't recognize its precise wavelength.

So when the light is, e.g., pure orange, it falls into the red wavelength range and is counted as red.

Based on this I would expect cameras to often produce pretty incorrect colors, but they are usually pretty good (after correcting for things like white balance).


You are right, this is a challenge. The wavelength of the light cannot be measured directly, only inferred from the intensity of the pixels behind the different color filters. On the other side, most reproductions of photos don't reproduce the original frequencies either. A computer screen has red, green and blue dots, which produce light at the corresponding wavelengths. So if you have orange light, you get a signal on the green and red pixels, and the green and red dots on your screen will light up. That in turn is detected by the sensors for red and green light in your eyes. Nowhere in the chain, not even in your eye, is there a sensor for "orange" directly; it is just the mixture of the red and green sensitivities.

It is important to note that neither the sensor pixels nor your eyes have completely separate reactions per wavelength. The sensitive ranges strongly overlap. So for hues of green with rather long wavelengths, you get some reaction on the red pixels, which gets stronger as you move towards orange, where both red and green pixels detect the light, until it gets more and more red and less green. The exact absorption curves of the sensor's color filters matter here; that is one reason different manufacturers have slightly different color rendition. On top of that comes calibration: when converting the raw image into a proper RGB image, one can further balance the response. For that, color calibration targets are used, which have something like 24 patches of different colors. Taking a photo of this target, the calibration software can calibrate for both the light illuminating the target and the color response of your camera.

A common cause of red-green colorblindness is that the affected person's sensitivities to red and green overlap too strongly, so they lose the ability to differentiate: a green creates almost as strong a signal in the "red" cells. One way to improve color vision for those people is glasses which increase that separation by absorbing the frequencies between red and green.
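The overlap can be made concrete with made-up Gaussian stand-ins for the filter curves (the peak wavelengths and widths below are illustrative, not measured data; the overlap is the point):

```python
import math

# Toy model of overlapping color-filter responses: Gaussians standing in
# for real R/G/B transmission curves. Orange light (~590 nm) produces a
# strong red signal AND a clear green signal, so it remains distinguishable
# from pure red even though no photosite measures "orange" directly.
def response(wavelength_nm, peak_nm, width_nm=40.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def rgb_signal(wavelength_nm):
    return (response(wavelength_nm, 600.0),   # "red" filter
            response(wavelength_nm, 540.0),   # "green" filter
            response(wavelength_nm, 460.0))   # "blue" filter

r, g, b = rgb_signal(590.0)  # monochromatic orange light
print(r, g, b)
```

The R:G ratio varies smoothly across the red-orange-yellow range, and that ratio, not any single filter, is what encodes the hue.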


> The wavelength of the light cannot be measured directly, only inferred by the intensity of the pixels with the different color filters.

Well, it depends what you mean by "directly", but you can get pretty far with a spectrometer, i.e. a device that splits light and measures the intensity spatially to collect a spectrum. It's not impossible to build a camera based on that principle; you'd just need to sample the light in an array to make pixels.


I was talking here about typical photo cameras. Of course you can measure the wavelength of the light with other devices like spectrometers. I was specifically talking about camera sensors which have separate filters, usually in 3 colors, in front of the pixels. The sensors made by Sigma (formerly Foveon) use a different principle: they determine the wavelength by measuring how deep in the silicon the photons generate electrons, since that depth depends on the wavelength of the light. However, it is more difficult to get a precise color response that way, as you cannot just use predefined color filters.


Thank you for your great explanation. The overlap was the missing piece in my mental model ...


It works because the system is faulty in the same way our eyes are faulty, we basically have sensors for three main wavelengths and derive every color we perceive from a mix of those three, check this out:

https://en.wikipedia.org/wiki/Color_vision#Physiology_of_col...

https://en.wikipedia.org/wiki/Trichromacy


They work exactly the same as a screen. Orange is not pure red, it is red + yellow, and yellow is red + green. So orange is red + _some_ green. The in-camera processing will render the image based on the sensor input and the known properties of the filter (think of it as a color profile, a mapping between what the sensors read and the color). And the processing includes color interpolation for each pixel, as each pixel (photosite) only has one color filter, but the resulting image pixel has all three colors; these are calculated based on the neighbor photosites.

Different sensors/cameras have different filters, and combined with the manufacturer specific post-processing this gives different cameras/manufacturers a different color rendition and feel.
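The per-pixel color interpolation mentioned above (demosaicing) can be sketched with a toy bilinear version: each channel's missing values are filled in from the average of the known neighboring photosites. This is a minimal illustration, not any camera's actual pipeline (real demosaicers use edge-aware methods):

```python
import numpy as np

def conv2(img, k):
    # 3x3 convolution with zero padding, done with shifted slices
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dy + 1, dx + 1] * p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (toy version)."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1   # R photosites
    masks[0::2, 1::2, 1] = 1   # G photosites
    masks[1::2, 0::2, 1] = 1   # G photosites
    masks[1::2, 1::2, 2] = 1   # B photosites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    for c in range(3):
        # Normalized convolution: weighted average of the known neighbors
        num = conv2(mosaic * masks[..., c], kernel)
        den = conv2(masks[..., c], kernel)
        out[..., c] = num / np.maximum(den, 1e-9)
    return out

# A uniform gray scene should reconstruct to uniform gray in all channels
rgb = demosaic_bilinear(np.full((4, 4), 0.5))
```

Each photosite keeps its own measured value and the two missing channels are estimated from neighbors, which is why a single "pixel" in the output draws on several physical photosites.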


It is because our eyes are also "incorrect". The cameras and processing and displays were engineered to match our eyes expectation. If some alien race with a sufficiently different physiology would look at our pictures they could tell that it is off. (and it wouldn't even mean that their eyes are more correct, just that their model is different.)


> So e.g. when the light is pure orange, it falls down into red wavelength range and is counted as red.

If you only work with one photon at a time, then yes, you don't know if the photon passing through the filter was red, or orange, or with smaller probability even green or blue. But when you have trillions of photons passing through, you can "see" the difference by the relative intensity of light.

Remember that at the quantum level, things don't happen deterministically; you have to consider the probability that a given outcome occurs. So the photon has a certain probability to hit the filter and get absorbed, a certain probability to pass through the entire depth of the filter, a certain probability to hit the sensor without generating an event, a certain probability to hit the sensor and initiate a chemical reaction (for film or biological eyes) or an electron cascade (for CCD sensors), a certain probability to quantum tunnel to the other side of the universe...

So getting back to your question, when pure orange photons hit a red filter, many of them will make it through the filter, but not as many as if they were pure red photons. When pure orange photons hit a green filter, some of them will make it through, but not as many as if they were pure green photons. So if your brain knows what the "white point" of a given environment is, relative to that white color it'll see a specific combination of "some red, some green" as orange. (Of course, if known-to-be-white objects are already orange--like when you put on amber ski goggles--your brain will eventually adjust and recalibrate to perceive that color as something else... perception is tricky!)


It is just the same with your eye - and camera response curves are optimized to be similar to your eye. There is metamerism - many spectral distributions will look the same to your eye as well as to a camera. See https://en.m.wikipedia.org/wiki/Metamerism_(color)


>> So e.g. when the light is pure orange, it falls down into red wavelength range and is counted as red

'Pure' orange light isn't red. It has a wavelength of 590–620 nm. Red is 625–740 nm and Green is 495–570, so 'orange' is between red and green. The sensor filters each allow a range of wavelengths through. So green is triggered as well as the red filter. In RGB terms orange is 255, 127, 0, i.e. with a strong red component and a smaller green component.

White balance is computed downstream from the sensor, and is used to resolve the effect that a coloured light source creates a colour cast on objects, most noticeably on white ones. The human visual system auto-compensates for this but cameras require special processing, sometimes done using presets for different types of light (sunlight, shade, tungsten etc).
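The white-balance step mentioned here can be illustrated with the simplest possible approach, gray-world balancing: assume the scene averages to neutral, and scale each channel so the averages match. This is a toy stand-in for the camera presets (sunlight, shade, tungsten) described above, not how any specific camera implements them:

```python
import numpy as np

def gray_world_wb(img):
    """Scale each channel so its mean matches the overall gray mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    gray = means.mean()
    return img * (gray / means)

# A random scene with a warm (tungsten-like) color cast applied:
# red boosted, blue suppressed
rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(8, 8, 3))
cast = scene * np.array([1.4, 1.0, 0.7])

balanced = gray_world_wb(cast)
# After balancing, the channel means are equal again: the cast is removed.
```

Gray-world fails when the scene genuinely isn't neutral on average (e.g. a sunset), which is one reason cameras offer explicit presets and manual white point selection instead.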


Your eyes cannot see orange directly, but as an overlap of different intensities of the stimulation of different receptors in the base colors.

So as long as the filters in the camera have roughly the same transmission curve as the sensitivity curve of the color receptors, all is good.

However, to animals (or hypothetical aliens) with other color receptors, the images produced by photo prints and screens would look quite weird, with colors all wrong.


Filters allow rather large families of wavelengths through. You don't have just red, you have a fraction of green. Because your eyes only have three types of photoreceptors they can be fooled by playing back a similarly large family of wavelengths back.


Not an expert but: orange's color code is FFA500. It has red and green. I expect when orange hits the sensor, it will register as X amount of red and less than X green?


That is in the RGB color model. In the physical world, there are (almost) infinitely many different spectra that could be perceived as the same "orange". It is honestly pretty amazing how well human color vision works despite that.


That is correct. The sensitivity of the "red" and "green" pixels overlaps in the orange light frequencies.


If you tried to turn the sensor data into light again, there would not be enough information to do so accurately. Everything is built around human perception of color. When a photon hits your eye, it produces a "tristimulus value" for your brain. (The tristimulus is produced by "S", "M", and "L" sensitive-cones; these roughly correspond to blue, green, and red; that's why those colors are what we use for base colors. But, you can use other colors, and you could use more than 3 if you wanted to. There is no law of the universe that splits colors into red/green/blue parts... that's just a quirk of human anatomy.)

The goal of a digital camera is to be sensitive to colors in the same way that your eyes are. If that tristimulus can be recorded and played back, your brain won't know the difference. The colors your monitor emits when viewing a photograph could be totally unrelated to what was in the original scene, but since your brain is just looking for a tristimulus, as long as that same tristimulus is produced, you won't be able to tell the difference.

(Fun fact -- there are colors you can see that don't correspond to a wavelength of light. There is no single wavelength of light that fully stimulates your S and L cones, but plenty of things are magenta.)

TL;DR: computerized color is basically hacking your brain, and it works pretty well!
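The metamerism this comment describes, many spectra mapping to one tristimulus, can be demonstrated directly. The Gaussian cone curves below are rough hypothetical approximations, not the real CIE sensitivity functions; the construction solves for a red+green mixture whose L and M responses match those of a single-wavelength yellow:

```python
import numpy as np

wl = np.arange(400.0, 701.0)  # nm

# Hypothetical Gaussian stand-ins for the L, M, S cone sensitivities
L = np.exp(-((wl - 565) / 50) ** 2)
M = np.exp(-((wl - 540) / 45) ** 2)
S = np.exp(-((wl - 445) / 30) ** 2)

def tristimulus(spectrum):
    # Each cone integrates the spectrum weighted by its sensitivity curve
    return np.array([np.trapz(spectrum * c, wl) for c in (L, M, S)])

def spike(center):
    # Narrow spectral line at the given wavelength
    return np.exp(-((wl - center) / 3) ** 2)

yellow = spike(580)              # single-wavelength yellow
red, green = spike(640), spike(530)

# Solve a 2x2 system for the red/green mix matching yellow's L and M responses
A = np.column_stack([tristimulus(red)[:2], tristimulus(green)[:2]])
a, b = np.linalg.solve(A, tristimulus(yellow)[:2])
mixture = a * red + b * green

# Physically different spectra, but (nearly) identical cone responses:
# the eye -- and any camera tuned to mimic it -- cannot tell them apart.
```

This is exactly the hack a monitor exploits: it never emits 580 nm light for yellow, it emits a red+green mixture that produces the same tristimulus.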


Amazing, just amazing. Thanks a lot. I would only suggest the author set og:image on the pages. When you share it on social media no image appears, and you know how that affects spreading, right? :)


I built a trinocular microscope mount for my DSLR camera out of a couple of small magnifying lenses. When I was trying to get it to work and improve the quality, I did a bunch of reading on optics. Building your own camera lens that works is pretty easy, but making it capture nice pictures is hard.

Having this article back then would have been a big help! The interactive widgets are great, and it looks like they're entirely home-grown. I am really impressed with the level of detail and work that went into this post.


The article is very good but you have to consider the existence of other filters, not only the Bayer one.

And the Canon Dual Pixel AF works a bit differently than the article explains https://www.canonwatch.com/dual-pixel-af-has-become-a-canon-... (but the substance is the same)


In the article it is clearly acknowledged that there are different types. When you create an introduction like this you kind of have to limit your scope.


Very impressive work, the interactive examples are excellent and will help many understand the concepts.

I’m a novice when it comes to anything related to web development so I’m going to go and look at how this type of thing is done. Would anyone have any suggestions or recommendations for getting started?

I teach fluid mechanics with Jupyter and I use pyGEL3D for inserting 3D models but it’s far from ideal. I’d love to do something like this.


The pinhole diameter slider is backwards; the objects should appear sharper as the aperture shrinks. Also it'd be good if you showed the effect of diminished light-gathering power at the same time, to emphasize the tradeoff.

All of the sliders should be labeled.

All in all, this is a good conceptual effort, but the details aren't strong.


While this is a very long article, I just want to say: great webpage. I'm saving this for an SEO reference file.


This is so good!

I wonder if the author publishes the source code for the interactive bits somewhere (I could of course look at them from the presented JS, but still). I've been looking for a non-fan-inducing way to generate a DOF-effect, and the one in the first example is definitely good enough for my use case.


I was 45 years old when I learned why squinting your eyes makes you see sharper. Never too old to learn!


This article is just superb.


This article is stunning. The design, the way it presents the information, everything. I don't think I've ever seen such a good webpage.

Edit: Now I've seen some of his other pages... This man is a genius.


As someone who spends most of his time either shooting, teaching or hustling, this truly is phenomenal. It makes some of the more mind-bending aspects of photography perfectly clear. Great post!


Great work, we live in amazing times! I can't imagine how humanity will evolve from now on if knowledge is as available and as well and accessibly presented as it is these days!


I can only echo my impressed-ness. With or without an interest in the obvious topic, it is well written, well illustrated, accurate. A superb example of the internet at its best.


It's really fascinating - I had always assumed digital cameras had different sensors for different light colours like eyes do.


Wow - just wow - such a great resource. This is what I love about www - exposure to such enthusiasm and expertise.


Awesome article and blog, as an amateur photographer this is highly educational and fun to read!


This is really interesting. In the first picture what does the blue slider do?


It changes the focal length of the lens, although it does not seem to affect the lens (blue ring) in the left-hand pane which seems like a bug.

There is a demonstration later in the article (below the 5th mention of "focal length") which does affect the lens (blue ring) in the left-hand pane. Rotate the scene so that you are viewing the camera side-on, and you will see the focal length of the lens changing when you move the slider.


It's changing the focal length of the lens (roughly the distance between focus points of the lens). If you rotate the scene, you see the lens is blue. Further in the article, they mention the focal length, using the same coloring.


this is quite a beautiful page, and in the first few seconds you can immediately grasp focus & depth-of-field/bokeh. But then it keeps going!


Is it possible to make a color pinhole camera?


Sure! "Color" vs "Black and White" vs "Sepia" is primarily a function of the photosensitive (aka film or digital sensor) layer.


This is great. Thanks for sharing.


Love this


As a hobby photographer, this is simply amazing and the most intuitive article I’ve come across. This is a must read.

I am curious, however, why we still can’t digitally reproduce bokeh. Apple is getting close. I thought LiDAR would theoretically solve that and could yield indistinguishable renders compared to analog lenses. That would be a game changer in my view and why I would like to see Apple develop a full-frame sensor coupled with their technology.


A large lens captures information over an area, and so to a certain extent can "see around" out of focus objects. A selective blur of a fully focused scene captured from a single viewpoint (i.e. a small lens) can only approximate this effect, because it simply doesn't have access to the same information. Even with a perfect depth map, you still don't know what's behind occluded objects.


If instead of resolving points of light on the image sensor, you use a group of pixels to resolve an entire tiny image, you can effectively also see around things. You end up with a picture of many small sections of the large image, each at a different angle: the image on the far left would see a different angle than the image on the far right. This is exactly what the Lytro camera did, and it's why you can take the picture first and focus later. Of course, you sacrifice overall image resolution quite severely:

* https://www.researchgate.net/figure/a-b-The-first-and-second...
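The "focus later" trick boils down to shift-and-add refocusing: shift each sub-aperture view in proportion to its position in the aperture, then average. Objects at the chosen depth align and come out sharp; everything else smears into blur. Here's a toy 1-D sketch with synthetic views (the parallax model and parameter names are illustrative, not Lytro's actual processing):

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add refocus: views are 1-D sub-aperture images, offsets
    their positions in the aperture, alpha the refocus parameter
    (pixels of shift per unit of aperture offset)."""
    acc = np.zeros_like(views[0], dtype=float)
    for v, u in zip(views, offsets):
        acc += np.roll(v, int(round(alpha * u)))
    return acc / len(views)

# Synthetic scene: a bright point whose apparent position shifts by one
# pixel per unit of aperture offset (parallax)
n = 64
offsets = [-2, -1, 0, 1, 2]
views = []
for u in offsets:
    v = np.zeros(n)
    v[32 + u] = 1.0
    views.append(v)

sharp = refocus(views, offsets, alpha=-1)   # shifts undo the parallax: in focus
blurry = refocus(views, offsets, alpha=0)   # no refocus: point smeared across 5 px
```

With alpha matched to the point's parallax, all five copies of the point stack up at one pixel; with the wrong alpha, the same data averages into a blur, which is exactly the out-of-focus rendering.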


Well yes but you're still going to need a sensor as large as the aperture of the lens you want to simulate, which makes it a non-starter for phones.


Yea, this type of image recording has huge compromises but I personally find it really cool.


So do I! If I had a bit more photography budget I'd try them out.

Also, I'm excited about cameras with dual/quad pixel AF, that are kind of a hybrid between lightfield and traditional cameras. I wonder what kind of sorcery one would be able to do with the light field data in those cameras!


One of the limiting factors for modeling bokeh and flare-like effects is dynamic range limitation. You need extreme HDR capturing to accurately reproduce these effects, as they often play the largest part with bright, especially colored, light sources. I did work on flare simulation and while many effects can be modeled by a rather simple convolution (in the spectral space of course -- you cannot make a rainbow out of RGB straightforwardly), the problem is that kernels (PSFs, point spread functions) for these convolutions have very long tails and it's the shape of these tails that gives most of the 'natural' artistic feel.

The thing is, these tails become apparent only when you convolve with a very very bright source -- which on a typical 12-bit level linear raw image would amount to something like 10⁵-10⁶, i.e. needing 4-8 additional bits of HDR.
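A minimal 1-D sketch of that point: convolve a scene with a long-tailed PSF (here a toy power-law kernel, not a measured one) via FFT. With an HDR source ~10^6 times the background, the tails rise well above the scene; with the same source clipped to LDR range, the flare vanishes into the noise floor:

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

# Toy PSF with long power-law tails, normalized to unit sum
psf = 1.0 / (1.0 + np.abs(x) ** 3)
psf /= psf.sum()

def flare(scene):
    # Circular convolution via FFT; ifftshift centers the kernel at index 0
    return np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(np.fft.ifftshift(psf))))

scene = np.full(n, 0.1)       # dim background
scene[n // 2] = 1e6           # HDR light source, ~10^6 x background

ldr = np.full(n, 0.1)
ldr[n // 2] = 1.0             # same source, clipped to LDR range

hdr_flared = flare(scene)
ldr_flared = flare(ldr)
# 50 pixels from the source, the HDR tail is clearly visible above the
# 0.1 background; the clipped version shows essentially nothing.
```

This is why radiance-map recovery matters: the convolution itself is cheap, but without the extra bits of dynamic range the tails multiply against a clipped value and contribute nothing visible.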

Here are some useful links on the topic of flare simulation; I believe bokeh has many similar aspects:

https://web.archive.org/web/20200119024053/http://simonwinde...

http://resources.mpi-inf.mpg.de/hdr/TemporalGlare/


Here's another interesting paper [1] on that topic. It shows that synthetically blurred images are significantly more realistic if they're based on recovered radiance maps (HDR).

[1] https://people.eecs.berkeley.edu/~malik/papers/debevec-malik...


Phone cameras are extremely wide-angle so everything is in focus and there is no natural bokeh. To add bokeh, you have to separate the subject from the background, and then also determine how far different parts of the background are. This requires very advanced AI for non-trivial images (see the imperfections in Photoshop's "select subject" tool), which Apple is actually still doing (that's what portrait mode is). But if it's not perfect, it quickly becomes worthless, so in short, they are doing it, but only the most advanced companies can try.
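The depth-driven blur itself is conceptually simple once the segmentation/depth problem is solved. Here's a toy 1-D version: blur radius grows with distance from the focal plane, using a per-pixel box average. Real pipelines use learned mattes and disc-shaped kernels; this just illustrates the depth-to-blur mapping:

```python
import numpy as np

def synthetic_bokeh(img, depth, focus_depth, strength=4.0):
    """Blur each pixel by an amount proportional to its distance
    from the focal plane (toy 1-D box blur)."""
    h = len(img)
    out = np.empty(h)
    for i in range(h):
        radius = int(strength * abs(depth[i] - focus_depth))
        lo, hi = max(0, i - radius), min(h, i + radius + 1)
        out[i] = img[lo:hi].mean()
    return out

img = np.zeros(100)
img[::10] = 1.0                        # a "scene" of point highlights
depth = np.linspace(0.0, 1.0, 100)     # near-to-far depth ramp

out = synthetic_bokeh(img, depth, focus_depth=0.0)
# Highlights near the focal plane stay sharp; distant ones smear out.
```

The hard part, as the comment says, isn't this blur, it's getting a depth/matte estimate accurate enough that the blur boundary doesn't cut through hair or glasses, where errors are immediately visible.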


Not sure but - as film e.g. captures actual photons from the scene, probably some kind of information is encoded through that.

Bokeh is a kind of space representation, similar to how you can basically "see" a sound stage with instruments properly separated when someone has a really good sound system, or how dogs have a "5.1/7.1" sense of smell.

How does one encode that I have no idea.


Look into light field photography tech. It is possible to capture a ”volume of light”, within which bokeh & more can be adjusted after the fact. Issue is the amount of data generated and complexity of tech versus getting a ”good enough for most situations” image via simpler means (regular photo). Regular + depth images (Apple LiDAR etc) with help of AI can create something vaguely similar to actual bokeh, but they’re missing a lot of source data.

In the world of 3D rendering (content created from scratch) very advanced & realistic bokeh effects are possible, as an example see http://lentil.xyz for the Arnold renderer.


Wow. I left the CG industry 3 years ago in a sad bout of defeat, involving both an inability to make a decent living and a realization that it would never meet my standards of creative engagement that were set by my lifetime love of photography and film. But, this project is very cool.


Wow, thanks!


Sidenote, it appears the author works for Apple.


The website sliders don't work well with a touch interface. If I use a mouse, I can smoothly move the slider values. But attempting to tap and slide them either has no response or makes the slide move to the extreme left or right.

I'm using Firefox (Nightly) on a Windows 10 tablet (Microsoft Surface Go).


I did some more testing and I think it's due to how Windows interact with the website slider.

I've tried with Chrome on Windows 10 and find that the slider works but only if I touch on the line the slider is sitting on to set a value. Attempting to tap on the round circle and pulling it to slide does not work.

I tried again on Firefox and if I very carefully tap on the line, I can set the slider. But touching the circle and sliding does not work. So, Firefox is more sensitive than Chrome when it comes to determining whether I am touching the line or not.

I presume this is due to how Windows interprets touch events and passes them on to the browser, making the sliders hard to work with in my case.


Sliders work perfectly for me on latest FF on latest LineageOS, which suggests that it's not a problem with the website, rather with either FF or Windows on your device.


It works flawlessly on my iPad in Safari.


Firefox on Android here, every animation works flawlessly.


Wow. What is this tool used to make these animations?


[flagged]


"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."

https://news.ycombinator.com/newsguidelines.html

We detached this subthread from https://news.ycombinator.com/item?id=25371653.


But it's way less effort than checking, isn't it? The name in the footer isn't English, so it's hard to know from that and there's no about page that I could find. The Twitter profile makes it more obvious, but I only clicked on it to check because of your comment. Why bother??


[flagged]


I'm from Slovenia and this is bullshit. Maybe it holds true in a couple of Slavic languages, but I'm familiar with a handful of them and this isn't a thing in any of those.

Regarding the "insulting" part, you're probably mistranslating it. What would be insulting in many Slavic languages is using the "middle grammatical gender" (rough literal translation), which is equivalent to using "it" to refer to a person in English. If you insist on translating literally, using third person gender neutral (in Slovene and many Slavic languages) is actually a sign of respect and is how one talks to their superiors (antiquated in 3rd person, but still the norm in 2nd).

Not that it matters anyways, as we're all talking in English and thus English grammar is the only one that matters. Even if you know for a fact all parties in the conversation speak the other language, if you're using English, the rules of the other language don't matter.

P.S. Re: Latinx: yeah, that's stupid and I'd see it as mildly offensive if directed at me, but that is neither a pronoun nor common practice, so is entirely irrelevant here


I'm Romanian and have many Polish friends. This is not even remotely accurate, except for basket cases that I have yet to meet.


Actually being Polish I can say it is correct, at least for the cases mentioned (also correct for suffix -cki). However it is true that there are surnames that do not follow this particular inflection scheme.

Keep in mind that Poles are likely to use a single, male variant of the surname, rather than explain the intricacies of inflection in the Polish language.


-A former girlfriend is of a Russian emigré family; she always got a chuckle out of Norwegians' inability to understand the concept of inflection.

Her father as a matter of course got letters from all her leisure activities, school &c addressed to 'Mr. Yuri ----skaia'. - as her last name was ----skaia, obviously his would be, too.

At least for us inflectionally challenged Norwegians.


Reminds me of Nabokov complaining about the incompetent translators of Russian novels → English writing Mr. Karenina.


It wasn't "Mr. Karenina" (I've never seen anyone use that form) he took issue with; he believed that the name should be translated as Anna Karenin rather than the usual Karenina, since English does not inflect surnames.


Basically the same as "Miss John Doe".



I'm still going to use they/them because a name doesn't necessarily match with one's gender.


language 1 extracts gender to a separate word.

language 2 embeds gender in the word itself.

names are words.

it's a lose-lose situation: native speakers of language 2 will look at what you wrote and cringe or just assume you're ignorant. they won't get the point you're trying to make, null, nada, 0% chance. source: i'm a native speaker of language 2.


Yep. "Greek is gendered, the gender is embedded into the words". Sure, but I can speak whole normal sentences without referring to gender at all, something like "went to the supermarket and bought x y z, then did a, b and c and came back".

You can talk about someone at length without mentioning their gender once.


This is absolutely true, but sooner or later you’ll have to pick a gender or you’ll just end up sounding funny at best.


Not really, that's just the way sentences are formed. The third person has no gender, the gender is only in the pronouns, which are optional. Adjectives will give it away, though.


[flagged]


If you keep posting flamewar comments to HN we will have to ban you, so please don't do that. We're trying for something else here.

https://news.ycombinator.com/newsguidelines.html


If you think someone having a Slavic last name means their gender necessarily matches the gender suggested by the form of their name, you are just mistaken.

I have a Slavic last name. I also have female relatives whose last name is, quite officially, the male-suffixed version of mine. I invite you to guess how that might have happened.


[flagged]


[flagged]


You opened this thread by lecturing others on the pronouns they used for the article author with no standing to do so. Who are you and why are you more authoritative on the author's pronouns than the person you attempted to correct?


[flagged]


I actually appreciate when people default to neutral when they otherwise don't know. They tend to use he/him if they don't, and that's incorrect.

This unasked for anti-activism is very unhelpful. You are indistinguishable from people who refuse to use correct pronouns even when they know. You do not speak for me.


To be clear, I never forced you to do anything. You just got angry at me.


Thank you for sharing your personal opinions.


> Using gender neutral in this case is as insulting as calling him a dog.

In English? No it isn't. You are the one who is "butchering" a language's pronouns by improperly mapping considerations into it from another language here, not vice versa.

I have a Slavic last name, and I speak English fluently and Russian not-so-fluently-anymore.

If you use "оно" to refer to me in Russian because you don't know my gender, it would sound weird to me. (I haven't been to any Russian-speaking countries in a while, but I suspect there hasn't been a shift in Russian to use that pronoun that way while I've been gone.) I personally wouldn't be insulted, but I can easily imagine how somebody might; it would carry a subtext of referring to me as an inanimate object.

If you use "they" to refer to me in English because you don't know my gender, it would not sound weird to me at all. (Though it might have 5-10 years ago, for different reasons.) A much closer (but still imperfect) analogy to using "оно" to refer to me in Russian would be using "it" to refer to me in English, which would sound weird to me.

Hope that helps.


Russians use “yous” or “вы” which is both gender-neutral and a polite pronoun.


That's for the second person pronoun, which are all gender neutral in Russian, whether they are singular or plural. (Like English.)

We're talking here about third person pronouns, and the "the plural is the polite version and the singular is the casual version" doesn't extend to those. You wouldn't refer refer to someone in the third person as "они" rather than "он" to be polite.


> you wouldn’t refer

you wouldn’t do that because Russia is a paradise for homophobia, not because it does not make sense.

“Они” is a valid pronoun for the same thing than now means “them” in woke English.


The grammar of a language isn't about what "makes sense". A particular usage making sense or being reasonable isn't the same thing as that usage being part of a language.

If non-homophobic circles of Russian-speakers actually use "они" as a gender-neutral singular third-person pronoun, that's news to me. I wouldn't be terribly surprised to find out that's true. If it is, I would expect that to be a recent development, as in the last decade or so.

But it sounds like you're basing what you're saying on just mapping the English usage to Russian, or basing it on ground principles about what makes sense or doesn't, which is pretty unconvincing. Are you aware of examples of "они" being used this way in the wild by native speakers?

Anyway, whether it expresses politeness is a whole other can of worms, and I would be quite surprised to find out that on top of that in these circles "они" is used to express politeness in place of "он" or "она" (or I guess "оно") the way that "вы" is used to express politeness in place of "ты".


I don't see mention of pronouns anywhere. Usually when people don't list pronouns or hints it's because they haven't thought about it or don't care. Without any comment from him, you have no place telling others how to refer to him.

I'm going by the photo on his Twitter account, but it's as valid a guess as any other. If you went by how I look or by my legal name, you would be wrong, and I would be annoyed at someone on a website chastising others for not using what they assumed was correct.


> The site is the work of a guy, "him", and "his" work. We don't have to guess and say "them" and "their" work.

Assumptions are presumptuous and harmful. Everyone is a they/them until otherwise stated. These are extremely useful pronouns.

You're a they/them.

Calm down. You seem to be overly attached to regressive labels. I don't know what drives this in you, but the universe is burning and people just want to be happy. Your life is short; you're going to die and rot away. Is this really something to get worked up over?

In the future, when our descendants are all uploaded to computers or replaced by an AI overlord, people will be all sorts of genders. They won't be bound by yesterday's norms. Would you be a stuffy gender policeperson, or will you just let people be who they are?


[flagged]


You're literally complaining about having more dropdown menus in your life like it's a big deal. Not a good place to "you're out of touch with the needs of the world" from.

While it's true that pronoun usage in English is a relatively unimportant problem compared to lots of other existing problems in the world, it's worth noting that the person who seems to be the most emotionally invested in that problem here is you. You might accurately feel that using "they/them" this way is forced on you in other situations, but that's not what happened here. In this conversation, you observed "them" being used that way; you didn't get told that you have a moral or ethical obligation to use it that way. That a pretty important distinction.

You're the one that's being high and mighty and trying to impose a standard on somebody else's use of language here, not vice versa.

Also, until just now, the "1% of the population" that you are talking about (which I presume is trans people) wasn't even directly relevant. You're right that part of the cultural shift towards greater use of gender-neutral pronouns is socially driven by trying to make trans people's lives easier, but you could easily have the more or less the same conversation and not have trans people in mind at all. That shift is also socially driven by just trying to leave gender out of the picture when it's irrelevant. That's an aspiration that one might reasonably have even if one doesn't give a shit about trans people.

The thing you're upset about in the first place wasn't a case of someone using "them" because it was a preferred pronoun or whatever. It was a case of using "them" as a gender-neutral singular third-person pronoun because they didn't bother to do the legwork to work out the gender of the person they were referring to.


[flagged]


Please don't go into personal attack, and please stop posting unsubstantive and/or flamewar comments to HN. You've unfortunately been doing a lot of that lately, and it's not what this site is for.

https://news.ycombinator.com/newsguidelines.html


Your rhetorical style is very harmful to the people you claim to support. You received criticism for your interactions and you immediately redirected that towards hatred for "trans folk" in an effort to smear the person who said it.

> What harm is it to use singular "they"?

That's not what the OP is saying. Because of course there's no harm in any pronoun. But there is societal harm in acting high and mighty about how saying "they" is somehow making a better world and then acting like a prat when called on it.

> regressive labels. ... people will be all sorts of genders. They won't be bound by yesterday's norms.

No, modern twitter activism is just making a bunch of crap up, and this over-concern on properly labelling everyone's gender expression just harms actual transexual people with actual body dysmorphia.

What people who harp on pronouns miss is that pronouns are not used to refer to someone's gender but their sex. I can see your sexual characteristics from across the room, and referring to them is a good way to narrow down who I'm talking about. "That's her there, the tall one in the red dress". Nobody is going to say "That's zir, the one who is only sexually interested when they're romantically attracted to someone, and whose gender expression varies with the weather." Not only is it a bunch of totally irrelevant and inappropriate discussion for most settings, but it's invisible and thus worthless in a referential sense.

The pronoun warriors are invariably not sexually dysmorphic, they're just deconstructing their parent's mores. They're today's goths, not some miraculous new human evolution. If they manage to do something new and valuable - if for instance Harry Styles actually looked good in a dress - all they'd end up doing is expanding the normal dress code for men, they wouldn't actually be creating a new identity. And this isn't helpful because society explores these things (copying elements of one sex's traditional styles in the other sex's new style, etc) via fashion designers and manages to do it without attacking each other's moral character and making queer/intersex people into political pawns.

> It seems like you dislike trans folk. I'm not sure what they did to hurt you, but I suspect your fears and hate are the result of conditioning.

If they had displayed any hate, and they did not, it would have been caused by conditioning from vocal jerks, not from trans people living their lives.

> Would you be a stuffy gender policeperson, or will you just let people be who they are?

Okay, in what way are people not already 100% allowed to express their gender? (In the USA, Canada, etc...)

To someone in the Q-adjacent community who has been through Tumblr and Twitter, you're incredibly transparent. Everything you say is divisive, rude, and harmful, because it distracts from actual problems and makes this nonsense seem important. You're clearly a justice warrior, not someone who works for justice.

> You've got a lot of fear and anger. Reflect on it. Let it go.

And you're doing this in the name of a community who was doing just fine on their own. Trust people to speak for themselves. Don't pick fights in their name.

Since the LGBTQ institutions have been taken over by the woke they've pushed gendered nonsense and the public isn't blind. Mental breakdowns over pronouns, putting men in women's prisons, etc. Your behavior here is part of a pattern that has reduced LGB acceptance over the last few years, and while Trans acceptance is superficially up, it's at the cost of hateful wars against transexuals, for instance.

I know an ~60yo MTF who was savagely attacked online, under their actual name, for saying that they were tranSEXual because they knew they were biologically male and wanted to be a woman. They were told their language was hateful and they were "killing us!" even though they nearly lost their job, and were being bullied in a way that actually causes people to kill themselves. This is an actual person who spent the last four decades living as the other sex, fighting actual discrimination back in the day, and the mob wanted to cut them down for not toeing the woke line.

The effect of this allyship is similar to Portland, where white Antifa activists are using BLM as an excuse to riot. There's a 'funny' video of a black officer saying he's been racially insulted more during the BLM riots than in his entire career before that. He goes on to say that every time a black person tried to speak to him - to ask about his actual views and rationale for being an officer - white BLMers came and physically blocked the interaction. The rioters don't want anyone to find a middle ground and solve anything because they're using black people to justify their anarchist cosplay. "Oregon Public Broadcasting reports Wednesday that black members of Portland’s Black Lives Matter movement are tired of their anti-racism demonstrations being hijacked by the mostly white “Antifa” anarchists and are working to separate the two movements as protests continue."

Similarly, I've never seen the pronoun police display actual concern for anything other than their ability to control people.

Reflect. Stop. Thanks.


> What people who harp on pronouns... Mental breakdowns over pronouns... pronoun police

To my eyes, in this thread, the person who is harping on pronouns the most, who seems closest to having a mental breakdown over them, who can most accurately be described as "pronoun police", and who shows the greatest explicit tendency to control others' use of pronouns, is also the person who in the first place corrected an in-the-wild usage of singular "them" to "him".

Like, can we step back and look at the thread and say that yeah, sometimes maybe it's the people who are pushing back against singular "they" that are blowing the thing out of proportion?

> I've never seen the pronoun police display actual concern for anything other than their ability to control people.

If you're open to a real example, the "pronoun police" at my workplace suggest that when writing about hiring candidates, we use singular "they" instead of "he" or "she", in order not to call too much attention to the gender of that person when those evaluations are read by decision makers. The intent is to reduce any possible gender bias in hiring.

This to me reads like an actual concern for something other than one's ability to control people, namely a concern for being more objective and unbiased in hiring decisions. (Or at least, if you want to be cynical about it, for seeming more objective and unbiased in hiring decisions.)


I wanted to step back to the beginning and ask why you felt that you needed to jump into a conversation with someone who was obviously much more deeply involved than you. In no way did I criticize trans people, or otherwise attack anyone other than SJWs who fight for people who don't want them.

Don't you see the issue of people claiming to fight for others, but really just stirring up trouble?

I'm honestly interested because I see many communities attacked and turned into a hollow shell, leaving the original members as the primary enemies. Transexuals are literally hated on Twitter despite being the group who fought for the freedom that the current justice warriors are using against them. And people like yourself, who seem to sincerely be trying to help, are helping the attackers. What about the transgender message to you justifies death threats and insane hatred of transexuals? Because that's what you're supporting if you back these pronoun bullies. All anyone different needs is just for everyone to let them be. Fighting in their name is directly counterproductive.

Real LGBTQ+(everything) people don't want to force the public to do anything, they simply want to be equals! Everyone who fights this "must use pronouns" or "must say 'cis'" fight must hate actual LGBTQ people, or they wouldn't try to pick a fight for them. Do you and everyone else not understand how hard it is without people making it harder?

I mean, it is a way for you (the generic you, but also the personal you) to virtue signal. But do you really think this justifies the damage?

How, and why? This stuff is crazy!


Oh, also, regarding

> What people who harp on pronouns miss is that pronouns are not used to refer to someone's gender but their sex. I can see your sexual characteristics from across the room, and referring to them is a good way to narrow down who I'm talking about.

The thing you see about someone when you look at them from across the room isn't their biological sex, if anything it's their gender presentation. This chunk of youtube video (https://youtu.be/9bbINLWtMKI?t=548) talks more or less about that.


You seem like you intend to be helpful, but I don't think you're as surrounded by these politics, or the disingenuous arguments, as I am. imho, Contrapoints is a homophobic monster who supports shaming people with "genital preferences" (ie, gays and lesbians.) If you want a sane person who just happens to be trans, try Rose of Dawn. They're honest and sincere. Blaire White is also reasonable, and often gets threats and hate from the community for being unwilling to join in the nonsense.

Ultimately the rift comes down to lying. Do we have to deny truth to be kind? I say no, and I'll leave this rough paraphrasing of Blaire: "Am I male? Yes. Of course. I'm a transwoman and I wouldn't be trans if I already was female. Do I wish I wasn't? Yes. But am I offended? No."

As for Contrapoint's "seeing gender", that's just simply not the case. For one thing, I grew up when boys were allowed to like pink, so most of people's signifiers don't stand out to me.

I'm super behind being sensitive, but if you're six inches taller than the others, have a beard, or breasts, etc, those are sex linked traits. Not size entirely of course, but often enough that the 10% male is larger than the 90% female. It's gaslighting to assume we can't differentiate between the sexes.


> Contrapoints is a homophobic monster who supports shaming people with "genital preferences"

Jesus, what was the phrase you used? "Straight to ideological slander"?

> I'm super behind being sensitive, but if you're six inches taller than the others, have a beard, or breasts, etc, those are sex linked traits.

So if you were talking to me and saw someone who looks like Blair White across the room, and wanted to single that person out, do you think you'd spot that person's biological sex and say "him over there" to me rather than "her over there"?


It's not straight anywhere though. It came over years of watching CP defend literal incel terrorists beating up lesbians who refuse to have sex with them, while gaslighting the victims and defending the attacks. CP didn't coin "Cotton Ceiling" but they sure go on about it. If kinkshaming is bad, what is it to personally shame lesbians who reject men?

And I didn't lead with it, I used it to explain why I found your reference to be less than compelling.

> So if you were talking to me and saw someone who looks like Blair White across the room, and wanted to single that person out, do you think you'd spot that person's biological sex and say "him over there" to me rather than "her over there"?

You and I are already talking about Blaire so I'd say 'He'. But if I thought it might be ambiguous I wouldn't use either. Like I wouldn't use a shirt-color if I couldn't see it clearly, or thought that you couldn't.

And I won't use people's chosen pronouns but I don't pick fights either so I'd sidestep the issue in their home or at their event.

But if you mean, would I spot Blair if I didn't know them? Ehh, I dunno.

Do you think this means I'm reading their gender expression? Doesn't that implicitly assume that there are only two gender expressions then, such that I'm being fooled into seeing the other one? This whole angle feels like it'd be labelled regressive, and it brings up the question of how one would be expected to recognize 'Moon Gender' at all.

It feels like it'd be simpler to just say that I would be mistaken about their sex.


Alright, I'm pretty burned out on this conversation so this is gonna be my last post. It's been distracting me and I want my real life back. I'll just try to wrap up the bits I find interesting.

> But if you mean, would I spot Blair if I didn't know them?

That's more or less what I mean, but also, would you, on top of that, expect me to spot the same thing?

> Do you think this means I'm reading their gender expression?

I don't think that logically follows, and I certainly don't think I'm going to convince you of that.

I think I have a chance of convincing you that the argument you gave in the paragraph starting with "What people who harp on pronouns miss is" in the comment at https://news.ycombinator.com/item?id=25379317 is a bad argument for the claim that pronouns (should?) refer to biological sex rather than gender. So I'm shooting for that. (But not for convincing you that the conclusion is wrong.)

I think the Blaire White across a room scenario suggests that pronouns are more useful in a referential sense if they're used in a way that aligns with gender presentation (assuming a person's gender presentation falls neatly into male or female) than if they're used in a way that aligns with biological sex, in the case where those things differ. So if you're really just optimizing for referential usefulness when you're using "he" and "she", I don't think "biological sex" is where you're going to land. I think you're going to land closer to "gender", and even closer to "gender presentation". These things align for most people, so usually it doesn't matter, but in the cases where it matters I expect "biological sex" to be less referentially useful pretty often.

I also don't think the Blaire White scenario is unusual. I know a handful of trans people in person, and the same applies to them, i.e. if you were to point someone who looks like one of them out of a room and use a pronoun to help me know which person you're talking about, the pronoun that matches their gender identity would do a better job than the pronoun that matches their biological sex.

I also don't think it's just me, i.e. I don't think most people are much more prone to 'seeing' someone's true biological sex in this kind of scenario than I am.

> Doesn't that implicitly assume that there are only two gender expressions then?

Sort of. I think "that thing that you see and also expect me to see that makes you choose which of 'he' or 'she' you would use if you want to be referentially useful when pointing out a person across the room" doesn't quite capture what "gender presentation" is either. (Hence the hedge with "if anything" in my earlier comment.) I just think that thing lines up with "gender presentation" a lot better than it lines up with "biological sex" in the cases where those differ.

> It brings up the question of how one would be expected to recognize 'Moon Gender' at all.

Taking it seriously, in real life, for people with Moon Gender (or whatever outside-the-binary gender identity you want to substitute)? I don't think one is expected to just recognize it based on presentation. Hence, the practice of being explicit about preferred pronouns in circles that care about this.


> I think the Blaire White across a room scenario suggests that pronouns are more useful in a referential sense if they're used in a way that aligns with gender presentation

I think we should have defined what we think of as gender expression. I think it's stuff like affectations, hair styles, makeup, "gendered" clothing (dresses), etc.

Harry Styles wore a dress recently to much acclaim. He was lauded for expanding the male repertoire.

> I also don't think it's just me, i.e. I don't think most people are much more prone to 'seeing' someone's true biological sex in this kind of scenario than I am.

I saw a man, in a dress. And I think that's what everyone saw.

The difference between Harry and a transwoman is - intent, which is invisible. I only know Harry isn't trans because the article told me.

> I just think that thing lines up with "gender presentation" a lot better than it lines up with "biological sex" in the cases where those differ.

I would have agreed. Ten years ago nobody was going out of their way to queer everything. If a man wore a dress they either lost a bet or wanted to be seen as female. But now, in a world where nothing signifies sex, how can I read your gender expression?

Now a dress isn't part of gender expression anymore and is simply a garment. I guess we're expected to just move the bar and say that because Harry wasn't wearing a dress and lipstick he's a guy, not a transwoman, but that feels fleeting.

> I don't think one is expected to just recognize [Moon Gender] based on presentation. Hence, the practice of being explicit about preferred pronouns in circles that care about about this.

But that makes a mockery of the entire idea of gender expression. Now it's just people demanding special labels again.

The whole ideology is pushed as a unit and lives and dies as a unit. Either gender expression is a thing, and there are a finite but large set of them, and sex doesn't exist as a binary, and trans babies are real, etc, or it's not.


> Alright, I'm pretty burned out on this conversation so this is gonna be my last post.

I sometimes try to have a proper conversation, but these days I try to make sure I'm not dealing with some form of alt-right bandwagon trolling by checking comment histories. Every single HexagonalKitten comment so far has been about how others get all upset about things, and, of course, 'immigration issues'.

It's not that I have an issue discussing these topics, but at some point I do feel I have to use some quick heuristics to weed out the trolls or 'culture wars' group-thinkers.


So, what's your conclusion of one day of posting history? Far-Right? Alt-right? Full on Notsee? Transphoooooobe? Paid Russian?

As far as group thinker goes, lol. I'm on the 5% side, against the 95% braying mob who dox and torment people all the time. This used to just be common sense, for instance that you can't medicate children to be the other sex, and now people insist that up is down. I'm saying what I said ten years ago, and barring an actual change in human biology, what I'll be saying in another ten years.

But since you brought up immigration issues, do you expect to see an alt-right person (such as you seem to believe I am) saying that race is not an issue? That immigration shouldn't be stopped? Where does that fit in your categorization? Could you describe what a moderate, liberal, Parisian view would be, such that you could judge mine against it?


This is a useful comment -- this is the sort of total derail that hijacks a discussion of (scrolls up) oh right, cameras and lenses, color filters, how we (hehe) filter the world to 'master the art of recording what we see'.

It's good to know when you're conversing with people who prefer fires that don't shed light.


> If you're open to a real example, the "pronoun police" at my workplace suggest that when writing about hiring candidates, we use singular "they" instead of "he" or "she", in order not to call too much attention to the gender of that person when those evaluations are read by decision makers. The intent is to reduce any possible gender bias in hiring.

I am, and I like that. Like the example of hiring musicians after a blind listening. Kind of the whole schtick of that music show where the hosts spin around only after listening to the artist. fwiw, that's actually not considered progressive anymore because it amplifies the homogeneity of choice. Because of this or that factor, many cellists are one sex, or the QA department skews heavily towards one demographic. Without active HR grooming you don't get those SV pleasing diversity numbers where you have a numerical microcosm of the country. But yes, the idea of a meritocracy and blind hiring is always refreshing.

> To my eyes, in this thread, the person who is harping on pronouns the most, seems to be closest to having a mental breakdown over them, who can most accurately be described as "pronoun police", and who shows the greatest explicit tendency to control others' use of pronouns, is also the person who in the first place corrected an in-the-wild usage of singular "them" to "him".

Have you cared for groups of children? Like all bullies, SJWs try to make the victim appear the aggressor. As a current example, Letitia Wright got taken in by pseudo-science, but people attacked her (viciously!) for retweeting it even though she had been victimized by the "hoax". "You're pushing a message that KILLS PEOPLE! F you, cword", and so forth. Presumably if they found anyone taken in by Letitia's post they'd attack them too. Simply put, none of this is what you'd do to actually fix a problem, or if you actually cared about people.

In our case, look at how the person who used "them" responded to the criticism. They went straight to ideological slander.

>> It seems like you dislike trans folk. I'm not sure what they did to hurt you, but I suspect your fears and hate are the result of conditioning.

Accusations of transphobia. That's not the kind of allyship that anyone needs. In fact, that's almost exactly how a troll who wanted to make people hate each other would act.

Yes, this is obviously a non-problem, but by their own rules, the OP assumed someone's pronouns. And when called on this they didn't respond reasonably but lashed out with essentially accusations of hate. If they intend to be helpful they should check themselves before they wreck their allies.


FWIW, I agree with you that "it seems like you dislike trans folk" isn't a good place to go, for a number of reasons. For one, it's a tactical mistake to pull out that kind of accusation if you want someone to listen to you. For another, it's probably too reductive to be accurate. I'm not going to defend the hill of "SJWs aren't bullies" or even "the person you're calling an SJW here wasn't being a bully".

But I also think if you look past that and strip the junk away, there's a kernel of truth to it which is worth keeping.

Namely, the pushback against singular "they" here is grounded as pushback against some perceived nominally-pro-trans movement. (That's what "1% of the population" refers to, yeah?) A good chunk of the criticism presented is criticism of the kind of people who call for singular "they" and not actually criticism of singular "they". It's not just about the grammar, and probably wasn't in the first place, as evidenced by the fact that we're talking about trans people now somehow.

If you're willing to toss "SJW" and "bully" around, I think it's not out of line for me to scrunch up my nose and throw out a "reactionary" when I see that.


Never in my comments have I displayed anger, irrationality, or a desire to dictate that others have to use certain language. It must be easy just to think I'm some fringe element going nutso like some Trump supporter living in Florida. I'm far from that. Go ahead, treat all arguments with anyone you disagree with that way. You may feel right, but it's not right, and the more you dismiss someone on that basis the more badly you'll be surprised.

All I said was that we don't have to speak like we don't know that someone is a man or woman with a high degree of certainty, based on a name. Bending over backwards to neuter someone's pronoun on a 1% chance is silly.

I guess that makes me a grammatical nitpicker if you can only oversimplify it. And sure, attribute it however you like, but I don't speak in vagaries when something is specific and known. And it counters an accurate use of the English language, for some political purpose.

I'm against the dumbing down of discourse at the hands of people who are riding the latest bandwagon, and on other people's behalf no less.

Finally, about your example of obscuring during someone's hiring candidacy by calling him/her "they". That is ridiculous to the extreme. I guess that's what happens when you buy into the idea that your world is dominated by evil bias, and everyone around you is repeating that mantra.


> or a desire to dictate that others have to use certain language.

As far as I can tell, the content of your first comment in this thread was

> The site is the work of a guy, "him", and "his" work. We don't have to guess and say "them" and "their" work.

This, to me, indicates that you want the previous person to say "him" and "his" where they said "them" and "their". Even in this post you're displaying a negative opinion about the use of singular "they", and expressing that you want others to stop using singular "they" this way. Am I wrong?

Like, if you referred to Dominique McLean with "he", and someone said to you "We know that person prefers they/them pronouns, we don't have to guess based on appearances and say 'he'," would you not characterize them as dictating that you have to use certain language?

> I don't speak in vagaries when something is specific and known.

I hope you do. (And I'm pretty sure you do.) Omitting information that is specific, known, and irrelevant is a basic part of communicating well. Not every failure to bring up something that is specific and known constitutes "dumbing down of discourse".


It's supposedly the assumption that matters. I read it that Kepler was bothered by the rudeness to most people, assuming neutered pronouns for everyone, for the hypothetical sake of the 1%. (Note, the 1% are not here asking for this of course, because they don't want to be associated with these behaviors.)

Rules for thee, and not me. That's bad enough, but when someone is speaking for an entire population and using language like hate and shame they aren't helping anyone.


What set of rules are you actually picturing that you think echelon has that they're not applying to themselves?

As far as I can tell, the rule they're suggesting for singular "they" is that "they/them" can always refer to anybody in the singular, irrespective of their gender. (Maybe unless the person you're referring to indicates a preference otherwise.)

Like, maybe this is inconsistent with some set of standard SJW rules that you have in mind. But even if so, I don't see any reason in this conversation to assume that the person you're criticizing is attached to that standard set. (I also don't think there is a standard set at this point.)


May I say a sincere thank you for taking the time to write an articulate, reasonable, credible counterpoint to the above discussion.


I love how you go right ahead assuming, at the same time you preach against it. It's really respect-worthy.

I'm not Christian. I'm not religious. I'm not Republican. I'm not disliking trans people. I'm not afraid of any such issues. All wrong. Yet you were happy to assume all that?

What I don't like is people who take on for themselves the cloak of righteous moral positioning on behalf of others, and force others to conform to their thinking because they believe they're right. Or worse, believe they're better.

"They" is not singular. You know the person is a man. You want to neuter everyone's speech to fit your worldview for the benefit of a 1%.

Strong opinions do not equal anger. And not everything in the world should be solved by just "can't we all get along and let people do what they feel is good". Opinions and attitudes such as you're showing are deserving of objection and being opinionated in the opposite direction.


>You're a they/them.

Why on Earth are you trying to dictate someone else's pronouns?


“They” is historically understood to be plural. Some information fidelity is lost if “they” can mean multiple people and a person.

Language is hard, so I understand the need for compromise. “Zee” could be used to avoid gender centric labeling while maintaining language clarity.


Historically understood to be plural by you.

Actual-historically, from 1375, not so much.

https://public.oed.com/blog/a-brief-history-of-singular-they...


That is not true. Singular they has been in use as long as plural they. It has always been understood to be usable in either situation.


I'm sure there are examples, but in common use it is understood to be plural and has been used that way in all writing for a long time.


That is not correct

https://en.m.wikipedia.org/wiki/Singular_they#Usage

> They with a singular antecedent goes back to the Middle English of the 14th century... and has remained in common use for centuries...


> in common use it is understood to be plural

What I said is correct. Prior to about 4-5 years ago you did not find people using they in the singular, hence "in common use".


> I’m sure there are examples, but in common use it is understood to be plural and has been used that way in all writing for a long time.

It is grammatically plural, but it has been accepted for semantically singular use since long before the Victorian effort to impose Latin-inspired rules on English usage which failed to eradicate it despite intense effort. (Victorian prescriptivism did have some good effects — regularized spelling FTW — but trying to eradicate clear and useful usages like singular “they” was one of its less-well-considered, but fortunately also less-successful, efforts.)


> in common use it is understood to be plural

Nothing in what you wrote contradicts what I stated. The man on the street thinks they is plural, and nearly all writing treats it that way. Historical examples excepted.


Anyone else come here expecting a Haskell post?


Impressive work! Nit on design: in my first ~3 attempts to scan through the page to see if it was worth reading, I found myself stopping very early when the black background started, given that it usually signals a footer and the end of useful content. The only reason I ended up discovering there is more content was the sanity check that if it's on the HN front page there has to be more to it :)


Another nitpick is that the sliders are all unlabelled and rely only on their colours matching the text. I found myself having to go back and forth between the text and the sliders, and I imagine it must be worse for colour blind people.


Maybe this should make you think about your "worthiness" scan ;)


Agreed it is imperfect, but there is too much Internet and too little time in life, therefore fast heuristics and patterns are useful :)

