Indirect reflected light, like you get from a piece of paper, an e-reader, or almost any other everyday object: yes, I can see how that is easier on the eyes.
But in the case of a projector, the screen is lit by extra light; I'm not sure the situation is all that different from a regular monitor.
Those images are certainly impressive, but I don't agree with the statement "equal in quality to those produced by conventional cameras": they're quite obviously lacking in sharpness and color.
There's one of those Taboola-type ads going around with a similar image that suggests it is a close-up of belly fat. Given the source and their propensity for using images unrelated to the topic, I'm not sure that's what it really is.
I wonder how they took pictures with four different cameras from the exact same position at the exact same point in time. Maybe the chameleon was staying very still, maybe the flowers were indoors and that's why they didn't move in the breeze, and maybe they used a special rock-solid mount that kept all four cameras perfectly aligned with microscopic precision. Or maybe these aren't genuine demonstrations, just mock-ups, and they didn't even really have a chameleon.
They didn't really have a chameleon. See "Experimental setup" in the linked paper [emphasis mine]:
> After fabrication of the meta-optic, we account for fabrication error by performing a PSF calibration step. This is accomplished by using an optical relay system to image a pinhole illuminated by fiber-coupled LEDs. We then conduct imaging experiments by *replacing the pinhole with an OLED monitor*. The OLED monitor is used to display images that will be captured by our nano-optic imager.
But shooting a real chameleon is irrelevant to what they're trying to demonstrate here.
At the scales they're working at here ("nano-optics"), there's no travel distance for chromatic distortion to take place within the lens. Therefore, whether they're shooting a 3D scene (a chameleon) or a 2D scene (an OLED monitor showing a picture of a chameleon), the light that makes it through their tiny lens to hit the sensor is going to be the same.
(That's the intuitive explanation, at least; the technical explanation is a bit stranger, as the lens is sub-wavelength – and shaped into structures that act as antennae for specific light frequencies. You might say that all the lens is doing is chromatic distortion — but in a very controlled manner, "funnelling" each frequency of inbound light to a specific part of the sensor, somewhat like a MIMO antenna "funnels" each frequency-band of signal to a specific ADC+DSP. Which amounts to the same thing: this lens doesn't "see" any difference between 3D scenes and 2D images of those scenes.)
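If it helps to see that argument concretely, here's a rough numpy sketch of the kind of forward model involved. The per-channel convolution and the Wiener step are my own simplification, not the paper's actual learned reconstruction; the point is just that the pipeline only ever sees "scene pixels convolved with a calibrated PSF", regardless of whether those pixels came off a monitor or bounced off a real object.

    # Rough sketch of the idea (my simplification, not the paper's pipeline):
    # if image formation is "scene convolved with a calibrated per-channel PSF",
    # the sensor can't tell whether the scene was a 3D chameleon or a flat
    # monitor showing one -- only the incoming light matters.
    import numpy as np
    from numpy.fft import fft2, ifft2

    def capture(scene, psf):
        """Simulate the sensor reading: per-channel convolution with the PSF."""
        out = np.empty_like(scene)
        for c in range(scene.shape[-1]):
            out[..., c] = np.real(ifft2(fft2(scene[..., c]) * fft2(psf[..., c])))
        return out

    def wiener_deconvolve(measured, psf, eps=1e-3):
        """Toy reconstruction step; the paper uses a learned network instead."""
        out = np.empty_like(measured)
        for c in range(measured.shape[-1]):
            H = fft2(psf[..., c])
            out[..., c] = np.real(ifft2(fft2(measured[..., c]) * np.conj(H) /
                                        (np.abs(H) ** 2 + eps)))
        return out

    # "scene" could be pixels shown on an OLED monitor or a photo of a real
    # object; the forward model treats both identically.
    scene = np.random.rand(256, 256, 3)           # placeholder scene
    psf = np.zeros((256, 256, 3)); psf[0, 0] = 1  # placeholder "perfect" PSF
    recovered = wiener_deconvolve(capture(scene, psf), psf)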
Given the size of their camera, you could glue it to the center of another camera’s lens with relatively insignificant effect on the larger camera’s performance.
what happens when you go too far in the other direction from trusting what you see/read/hear on the internet? simple logic gets tossed out with the bathwater.
now, here's the rig I'd love to see with this: take a hundred of them and position them like a bug's eye to see what could be done with that. there'd be so much overlapping coverage that 3D would be possible, yet the parallax would be so small that it makes me wonder how much depth would be discernible.
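quick back-of-envelope with made-up numbers (the focal length, pixel pitch and baseline below are guesses on my part, not figures from the paper):

    # Back-of-envelope stereo maths with made-up numbers; none of these
    # figures come from the paper.
    f_mm = 0.5        # assumed effective focal length of one lenslet (mm)
    pixel_mm = 0.002  # assumed pixel pitch: 2 micrometres
    baseline_mm = 10  # assumed spacing between the two outermost lenslets

    def disparity_px(depth_mm):
        """Pixel disparity between the two outermost views for a point at depth_mm."""
        return (f_mm * baseline_mm / depth_mm) / pixel_mm

    for depth_mm in (100, 500, 1000, 5000):
        print(f"{depth_mm/1000:>4} m -> {disparity_px(depth_mm):.2f} px")

with those guesses you'd get a couple of pixels of disparity at a metre and well under a pixel at a few metres, so depth would only really be discernible close up.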
Nobody ever doubted that fusion ignition was physically possible. It happens in stars all the time, and people have achieved it in thermonuclear weapons.
This was the first time fusion ignition was achieved in a laboratory setting, i.e. in a controlled fashion. Is that seen as a fundamental physics milestone? To me it seems more an incremental engineering achievement.
The usefulness of being able to switch the language depends on the availability of translations of course. It might give editors an incentive to add more translations for place names.
Just to nip this in the bud: OpenStreetMap in general doesn't contain "translations"; it contains the exonyms that are commonly in use for geographic objects. Most of the time things only have a name in the local language, so there will be no value for other languages in the OSM data. Transliterations are a bit of a grey area in this context, but are definitely more useful than actual translations, which tend to be garbage.
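For example, the Brussels object carries name tags roughly like these (quoted from memory, so treat the exact values as illustrative):

    name    = Bruxelles - Brussel
    name:nl = Brussel
    name:fr = Bruxelles
    name:de = Brüssel
    name:en = Brussels

All of those are genuine exonyms in active use; a small village that only has a local name simply won't have name:fr or name:en values at all.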
Further point: the data available in the vector tiles is defined by the vector tile schema and by no means contains "everything".
Just some clarification on what Belgium's multi-linguality means exactly:
Flanders is Dutch-speaking.
Wallonia is primarily French-speaking, except for a small part of it which is German-speaking.
The Brussels Capital Region is officially bi-lingual Dutch/French, but in practice it's almost completely French. In restaurants, shops, ... you can be sure to find people who can help you in French, but almost never in Dutch. Police, hospitals, ... are supposed to be bi-lingual, but good luck getting good help if you speak Dutch but not French.
In most places, you'll only encounter one language: your own native language. Only Brussels is supposed to be bi-lingual.
Many cities do have two or three names, but that's not because those two or three languages are actually spoken in that city.
Have you been living under a rock? Our current lives are already tougher because of climate change, and it's only going to get worse. More extreme and more frequent weather events (droughts, floods, heat waves, ...) are already happening.
> I don't see why current generations' lives should be tougher just to help out future generations.
Most people want a good life not only for them but also for their children, and their children's children. I don't have children, but I still want a good life for future generations. Is that not simple basic human decency?
Note that the longer we wait, the more difficult we make it ourselves to change things, and the more tough even our own lives are going to be, even ignoring future generations.
> There needs to be a healthy balance.
Yes. The status quo is not a healthy balance (or arguably any kind of balance).
> Some Enum features work differently (incompatible) in Python 3.11.x versions.
I know that Python 3.11 added some things, like StrEnum; those obviously won't work on older Python versions. But I'm not aware of things that work in a certain Python 3 version but don't work in newer ones. You're even talking about incompatibilities between different 3.11.x versions? Can you give some more detail on that?
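To be concrete about the only kind of incompatibility I do know of, here's a minimal sketch (the Color example is mine): StrEnum simply doesn't exist before 3.11, so code either guards the import or falls back to the old str-plus-Enum mix-in.

    # StrEnum only exists from Python 3.11 onwards; on 3.10 and earlier the
    # import fails, which is a forwards-only incompatibility, not something
    # breaking in a newer version.
    import sys

    if sys.version_info >= (3, 11):
        from enum import StrEnum
    else:
        from enum import Enum

        class StrEnum(str, Enum):
            """Rough stand-in for older versions: members are also plain strings."""

    class Color(StrEnum):
        RED = "red"
        GREEN = "green"

    print(Color.RED == "red")  # True on 3.11+, and also with the str+Enum mix-in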
I'm confused by something in "A Practical Guide to (Correctly) Troubleshooting with Traceroute", which is referenced from the article.
Slide 8 titled "Traceroute – What Hops Are You Seeing?" says:
> By convention, the ICMP is sourced from the ingress interface.
(I assume the author means "the source address of the ICMP message is the address of the ingress interface")
> Random factoid: This behavior is actually non-standard. RFC1812 says the ICMP source MUST be from the egress interface. If obeyed, this would prevent traceroute from working properly.
(I assume by "ICMP source" the author means "the source address of the ICMP message" because I don't see what else it can mean).
To clarify: from the text before that, and the drawing on the slide, the egress interface the author talks about is the one the original packet would have taken had its TTL not expired.
Now, I had a look at RFC 1812 (Requirements for IP Version 4 Routers) and I don't see where it says what that slide claims. The closest I can find is section 4.3.2.4 ICMP Message Source Address (https://www.rfc-editor.org/rfc/rfc1812#section-4.3.2.4), which says:
> Except where this document specifies otherwise, the IP source address in an ICMP message originated by the router MUST be one of the IP addresses associated with the physical interface over which the ICMP message is transmitted. If the interface has no IP addresses associated with it, the router's router-id (see Section [5.2.5]) is used instead.
To me that reads completely differently from the claim on that slide (and also matches what I would have expected).
The author of the presentation seems more knowledgeable about networking details than I am, so it's quite possible that he's right and I'm misunderstanding something. Can anyone shed some light on that?
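For anyone wondering why the source address matters at all here: traceroute never asks a router to identify itself, it just records the source address of whatever ICMP Time Exceeded message comes back. A rough scapy sketch of that mechanism (needs root; the destination is a placeholder):

    # Minimal traceroute sketch (scapy, needs root). Each "hop" printed is
    # nothing more than the source address of the ICMP Time Exceeded reply,
    # which is exactly why the ingress-vs-egress convention determines which
    # router address you end up seeing.
    from scapy.all import IP, ICMP, sr1

    def trace(dst, max_hops=16):
        for ttl in range(1, max_hops + 1):
            reply = sr1(IP(dst=dst, ttl=ttl) / ICMP(), timeout=2, verbose=0)
            if reply is None:
                print(f"{ttl:2d}  *")
            elif reply.haslayer(ICMP) and reply[ICMP].type == 11:  # Time Exceeded
                print(f"{ttl:2d}  {reply.src}")  # whichever interface the router chose
            else:
                print(f"{ttl:2d}  {reply.src}  (reached)")
                break

    trace("example.org")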
I used to do that as a kid, but I had completely forgotten about it. It's been a long time since I saw a spinning top; I guess kids these days don't play with them anymore.
If I remember correctly, our tops were plain wood as bought, and we tended to paint them with our own designs.