The iPhone 11 Camera, Part 1: A Completely New Camera (halide.cam)
225 points by tomduncalf on Oct 29, 2019 | 105 comments



I want to see a real teardown of these cameras...

Show me the lenses. Put the coatings under an electron microscope. Cut the sensor in half, and measure the depth of the electron wells.

Then on the electronics side, hook up a logic analyzer. Find the speed of those ADC's. Dump the default register values.

And on the software side, feed in manufactured frames to find out exactly how the HDR algorithm works. Use high speed strobe lights to find out the exact shutter timings. Figure out whether it uses the gyroscope or optical flow for frame stacking, and how many frames it even uses. Publish the bit depths and colour spaces of intermediate image pipeline stages.

These 'technical' reports which really boil down to "we took some photos and they're quite crisp" really disappoint me.


There are companies doing this kind of thing:

https://www.techinsights.com/products/def-1909-803

They'll charge you for it, though.


If you actually did a tear down like you just mentioned and documented it accordingly, it's hard to imagine you wouldn't recoup the cost of the phone several times over just by uploading the video to YouTube.


I think if you blended the phone in a blender you could recoup the cost of the phone on YouTube. And that's probably the rub.


What about the cost of all this research? That has to take multiple days of a highly skilled team's time.


This was a really well-thought-out, in-depth article. I really appreciated the artistic considerations.

The camera has usually been my primary motivation for upgrading, and I'm tempted. I'm still sporting a 7+ with broken lenses, which has meant mostly terrible cell-phone pics that occasionally, through a happy accident, look like a "Holga" picture.

The upside is that I went back to really actively using my DSLR, and I'm not sure anything really compares still. However, this article was pretty compelling.

I think the title is accurate – these pictures no longer look like 'iPhone' pictures, which was a kind of brand identity more than a technical constraint. This feels like a refreshing shift. That said, they have a look. They remind me of Andreas Gursky photos – hyper realistic, super detailed, somewhat matte.


I wonder how much of the image processing Apple could backport to the iPhone 8/X, XS, and XR if they wanted to. Those phones are still extremely powerful, and Apple has touted the power of their neural engine ever since the 8/X.


There are often performance factors people underestimate.

For example, Portrait Lighting can run on hardware without the neural engine if you hack the EXIF to say the photo was shot on a newer device. However, that only works when editing. There's no way the older hardware could handle the realtime preview, and casual users rely on the preview to make the feature useful at all.

Actually, it would confuse a ton of users. We added portrait-mode support for non-humans on the iPhone XR. The hardware doesn't provide realtime depth information, so we can only apply the effect after you've taken the photo. We frequently receive support requests telling us that the feature doesn't work.

I can't speak to why Deep Fusion isn't coming to older devices, since it's an offline process. I mean, I could speculate that if it relies on multi-cam capture to grab multiple images at once, that's a strenuous API that isn't supported on hardware earlier than the XS.
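
For the curious, the gate for that multi-cam path is exposed in the public API. A minimal sketch of the kind of check involved (simplified, not our actual shipping code):

    import AVFoundation

    // Sketch only: a plausible availability check for the multi-cam capture path.
    // AVCaptureMultiCamSession reports unsupported on hardware older than the
    // XS-era (A12) devices, regardless of iOS version.
    func multiCamAvailable() -> Bool {
        guard AVCaptureMultiCamSession.isMultiCamSupported else { return false }

        // See how many rear cameras could even participate in a simultaneous capture.
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera, .builtInTelephotoCamera],
            mediaType: .video,
            position: .back)
        return discovery.devices.count >= 2
    }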

The cynical perspective is that Apple caps features to the latest hardware to sell new hardware. I'd say it's just as likely that Apple looks at the new hardware coming down the pipeline and plans features that push the hardware to its limit.


> that's a strenuous API that isn't supported on hardware earlier than the XS.

Great, then why doesn’t it work on my XS?


Yeah, I was disappointed they did not bring night sight to older iPhones. Google has made night sight available to all models of Pixel phones.


Take Deep Fusion. Even if that were to take a little longer, have it run over my photos at night while I'm plugged in.


But then when you go back to review old photos, they're going to look different from the first time you reviewed them. That's going to be confusing.

Also, if you're going to share a photo, you're likely going to do it the same day you took it.


No it won’t. The framing will be identical. That’s how the DSLR world works. The viewfinder is used for framing & exposure check and then the rest is done behind the scenes.


A photo has more elements than just framing. It would be disorienting and customer-hostile to have your photos randomly change overnight.

About 60-75% of the time I take photos, I share them soon after or within a few hours. Having to wait to let them be processed overnight would be obnoxious.

And it's not particularly helpful to compare people taking photos on their phone to the DSLR world. (Coming from someone with a Nikon D200 and D300 sitting on his shelf.) Totally different use cases, totally different usage expectations.


Yes, Deep Fusion doesn't affect the composition, but it could bring out detail the photographer didn't intend to reveal.


Sure, but I’m not sure how easy it is to back-port neural engine training data to a less sophisticated, previous generation neural engine.

I know Google made their night mode generally available, but I don't think it depends on specific custom hardware.


They won't though. How would they upsell customers if they provide them with the same features that newer phones have?

I wish they would, but that doesn't fit Apple's typical MO.


I was pleasantly surprised when Samsung backported their "night" mode to my S9+. I was even more pleasantly surprised when I saw how much of an improvement it made in low light conditions.

If anything, it ensured my next phone would be a Samsung again (and it was, I went Note 10), so I think it's still a good upselling move to do so.

On that note, they've come a long way since the Note 4's terrible, TERRIBLE results in low light. That phone was top-notch for its time, but in low light it took several seconds for a single shot, focus sucked, and everything was grainy as hell... Seeing a 9th/10th-gen phone instantly take those shots, with almost no grain and a lot of the background detail properly lit, is very impressive to me. I understand how it works in theory, but there is still a "wow, I can't believe my phone pulls that off" factor for me.


Supporting and improving previous generation devices fits Apple's MO better than any other company's.


Obviously it doesn’t. Apple refusing to support and improve previous gens due to commercial considerations is what we are talking about. Seems like the MO has changed?


Err, Apple does far better than typical at supporting older phones with new OS versions -- and getting them updated to the latest version fast.

Android vendors had (have?) a horrible history on this...


In terms of stability usually, but not tentpole features. That's quite alright, I'm not even complaining. Just saying it would be nice to get some newer features but that it isn't how Apple normally operates. That's just fine.


Apple have been good about getting updates on to older phones, but usability on the older phones is pretty poor.

My SE has really been struggling with iOS 13. A lot of apps don’t load and things are ssssslow.


I mean, you bought the phone with a particular version of software. They don't charge you for some significant upgrades and security updates - but it seems fair to me that the features which require huge R&D costs would require a new purchase.


So, let me have the software features and I'll toss a few bucks their way.

Of course one could argue that the iPhone 8/X, XS, XR buyers did finance the R&D for it. Each year Apple has beaten their neural engine drum, but it wasn't until this year that they really introduced features that took great advantage of it. Sure, they used it for FaceID, but I'll wager it didn't really tax the silicon much.


I'd wager it just about maxed out their silicon, because it's taken two years of software optimization to speed up FaceID on the iPhone X. It was a little bit sluggish until iOS 13, which to me indicates some clever software trick was able to squeeze out a little more performance from already hardworking chips.


I wasn't denying that at all. My whole point is that that's how it is, that's how it's marketed.

Sure it would be nice to get some special new features, but as mentioned, that also doesn't suit how Apple operates when it comes to new features. And that's quite alright.


> fit Apple's typical MO

Apple, or any business trying to make money?


I’m a professional photographer, for whatever that’s worth. Mostly it’s relevant because I have a camera in my hands all day for most of the week. I’m not a gadget guy, I don’t need the latest and greatest, but when I pick up new gear I want it to do the thing and do it well and to not need to be replaced within a few years because it’s completely obsolete.

I picked up an iPhone 11 pro last week, upgrading from a 7. The camera really is great. What I keep thinking is that I’m finding it creepy. I’m not a luddite by any means, I’m all for innovation….I shoot with a Sony mirrorless camera as my main everyday, that felt to me like a massive change.

I think what gets to me is I don’t know exactly what the iPhone camera is doing. When I press the “shutter button,” what exactly is happening? How many shots is it taking? Is it shooting on all three lenses? Is it recording audio even when I have Live Photo turned off? If I think I’m intentionally leaving something I don’t want to record out of the frame, how can I be sure that one of the other lenses isn’t also recording and including it? I just don’t feel like I’m sure about any of it anymore, with all the processing I hear about it doing, layering multiple shots together, shooting outside the frame. The tech is cool and all and I’m not prone to paranoia, but I just keep wondering.


If it makes good photos, and the extra information isn’t being harvested to track you like Facebook would do, what does it matter what it’s actually doing under the hood? It probably is doing a lot of that because it can condense all that extra information into a picture that looks better to humans. But the tech of how it reaches that really shouldn’t matter or be creepy, it’s just processing a bunch of pixels to make something look nice. Just like your mirrorless camera, but with more frames put together to make the final result.


An example is, if the extra data is retained to improve post-processing capabilities, for example refocusing or a slight change in perspective, then you could inadvertently share files with more information than you think they contain. Remember the story of people cropping out their naked bottom half only to have it show up plainly in the EXIF thumbnail.


This is definitely one example of my thinking.


That is typical Apple.

I felt EXACTLY like that using my first mac coming from unix. Apple hides things and I never quite knew what was going on under the hood.

But now it is like 1000x that. Run "ps -ef" on the command line in macOS and wonder. I have 311 processes running. What is dprivacyd or com.apple.geod or photoanalysisd or ProtectedCloudKeySyncing?

With photography I feel your pain. Everything is amazing, but at the same time really dumbed down. You don't need to know Ansel Adams' Zone System, but... you don't know exactly what is going on and you have little control. Sometimes you need white balance or manual exposure or manual focus or control over the shutter speed, or a real shutter button.

sigh.


On Ubuntu I have 330 processes running.


The camera is the one thing that has me on the fence about getting a new one.

Everything else...could probably rock my 7+ for another year or two. Maybe do a battery replacement.


I was in the market for a new DSLR, but their software just hasn’t kept up.

The iPhone 11 pro blew me away with the night mode photography, and the simulated bokeh that can be modified in post because of the depth camera is downright genius. And the most important feature, it’s always with me, unlike the DSLR.

I’ve been using my wife’s hand-me-down phones for a decade now. This was the first phone I bought for myself, and really it was a camera that has a phone feature, not the other way around.


I've been thinking about the same thing since Google impressed everyone with their computational photography tech years ago. The software in DSLRs is really primitive in comparison.

2014: https://ai.googleblog.com/2014/10/hdr-low-light-and-high-dyn...

2017: https://ai.googleblog.com/2017/04/experimental-nighttime-pho...

Really makes you wonder what could be achieved by combining Google's computational photography algorithms with a bigger sensor and high-quality optics. I'm probably not going to buy another high-end camera until I see something like that on the market.


Indeed, a traditional camera with the post-processing of a smartphone would be incredible.

But indeed, camera manufacturers are way way way behind. The best they can do at the moment is find faces on a picture (and still, not that well except on Sony cameras). This is disappointing to say the least.

The Zeiss ZX1 runs Android, but I don't know how much computational photography it provides, since all the reviews about it only emphasize the fact that it can run Lightroom (which sounds completely useless).


There is, e.g., https://magiclantern.fm/about.html for Canon DSLRs, which lets you do some more interesting things. It's been around for a decade or more.


Same. I actually bought a last-gen Sony RX100 VII and the difference in quality is negligible, other than the Sony having 20MP and the iPhone only 12. This means the tiny iPhone 11 Pro sensor can produce similar results to a top-of-the-line 1-inch compact camera. This is mind-boggling.

After comparing them, I'm convinced that it's only a matter of time until phones catch up to full-frame cameras. Software these days is more important than the raw hardware, it seems...


>I actually bought a last gen Sony RX100 VII and the difference in quality is negligible, other than the Sony having 20MP and the iphone only 12

Having recently purchased both the latest RX100 and the 11 Pro, I both agree and disagree. For auto mode in certain conditions, when viewed at significantly scaled-down resolutions, yes. So if you're posting to Instagram or Facebook or whatever, this is probably true.

In manual mode on the RX100 vs. the 11 Pro, even when using custom apps that expose a lot more of the camera settings, the RX100 starts to pull away. If you're doing any work with larger image sizes (and by larger I mean viewing on a computer or TV screen instead of a phone), even more so. They're honestly not even close at that point.

Meanwhile, on the full frame side of things, the A7 line of cameras has been advancing picture quality at a rate as fast as anything in the iPhone, and just absolutely clobbers it. There's physics in play here, and I'm skeptical that software or processing power is going to increase the picture quality that much, and I have no idea how you would match things like super telephoto lenses in a smartphone form factor.


The 11 Pro camera is definitely really good; nice enough for me to forgo my Fuji XE2-S on some trips. When editing RAWs from Halide vs the Fuji (as a total amateur), I do feel like I get way more usable shots from the Fuji, though. I usually just adjust levels a bit and crop, and the iPhone shots were grainy and sometimes blown out. Definitely requires a bit more time when editing.

I do really love the iPhone when I just need to take a quick shot, though. All the computational stuff means that I'm almost always guaranteed to get a usable photo out of it in any condition.


The one thing the DSLRs still have is high levels of quality zoom. I have a Sony RX10 IV with a 600mm lens. I use it for the kids' sports photos and it's truly awesome. I also use it for wildlife photography (the only handicap being my own skill). iPhones are not terribly useful for either of these things.


Doesn't the Sony have much better autofocus? There should be a pretty big difference in low light performance too, unless I'm wildly overestimating how much bigger a 1" sensor is.


Do you think, or, is there any hint that, camera manufacturers are developing this kind of thing?

What could a DSLR with iPhone-level software achieve?


Olympus and Sony have been the most aggressive about sticking clever software in their cameras, so they'd be the ones most likely to invest more in computational photography.

The problems for camera manufacturers around this, though, are:

1. Their market is incredibly, loudly regressive about a lot of this stuff. A noisy chunk of the photography market is really hostile to workflows that don't mimic hundred-year-old darkroom processes. Doing it in-camera is an abomination. Automation is an abomination, etc.

2. Building the compute in to the camera is non-trivial. You've already got a lot of compute power focusing on running the complexities of things like continuous AF tracking (e.g. on top-end Olympus that's 4 of the 8 cores available). A lot of compute budget is used for things phone cameras are rubbish at. At some point things like heat become a problem.

3. Data volumes are hard: Olympus are "only" dealing with a 20 MPix image. Sony are dealing with 24 - 60 MPix images. Olympus do that at up to 60 frames/s (20 with AF), and have to read the data within the constraints of shutter speeds as fast as 1/16000 s. That is... a lot of data compared to the relatively modest rates on an iPhone (rough numbers sketched just after this list). Oh, and while iPhone users are generally OK with a bit of lag, DSLR users get really pissy if you're making them wait for that. Latency needs to be very low.

4. Physics is hard: A Sony sensor is a 24 x 35 mm hunk of silicon. Quite apart from the challenges of the data volume, reading a chunk of CMOS that large has been a challenge. Sony have done a lot of clever things to work around those limits, but still... (Olympus have been able to do high frame rates longer than anyone else in part because their sensors are smaller)
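
Rough numbers for point 3, assuming 12-bit raw samples (an assumption on my part, not a vendor spec) and using 12 MPix at 30 fps as a stand-in for a phone-class pipeline:

    // Back-of-envelope sensor read-out rates for point 3 above.
    // 12-bit raw samples are an assumption, not a vendor spec.
    let bitsPerSample = 12.0

    // 20 MPix at 60 frames/s (the Olympus figures quoted above)
    let olympusGBps = 20_000_000 * bitsPerSample * 60 / 8 / 1_000_000_000   // ≈ 1.8 GB/s

    // 12 MPix at 30 frames/s, a rough stand-in for a phone pipeline
    let phoneGBps = 12_000_000 * bitsPerSample * 30 / 8 / 1_000_000_000     // ≈ 0.54 GB/s

    print(olympusGBps, phoneGBps)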


(I should add: pre-capture already exists on Olympus, since they released the E-M1 II a couple of years ago. But DSLR makers are generally more focused on capture-time optimisations like that, eye focus, and Sony's facial recognition in some of their cameras - which lets you register a specific person's face and prioritize autofocus on them.)


The shots I get out of my 11 Pro are great. I'm amazed just how good they are. But I would be lying if I said they are better than my aging D7100 with a nifty fifty.

And, if I ever need zoom (like in nature photography), phones just can't compete with a 300mm lens.


>I was in the market for a new DSLR, but their software just hasn’t kept up.

If you have a Canon camera, check out the CHDK

https://chdk.fandom.com/wiki/CHDK


Camera, battery, durability. No flashy new features, but they hit a lot of important marks for things I care about improving.

Now if they can just make it smaller next year.

EDIT - I should mention that the UWB radios might be cool once there's a significant feature using it. In terms of future-proofness, it's a handy thing to get even if it didn't launch with any important functionality.


The battery in the new phones is pretty spectacular, at least according to reviews, even when comparing it to last year's models.


Can confirm from personal experience. Had to charge my XS Max at the end of the day with normal-heavy usage, because it would go down to around 15%. With light usage, it would be once every 1.5 days.

With 11 Pro Max, I charge once every 1.5-2 days with regular usage. This is truly amazing to me, because I never have to worry about my battery dying midway through the day, and it feels extremely freeing.


I'm on a yearly upgrade plan and now have the 11 Pro Max. The extra battery life is very helpful.

But, if you are taking lots of pictures or about 15 mins of video, the battery seems to drain quickly, by about 45% as I was doing today.

The fast charge with the 18 W adapter is also very helpful.


On my 11 Pro Max, on Sunday I shot around 1 hour of 4K video in Filmic Pro, and about 5000 RAW photos in Camera+. The battery was down to 10% after 10 hours (6 hours of "screen on"), which is miles and miles better than my XS which would have probably lasted about a third as long under the same workload.

https://i.imgur.com/O9W2QI4.png


Well, my thinking is more that at 2+ years I can feel my current one is no longer holding a charge well. Thanks to batterygate I could fix that for a reasonable price.


Yeah I was really blown away by that. +4 hours. We're getting into territory of having virtually no 'range anxiety' with your phone.

On my S10 I now have a cheap $5 charging pad at the office, and I never really think about charging anymore when I get home.

Only on vacation trips I still have to budget my battery.


I was on a 7+ since they came out, and it was the first iPhone I held onto for 3 years. I love my 11 Pro. I never wanted to get a phone as big as the Plus, but I wanted the bokeh since I take a lot of photos of my kids. I'm glad to have a non-plus-size phone. I didn't get the X or XS because I was holding out hope for a notchless phone. This year I moved up because it seemed like the total package justified the move.

The 11 Pro's camera is so much better. The edge detection in Portrait Mode is better on the 11 Pro (I think it's the hardware, not just iOS 13). Night Mode is unbelievable. Video has improved also (I think this was mostly due to XS upgrades, but there are other 11 Pro benefits that I mostly won't use, like simultaneously capturing video from multiple cameras). I haven't done the .2 update yet, but the deep fusion stuff is pretty cool from what I'm seeing others post online.

I think folks like Brian X. Chen (NYT), who say this is an optional upgrade if your iPhone is less than 5 years old have it wrong. I can definitely see skipping this if you're on an XS or X. But 8 or older? It's a very noticeable upgrade. And if you're not on a Plus (i.e., capable of Portrait Mode), it's a huge upgrade if you use your camera much. And that's before getting to all the other features like wireless charging and the UWB chip for enhanced tracking. Oh, and the battery is noticeably better than my 7+, which had great battery life (and was warranty-swapped by Apple a year or two ago, so the battery was not a full 3 years old when I upgraded to the 11 Pro). The 11 Pro is the first iPhone I've had that I do not need to charge overnight. I just charge it periodically during the day, once every 36-48 hrs.


I do love the OLED screens. They are very pretty.


I'm not a fan. My iPhone X got burn-in surprisingly quickly, even though the screen was only on a couple of hours a day.

My work iPhone is a 6, so it's LCD, and even though I keep its screen on eight hours a day, there's no burn-in.


I’ve had one a year and use it a lot. Never any burn in.

Where do you see burn in? Light or dark screens?


Everywhere. Most prominent are the unlock icon and the flashlight icon and the camera, but there are others; and some text.

Any time there is an area of flat color, or gradient color it really shows.


Have you tried taking it to an Apple store? I have an iPhone XS and I am 100% certain this doesn’t happen on mine.


Agreed. The reported behavior sounds more like defective hardware. I have my original iPhone X from launch and have used it for hours a day. Battery is a little beat up, but the screen is still perfect.


Am I the only one who sees visible flickering on the iPhone XS OLED screen? Fine white lines on a black background are terrible. It reminds me of using an Amiga in interlaced mode.


That's probably what is usually referred to as "black smearing," which affects most (all?) OLED screens when using a completely black background.


I ache for an OLED iPad.


Yeah, but I'd rather a super high refresh rate over that. LCD does that better at the moment.


I know it's a common comment about iPhones, but wait till next year for the next design refresh. If your phone is mostly serving you well, you may as well hold out for the biggest possible upgrade.


I can see that argument, but on the other hand, the first iteration of an iPhone design refresh tends to be the one with the most problems (antennagate, bendgate, etc.) The 11 Pro seems to be peak maturity for the "iPhone X cycle" and has been reviewed very favorably in all respects, it might be worth just getting one rather than betting on an unknown quantity.


That's a great way to sell an app - about 3/4 of the way down, I shelled out $7 for their two apps.


There are so many third-party camera apps that it is genuinely difficult to tell whether any of them actually add value or do things that the built-in app cannot - beyond the ability to do Instagram-style filters.


They do add value. You cannot, for example, shoot RAW or manually control the camera with the built-in app.
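
For example, a third-party app does this through the public AVFoundation API. A rough sketch with illustrative values, not any particular app's code:

    import AVFoundation

    // Sketch of what a third-party camera app can request that the built-in
    // Camera app doesn't expose: a RAW capture plus manual shutter/ISO.
    // Values are illustrative; error handling trimmed.
    func captureRaw(with output: AVCapturePhotoOutput,
                    from device: AVCaptureDevice,
                    delegate: AVCapturePhotoCaptureDelegate) {
        // Pick the first RAW pixel format the device offers, if any.
        guard let rawFormat = output.availableRawPhotoPixelFormatTypes.first else { return }
        let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)

        do {
            try device.lockForConfiguration()
            // Manual exposure: 1/125 s at ISO 100 (should really be clamped to the
            // device's supported ISO and duration ranges).
            device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 125),
                                         iso: 100,
                                         completionHandler: nil)
            device.unlockForConfiguration()
        } catch {
            return
        }

        output.capturePhoto(with: settings, delegate: delegate)
    }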


Sure. I get that part. The tricky part is which of the many many apps do that well.


My iPhone 11 Pro pictures look like they've been put through an artistic painting filter when you zoom in on them; it looks strange and smudged.


I also seem to have more trouble taking non-blurry pictures of my kids with my 11 pro vs my old 7. I'm sure I'm doing something wrong, but it's frustrating.


I've observed on my iPhone X that photos taken with the OS camera app are more "painterly" or blurry up close than those taken with ProCam, which I assume is an effect of the additional automatic processing done by the built-in app.


Upgrade to iOS 13.2. They finally fixed the “watercolor” effect.


Can you show some zoomed-in examples?


https://spectre.cam/full-res/Full%20Size%20Comparison%20for%...

This one from the article is a pretty good example. It looks like a photorealistic painting when you zoom in, and not just on the sky, like they point out in the article - the whole thing does.

I'm really impressed with the 11 pro for stuff that gets viewed on the same medium it was created in, but the trade offs are very apparent when you look at things on a computer monitor.


Those photos aren't straight out of the iPhone's image processing pipeline.

From the article:

> If we distort the contrast in an image, we can bring out the ‘watercolor artifacts’ in the clouds

I think you missed that sentence. You posted the "distort the contrast" examples, which were hand edited to exaggerate any artifacts in the photo the iPhone had output.

This is the photo that the iPhone gave the photographer: https://miro.medium.com/max/1800/1*oZgBZGHyxwMevHKiGfATiQ.jp...


>I think you missed that sentence. You posted the "distort the contrast" examples, which were hand edited to exaggerate any artifacts in the photo the iPhone had output

I think you missed part of the subtitle. The one on the left has had contrast adjusted, the one on the right has not and matches the one you posted. The watercolor artifacts are just as visible on your link as they are on the right half of the picture I posted.

I find them incredibly noticeable, even on your link.


> For a while now, you haven’t been the one taking your photos. That’s not a slight at you, dear reader: When your finger touches the shutter button, to reduce perceived latency, the iPhone grabs a photo it has already taken before you even touched the screen.

I wonder if there’s a line where copyright could attach to the developer who wrote the code instead of the thumb owner.
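
(As an aside on the mechanism the quote describes, here is a toy sketch of the idea - made-up names and buffer size, not Apple's implementation:)

    import Foundation

    // Toy illustration of the "already taken" behaviour described in the quote:
    // keep a short rolling buffer of recent frames and, on a shutter tap, hand back
    // the frame whose timestamp best matches the tap.
    struct Frame {
        let timestamp: TimeInterval
        let pixels: Data
    }

    final class RollingFrameBuffer {
        private var frames: [Frame] = []
        private let capacity = 8   // made-up size; the real buffer depth isn't public

        func add(_ frame: Frame) {
            frames.append(frame)
            if frames.count > capacity { frames.removeFirst() }
        }

        // "Take a photo": return the buffered frame closest to the tap time,
        // which may well predate the tap itself.
        func frame(closestTo tapTime: TimeInterval) -> Frame? {
            frames.min { abs($0.timestamp - tapTime) < abs($1.timestamp - tapTime) }
        }
    }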


Nah, there was a guy (a photographer) who set up a camera system that was triggered by passers-by. There was a lawsuit[1] as to whether it was legal and who owned the photo (one of the subjects happened to be very Orthodox, and I guess photos are forbidden). Anyhow, New York decided the work belonged to the photographer who set up the auto-trigger system; the suit was dismissed, the photos being deemed artistic expression by the photographer.

There was also a photographer who was in Britain for some show. He wanted to do some photography but wasn't allowed, as he was only there as a tourist and didn't have a work permit, so he handed his camera to his daughter. He then set up an exhibit with her work. I'm not sure who "owns the rights," but he assigned them to her so as not to run afoul of his visa.

[1]https://en.m.wikipedia.org/wiki/Nussenzweig_v._DiCorcia#Laws...


And there’s this case about a nature photographer:

https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...


Now that multiple sensors are standard, I wish sensor fusion would get some attention. It isn’t clear to me that there are not significant gains from capturing from three camera sensors at once. The big tradeoff in image fusion using only one sensor is loss of temporal precision. You could gain some of that back while maintaining the gains in spatial accuracy if data from multiple sensors is used. Apple’s camera app at least demonstrates that they’re capable of getting multiple camera video streams into main memory simultaneously by showing the out-of-frame view. Maybe that’s the preview/lead-in for AppleTech Pro sensor fusion TM 2020.


> Now that multiple sensors are standard, I wish sensor fusion would get some attention.

Sensor fusion has been a hot area of research in robotics for at least a decade. Why do you feel it gets no attention?


Because it doesn't (in the consumer space). Everyone who needs to use Kalman filtering and sensor fusion does, but if it isn't necessary then it isn't used, even if it could result in better products.
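
For anyone unfamiliar, the core of the fusion idea is tiny. A minimal 1-D sketch, purely illustrative and not how any phone actually does it:

    // Minimal 1-D illustration of the fusion idea behind Kalman filtering:
    // two noisy measurements of the same quantity, blended by their noise variances.
    // The fused estimate is more certain than either sensor alone.
    func fuse(_ a: Double, varianceA: Double,
              _ b: Double, varianceB: Double) -> (estimate: Double, variance: Double) {
        let gain = varianceA / (varianceA + varianceB)   // Kalman gain, 1-D case
        let estimate = a + gain * (b - a)                // weighted blend of the readings
        let variance = (1 - gain) * varianceA            // smaller than either input variance
        return (estimate, variance)
    }

    // Example: readings of 10.2 and 9.8 with noise variances 0.4 and 0.1
    let fused = fuse(10.2, varianceA: 0.4, 9.8, varianceB: 0.1)
    // fused.estimate ≈ 9.88, fused.variance ≈ 0.08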


The biggest thing to me seems to be the user experience of applying these 'smart' technologies.

For example, Photoshop's Liquify feature can now directly isolate eyes, noses, lips, cheekbones, facial structures, etc. without you needing to be good at using the Liquify brushes.

But iOS is now doing this kind of thing automatically and integrates the editing right there, either in the photo by default or in an integrated editing app.


fantastic article, loved the obvious effort put into it!


Great review of the new iPhone; however, as a normal camera-phone user, I really don't care about the finer details in pictures. Any modern smartphone camera is pretty good for me.


OT but I looked at the app they sell (Halide Camera) and as far as I can tell, it's an app for iPhone 11 and iPhone 11 Pro. Is there a similar app for older phones?


Technically we support all the way back to the iPhone 5S, but those devices don't support RAW, so you're really using it for its interface and access to manual controls. I mean, I'm biased, but I'd say that's compelling even without RAW.

As far as technical features go, things get really interesting with the iPhone 6S and later. That's when Apple added RAW support, which requires a third-party app. On depth-capable devices (7 Plus, 8 Plus, X/XS/11), we have detailed depth visualizations.
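
For reference, the standard AVFoundation check for depth support looks roughly like this (simplified sketch, not our exact code):

    import AVFoundation

    // Sketch of discovering whether the current device/session can deliver depth
    // data alongside a photo. Note: isDepthDataDeliverySupported only turns true
    // once the session is configured with a depth-capable camera (dual camera or TrueDepth).
    func depthSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
        let settings = AVCapturePhotoSettings()
        if output.isDepthDataDeliverySupported {
            output.isDepthDataDeliveryEnabled = true      // enable on the output first
            settings.isDepthDataDeliveryEnabled = true    // then request it per photo
        }
        return settings
    }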

Anyways, while we're generally forward-looking, we'd be crazy not to serve most of the folks out there running older phones.


Oh ok, thanks. The very first thing you see in the App Store listing is "The best camera app for iPhone 11 and iPhone 11 Pro", hence my assumption.


I use Halide on my iPhone SE, so the app is still fairly backwards compatible.

It obviously doesn't have the features related to multiple lenses and Smart HDR, but it is my go-to camera app.


Good to know!


Halide is available for older iPhones. If I remember correctly, any iPhone running iOS 11 or newer can run Halide. However Deep Fusion related stuff is limited to the iPhone 11, and there are a few other features that similarly require certain iPhone models.


Can law enforcement convict someone on the basis of a computational photograph?

Google & Apple are digitally altering photos so much to "enhance" them, couldn't it be argued that a photo entered as evidence of a crime has been tampered with due to computational photography?


I doubt it. "Tampering" usually requires an intent to obscure or subvert. Computational photography applied broadly to all photos doesn't have that.

Definition from OED: “interfere with (something) in order to cause damage or make unauthorized alterations.”

The “in order to” seems to imply intent. Not sure if there’s a different legal definition however.


Wouldn’t every digital camera be guilty of that? White balance, lens correction profiles, noise reduction, compression.


Not if you shoot raw.

Plus none of these modifications really alter the content of the photo.


By the same logic should cropped photos or photos that have had the sliders mucked with count as altered?


I don't see how cropping would introduce "reasonable doubt". But a face being computationally altered by the phone seems like it would introduce reasonable doubt as to whether the alleged person did in fact commit the crime.


They'd have to throw out any digital picture taken in a JPG or GIF format, since those too require image "tampering" in order to get the size down


I doubt it. I'd think a court/opposing counsel would examine the nature of that tampering to see whether it could have affected the evidence or not.


Apps that use deepfake-like algorithms to enhance photos could fall into that category.



