iPhone 14 Pro camera review: A small step, a huge leap (lux.camera)
353 points by robflaherty on Nov 1, 2022 | 332 comments



The funny thing is that, for all the amazing technological improvements discussed, all the pictures in the article actually show that A. having a photographer who knows how to frame things and sees the right moment to capture an image, and B. having a subject worth photographing, are way more important than having a top-tier camera.


A photographer friend of mine always used to get comments like "you can take such awesome shots because you are using a DSLR" (this was 10 years back). So he started taking two pictures: one with the DSLR and one with a basic mobile camera of the time.

He would still get the awesome shots comments even with the pics he took with the basic mobile camera and shared them on social media.


The joke goes something like this:

A photographer visits a friend for dinner and shows her some of his photos. "Oh wow! These are spectacular," she gushes, "you must have an amazing camera." The photographer replies, "That dinner was great; you must have excellent pots."


That's not surprising. I'd put my money on a good photographer with a crummy camera rather than a crummy photographer with a good camera any day of the week. The biggest part of photography is being able to 'see' the photos before you even raise the camera.


They say it takes roughly 10k photos to get to an OK level at composition and this 'seeing' (and that obviously means 10k serious photo attempts, not 30 selfies per coffee mug in Starbucks). That's not that much once you carry a camera everywhere, head into nature on weekends, travel for holidays, or walk around a lot for street photography. I think it can be sped up a bit by editing photos in, say, Lightroom: once you keep cropping your photos the same way to get the best out of them, you grok what is required and do it in camera, or you learn by selecting the best composition out of a few shots.


I guess it depends on the subject. E.g., no photographer will take a good photo of flying birds with a shitty camera, but sure enough, in good lighting conditions static scenes can look remarkably good even with worse hardware.


When he writes, "I took the iPhone 14 Pro on a trip around San Francisco and Northern California, to the remote Himalayas and mountains of the Kingdom of Bhutan, and Tokyo.", my first thought was: who IS this person? And is this article even real?


It's not some random article, it's a commercial for a Halide subscription.


that last bit gave me a huge chuckle


>...all the pictures in the article actually show that...

This simply is no longer as true as it once was, and hasn't been for at least a decade. More than three decades ago my forward-thinking COBOL programmer friend declared, drunk, that in the future someone called a wedding photographer would be able to walk into an event with a digital camera, wave it around, then select images for publication without having to sit people, set up tripods, or wait for that one-off unique special moment. They were right.


Kim Jung Gi (the artist) once said that the question he got most often was: what kind of brush pen do you use?


RIP to Kim Jung Gi. He could have drawn with a stick in sand and still produced brilliant art. But the tool is the thing that is most visible to admirers, and the thing they can most easily replicate in their own work, so it doesn't surprise me that that's what most people misguidedly ask about. It's also much easier to buy a pen than to put decades of daily hard work and dedication into your art. Instant gratification.


It’s like asking Feynman about magnets.


Is this serious? Some of the side-by-sides are totally blacked out vs. beautifully lit. The improvements in cameras open up the possibility of taking shots that would simply not have been possible before. Sure, photography is a skill. The realtime feedback on a smartphone might actually be the most significant improvement smartphones have made to the camera. But discounting the obvious improvements in sensing and processing is just nonsense.


"Some of the side by sides are totally blacked out vs beautifully lit."

Those are the side-by-side examples to show the raw file latitude.


That's not new? Computational photography has existed for years on the iPhone, and decent Night Mode shots have been possible since the Pixel 3. Nobody is changing the game here; it's just some Apple nerds talking about the new Apple product. That's fine, but it's passé to talk about how revolutionary a notched camera is, or how great computational photography has become. We know these things are good.

That's why these Halide iPhone camera reviews are perennially worthless. They've never released a review that feels critically substantial. It's simply "here's the new iPhone hardware we're being forced to support" copywritten into ad material for their app.


This has always been true.

Most of what professional cameras and lenses get you is the ability to get shots you otherwise couldn't: lower light, further distance, faster action, etc., plus particular effects like shallow depth of field or telephoto flattening. None of that makes a good photo by itself.

Without an eye, it's all wasted. But a good eye will see shots that can't be made; more of them if the camera at hand is limited.


That’s a good observation so far as it motivates people to learn photography skills. As soon as you know how to take photos, though, you resume appreciating what the right equipment lets you capture, or have a better chance to capture.


That's all true, but sometimes your overriding goal isn't to create a beautiful photograph, it's to record a particular thing at a particular moment.

Then camera technology really does make a massive difference, especially in low light, etc.


My very amateurish pov: quantity is the most important quality, take enough photos and you will end up with a few very good ones.


Kind of. If you have an image worth capturing but can’t capture it that’s a problem.


Well yeah but it doesn't hurt to make photography more accessible to everyone.


It was always pretty accessible, at least since 35mm was developed.


I am still not understanding why my iPhone 13 landscape photos look as good as those from my $900 Nikkor Z 24mm f/1.8 S prime lens with its superior optics on a $2k DSLR body.

If the reason is fancy post-processing, then why can't Nikon have a tiny lens like the iPhone 13 and just add fancy post-processing to it?


I think Nikon assumes that you will use software to do the post-processing. It gives you something a lot closer to what's coming off the sensor, so you can choose which $XXX image editing suite you want to run it through. The iPhone knows the vast majority of people won't do that with their photos, so it can go all-out, but you're stuck with its specific result baked into your photo.


By the way I know this is not your point, but there are excellent editing options for zero dollars! Which, I suppose X=0 is valid for your comment.

https://www.rawtherapee.com/


Ehh... RawTherapee (and also Darktable) are certainly capable of editing, but I've found that in practice they require far more effort to get results comparable to Adobe's or Capture One's RAW processing.


The workflow of Darktable confuses me compared to Lightroom/Photoshop's Adobe Camera Raw tool; there are so many options and features that it can be overwhelming. Capture One in my experience delivers superior RAW decoding vs. Adobe Camera Raw, but I still use ACR.


Oh absolutely -- I wanted to give Darktable a chance, and I really worked to figure out enough to actually do meaningful edits.

I think it's a great example of the phrase "too many cooks in the kitchen spoil the broth". For any given editing task, there are like 5 different modules that all do slightly different variations on the same thing, and each one has quirks that presumably only make sense to the original author of that module.


This makes sense. I run Linux and have no interest in adobe software. It does seem like sometimes it’s tricky to use RawTherapee, but so far I’ve been happy with the results. I got a used Canon Pro 100 photo printer and I’ve been printing 13x19” (A3+) photos edited with RawTherapee. I’m not using it professionally or anything, but I’ve been very happy with it for my use!


I switched to RT and I love it. Some techniques are outright impossible to achieve in LR or other raw interpretation software.


Apple has always been about making the choice for its users. Jobs always thought giving users too many options was problematic.


While in training at my first restaurant job I asked a party where they would like to sit in the dining room. The owner pulled me aside and said to never give people a choice. They'll get confused and think that there's a "best" table and that their dinner could suffer if they feel like they didn't get it.

I don't know if it's "real" or not but between work and my personal life I've got decision fatigue. Help make decisions. Especially about non-important ones that feel important at the time.


In one of my best sales jobs, I got a full day training course on choice fatigue and how to guide customers to both maximize revenues and their happiness. The business I worked for had high rates of customer return, so the training was centered not around scalping the most money today but how to develop a lifetime relationship where we maximize LTV.

It was pretty fascinating. I ended up doing a fair bit of independent research on the topic to see if it was just some sales BS or a known psychological phenomenon, and it turns out it's a well-known branch of psychology/sociology. (Not necessarily related to the Decision Fatigue / Ego Depletion theory, which has been tough to replicate, but is probably true in some respect.)

Coincidentally enough, I took that education to my next job, which was waiting tables, and it held up there very well. The key, in my opinion, is actually being deeply knowledgeable about the product so your recommendations are authentic. Bonus points if you use the product.


Circuit City stores, back when they had entire walls of VCRs and tape decks and receivers, all of which were "on sale" (ahem), instructed their salespeople to always give customers definitive advice in their price range. "Oh, this Sony is the absolute best tape deck." Once people had reassurance that they were making the right choice, even from a total stranger, they'd open their wallets. But not before.


The current stereotype for this is the peppy young salesperson having "this exact model at home".

I wonder what kind of mansions they live in, because I've seen the same 20something dude have like 5 different washing machines and 7 different fridges at home =)


It's definitely real! At least as far as my N=1 dataset is concerned. I dislike having to pick a seat when they're all kind of the same. Just tell me where to go, I have to make enough decisions at other times in life.


There are obviously better tables. Near the windows. Away from busy traffic.


Fascinating.

It seems like some table choices are better for the customer, some are better for the server. As a customer I like to be away from the other tables and the server wants everyone together in a noisy clump.


I suspect the other option here is to just ask. “We’re getting to know each other. Do you think we could sit in the booth in the corner over there?”


I absolutely loathe those setups in some tiny restaurants where they line up 2 person tables in a row with 6-12" between tables so you can hear the stranger next to you talking more clearly than your partner across the table. Nothing else triggers my unbearable social anxiety like it. I've gotten seated at those when there's, like, an entire half of the restaurant empty.


Yes, I remember that campgrounds in the American Rockies let you choose your spot, while in the Canadian Rockies they assign you one. And I realized we were simply happier in Canada!


Ah, so applying the parenting strategy to adults.

“Allow the illusion of choice”.


Not just Jobs! There is no end[0] to research[1] showing that many options is terrible[2], and "too many options" is terrible by definition.

0. https://www.usatoday.com/story/money/columnist/2018/07/31/br...

1. https://mediaroom.iese.edu/new-research-shows-why-the-human-...

2. https://bigthink.com/thinking/choice-analysis-paralysis/


That's a bit silly in this context. It's not like the person's intentions aren't fairly clear when they decided between buying an iPhone or buying a $3000 DSLR setup. Is it really so evil that Apple "decided" the person who bought the iPhone probably wanted the pictures to be heavily post processed instead of raw data from the sensor?


I don't see how that's relevant. You can tap the RAW button in the iOS camera app to switch between the two choices, both of which look great. Most DSLRs only have one option because the second one (in-camera JPEG) looks like crap. They can barely do auto focus/WB/ISO/shutter/aperture, let alone postprocessing.


And honestly history has mostly vindicated him.


A lot of the post-processing that modern phones do isn't really practical in post, though. For super low light shots, for example, the iPhone will take hundreds of exposures and merge them. If you were going to save all of that as RAW, it would be multiple gigabytes of data per frame.


"How would you do {night, portrait} mode on a real camera" has been brought up a few times in this post and the answer is: you don't, because it's not needed. I can prop an iPhone on the windowsill and let it do a 30 second night mode capture, and I can also prop my Nikon Z7 on the same windowsill and let it do a 2-5 second exposure. The latter is so much higher quality it's no comparison at all. (And 2s can be hand-held at 24 mm, unlike the 30s night mode)

Though of course deep-sky photographers do exactly this - many exposures composited together - but then we are talking about subjects so dim you need effective exposures of 30 minutes or so.


> For super low light shots for example, the iPhone will take hundreds of exposures and merge them. If you were going to save this all as RAW, it's multiple gigabytes of data per frame.

A bit of a digression but the reason why phones need so many exposures to get a decent low light image is due to their sensors/optics being relatively poor at low light. You would only need a small fraction of the exposures from a modern pro-grade camera to generate a comparable final image. This also lets you capture objects in motion in low light far, far better since you can get your required image data in a smaller temporal window.


It does not take hundreds of exposures. That is impractical and, above all, not a good idea. In low light the shutter speed is likely slow (1/10th of a second, for example), so even 16 frames is more than a second's worth. At that point the differences between frames become too large for stacking to be worthwhile, especially handheld. Secondly, the reduction in noise from merging is at best sqrt(number of frames); past even 12 frames the benefits start to be questionable.
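
For intuition, here is a minimal NumPy sketch of that sqrt(N) behavior, assuming independent Gaussian read noise on a perfectly static scene (both assumptions, toy numbers throughout):

    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.full((256, 256), 0.5)  # toy static scene: flat mid-gray

    def noisy_frame(sigma=0.05):
        # One simulated exposure: the scene plus Gaussian read noise
        return scene + rng.normal(0.0, sigma, scene.shape)

    for n in (1, 4, 16, 64):
        stack = np.mean([noisy_frame() for _ in range(n)], axis=0)
        print(n, np.std(stack - scene))  # residual noise falls roughly as 1/sqrt(n)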


> Secondly, the reduction of noise by merging is at best the sqrt (number of frames)

This assumes the (typically Gaussian) noise is applied to a static image. Arguably, one could exploit the slight shakiness in a handheld shot to create an image with even less noise.

As a thought experiment, consider thousands of shots of a perfectly static scene made with an idealized, noiseless camera that is moved a very tiny amount for each shot. You could continue improving the resolution of a generated, composite image quite a bit until warping due to camera displacement became noticeable.

Recent techniques like this are actually being used for cryo-electron microscopy to create extremely high-resolution imaging of proteins.


Yes, what you are referring to is already implemented by Google and probably others. The Google camera uses it to skip demosaicking by shifting the image by a pixel using OIS. Still, the original post was under the impression that hundreds of exposures are merged, which is just not the case.


Sorry, I'm a photo noob. What type of post-processing does the iPhone do that professionals would do themselves with RAW? Looking at RawTherapee, it looks like colour profiling. What other operations do people do?


The iPhone does a lot of stuff:
- correcting for lens distortion
- automatically setting white balance
- optimizing color and contrast
- blending multiple exposures when it's dark, with some advanced denoising tricks (which sometimes give photos a bit of an 'oil painting' look)
- sharpening to get that 'crisp' look (even if your lens is not that sharp)
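
That sharpening step is conceptually close to classic unsharp masking. A toy sketch (my own illustration with made-up parameters, not Apple's actual pipeline):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=2.0, amount=1.0):
        # Isolate fine detail by subtracting a blurred copy, then add it back;
        # this is what makes edges look "crisp" (and over-processed if overdone)
        blurred = gaussian_filter(img, sigma=radius)
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)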


I think correcting for lens distortion is the main thing that wasn't already mentioned in the article. Regular cameras can't do that, but the iPhone knows exactly what lens is being used and can take advantage of that.


Mine don't (iPhone 13 Pro Max, Fuji XT-2 and various lenses). They do in daylight, sure, but things like sunrise/sunset or unusual colors throw them off like crazy.

Textures can also throw them off - "amplification" of the texture effect, almost.

They also suffer a bit zoomed in.

The post-processing fixes a lot of problems of older phone cameras but it has its limits.

On good camera hardware there's very little that all that post-processing would add outside of extreme high-ISO noise, IMO. Would it be nice? Sure. But you can find software and stack exposures manually for those situations too.

And a lot of the other smart stuff gets fooled too easily.


I hate when you get one of those amazing sunsets where the whole sky changes color, but the phone auto-correct removes/corrects the color tone.


Turn off auto white balance and set it to "daylight", problem solved.


You can shoot in raw on iPhones now to prevent this, or fix it later.


Capturing raw on iPhone in a low-light situation shows you just how far the sensors have to come if you aren't throwing the whole bag of software magic at it, in my experience.

It's useful for cases of "your noise reduction is fucking with my texture" in decent lighting or similar, but in lower light I haven't found it to be a worthwhile tradeoff.


On recent iPhones Apple has added "ProRAW" which retains a lot of the magic processing without losing the ability to make a lot of adjustments you'd do on RAW photos like white balance correction.


Thanks, but that doesn't do me any good for the shots I've already taken.


The sun will come up tomorrow, there was a whole song about it.


This is true. I should have added "in good light".

The DSLR images also retain much more detail in cropping.


And even moderate optical telephoto lenses are pretty powerful tools as well—especially in poorer light.


I used to use a phone as my backup camera when I was out doing bird photography. After a while I gave up. In variable light and high dynamic range conditions phones just don’t hold up (they do a better job with video). If you’re taking snapshots, nothing better than a phone.


The Nikon does minimal post-processing. The iPhone throws a metric shit ton of algorithms at the image data to make it passable. For normal people the iPhone output is good enough, though it often looks very over-processed.

Nikon just expects you to handle the post-processing that your iPhone is doing for you. In exchange you get way more control over the final image.

Both devices are aimed at different people. I myself have an iPhone 13 Pro and a Nikon Z6ii. I tend to take snapshots with my iPhone because getting out the Nikon and playing around with sliders in Capture One is just too much hassle for a snapshot. Now, would I use the iPhone for a landscape photo where I hiked 6 hours to the location, starting at 3 a.m.? Probably not. ;)


I wouldn’t have gone on the hiking trip if I had to lug a full-size camera with me ;)


^ Obviously not Les Stroud


Do they look the same when zoomed in/at 100% zoom? Phone photos look great on small screens but show weaknesses on desktop.


Yep. DSLRs and large sensors matter when it comes to making every pixel count without introducing artifacts.


It's rarely the case that you want to look at a camera or a phone photo at 100% zoom unless you're cropping a smaller image out of a larger one. Of course, sometimes it's useful for things like reading text, but that's limited by the lens and focus abilities as much as the sensor resolution.

You'd be surprised how few megapixels are just fine for putting up on a wall or billboard. It's all about how far away the viewer is.
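
The viewer-distance point is easy to put numbers on. Assuming the common rule of thumb of about one arcminute of visual acuity (an assumption; real acuity varies), the pixel density a print needs falls off linearly with distance:

    import math

    def required_ppi(viewing_distance_inches):
        # Pixels smaller than ~1 arcminute of visual angle blur together
        return 1 / (viewing_distance_inches * math.tan(math.radians(1 / 60)))

    for d in (12, 36, 120):  # handheld print, wall, across the room (inches)
        print(d, round(required_ppi(d)))  # ~286, ~95, ~29 ppi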


Just shoot your mirrorless in RAW and process them later. Lightroom gives great results, but you can also use Apple Photos to get similar color processing as your iPhone photos.

The mirrorless photos will look much better on a laptop or bigger screen but about the same on a phone.


Thanks for the tip on Apple Photos. Had not tried that.


My other favorite is the Really Nice Images plugin for Lightroom. Very pleasing results on my Nikon Z6 RAW files--better to me than Apple's iPhone processing--by essentially just clicking the filter you want and adjusting a strength slider.


Do the JPEGs (or HEIFs?) look as good on a 27-inch monitor, or just on the phone screen?

Capturing non-raw, in my experience (iPhone 6S, now a 13 mini), the JPEGs are heavily de-noised and really don't look that good 1:1 on a large monitor; on the iPhone screen they look very good, as they're downsampled.

The article mentions the 'watercolor' effect since the iPhone 8, but I definitely had the issue with all the jpegs taken on my iPhone 6S since 2015...

DNGs however DO look very good, so clearly the sensor is capable of pretty nice images.


A landscape photo is the easiest thing for any camera to capture. Focus is at infinity and no depth of field means the lens can be literally a greasy pinhole and still get sharp shots. You're probably taking photos in direct sunlight so again the camera has to do very little work and is flooded with light.

Try taking a portrait photo in iffy lighting, like at a concert, wedding, sporting event, etc. Something that really needs a fast and sharp lens.


Exactly, landscape photos on phones and pocket cameras ten years ago looked comparable to a DSLR. Just lower res.


I think it's processing power and engineering effort. I got a Sony RX100M2 for my mom and it has the same computational photography techniques that Google released the following year (https://ai.googleblog.com/2014/10/hdr-low-light-and-high-dyn...). But Sony's image stacking is only in "Superior Auto" mode, and is only used when necessary. Google's implementation does a lot of advanced work, including selecting and blending parts of the photo depending on motion, that Sony doesn't do. I assume Sony's imaging engineers have less expertise in advanced processing, and didn't have the resources to implement the features that Google did. Sony also has to devote engineering resources to other features - a lot of their sales are to photographers that will edit in post (RAW), and later, videographers. So features that are only in "auto" mode may have limited budget.


Good looking != captures reality.

iPhones apply filters to make the photos look more vivid and "ready to share". If a professional camera did that, it would not be professional.


It doesn’t do that unless you turn on vivid photographic styles. It’s tuned pretty much like a dedicated camera.

This does annoy many people who then switch to Snow/BeautyCam to actually take their pictures since they want to look prettier.


By default, the pictures they take have nothing to do with reality. You can check this yourself by comparing the picture on the screen with the scene you photographed. Try it with some clouds, for example. This was my experience.


I think there is some sky recoloring, but eh, that's how camera AWB always works. Dedicated cameras try to make everything noon daylight if you leave them in auto mode; there are manual mode phone apps for this just like there's manual controls on the camera.

There seems to be an assumption here that dedicated cameras* always shoot in raws you can spend your time editing later, but that's not the case. If I shoot a concert or wedding and want to return the photos the next day, and in-camera JPEG or HEIF looks good, I'm shooting that.

* you can't make me say "DSLR", they aren't all DSLRs


Every camera does that to a degree, pretty much by definition. I don't think iPhones fare worse in their adherence to reality than a "proper" camera does on auto.


I don't know, all the pictures of my friends and family and dog look pretty close to reality. I'm not sure I care if a cloud is slightly different, to be honest.


Because deep down, it's not what they do. They are optics and camera hardware companies. In the race between physics and software, they are on the physics side. This is evident in the entire user experience. They keep missing the boat with the way people actually use photography today, and don't even seem to care much. There's not much evidence they have the kind of research expertise that Apple, Google or Adobe built in signal processing and image processing, either.

See also https://www.dslrbodies.com/newsviews/news-archives/nikon-201... and https://www.dslrbodies.com/newsviews/news-archives/nikon-201...


The purpose is completely different. Camera makers put the processing power between the sensor and whatever RAW files their cameras create, for people who do their own post-processing.

Phone makers put the processing power into the post-processing of photos, for people who never do their own post-processing.

Both approaches are equally valid; they simply aim at different markets. And as far as software vs. physics goes, no amount of software and processing power can overcome the laws of optics.


Try zooming in, even a little bit. You'll notice squiggly oversharpening artifacts pretty quickly. Yes they probably look fine on a phone screen, but blow them up at all and they start to show their weaknesses


The oversharpening on the iPhone is so frustrating! I like the utility of the basic camera app, but the pictures look so weird. I guess I should explore alternative apps.


I doubt those iphone photos look the exact same as the nikon ones on a large display (i.e. anything bigger than an iphone...) It has not been the case for me.


The Nikon photos will be under sharpened because it's so easy to apply sharpening in post, and the amount of sharpening is very dependent on the size and format of your desired output. Also, in ideal conditions (medium sized, evenly and brightly lit, static subject a moderate distance away) practically all cameras will give good results: try comparing in less ideal circumstances like a darkish area indoors.


Well, put bluntly, the reason is that you probably are not using the Nikon properly, so your photos are not coming out as great as you hoped. Don't take this personally; I also have my limitations using cameras properly. But I understand I'm an amateur when it comes to this and that I'm part of the problem. I try to learn to do better, and sometimes it works out nicely and I get a really nice shot. Landscape photography is tough to get right. You really need to understand your camera and lenses to make it work.

The point of such a camera is not the in-camera processing, which most pro users would not use on principle. Instead it's gaining a lot of control over setting up the shot properly, with control over all the parameters that matter, to achieve a look that matches what you want, intentionally. And then you finish the job in post-processing. There's a reason these things have so many buttons and dials: you need to use them to get the most out of the camera. And the point of owning such a camera is having that level of control. The flip side is that that makes you responsible for the intelligence. That kind of is the whole point. If that's not what you wanted, you bought the wrong camera.

The iPhone has a very limited set of controls. You actually have very little control over it. Nice if that's what you want, and the AI is awesome. But it's also a bit limiting if you want more. Of course it's very nice when that's the camera you have and you want to take a shot quickly by just pointing and clicking. Nothing wrong with that. I have a Pixel 6 and a Fuji X-T30. I use them both, but not the same way.


> Well, put bluntly, the reason is that you probably are not using the Nikon properly.

Probably one of the biggest mistakes people make: not understanding at what aperture their lens is sharpest - typically stopping the lens down too much.

Other mistakes: not using a fast enough shutter speed to eliminate blurring from mirror slap (or using anti-mirror-slap features in their camera), not using a sufficiently sturdy tripod and mount, and so on.

Also, sometimes lenses just don't leave the factory in very good shape. If you struggle to get sharp results, it may be beneficial to send it in for service. The stuff sent around to reviewers has been obsessed over by the manufacturer, perfectly tweaked on a lens testing bench to get it as close to perfect as they can.


Obviously you don't look at close detail on a computer, which is very strange for seemingly such a power photo user. Phones these days are good, but not yet that good, provided you don't make beginner mistakes with the camera. Maybe the Pixel 7 Pro, based on samples I saw, but definitely not, e.g., the iPhone 13 Pro Max.

What people often mean by statements like this is that they like the default phone processing compared to none in the camera, and that there is enough detail thanks to tons of light and landscapes generally being the easiest scenes to shoot.

As for why they are not comparable (also a very strange question from a seemingly experienced shooter): compare the software development departments and budgets of Apple vs. Nikon, which is a tiny player we all love (I've had a D750 since it came out and carried it everywhere, up to 6000m). Cameras use very specialized CPUs that are very good at one thing only (basic operations on raw sensor data and JPEG transformation), and the various ML and stacking transformations simply aren't available there at the required performance. The whole construction of the camera and its processing hardware isn't built around snapping 30 pics and combining them in under a second, or pre-taking pictures before you actually hit the shutter.


Nikon and Canon and all the other camera companies, Japanese, German, or otherwise, are not software companies and have a pretty awful track record for even the basic software in their camera interfaces. Post-processing is something they could not or would not get involved in.

Apple ate their lunches and then some. While I'm an old-school photographer who thinks a great SLR camera is the photographic equivalent of driving a Porsche, I don't miss carrying pounds of gear around. (OTOH I HATE the UX of iPhones for photography.) I digress. The camera biz is a classic business-school case study in humans being human.


Apple and photography processing at this point is like TSMC and chips. They probably have a great deal of in-house algorithmic know-how, and they're doing things that none of the big camera brands are getting close to. Maybe only the Pixel phones have some clues about the post-processing hacks; I'd guess ex-Apple people quietly helped consult.


Because Nikon's target audience (photographers) don't want that "fancy post-processing" done by their camera.


Because landscapes are easy. The subject isn't moving, you aren't in a hurry, and the phone has all the time in the world to gather enough data for a photo. Same goes for posed portraits: the phone has time to use its processing power to make the image look amazing.

Now try to do sports photography with a mobile phone.

The first problem is the lack of zoom compared to a 200mm lens. The second is getting the 1/8000s exposure you need to stop the action for a good photo on a rainy football field.


There is no way they do. First of all, you don't have a DSLR but a mirrorless (Nikon Z series). But never mind, that's just nitpicking.

Likely a Z6, from the price point you mentioned. It has 24 MP (instead of the iPhone 13's 12 MP) and a much, much higher dynamic range (more than 11 stops vs. the iPhone's roughly 8). So unless you don't know what you're doing, the Nikon is a way better camera (as it should be; it weighs 5x as much with a lens).


What settings are you shooting? How do you edit your images?


On my Z5 I can use the built-in Landscape mode, and on the Z6 I can use auto or manual.

Autocorrect RAW in Adobe Photoshop looks good. And certainly on a 4k monitor the DSLR images reveal more detail.

My post here has me realizing I need to take iPhone and DSLR shots side-by-side in the same place with the same lighting and begin to compare them in-camera and in post-processing.


"What aperture are you shooting at? You probably already know it, but for landscapes you'd be looking at f/5.6 or so unless you're hurting for light."

this is bad advice.... shoot on a tripod and set your f stop to get the DOF you need or do a focus stack, f/5.6 would work if your shooting a distant landscape but having anything in the foreground f5.6 would not be enough, i usually stay at around F10 any higher i don't like because diffraction starts to reduce image quality and if i need more DOF i just focus stack

for ISO set it to 100 or what ever your cameras base ISO is

then adjust your shutter speed until you get correct exposure


If you are using an APS-C camera or a FF above 30MP, at f/10 you are already suffering from diffraction. You really want to be between f/5.6 and f/8. Of course, composition is still the most important thing, so if you need the depth of field, you need the depth of field.

As for the exposure, you can just use aperture priority with your ISO set to the base ISO. On a Nikon camera you'd actually want to be at 64. If the scene doesn't have challenging highlights, then going under the base ISO or overexposing will provide even better results.


Actually, if you shoot raw it's not good to go below base ISO. Recently I learned that when you go under base ISO the camera just overexposes the image and pulls it down in software to give you longer exposure times. With raw you can just raise shadows in post; it's easier than pulling highlights down. The negative side effect is highlights blowing out easily when going below base ISO. So if you shoot raw, just stay at base ISO and raise shadows in post. And if you really need the longer exposure, invest in an ND filter.


It depends on whether or not you have a lot of highlights. As you said it just overexposes the image, which you should do if you can get away with the highlights.


What aperture are you shooting at? You probably already know it, but for landscapes you'd be looking at f/5.6 or so unless you're hurting for light.

Popular notion is that diffraction will only start to affect you at f/11, but with modern high-resolution cameras it starts to creep in as early as f/9, so f/5.6 is generally best unless you really need the deeper field. You're probably going to be fine at 24MP, though.
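
A rough sanity check on that, comparing the Airy disk diameter (2.44 * wavelength * f-number) to an assumed ~4.4 um pixel pitch for a ~45MP full-frame body (both numbers are my assumptions):

    wavelength_um = 0.55      # green light, ~550 nm
    pixel_pitch_um = 4.4      # assumed pitch of a ~45MP full-frame sensor

    for f_number in (5.6, 8, 9, 11):
        airy_um = 2.44 * wavelength_um * f_number  # Airy disk diameter
        print(f_number, round(airy_um, 1), round(airy_um / pixel_pitch_um, 1))
    # By f/9 the blur spot spans ~2.7 pixels, so softening becomes visible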


I've noticed it's getting easier and easier to take photos with the SUN in the frame than when digital sensors first came out.


Try photographing at night with short exposures.


I’ve never seen SLRs do this very well either.


And yet they do it much better than phones.


I never got why camera manufacturers never followed the phones. Like Live Photo and automatic HDR. Can we get rid of the shutter?


> Can we get rid of the shutter?

That's a challenge, I think. Take an R5 (only because I'm most familiar with it).

47 megapixels at 12 bits per color channel is about 211.5MB per frame, with a maximum shutter speed of 1/8000. In other words, you need to be able to pull data off the sensor (and to be clear, there's parallelization available, I just don't know how much) at a global rate of roughly 1.6TB/s.
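
The arithmetic behind those figures, assuming (as the 211.5MB number implies) three 12-bit color values per pixel:

    pixels = 47e6
    bits_per_pixel = 12 * 3                   # 12 bits per channel, 3 channels (assumed)
    frame_bytes = pixels * bits_per_pixel / 8
    print(frame_bytes / 1e6)                  # ~211.5 MB per frame
    print(frame_bytes * 8000 / 1e12)          # ~1.69 TB/s if read out in 1/8000 s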


They have global shutter sensors at around the 12 megapixel level. They could split the sensor up into different processing paths. Do these 200 megapixel phone camera sensors have really bad rolling shutter issues? They are using electronic rolling shutters, like your R5 when in silent mode or whatever Canon calls it now.


What 200 MP phone camera sensors? The iPhone 14 is closer to a 12MP digital camera (quad Bayer, 48 million sensor sites). And from what I recall, 8 or maybe 10 bit. So in comparison to the 211MB of sensor data coming off the R5, there's 36 to 45MB of data coming off the phone camera. And I believe the 14's max shutter speed is 1/1000, so there's up to 35 times less data needing to be read off the sensor (45GB/s versus 1.6TB/s).

And I'm sure there are multiple processing paths - I just don't know the fine details of how the raw data is slurped off the sensor.



The Z9 has an electronic shutter, and a fast one at that. So expect that tech to hit lower-tier mirrorless cameras in the next few years.

Not sure what I'd need frame rates above 100 for in stills, but that's just me; it sure is cool.


My 2016 olympus pen-f, which wasn't exactly ground-breaking technology when it came out, already had an electronic shutter that worked with the full resolution of the sensor.

It's a 20 MP micro 4/3 sensor - so 1/4 the surface area of "full-frame".


Just realized the Nikon 1 we got my daughter for Christmas seems to have one as well... need to test that! Still, we're looking at a 10 MP sensor smaller in size than DX. Heck, the more I look at the Nikon 1 series, the more I realize to what extent it served as a mirrorless test bed for Nikon.


Do you expect the physical size of the sensor to have a bigger influence on this than the resolution?


Depends on the sensor, doesn't it? The more dynamic range, the more data a sensor is collecting.

Edit: I should compare file size, using the same settings, between a D700 and D300. Same MP, basically the same camera, one being FF and one being DX. Or someone with more knowledge of sensors and digital cameras can answer the question.


The Nikon Z9 eliminated it this past year; my understanding is that going shutterless is largely a function of having fast enough readout over a large sensor size to mitigate rolling shutter.


I am glad they don't, but in any case: what is Live Photo? Automatic HDR? Well, shoot a bracket at +/- a certain EV. Get rid of the shutter? That would need a global shutter, unless you like the rolling-shutter look; phone sensors are tiny, so they can have fast enough readout vs. a 35mm or larger sensor.


This is probably the wrong way to think about it. Especially at screen sizes, there are certain photos your iPhone 13 can take that are comparable to what you would get with your Nikon gear. However, there are photos you can get with that f/1.8 that won't work at all on your phone.


Because landscape photos are easy. No bokeh, no depth of field (everything at infinity), good lighting, no need for fast shutter speed or a big sensor. Your images will be sharper I guess, but that's not worth $2k in most cases.


Looks like you never took a landscape photo where you had to use a (graduated) ND or polarisation filter, multiple exposures, or long exposures for smooth water, or anything where you had to think about how to take the perfect shot. So yes, for a quick snapshot a phone is good enough. For a photograph, a bigger camera is much more beneficial than "just" sharper images.


I have done all of those when I owned a DSLR. Phones have been able to do smooth water and multiple exposures for years though...


If you take a lot of landscapes I think that's actually when you do want the big sensor. Which, remember, is not a 35mm full-frame DSLR either! Instead it might be a 4x5 large format camera like Ansel Adams used.

(Which requires a lot of patience, but an old one would be cheaper than a new camera. Though renting is always an option.)


The ratio of sensor areas between full frame (864 mm²) and for example iphone 13 pro max "wide" (main) camera sensor (44 mm²) is actually larger than the ratio between a 4x5 large format film plate (12903 mm²) and a full frame sensor.

edit: changed "sensor sizes" to "sensor areas" for clarity.
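
The arithmetic, using the commenter's sensor areas and 4x5 inches converted to mm:

    full_frame = 36 * 24           # 864 mm^2
    iphone_main = 44               # iPhone 13 Pro Max main sensor, per the comment
    large_format = 127 * 101.6     # 4x5 inches in mm, ~12903 mm^2

    print(full_frame / iphone_main)    # ~19.6x
    print(large_format / full_frame)   # ~14.9x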


iPhone 13 photos will look just as good as a Nikon photo on your iPhone. As soon as you view the iPhone photos on a monitor or print them out, everything falls apart.

iPhone photos are excellent as long as viewers are only seeing them on iPhones.


iPhone photos will look better than a Nikon photo on your phone, because nothing except iPhone can take HDR still photos that display in HDR. They can only do video.

"HDR stacking" apps are actually "SDR tone mapping" apps; they /start/ with an HDR result and make it not HDR anymore!


I guess if you like that look. I don't; not really. The photos from my phone are fine. If taken in really good light, they are actually pretty good. But they are always processed to within an inch of their lives.


The targeted audience doesn't demand the feature.

If a user's needs are being met by an iPhone, then they shouldn't worry about DSLRs.

If a user's needs aren't being met by the DSLR, you've got to wonder: is it the technology or the skill of the user?


The best rule I've learned is that the best camera is the one you have with you right now. It doesn't matter if you have a $2k+ camera body with a $5k lens sitting at home while you look at the photo op in front of you, so you use your mobile device instead. At least you have the picture, which, unless you've really messed it up, is better than no picture.


It really won't beat modern fast lenses. Try a lowlight situation. You will not be able to beat a larger aperture for light collection, it's simply how many photons you can catch.


For the same reason you can't upload to Instagram from your $2k camera - they know how to make hardware.

But the idea of enhancing filters or social media features is completely alien to them.


I wouldn't necessarily assume that the optics are superior. Smartphone lenses are pretty sophisticated. Also, their smaller size and far greater production numbers open up manufacturing techniques that wouldn't be practical for DSLR lenses.

Edit: Apologies for commenting on downvotes, but I'd be genuinely curious to see some objective evidence that the optics of a typical DSLR lens have a superior design. Of course it is true that larger lenses for larger sensors tend to be superior because they do not need to resolve as many lines per mm and they do not need to be machined as precisely (all else being equal). But does anyone know of any actual lab tests that make relevant comparisons? I am a bit tired of people just assuming that DSLR lenses are higher quality than smartphone lenses, even though the cost of modern smartphones, and the enormous disparity in the number of units sold, makes it far from obvious that this should be the case.


I didn't downvote but I imagine at least someone did because the laws of physics dictate that smaller lenses/sensors can't capture as much light as bigger ones of similar quality. This is why cameras have started trending to be larger rather than smaller. Only so much you can do software-wise before you hit physical limitations.


>the laws of physics dictate that smaller lenses/sensors can't capture as much light as bigger ones of similar quality.

This is not really true to a very great extent once you take depth of field into account. At least, it would be helpful if you could indicate what it is exactly that you take the laws of physics to imply in this context. I made a comment here that's relevant: https://news.ycombinator.com/item?id=33426540

I took 'superior optics' to be a claim of greater sophistication or higher quality, but perhaps that is not what was meant, and the poster was merely referring to the difference in size.


If the size of the lens wasn't important, we wouldn't have built a 100" mirror on Mt. Wilson followed by a 200" mirror at Palomar Observatory.

A 14mm wide-angle lens with a primary opening of 2mm is never going to let in the same amount of light as a 14mm wide-angle with a 114mm primary opening will. The ratio of the sensor size has nothing to do with the amount of light entering the front lens. There are some adapters known as speed boosters that reduce the exit pupil of full-frame lenses and will actually add a stop to the lens.


>Ratio of the sensor size has nothing to do with the ability of the light entering the front lens.

I'm aware of that – that's the point I made in my post that I linked. But DoF considerations entail that you can't just keep arbitrarily increasing the size of the opening.


The size of the chips on mobile devices will always mean there is infinite depth of field without software effects. It's like 1990s news camera footage: everybody everywhere in the frame is in focus. Boring, boring, boring.


You could disprove this with a few seconds of experimentation. Modern phone cameras have focusing mechanisms for a reason. Here is a shot taken with the main camera of the iPhone 12 Pro Max for illustration:

https://drive.google.com/file/d/1akuK5v_PB8DwzDnJC1oJXOETr_S...

Without getting too much into subjective territory, I am personally tired of the bokeh obsession. There's nothing inherently interesting about photos with blurry backgrounds (see above for confirmation!) And if you want background blur, achieving it using software effects is perfectly legitimate.
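
Software background blur, at its crudest, is just blur-and-composite. A toy sketch (real portrait modes use per-pixel depth estimates and far fancier blur kernels; the names here are mine):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fake_bokeh(img, subject_mask, sigma=8.0):
        # img: HxWx3 float image; subject_mask: HxW bool, True on the subject.
        # Blur everything, then paste the sharp subject back over the blur.
        background = gaussian_filter(img, sigma=(sigma, sigma, 0))
        return np.where(subject_mask[..., None], img, background)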


> There’s nothing inherently interesting about photos with a blurry background

Congrats! You took a bad photo with a shallow depth of field! I guess the photographer does matter more than the equipment.

> Achieving it with software effects is perfectly legitimate

Sure you can. But if you think software bokeh can compete with any dedicated camera's bokeh (even a vintage 35mm camera, for that matter), then you need to try a dedicated camera with a good prime lens, e.g. a 50mm f/2.

The biggest thing that a dedicated camera can get you is simply options. You can choose the depth of field of the shot. You can choose the shutter speed. You can choose the focal length. Sure you have some control over these things in some apps, but it’s not the same level of control.

I shot on my iPhone for years, but after I got my mirrorless camera I have taken so many pictures that never would have been possible on an iPhone.


I have used dedicated cameras with prime lenses. I quite enjoy using old film cameras as a hobby, including large format cameras which can create extremely shallow depth of field using movements. The only point of the linked photo was to show that current cell phone cameras clearly do not have 'infinite' depth of field (as the other poster claimed).

It is true that a dedicated camera gives you more options. (I did not deny this.)

>You can choose the shutter speed

Pedantic point: you can also choose the shutter speed on a cell phone camera if you use one of many apps that give manual control over shutter speed and ISO.


> “I think the 12 MP shooting default is a wise choice on Apple’s part, but it does mean that the giant leap in image quality on iPhone 14 Pro remains mostly hidden unless you choose to use a third party app to shoot 48 MP JPG / HEIC images or shoot in ProRAW and edit your photos later.”

This, 100%.

The massive difference in image quality is when shooting in RAW. That’s when you actually get the 48MP & the images are fantastic.

But that’s not the default. The default is 12MP.

That’s why reviewers are so torn on this camera system. If the default was a 48MP picture/quality, everyone would be praising it. But when the default is 12MP, it’s par for the course.


Yeah, but that is just software… and software is much more malleable than hardware. At some point perhaps the default will be 48MP.


If you want to hit a time and battery-life target, it unfortunately isn't just software; image processing algorithms start in Matlab and move to hardware for production. You can only move so many pixels through the pipeline at once.


Why would you need more than 12MP if the photo is only going to be displayed on a 6 inch screen?


> if the photo is only going to be displayed on a 6 inch screen

If you use photos as a way to preserve memories, then who knows what these photos are going to be displayed on in the future?

Maybe in the (near) future, we start adopting VR headsets, and then the 12MP vs 48MP difference is going to matter a lot.

On the low-tech end of the spectrum, maybe you want a larger poster printed out. If you need to crop some part of the image at all, the difference in resolution is going to be very noticeable.


Speculative and tenuous hand-wringing about distant, unlikely use cases is not a compelling rebuttal. We know 12mp photos look good on 6 inch, high resolution screens. They even look good on desktop computers and TVs.

Why would you view 2D images in a stereoscopic 3D headset designed for video? Do existing photos look bad in current VR headsets because they lack resolution? I haven’t used any of the modern VR headsets but I’m willing to bet that there isn’t a significant difference between a 12mp and “48mp” iPhone 14 pro photo on a VR headset.

You’re also ignoring the context — of course higher resolution is better in isolation, no one would argue otherwise. But it comes with significant trade offs, like the FOUR SECOND capture time and the enormous file sizes.

If you want to print a cropped photo from an iPhone on a large poster (be honest — do you know anyone who currently prints iPhone photos on posters on a regular basis? I’d argue that hardly anyone prints iPhone photos at all nowadays, and especially not at large poster sizes), 12mp vs “48mp” isn’t going to make a difference, it’s going to look bad either way.

Anyone seriously concerned about their ability to print large format posters from cropped images is going to be using a full frame or medium format camera.

We know current iPhone photos look fine on 6-inch high-PPI displays (displays that already exceed the resolution of the human retina) and when printed at common sizes (4x6, 8.5x11). That's never going to change. Making the iPhone camera workflow significantly worse for users now is not an acceptable trade-off for vague, hypothetical future use cases.


Because you can make a pinch motion to zoom into photos.


I printed and framed an iphone photo of my niece 11x17 as a Christmas gift for my mother last year. It would not have been better taken with professional camera equipment.


What?? You never moved a photo off your phone?


The 48 MP Bayer mode is indeed impressive, but it does not increase the spatial frequency response. I recently used it to document some color-transition errors on LG OLED displays, and even with the 48 MP "RAW" mode there are artifacts from the limited spatial resolution. One of the images properly captures the display's sub-pixel layout, but that one was taken closer to the display. Enabling/disabling "RAW" did not change the spatial resolution of the photos.

https://imgur.com/gallery/amP2lR4


If you take a photo of a subject that is closer than 7.8 inches, you're no longer using the main camera on the 14 Pro. It automatically switches to the ultrawide camera and crops in to show the same field of view. I suspect that is happening to you in some cases, but you're unaware, based on the fact that you didn't mention anything about this limit.

The 48MP camera has the same color spatial resolution as a 12MP camera, but it has 48MP of monochromatic spatial resolution. Humans aren't as sensitive to color resolution as they are to spatial resolution in general. This is why the "2x" mode on the 14 Pro looks great compared to what you might expect based on your comment. The 2x crop only has "3MP" of color resolution, but the 12MP of spatial resolution from the 2x crop makes it perfectly usable.

For your specific use case, the ultrawide camera may work fine as a "macro" lens, depending on the size of the pixels you're trying to capture. A real macro lens on an interchangeable lens camera would obviously do better.


Ah that would be my problem. I took zoomed photos to avoid lens distortion. The ultrawide would have been a better option.

This wasn't meant to be a nice dataset, it was just something I quickly tried to document my observation to a single other human. I would have been more careful if I expected to share it more widely.


I think if you disable automatic Macro Mode in the camera app settings and make sure it's off when you take the photo (flower icon that appears should be grey not yellow) it avoids this. Though it seems like the other cameras can't actually focus at this distance.


Note that in the real world it's going to have much less than 48MP of monochromatic spatial resolution due to aberrations.

Even if the lens were operating at the limit of physics, you'd get a 2 micron first ring of the Airy disk at 500nm, which is bigger than the size of the pixels. It can help a bit to have pixels smaller than the smallest detail the lens can resolve, but at the same time the lens isn't operating at the physical limit, so there is probably only a small increase in spatial resolution.
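
Plugging in numbers reproduces that figure, assuming roughly f/1.8 for the main lens and ~1.2 um sensor pixels (both assumptions):

    wavelength_um = 0.5   # 500 nm
    f_number = 1.8        # assumed iPhone main-camera aperture

    airy_diameter_um = 2.44 * wavelength_um * f_number
    print(airy_diameter_um)  # ~2.2 um, larger than the ~1.2 um pixels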


I've been playing "block the sensor" with that little IR lidar thing, or whatever it is, beside the lenses (the same size as the flash, but a dark spot). I was trying to photograph some VVT solenoid channels with illumination creeping out of the ports, and it got the camera into a cycle of switching lenses. I covered the sensor and the toggling ceased, so I could focus the photo and get a good image.


Much better to just go in and turn on Settings -> Camera -> Macro Control


Interesting, so the flower icon now shows up on the bottom left. Is the brightness/exposure slider not available in macro mode? It seems like when I touch a spot in the frame to set exposure, I exit macro mode. It looks, however, like I can still use the exposure slider in the controls menu, just not on-screen via the spot-touch technique. What is that spot-touch thing called, anyway?


I've been using the Halide app to take most of my daylight shots in 48mp HEIC mode (which has the benefit of being 5-15kb per photo).

The main advantage to me is that the result is much less affected by Apple's always-on edge-sharpening processing. The effective "resolution" of the processing artifacts is higher.

In previous iPhones if you take a photo of a bunch of leaves on a tree it's almost like it tries to draw a little sharpened outline around each one, which looks like a watercolor mess if you zoom in at all and doesn't capture what your eye sees.

With the 48mp compressed shots I find landscapes and trees look much more natural and in general you can crop and zoom into photos further before the detail is lost in the processing mess.


> which has the benefit of being 5-15kb per photo

Unlikely. You mean mb not kb.


Yep, you’re right.

Looking through my recents I am seeing 5-20mb for 48mp HEIC as opposed to 60mb+ for 48mp ProRAW images.


Why does that happen? Is it because there's a physical antialiasing filter in the lens?


Quad Bayer: https://www.dpreview.com/files/p/articles/4088675984/Quad_Ba...

The 48MP sensor still has 48MP of monochromatic resolution, but it only has 12MP of effective color resolution. You'll still see fine details, but the colors are not as high resolution as the details. This is rarely a problem, given the way the human eye processes color.
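
A toy illustration of the binning half of that: each 2x2 block of same-color pixels can be averaged down to one output pixel, trading resolution for noise (a tiny array stands in for one color plane of the 48MP mosaic):

    import numpy as np

    raw = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for one color plane

    # Average each 2x2 block -> quarter the pixel count, like 48MP -> 12MP
    binned = raw.reshape(4, 2, 4, 2).mean(axis=(1, 3))
    print(binned.shape)  # (4, 4)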


I'm not an optics expert, but I expect that the physical system's spatial bandwidth is much higher than the sensor's bandwidth. That is to say, if there were more light sensing elements I think the spatial bandwidth would be higher.

I don't think the artifacts are directly from aliasing but rather an artifact of software interpolation.

If anyone knows better, please correct me.


This is generally not the case here: even if the lens were physically perfect, it could not resolve two points closer than two pixels, as per the Rayleigh criterion.


Coming from a iPhone 12 to the 14, the aggressive switching between lenses is pretty annoying in close and far shots. The lens shift screws up framing and hits at unpredictable times.


Using something like Halide[1] gives you more control - once you pick a lens, it stays picked.

[1] Other camera apps are available, etc.


Same. As a casual camera user / not an enthusiast, I find the camera experience on the 14 Pro unpleasant and a downgrade from the 12 Pro.


Agreed. I thought mine was broken when I received it but alas, this is how it is. Was hoping a software update would fix the framing issues.


I have a Genius Bar appointment I've rescheduled twice to discuss this, but it sounds like I really have no reason to believe there's anything "wrong" with my phone. I do hope software fixes can help, but maybe there are just some expectations in my head I need to recalibrate.


Yeah the first time it did that to me when I was trying to use magnifier mode, I nearly threw the phone against the wall. Who the heck thinks it's a good idea to change cameras when trying to focus close-up? -poof- the thing you're pointing at disappears from the screen.


When the current lens is physically unable to achieve focus at the required distance?


Yeah, I am finding this too. Open the camera and it switches lenses almost 100% of the time, so the image shifts within a second. Just a bit jarring, and very un-Apple.


can't you turn that off?


You can turn off automatic macro switching: https://support.apple.com/en-us/HT210571#:~:text=You%20can%2....


Not in the stock Camera app, no, but other apps let you specifically control which lens is being used.


Lol, nothing is better about the dark skyline image. It's a blurry, noise-reduced mess with bad white balance.


That picture doesn't have a single correct white balance; no photo does if it has more than one light source in it. It's all to taste.

But it does look like it was shot through a window without a polarizer filter.


Just because it's a matter of taste doesn't mean there's no such thing as objectively bad white balance. This one is just green XD.

I do agree the other pics are great, though (I own an iPhone myself, since the company paid for any phone and I wanted the best camera). I'm not 100% happy with the "iPhone look", too much fine contrast for my taste, but they're of course great cameras.


Reality Distortion Field.


Perhaps, but whose reality is it distorting? Apple enthusiasts, or haters?


Unfortunately, it still takes crappy concert photos. The combination of low light, crazy lighting, etc. still wreaks havoc on the images.

Fortunately, most of the venues I go to for shows generally have a "No detachable lens cameras" rule, which means my Fuji X100 is allowed. Unfortunately, security at the venues often ignores the policy and I don't want to be That Guy holding up the line arguing with them about it.

(I'm also not one of those people that is taking pictures [and I never record video] the whole show, I just want a handful of high quality shots to help me remember the show)


I’m curious what you feel is “good enough” for concert images. I recently went to a concert and took a ton of pictures on my Pixel 5, which IMO performed much better than my previous android (non flagship) phones to a level I was pretty happy with, albeit I do not have an iPhone to compare with. I uploaded some (slightly decent) images to imgur, could you let me know if this is better than or similar to your experience/photos?

https://i.imgur.com/gm8jUkj.jpg https://i.imgur.com/hrBL18U.jpg https://i.imgur.com/zLPFgPW.jpg https://i.imgur.com/1kPzHZ3.jpg https://i.imgur.com/97R9asy.jpg https://i.imgur.com/3ITpUuT.jpg https://i.imgur.com/1MClydK.jpg

(No editing by me, though I did pick preferentially some photos that weren’t blurred majorly. The largest blurred object appears to be the DJ (BT) who was moving a decent bit.)


Fairly similar, though you look to be closer to the DJ than I was to my subjects in the only concert I've been to with the new iPhone. I think the 14 Pro might edge it out a bit if I was at a similar distance to my subject.


Thanks for your reply. Yep, that sounds quite likely given how old the pixel 5 camera hardware is. I was pretty close to the DJ (I guess 2 to 3 meters), none of the photos are cropped.


What is the thinking behind the "no detachable lens cameras" rule? I can't wrap my head around that choice.


There are a few primary reasons:

1) Historically, some bands have been concerned about their image and felt that professional-looking photos that painted them in a bad light, whatever that meant in reality, would be more damaging than amateur photos. I don't hear this as much today, but 15 years ago it was frequently given.

2) Concerts with a lot of standing room near the stage already get quite crowded. Someone showing up with a bulky DSLR (or even prosumer-grade mirrorless) body and a 200mm lens takes up quite a bit of room. Before half the damn crowd started holding their phones in the air to record the entire show, I would also have said it obscures vision and annoys people, but now it's really not any worse than that.

3) They don't want someone to try and hold them liable if something goes wrong and some expensive camera body or glass gets broken.

On the times I've been able to bring my full camera gear in without a press pass, I stick to as small of a lens as I can and avoid being near the front of the crowd. Thankfully, even quite a ways back from the front of the crowd, a 50mm prime lens will still take some fantastic photos on a real camera vs. what you get with a smartphone. I understand why the rules are in place, though, and I don't really have a problem with them in general.


A pancake lens comes to mind.


I would guess to prevent professionals from taking pictures “for free” instead of getting contracts with the bands/venues and paying fees for the privilege.

Or maybe they just annoy other patrons.


I have managed to get a press pass for some shows and bring my a7r3, nice lenses, etc. I've never been asked to pay any sort of fee for the pass.


That's good to know. That was the "everyone is greedy these days" part of my guess.

Was it hard to get the press pass? Maybe it’s just to stop people avoiding that process.


It varies. A couple of times I just sent an email and they told me when/where to pick it up. Other times they wanted to see my work - a few of these, a personal blog with basically no traffic was enough to pass muster, a few, they wanted to see me working at some sort of actual publication (digital was fine, but it needed to be more than "cthalupa's concert blog")


No professional photographers. I was told by security at a hotel that I couldn't use my mirrorless camera on their property, and that was the reason given.


Now THIS makes no sense to me. In the Before days, I used to travel extensively for work, and would frequently bring my a7r along, and never once had any issues taking photos at a hotel across dozens of cities and a dozen countries.

(Not doubting it happened, just think that's a very weird line of rationale from them)


Miami at a trendy hotel known for celebs.


I'd guess it's to avoid professional photography equipment that can take up a lot of space (think zoom lenses)


A good reason to own a Ricoh GR IIIx (or a Ricoh GR III, which has a 28mm-equivalent lens instead of the 40mm on the "x" version). It looks very simple and non-professional, but the picture quality is comparable with the latest Fuji X100-series cameras.


Thanks! I'll have to check it out. I've got an RX100 VII that I picked up for this purpose a few years ago, but haven't been particularly impressed with the concert results.


Even DSLRs have issues with image quality at concerts. Red lights are frequently used at concerts and cause notorious issues, regardless of camera.

It's great you found that workaround with the Fuji X100. I just think this is a valid, though minor, complaint since most cameras suffer in the same context, and I wish people would stop taking photos at concerts hahaha.


I've had pretty excellent luck with my a7r3 when I've been able to bring it in, either because of scoring a press pass or a venue not having camera restrictions.


Coming from a Pixel 5 to the iPhone 14 Pro, I gotta say I'm pretty disappointed. The pictures are mediocre at best, and just plain bad in dim light especially.

Google's computational photography is years ahead. The latest Pixel has better sensors too.


One problem that Apple actually considers a hard-won feature is color inaccuracy. They like to make default images look very Instagram-ish, which may be what many folks like to see, but the results are pretty far from reality, more so than most. Skin smoothing is another topic on its own.

Maybe I am very biased by almost a decade of shooting basically everything on full frame, but I like a photo to represent what I actually see with my own eyes at that moment.

As for Pixels: when I saw some non-ideal-light samples from the latest one, it was pretty clear that neither Apple nor e.g. Samsung (whose S22 Ultra I own and love) is in the same league in many aspects of photography. But the Pixel 6 had some pretty annoying issues according to user reports. On the other hand, it cost (and the 7 still does) significantly less from day one.


Coming from the Pixel 6 Pro, I feel the same. The camera is comparatively very slow with even daytime indoor lighting. Half the photos I take of a child at play end up in the trash due to blur. Hoping to find out that I'm just "holding it wrong".


As a Pixel 5 user I don’t really have any complaints about the cameras. If you pixel peep it definitely shows up, and I suppose the 48MP would be better, but gcam (Google camera) is probably one of the best pieces of software on android.

Having said that, I would love to see a large sensor camera with gcam esque chops. It’s not too hard to run into the limitations of the small sensor.


Coming from a Pixel 4a to the 14 Pro, I'm really impressed instead. It's the upgrade I felt I needed (I'm really passionate about photography). I'm seriously considering selling my DSLR now.

I think Google's computational photography might be better in some edge cases, but it comes at the cost of sometimes over-processing things: some textures look painted-on in my Pixel photos, and some night shots feel "cool" but don't capture what my eyes see at all, no matter my settings. And with portraits, I sometimes capture a nice image only to see it ruined when the processing ends, with the face too "beautified" (smoothed skin, etc).

I feel the iPhone's computation is overall better for the most common lighting situations and maybe less aggressive, but I'd need to test it for much longer to form a more complete opinion.


You're comparing a phone from 3 years ago to one from today. The Pixels continue to be far better at HDR and low-light images; the iPhone is still better at video.


OP was comparing it to the Pixel 5, that's why I chimed in since it's not much different to the 4a. I can't imagine myself being disappointed by it like OP and between 4a and 5 there wasn't a huge leap in performance.


Do you have examples to share? I am interested as I shoot with film, mirrorless, and on iPhones.



I really appreciate you sharing these. To my eye, there were quite a few where the 14 pro is noticeably better. Take the first one — the pro image is sharper imo, while the pixel photo suffers from very noticeable noise reduction or some other smoothing process.

That said, I'm an iPhone user and I very likely have a bias. It would be cool to take a blind quiz to see which I actually prefer and whether I do have a bias. I wonder if there's a web service that easily lets you do this with imgur albums.


Would be curious to see what they think of the Pixel 7 Pro camera. Video is still worse, but picture quality overall seems to be slightly (but noticeably) better than the iPhone 14 Pro.


You won't see one. All their past postings appear to be iPhone/iPad camera reviews, and they make two apps for the same. As a result, I am a little queasy about their objectivity too.


I don't see an issue, given that they don't intend to do an objective comparison to other (non-Apple) smartphones.


I believe the Pixel Pro phones also recently migrated from 12MP to 48MP, between the 6 Pro and the 7 Pro. I wonder if this blogger is already looking into the Pixel? They mentioned that it takes a couple of months to come to a conclusion on a new phone camera, and the Pixel 7 has only been out for a month-ish.


The Pixel 6 already made that switch to the high-resolution (50MP) main sensor. I don't think the Pixel 7 changed much of note.


DxoMark is always good for comparing cameras: https://www.dxomark.com/smartphones/#sort-camera/device-Goog...


I did a point-and-shoot picture of a cat while a group of friends and I were sitting on the floor of a Halloween party bathroom.

Later, when sharing the photos, I realized we could distinctly see my partner and me in the reflection of the cat's eye. The CSI "enhance" memes are real :-)


> sitting on the floor of a Halloween party bathroom

I feel there is more to this story


Well this detailed camera review conclusively proves that I am a mediocre photographer who doesn't go anywhere interesting. Thanks a lot, lux.camera. Thanks a lot.


"To review this camera we went to these most interesting, most beautiful places."

Okay...but how does it perform taking pictures of my toy dog in suburb-town USA?


The iPhone 14 has a fantastic camera. It's a noticeable improvement over previous models and takes fantastic photos that rival or surpass many SLR cameras on the market. If Apple had taken another leap and included a USB-C port, it would have been enough for me to upgrade. For now, I wait.


Phones don't rival or surpass DSLRs or any proper digital camera.

They just have lots of fancy processing that makes quick snaps look better than the same amount of effort on a camera.


Until that same fancy processing is available on a DSLR, I think the comparison the OP is making is valid. At the end of the day, what I as a consumer care about is: does the photo 1. look pleasing, 2. match my perception, and 3. take little effort to create? And in those regards, the iPhone absolutely outperforms every DSLR I've ever used.


Which is why DSLRs are for hobbyists and professionals. It's more work, for a reward.

Or for people shooting at night. The sensors are too small on phones to gather enough light to look decent.


Clearly, many people enjoy taking /good/ photos that look great /quickly/.

The point being made is: if Apple/Google can do that with a tiny sensor, why hasn't one of the remaining pro camera companies thrown money into similar work on their cameras? I have an Olympus E-M10 (I can't even remember the name format) and it has some filter settings, but nothing with the usability and quick results of Apple and Google for night shots, etc.


This was exactly the point I was trying to make. If a "real camera" manufacturer like Canon, Nikon, BlackMagic, etc. stepped up and added Apple-tier processing as an option to their cameras, I'd choose that over an iPhone any day. It frustrates me that the best computational photography in the world is being done using small sensors and tiny lenses, but since no DSLR manufacturer has matched Apple's computational witchcraft, I'll (sadly) be sticking with my iPhone for shooting all my short films. For the price it's easily the best combo of quality and convenience.


That’s because the professionals do all this “computational photography” on a laptop or something. That way they get to have a little more creative input into it too. There’s like a billion desktop apps to get any sort of processing you want done on a shot.

DSLRs aren’t good if you want “point and click,” and that’s okay.


>DSLRs aren’t good if you want “point and click,” and that’s okay.

I disagree, that's my point. I completely understand your point that most DSLR users are pros who have tools to make things pop. What I'm suggesting is that there is a market for people who want really high quality photos that are achievable through larger lenses and sensor sizes, with the simplicity and intelligence in realtime of a phone camera.

Would it be a billion dollar market? Maybe not. But with "computational photography" being far from science fiction these days and with mobile chips being so powerful, it would seem like a strong way to stay relevant in a mainstream market to market a DSLR with phone-like usability.


The sensors are small but the software is good. Night photography is easy on a recent smartphone and you don’t even need a tripod.


Yeah as other commenters have pointed out, it’s night and day (no pun) between dark shots on DSLRs and phones.


The only catch is, at least on the iPhone 14 Pro, you can't do stuff like light trails with the included camera app. Its very smart, fancy processing carefully removes the trails as it combines exposures, so you end up with a very well-lit picture of traffic rather than pretty lines and no cars.


I've tried it with an iPhone 13 and an older camera with significantly worse low light performance than any camera from the past 7 years, and even if I disable the built-in image stacking, the mirrorless camera with an f/1.4 lens is incomparably better. I could properly expose my backyard with nothing but light pollution handheld.

Newer cameras can even film video with nothing but moonlight.


Which camera is it? My old X100T is a complete mess in extreme low light conditions and I don’t remember things to have improved much in the following years.


An A7 II with a Sigma EF 85mm f/1.4 (incidentally, the whole setup is about half the price of an iPhone 14 Pro).

The X100T is a great camera for the experience, but it kinda sucks for low-light. If you want good low-light performance, then image stabilization is a must and full-frame with a lens faster than F2 makes a huge difference too. In the end I can hand-hold comfortably at 1/4th of a second, and the camera itself takes in 4x more light, so low light performance is much better than on an X100T.

I can upload the pictures I'm talking about later today, if you want.


With Google Night Sight you can basically take pictures in the dark.


I didn't say a phone camera isn't better for the average consumer... I use my phone camera far more often than my actual camera...

But there's no world in which it's technically superior to a real camera, especially one with in-camera processing or in the hands of a professional with access to and skill with professional post-processing software.


>real camera

A minor and pedantic point, but could we stop with the idea that smartphone cameras are somehow not 'real'? They are enormously sophisticated imaging devices that comprehensively outperform, for example, the 35mm film SLRs that many photographers were using in the 90s. An iPhone 14 enables you to take technically superior photos to the photos that professional photographers were taking only a couple of decades ago.


> An iPhone 14 enables you to take technically superior photos to the photos that professional photographers were taking only a couple of decades ago.

I really don’t agree, and honestly it depends on what categories you’re judging it on.

Film cameras from 20 years ago probably have better dynamic range than your phone. They probably have comparable resolution. You have a lot of options with lenses, so you can get lots of different looks.

Full frame size lets in lots of light. Photography is all about light - no amount of processing is going to make up for the size of a sensor on an iPhone. Let’s not even bring medium format into the discussion.

Hell, I'd say they could take photos technically superior to iPhones over 100 years ago with tintypes and such. (Film was actually lower resolution than what came before it.) There is lots of stunning portraiture, with a lot of clarity, from the start of photography that would be impossible to replicate with modern cameras.


Have you ever used a 35mm film camera? I've taken loads of photos on 35mm film because I enjoy the process, but the technical quality in terms of resolution and dynamic range is clearly inferior to that of a modern cell phone camera, even if you are using very high quality scans or wet printing. And the difference in color accuracy is even more stark.

The amount of light that’s let in depends entirely on the aperture diameter, not the size of the sensor. Sometimes you can use a larger diameter on a 35mm camera if you don’t need a lot of depth of field; but equally, modern digital sensors are usable at ISOs where film is not. The acid test here is night photography. If you want to take night portraits, you're going to have a far better time of it using a modern cell phone than a 35mm SLR with, say, an f1.8 lens. Even with a wide aperture lens and 400 ISO film, you'd be lucky to get a shutter speed faster than 1/15th of a second.

You can of course get more resolution with a larger format (I enjoy 4x5 myself). However, using medium or large format film cameras is a massive step down in terms of practicality and rules out entire genres of photography that phones excel at. I enjoy lugging my 4x5 film camera around and I very occasionally I get a nice photo out of it. Even when I do, the version I take on my phone is inferior in terms of resolution (not actually that important) but better in every other respect. You can drive yourself insane trying to get a 4x5 negative that doesn't have any uneven development or scratches.

The bottom line is that a modern cell phone is a vastly superior general purpose photographic tool to anything that was available in the 1990s. We all (in rich parts of the world) have access to cameras that professional photographers would have killed for a few decades ago. Thus, it annoys me when people compare these incredible devices unfavourably with "real cameras". It's pure gatekeeping.


> Have you ever used a 35mm film camera?

Yep.

> The amount of light that’s let in depends entirely on the aperture diameter, not the size of the sensor.

That's wrong. The size of the sensor and the aperture size (and of course the distance between the two) are all factors that act together here.

Saying the sensor size is the reason is also wrong, but the size of the sensor is a factor in the equation - and the sensor being so small forces manufacturers to go with wide open apertures. It’s not ideal for every shot.

> Even with a wide aperture lens and 400 ISO film

If I was shooting at night, why would I use 400 iso?

> You can drive yourself insane trying to get a 4x5 negative that doesn't have any uneven development or scratches.

Sounds like a good time for a hobbyist.

> Thus, it annoys me when people compare these incredible devices unfavourably with "real cameras". It's pure gatekeeping.

I’m not the one gatekeeping. Cell phone cameras are real cameras. They’re just different.

You say it’s good for general shooting, I’m talking about professional and hobbyist use.

I will say though, it’s just an interesting fact AFAICT - digital cameras are still behind - or are only just hitting parity - in terms of dynamic range.

https://petapixel.com/2019/05/02/film-vs-digital-this-is-how...


>That’s wrong. The size of the sensor, the aperture size (and of course the distance between the two) are all factors that together on this.

For a given angle of view, the amount of light incident on the sensor depends solely on the diameter of the aperture (the absolute diameter, not the f number). You can see this visually in the comment I made here: https://news.ycombinator.com/item?id=33426540

If you really want to think of it in terms of a combination of sensor size and f number, you can do so. But it's easier just to look at the size of the hole the light is going through – which not surprisingly, determines how much light ends up being collected, once you fix the angle of view.

>If I was shooting at night, why would I use 400 iso?

Because films with higher ISOs are unacceptably grainy for most uses.


I agree on your first point, it is the amount of light coming in through the aperture. A wide open aperture lets more light in.

The point is just that they have to balance DOF and brightness in their designs. It’s a multi-variable equation, and sensor size is one of the factors that constrains the options you have available all things considered.

Like you said

> once you fix the angle of view.

And of course they’re limited to relatively small apertures anyway.

> Because films with higher ISOs are unacceptably grainy for most uses.

Not in my experience! This is just nitpicking now.

We can keep getting more and more technical, but I think you understand how it works.

You’ve made your point. You don’t like gatekeeping. I’m not gatekeeping.


Sorry, there’s something about discussions of the relative merits of camera systems that seems to get everyone very hot under the collar. Indeed, I think we have both made our points based on our own experience.


>They are enormously sophisticated imaging devices that comprehensively outperform, for example, the 35mm film SLRs that many photographers were using in the 90s.

I'm not so sure about that. I'm impressed by what smartphone cameras do these days, but the Nikon F100 snuck into the 90s and beats the pants off my iPhone 14 Pro's camera, while still being very much in the hobbyist/prosumer price range.


Have you done any side by side comparison shots? Even with a high quality scan, you're unlikely to get the same resolution and dynamic range from a 35mm negative. And that's leaving aside the obviously vast differences in convenience and flexibility. (I'm old enough to have used 35mm SLRs, and I have absolutely no nostalgia for that era.)


I've long since lost or sold my F100, but you can go to Flickr or 500px and search for F100 photos - it's still relatively popular among people that want to shoot on film.

Doing a quick side by side comparison of the 'selfie in the woods' shot with a shot of a person with a beard on 500px ( https://500px.com/photo/89633601/ge-by-nika-topuria ), the F100 shot has similar levels of detail in the facial hair, despite being taken from farther away and with what looks to be a wider angle lens by my eyes. You can pick out single strands of hair and bits of fuzz on the subject's clothing, etc., as well.

Bokeh is, of course, massively better on the F100. And I'll take basically any of the Fuji films over the color grading in the iPhone.

Meanwhile, basically every single concert photo on flickr for the f100 absolutely destroys the iPhone in quality ( https://www.flickr.com/search/?text=nikon%20f100%20concert )

Looking at landscape shots, it does appear you'll get more detail out of the iPhone camera ( https://500px.com/photo/10782613/silent-chorus-by-chris-froe... ), so I can't claim it's universally better, but I think the idea that phone cameras "comprehensively" outperform quality film SLRs from the 90s is false.


Without side by side comparisons it's hard to draw any conclusions from some random photos on Flickr. The concert photos you link to are good photos (which is the important thing, of course) but they're hardly excellent from a pure technical perspective. Look at e.g. the blocked out shadows here: https://www.flickr.com/photos/ginandsake/32014019777/in/phot...

I guess you're just using the F100 as an example, but it's worth pointing out that the camera body is almost completely irrelevant to image quality with a 35mm film camera. It's all in the lens and the film. (Autofocus might be better on the F100 than on earlier SLRs.)

>And I'll take basically any of the Fujia films over the color grading in the iPhone.

You want the same color balance regardless of time of day or lighting? If you want to get accurate color balance with film you have to use color balancing filters to adjust for lighting and conditions.


>Look at e.g. the blocked out shadows here: https://www.flickr.com/photos/ginandsake/32014019777/in/phot...

There are tradeoffs, for sure. Even in that photo, the details you can see look better than the weird mess that iPhone 14 Pro produced at a concert the other night. I think that's largely due to whatever computational garbage is happening, but that's what the camera app gives me.

>I guess you're just using the F100 as an example, but it's worth pointing out that the camera body is almost completely irrelevant to image quality with a 35mm film camera. It's all in the lens and the film. (Autofocus might be better on the F100 than on earlier SLRs.)

Sure, though the body determines what lenses you can use, ergonomics, features like autofocus, etc. There was a lot of solid Nikkor glass available for the F100. We could also point out that the developing process is also important to quality, etc.

>You want the same color balance regardless of time of day or lighting? If you want to get accurate color balance with film you have to use color balancing filters to adjust for lighting and conditions.

I'm not sure how you get that from my statement. I think that the iPhone's color grading and computational stuff looks pretty awful, and think that Fuji made quite a few excellent films. I don't see where this says I wouldn't use a filter when needed for an SLR... I certainly make use of CPLs and NDs on my cameras today.

Though, I was misremembering when my go-to film was introduced - looks like the FujiColor NPH 400 came out in '02. I'm not sure what I was using prior to that as my standard film.


I don’t think there are any Nikkor lenses that work only on the F100. Nikon has always been pretty good about lens forward and backward compatibility.

There’s lots of nice looking iPhone concert photos on Flickr. I just don’t see any slam dunk there, sorry. But hey there's obviously a subjective component here. If you prefer the results from the F100, that's just as legitimate as whatever preference I have.

I'm confused about the color stuff because digital basically lets you do whatever color grading you want, within reason. Getting the exact colors that you want from film is an arduous process if you aren’t scanning and using a digital color grading workflow. You either need a big set of color correcting filters or you need to do complex work in the darkroom. (Color wet printing is a HUGE pain in the butt.) Back when I shot film for real in the 90s and got the films developed and printed by regular cheap consumer labs, it was pot luck how the colors came out. I think a lot of people have a kind of false nostalgia for film's color rendition based on the results of a film+digital workflow that wasn't available to regular people in the 90s. There's a similar effect with grain, which looks quite different in scans compared to wet prints.


The world where it's superior is when I'm making a short film and want to be able to shoot it without a camera rental budget. As a low-budget filmmaker, my options for cameras are

1.) My old Canon 80D. One-time purchase, decently compact, short battery life. Produces an okay-ish image with moderate effort.

2.) Renting something nicer than my Canon 80D. Bulky, requires lots of know-how to operate, expensive if I break it. Have to use special cords, cards, lenses, mounts, etc. Produces a top-of-the-line image with high effort.

3.) My iPhone 12 Mini. One-time purchase, very compact, multi-hour battery. Works indoors, outdoors, day or night. Plug-and-play, extremely user friendly. Produces a darn-good image with minimal effort.

I fully understand that it's not the best camera out there—but it seriously competes with everything up to fairly expensive professional cameras. I would not hesitate to use my iPhone for professional photography and videography if the client never found out about it (or was okay with it). At the end of the day, the ratio of quality to convenience it provides is simply higher than any other offering.


The OP didn't say it was technically superior as a camera; they said it produces better photos. Which you can argue it does, with all the fancy post-processing. The statement was only about the final result, which, to the eyes most people's photos are presented to, is excellent.


Idk man.

If I zoom out from parsing word by word...

...feels like I'm saying a Keurig "rivals and surpasses espresso machines" because it can produce a better espresso than an arbitrary espresso machine in arbitrary hands.

Yeah, true. Not very meaningful though.

There's probably a McDonald's hamburger analogy here that's better, but, here we are.

My challenge to an ambitious reader looking to comment: make that one work too.


A lot of restaurants do use Nespressos when you order a coffee at the end of a meal, I think.

…McDonalds isn't bad either these days. It's just too salty.


Couldn't you take those photos and do the processing back on the computer? Yet when I try to do HDR with 5 shots in Lightroom, it takes tens of seconds or even minutes. It seems computers haven't caught up either?


Take a photo with your iPhone and very quickly tap the thumbnail of the new image that appears in the bottom left. You can watch it progressively process the image.


Yes and no, Apple's models operate on the raw sensor data as far as I understand, and take advantage of hardware acceleration in the A-series chips' Neural Engine specifically designed for computational photography. Furthermore, Apple's models are proprietary—if you could access them, you could probably run them on your computer and get the same result, but you unfortunately can't access them.


Yeah, it's hard to compare 35mm full-frame sensors with minuscule phone camera sensors. As you said, the magic is the processing, which compensates for that.


it is hard, but it's getting a little easier with the larger cameras


Are there any large sensor cameras that are working on computational photography? Seeing what Apple and Google get out of tiny sensors in a phone makes me wonder what would be possible on a big camera with a better sensor.


There is computational photography happening on cameras, but I think it's mostly opt-in features.

Olympus and Panasonic can take 80–100MP shots by shifting the sensor a small amount and stitching multiple shots together. This can even be done handheld on the newer models. I imagine phones will get this eventually (maybe some do already).

Then there's all the subject detection auto focus available on basically every camera nowadays.
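
The sensor-shift trick is easy to sketch (a toy NumPy version; real implementations also handle demosaicing and motion between frames):

    import numpy as np

    # Four captures with the sensor shifted by half a pixel: offsets
    # (0,0), (0,1/2), (1/2,0), (1/2,1/2). Interleaving them doubles the
    # sampling density on each axis (4x the pixels overall).
    h, w = 3, 3
    frames = {                       # stand-ins for the four shifted captures
        (0, 0): np.full((h, w), 0.0),
        (0, 1): np.full((h, w), 1.0),
        (1, 0): np.full((h, w), 2.0),
        (1, 1): np.full((h, w), 3.0),
    }

    hires = np.empty((2 * h, 2 * w))
    for (dy, dx), frame in frames.items():
        hires[dy::2, dx::2] = frame  # drop each frame onto its sub-pixel grid

    print(hires.shape)  # (6, 6): 9 pixels per capture -> 36-pixel output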


> Are there any large sensor cameras that are working on computational photography?

Not sure about the current state of the art but I do know Fuji has had some pretty fancy in-camera processing for years now.

When I was more into photography however it seemed the 'culture' was more into post-processing with computer software for reasons.


The computational photography for those is done on a computer.


Where it has more power, but lacks info like how the OIS was moving as the pictures were taken, so isn’t necessarily as good.

And none of those products have nearly the sales of a smartphone to sustain R&D. Similar to how the headphone dongle of an iPhone is much cheaper and yet better quality than most audiophile equipment.


I would consider myself barely passable in skills compared to the average hobbyist photographer and have never seen in-phone computational work that I would take over what can be done manually in Lightroom, or if you want to go the automatic route, with Skylum's offerings.


Even if you aren't interested in the computational work that supposedly enhances a photo, there are other things camera phones have given us that DSLR companies could do but don't.

For example, I really like live or motion photos.


The difficulty would be recreating Night Mode yourself using phone raws, as opposed to Sony a7S raws.

Or handheld HDR stacking, though big sensors are better able to deal with that anyway.


A phone can do on-camera stuff or capture RAW and let you mess with it on your computer. Are there any big sensor cameras that give you those same options? I’m not aware of any and it baffles me that the camera companies feel so little need to innovate. I kind of wish Apple would do a dedicated big-sensor camera.

Also, are you saying you could reproduce any of the on-phone stuff easily on your computer? I’m thinking about the astrophotography modes, portrait modes, live photos, and low light features.


Innovate what? What does a pro need in-camera processing for, when you can process on a computer that is way more powerful and has a better UX for handling a large number of images?


The consumer market is a pretty big part of most DSLR sales. There really aren’t all that many professional photographers, relatively speaking.


A raw photo off a DSLR looks very bland until you process it. No one uses raw sensor data as a final product; you need "fancy" processing to make it look decent.


I always wonder what a one-off joint Apple DSLR/mirrorless could be. If they provided the smarts for a Sony or Canon or Nikon, just how good could the pictures be?

Too bad we’ll never know.


It's not a phone. It's a camera with comms and app capabilities.


Every iPhone release since the 11 I've been waiting for them to add support for Wifi 6E. So far not yet...


To be clear, it's the iPhone 14 Pro models that have the 48MP camera (with telephoto) described in the article. The non-Pro iPhone 14 camera is said to be improved somewhat over the iPhone 13 (slightly faster lens, software magic, etc), but it is not the high-end system in the iPhone 14 Pro.


The EU will force them to switch to USB-C on the next iPhone, so better start saving :)


I upgraded from an Xs Max to a 14 partway through a road trip this year. The massive difference between these phones is in video (and particularly stabilisation) but I’ve also enjoyed the 0.5 and 2-3x options on the camera.

I haven’t had enough wifi to backup to iCloud and free up space to work in raw/48MP, but you can see the sorts of shots here with the stock app and 12MP.

http://instagram.com/isaacforman/

I am carrying a GoPro, an Osmo Pocket, an Insta360 Go 2, two drones and the two phones - the 14 is the one I used as a priority because the quality and options were so good.


-- anyone else find it annoying how much the lens protrudes? --


As long as it's smaller than a smartphone plus an RX100 taped together I consider it a net win.


You’re not wrong at all. My 12 Pro was much better, but still a bit annoying.

But optics can’t beat physics. Better cameras need more space for lenses and sensors. They could make the whole phone thicker, but haven’t done that yet.

The trajectory seems unsustainable. We’ll see what happens I guess.


Yeah, not a fan; my iPhone 12 mini is bad enough. I get that taking a photo and posting it to Instagram is 95% of the use case for the iPhone, but I still don't like the way the camera lenses protrude.


Since the iPhone 11, the primary stated purpose of the Pro variants has been photography.

For users less interested in photography, the regular iPhones have a more subtle camera array.


Only marginally. The iPhone 14 non-Pro has the (also enormous) camera system from the 13 Pro, only with the telephoto lens removed.



Yes, it is a minor annoyance, and it actually seems to run counter to Apple's historical product design.


Apple? The company that added a notch to a laptop? :-))


When you say annoying, what do you mean specifically?


VERY disappointed there is no 48MP RAW available.

The iPhone cameras are superb I think, but the "Apple Image Processing" renders stock camera photos useless for me. The watercolor smear and color smoothness that the article talks about. I like noise, don't smooth away the reality of life.

ProRAW is confusing because it is not RAW at all; instead, it's an uncompressed image that has had a milder "Apple Image Processing" applied.

I've tested the iPhone 14 Pro, and the ProRAW files are too processed for me. They are not "reality". This is a philosophical question: do I want to capture what I see, or a smoothed watercolor ideal of what Apple thinks I should see?

I want a 48MP genuine RAW file that I can post-process in Lightroom or Photoshop. Here's looking forward to the iPhone 15 Pro!


These relatively cheap tiny cameras would be so useful in so many other products, from medical to sports.

It is really a shame that most new consumer tech is locked behind the doors of large corporations these days, that will keep the tech from reaching its true potential in a myriad of products.


I've been looking to buy "portrait focal length" cameras. You can get the AliExpress equivalent of a standard phone camera, dressed up as compatible to Raspberry Pi, and somebody even brought wide-angles to that market once, but no-one seems to have the long focal lengths.

Optics is weird. Either a device becomes a thing and then it's immediately a huge product, or the availability is straight up zero.


Are you sure these cameras are cheap? Regardless, these are regularly off-the-shelf sensors that "anyone" can buy (such as the ISOCELL GN1 https://semiconductor.samsung.com/image-sensor/mobile-image-... ).


> And yes: 48 megapixel capture is slow. We’re talking up to 4 seconds of capture time slow

Pretty disappointing that a US$1000+ flagship device still can't batch capture at high resolution. Even entry-level DSLRs/mirrorless cameras can do 15+ fps in RAW mode.


Hell, my Nokia Lumia 1020 was faster at its full 41MP resolution.


I just can't help but notice how soft so many of these shots look. The composition and lighting is beautiful, the immediate impact of the photo is great, but then if you start to really look for more than a few seconds on desktop, you see how smudgy so much of the detail is.

I recently gave up my mirrorless in favor of my iPhone because the latter was just so much more convenient and largely good enough. I wonder if it is physically possible, however, for these smaller lens and sensor packages to ever get to the point of eliminating that phone camera smudge?


A bit of knowledge of optics and photography is enough to know that a little sensor with miniature glass can't compete with a dedicated camera. (E.g., regardless of the "equivalent" field of view via crop factor, a 50mm and a 10mm lens are going to produce very different depth of field.)

You can definitely take nice photos with it given the right variables and/or some creativity (as you could with a point-and-shoot a decade ago)... but don't be taken in by their marketing if photography is a goal for you.


The size of the sensor isn't as important as people think it is. What really matters is the diameter of the aperture (the absolute diameter, not the f number). Consider a cone of light for a given angle of view hitting a small sensor close to the aperture and a large sensor further from the aperture:

         o < aperture of a given diameter
        /\
       /  \
       ----    < small sensor (less area, more light per unit area)
      /    \
     /      \
     --------  < large sensor (more area, less light per unit area)
If you compare typical shooting apertures for DSLRs and camera phones, they're not radically different. Say you are shooting a 50mm lens at f8 on a DSLR. That's an aperture of 6.25mm. A typical smartphone camera will have an aperture of around 3-4mm. In this scenario, then, the DSLR is getting about 3 times more light (or ~1.5 stops).

Of course you can use much wider apertures on DSLRs, but their use is more limited given the shallow depth of field that results. If you're shooting e.g. landscapes, then you're probably not going to use apertures much wider than f8 anyway.
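
To put numbers on that, a quick sketch in Python (the 3.5mm phone pupil is an assumed, typical value):

    import math

    # Light gathered at a fixed angle of view scales with the area of the
    # entrance pupil, i.e. with aperture diameter squared.
    dslr_pupil_mm = 50 / 8   # 50mm lens at f/8 -> 6.25mm pupil
    phone_pupil_mm = 3.5     # assumed typical smartphone pupil

    ratio = (dslr_pupil_mm / phone_pupil_mm) ** 2
    print(f"DSLR gathers ~{ratio:.1f}x the light "
          f"(~{math.log2(ratio):.1f} stops)")  # ~3.2x, ~1.7 stops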


This is related to the conservation of Etendue [1] in an optical system, which is basically a statement of conservation of power: you rightly point out that radiant flux is determined by the source and constant – and for that reason, the primary numerical aperture or f-number of the lens is ultimately what really matters – assuming, as you point out, that you want to use the narrower DoF that arises (in which case SNR scales as the square root of sensor area).

However, sensors get noise from different sources: and while you're right to point out that you might be up against photon shot noise, read noise goes down with pixel area: so, as long as pixel area scales with sensor area, and that scaling is performed by uniformly scaling the pixel, the larger sensor is intrinsically "a little bit better". Quoting shamelessly again from wikipedia [2]

> The read noise is the total of all the electronic noises in the conversion chain for the pixels in the sensor array. To compare it with photon noise, it must be referred back to its equivalent in photoelectrons, which requires the division of the noise measured in volts by the conversion gain of the pixel. This is given, for an active pixel sensor, by the voltage at the input (gate) of the read transistor divided by the charge which generates that voltage, CG = V_{rt}/Q_{rt}. This is the inverse of the capacitance of the read transistor gate (and the attached floating diffusion) since capacitance C = Q/V. Thus CG = 1/C_{rt}.

As capacitance is proportional to area, pixel area matters here; read noise is linearly proportional to it. In low-light conditions, read noise dominates most cellphone sensors (mostly for the reasons above).
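
To make the noise budget concrete, a toy SNR calculation in Python (illustrative electron counts, not measured values):

    import math

    def snr(photons, read_noise_e):
        # Signal over total noise: shot-noise variance equals the photon
        # count (Poisson); read noise adds in quadrature.
        return photons / math.sqrt(photons + read_noise_e ** 2)

    # Bright light: shot-noise limited, read noise barely matters.
    print(snr(10_000, 1.5), snr(10_000, 3.0))  # ~100.0 vs ~99.9

    # Low light: read noise dominates, so pixel-level differences show.
    print(snr(20, 1.5), snr(20, 3.0))          # ~4.2 vs ~3.7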

[1] https://en.wikipedia.org/wiki/Etendue [2] https://en.wikipedia.org/wiki/Image_sensor_format#Read_noise


That is a good point that I hadn't considered, thanks.

Are smaller sensors also faster to read, given the lower capacitance? I wonder if that might give them an advantage when it comes to stacking and averaging images to reduce noise.


> I wonder if that might give them an advantage when it comes to stacking and averaging images to reduce noise.

That's another good question and (as ever) the devil is in the engineering detail. I work with noise professionally (well, with signal in a low-SNR environment, where the SNR per acquisition of interest is definitely << 1...) and those sorts of questions are good to ask, but the answers depend a lot on the exact noise stats: your approach requires that the average value of the noise is the same in both cases (read out N times and digitally average, vs. read out once, i.e. "analogue" averaging). In practice, because the distributions of the different noise sources differ and images are positive semi-definite, I doubt this is true. The advantage of stacking is that it helps with motion (a lot) and also helps with a finite-dynamic-range instrument (i.e. you don't blow out the exposure and can do gain compression).

> Are smaller sensors also faster to read, given the lower capacitance?

This Stanford paper with a model of a CMOS sensor [1] is rather old but quite a good explanation of where the readout time comes from; the capacitance across the active area is C_{pd} but the minimum read-out time is dominated by the capacitance of the readout bias, C_T, across the ADC line. As a result it scales by transistor feature size (fig 5) independent of the sensor area. Of course, as 'moar megapixelz' came along, this got higher and other designs were explored to mitigate it – a paper from Rochester [2] states that removing it buggers up the noise statistics unless you do "clever things" (which they describe in detail).

> That is a good point that I hadn't considered, thanks.

I should procrastinate more productively, but thank you!

[1] https://isl.stanford.edu/~abbas/group/papers_and_pub/tcas1.p... [2] https://sci-hub.se/10.1109/ISCAS.2008.4541803


>The advantage of stacking is that helps with motion (a lot)

Yeah, that was my assumption that I didn't articulate very clearly. For the same total exposure time, you wouldn't intuitively expect an average of multiple short exposures to be much different from a single long exposure (all depends on the mathematical details, as you say). But of course if you have a good alignment algorithm, you can often in practice use a much longer total exposure time. Then, to state the obvious, you can use a lower ISO and get less noise.

Is it the case that C_T is independent of C_{pd}? Looking at the (obviously rather idealized) circuit diagram, it seems odd that their values would be entirely independent in a realistic design. I guess I am basing my intuition on the simpler case of reading a single photodiode, where it is undoubtedly the case that the larger capacitance of a larger photodiode makes it more difficult to achieve a high bandwidth. Perhaps 'read time' as such is not the issue. But being able to scale up the individual photodiodes without sacrificing bandwidth seems a bit magical.
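
For what it's worth, a toy simulation of the stacking question (assuming Poisson shot noise plus independent Gaussian read noise per readout; all numbers illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    rate, read_noise, n_frames, n_px = 50.0, 3.0, 8, 100_000

    def expose(reads):
        # `reads` readouts covering the same total exposure time, then summed.
        per_frame = rate / reads
        frames = (rng.poisson(per_frame, (reads, n_px))
                  + rng.normal(0.0, read_noise, (reads, n_px)))
        return frames.sum(axis=0)

    print(expose(1).std(), expose(n_frames).std())
    # ~7.7 vs ~11.0: at fixed total exposure the stack pays read noise once
    # per readout, so stacking only wins if it lets you extend the total
    # exposure (e.g. via alignment to beat handshake).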


I guess the physics of the problem are really against these tiny cameras, but even so the bokeh on the caterpillar macro shot is surprisingly painful. Is there anything they can do about that, optically or computationally?


A real macro lens on a DSLR is like that. You could stop the lens down for more to be in focus, but then because you’re getting close to a pinhole camera you need a TON of light.

Maybe the iPhone would do better in studio conditions, I don’t know.

But it’s a very tough problem.


No? If I took that photo with my 100mm f/2.8 EF macro, everything behind the caterpillar would just be vaguely green. There'd be no structure to it at all; it would be way out of focus. The DoF even at f/11 would be ~1cm.

The problem with the photo in the article is the structure of the background is quite apparent and all the details in the background have been multiplied into hexagons which is very distracting.

Directly comparable macro photograph of a moth. https://www.slrphotographyguide.com/images/butterflymacro.jp...


Oh, I thought you wanted more depth of field.

My mistake.


I still think the Pixel camera is much better than the iPhone's. I guess these Lux apps play a niche role on iOS, since Android has many competing camera apps.


To me that's a downgrade, it should be smaller, not bigger


The author clearly knows a lot about photography, so I don't think this is a mistake in the article... When did "depth of field" begin meaning the opposite of what it traditionally meant in photography?

For example: A nice side benefit of a larger sensor is a bit more depth of field. At a 13mm full-frame equivalent, you really can’t expect too much subject separation, but this shot shows some nice blurry figures in the background.

For as long as I can remember, depth of field referred to the range of distance that is in focus. So more depth of field would mean more things in focus and less subject separation. And "distance to subject" and "field of view" equal, a larger sensor results in less depth of field.

But in the article it is clearly the opposite. This isn't the only place I've noticed the change in meaning either.
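
For reference, the traditional usage in a rough Python sketch (thin-lens approximation; the ~3.5x crop factor and circle-of-confusion values are assumed, ballpark numbers):

    def total_dof_mm(f_mm, n, subject_mm, coc_mm):
        # Approximate total depth of field, valid well inside the
        # hyperfocal distance: DoF ~= 2 * N * c * d^2 / f^2.
        return 2 * n * coc_mm * subject_mm ** 2 / f_mm ** 2

    # Same framing (24mm-equivalent view, subject at 2m) and same f-number:
    # full frame uses a 24mm lens with c ~= 0.03mm; a ~3.5x-crop phone
    # sensor uses a ~6.9mm lens, with c scaled down by the same factor.
    ff = total_dof_mm(24, 1.8, 2000, 0.03)
    phone = total_dof_mm(24 / 3.5, 1.8, 2000, 0.03 / 3.5)
    print(f"{ff:.0f} mm vs {phone:.0f} mm")  # ~750 vs ~2600: the larger
    # sensor has *less* depth of field, i.e. more subject separation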


Perhaps it’s technically inaccurate but seems easy to read it as:

“A nice side benefit of a larger sensor is a bit more depth of field effect.”

E.g., referring to the visual effect of shallow DoF, not the DoF itself. Because the following sentence is unambiguous about his intent, I'm inclined not to be overly pedantic here and let it slide.


I think you're right. The point they were trying to make is that there is more separation between the subject and the background. So, in a way, an _improved_ DOF if not a _greater_ or _shallower_ DOF.

Maybe?


You're right; he should have written "shallower depth of field".


I think it stems from the "depth of field effect" tool in post-processing, which led people to call the visual effect "depth of field" (instead of "shallowness of field", I guess).


I'd guess that Photoshop et al. are the cause, with virtual depth of field effects.


Because “depth of field is when blurry”


Why is "huge" not in the title ?

edit: it's fixed now


The HN game:

- Was it editorialized by the submitter?

- Was it changed by dang to de-clickbait it? (probably not in this case)

- Did one of HN's title filters remove some words?


That is strange. lol


Probably unintentional, but enough to get flagged for "editorializing" the title.


Because it's huge in clickbait. I've put it back in the title above.


character limit is my guess


These pictures are fantastic. For anyone but professionals the reasons to buy a DSLR or mirrorless are virtually gone.


I disagree, and as a n00b/amateur I recently picked up my first "real" camera, a Fujifilm X-T20. I've managed to take some amazing photos that simply wouldn't have turned out as well on my iPhone 12.

I was sick of the smudgy look that often happened on the iPhone when the lighting wasn't perfect, and also there is a unique "look" that the Fuji mirrorless cameras spit out due to their X-Trans sensor[1]. In my short 2 weeks with the camera I've had a ton of fun and gotten some great shots.

While no doubt the 14 pro is amazing, your statement isn't true.

[1]: https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor


> I disagree, and as a n00b/amateur I recently picked up my first "real" camera, a Fujifilm X-T20.

Welcome to the hobby! The X-T20 is a great camera.

> also there is a unique "look" that the Fuji mirrorless cameras spit out due to their X-Trans sensor[1]

The performance of the Fuji sensor isn't the main reason behind that look; for a sensor-type comparison, there's a great article here:

https://medium.com/@nevermindhim/x-trans-vs-bayer-fantastic-...

Fuji cameras have built-in post-processing that lets them render images closer to film stock. Fuji has spent a lot of time refining this processing to match how images would look on their own film stocks, and they're really the only manufacturer that achieves such great out-of-camera processed images.

If you shoot RAW, you'll find the output from a Fuji isn't much to look at when you process the files yourself in RAW tools like Darktable; it's fairly similar to any other mirrorless or DSLR. You'd really have to be pixel-peeping most of the time to see a noticeable difference (assuming the lens etc. is very similar). When working in RAW, you can often get similar processed colours and images out of most cameras, so this really shouldn't be a limiting factor. From my Android phone, I can save RAWs and generally achieve similar colours and overall appearance, although, due to the sensor size, the result will be notably lower quality if you start pixel-peeping, and there's generally less colour depth to work with on mobile sensors.

Edit: Had the wrong link for the article.


Word! Thanks for the additional info; reading that article now. I'm sticking to JPEGs and leaning on that Fuji sensor/processing for now. I mostly got this camera to document my family and 9-month-old; time is limited at the moment and post-processing is low on the list :)

Maybe one day I'll start messing around with the RAWs though!


That smudgy, painterly look on phones is the phone trying to remove noise from the image.


Yes, I'm aware, and my point is that it doesn't happen with a better sensor/camera and the absence of "smart" processing. Even my small compact point-and-shoot would take sharper photos in the same lighting than my iPhone 12. As mentioned in the article, the iPhone's processing seems to have become more aggressive as well.


> For anyone but professionals the reasons to buy a DSLR or mirrorless are virtually gone.

I'd say, aesthetically, it's very difficult to get decent background separation on a phone. The practical reason for using a camera body (I kind of consider phones to be real cameras) is to deliver quality images and aesthetics that are not trivially achievable on the phone. There's a point where trying to use a phone the same way as a camera body becomes really awkward, both from a user-interface perspective and in raw capability.

It's actually a bit infuriating how little camera bodies have modernized. You won't find many camera bodies with built-in GPS, any camera security options (anti-theft, preventing photo access, automatic cloud backup), and I haven't found a single camera manufacturer's phone app (tethered to the camera) that actually works decently with RAW workflows. There is demand for in-camera post-processing too; not from every photographer, but Fuji users in particular tend to choose that camera for its processing capabilities.

The lack of GPS is actually a pain for professionals who work in teams, even when you aren't using it for positioning. If you're covering an event without GPS, your camera bodies usually won't have well-synchronised clocks, and if you're trying to merge photos from multiple photographers into a second-by-second story, the metadata quickly leads to a jumbled mess. Meanwhile, most phones keep time so accurately that imported images are typically perfectly ordered.


Since my wife bought a Canon 90D for the shots that matter, I've been "meh" about camera phones.

I wish I could just have a simple dumb phone with a calendar, GPS, and texting that doesn't cost 1400 bucks.


You can have that. Just don't buy a new model "pro" iPhone...


Any cheap Android phone offers all that for 200 or even less.



