Experimental Nighttime Photography with Nexus and Pixel (googleblog.com)
380 points by monort on April 25, 2017 | 74 comments



Amateur photographer here.

This should be the future of DSLRs. Provide some sort of API so that I can create recipes for my photography project. Bonus if the hardware is powerful enough for me to process images the way I want to on it (as opposed to the built-in, mostly useless, features).

As a silly example, say I want to take 10 photos. I want the first photo at 1/30s, the next at 1/15s, and so on - doubling the exposure time each time. I just want to be able to program this and assign it to a button/menu item, so it will run automatically.

Or I want to do custom focus stacking. It should automatically take N shots at predefined focal distances and, if powerful enough, stack them.
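
A minimal sketch of what such "recipes" could look like, assuming a hypothetical capture() call exposed by some camera API (not any real SDK):

  # Two of the recipes described above, written against an imaginary
  # capture(shutter_s=None, focus_m=None) function.
  def exposure_doubling_bracket(capture, start_s=1/30, shots=10):
      """Take `shots` frames, doubling the exposure time each frame."""
      t = start_s
      for _ in range(shots):
          capture(shutter_s=t)
          t *= 2  # 1/30s, 1/15s, ~1/8s, ...

  def focus_bracket(capture, distances_m=(0.5, 1, 2, 5, float("inf"))):
      """Take one frame at each predefined focal distance for later stacking."""
      for d in distances_m:
          capture(focus_m=d)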

I've never coded Android apps, so I don't know how much control over the camera is exposed to you, but why can't camera companies provide the same level of control?


You can do this sort of thing already with Magic Lantern[1] on a Canon DSLR. It has a lot of features for running large custom brackets, complex intervalometer sequences, focus bracketing and so on. If that's not enough there's also a Lua scripting capability[2] that lets you control the camera programmatically.

The one thing it doesn't do is much in the way of image processing on board the camera. Generally it is left to the user to perform post-processing on a PC - partly due to lack of CPU power on the camera, and partly because you can get far better tools and hence results on a PC anyway.

[1] http://www.magiclantern.fm/

[2] http://www.magiclantern.fm/forum/index.php?topic=14828.0


Cool.

I'd heard of Magic Lantern, but never looked into it as it's for Canons. I own a Pentax :-(


If you're looking to try out camera scripting on the cheap, Canon high-end P&S / superzoom cameras also support it (through CHDK). With something like a used ~$100 Canon SX40 you can get relatively decent image quality.


CHDK is amazing and supports many Canon PowerShot and IXUS cameras (http://mighty-hoernsche.de).

It inserts hooks into the Canon firmware to give you many professional features, and supports scripting so you can customise what it does.


http://www.pktether.com/

I haven't used it (yet) but it might do what you need.

Also, the Android (and probably iPhone) app stores have many apps like this: https://play.google.com/store/apps/details?id=de.dslrremote


For more recent Pentax cameras pktriggercord [1] seems to be the best option. It's a shame Pentax can't just use PTP for tethering instead of their weird extensions to the USB mass storage mode.

[1] https://github.com/asalamon74/pktriggercord


Amateur astrophotographer here.

The techniques in that blog post have been known for at least ten years. In fact they are quite crude: they don't account for other types of noise like readout noise, and they don't build a map of individual pixel sensitivity by taking "flat" frames.

Stacking pictures is the foundation of astrophotography and there are many free utilities that do this, for example:

http://deepskystacker.free.fr/

BTW digital cameras already take a "black" frame and subtract it from the "light" frame automatically; that's why the camera sometimes takes a while to show you a long-exposure picture: it's busy taking a black frame with the same exposure time.
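
For the curious, a minimal numpy sketch of the basic "stack plus black-frame subtraction" idea, assuming the frames are already loaded as aligned float arrays (this is not the article's exact pipeline):

  import numpy as np

  def calibrate_and_stack(light_frames, dark_frames):
      # Estimate fixed-pattern/thermal noise from the "black" frames.
      master_dark = np.median(np.stack(dark_frames), axis=0)
      # Subtract it from each exposure, then average to beat down shot noise
      # (roughly by a factor of sqrt(N)).
      calibrated = [f - master_dark for f in light_frames]
      return np.mean(np.stack(calibrated), axis=0)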


I'm well aware of all these techniques. However, I'm looking for more flexibility, like the type I mentioned in my comment.

Something like auto focus stacking. The camera need not do the stacking - just automatically focus on various portions of the scene and take pictures. I can then stack on my computer. Currently I have to focus at one spot, take the picture, then focus on another, take the picture, and so on. What I'd like to do is specify, say, five focus points and have it do the rest of the work.

The in-built bracketing capability in cameras is really minimal.

I was looking at Magic Lantern, and it has a time lapse that essentially maintains a constant brightness - so when the day transitions into night, the camera auto-adjusts the exposure to make sure the subject does not become darker.

Lots of possibilities.


You make it sound like DSLRs do this automatically, which is not the case. My Nikon doesn't, and my old Canon had the option but it wasn't on by default.


I've never tried it before, but Sony allows you to purchase third party camera apps to do such a thing. Example: https://www.playmemoriescameraapps.com/portal/usbdetail.php?...


You can even write your own apps, but since that's not officially supported by Sony the process is probably quite painful. See https://github.com/ma1co/OpenMemories-Framework


That's cool.

I never considered Sony. Too limited in terms of lens choices. But if this capability is flexible/powerful enough, then it does kind of compensate.


Sony has gotten a lot better in the last 5 years. Their OLED viewfinders are an incredible improvement in photography.


This is an excellent idea, but I suspect the camera manufacturers aren't keen on it as they use software features for price discrimination.

There are some options available: http://chdk.wikia.com/wiki/CHDK , http://magiclantern.wikia.com/wiki/Magic_Lantern_Firmware_Wi...


Other people have mentioned Magic Lantern, but it's also possible from an Android device: there's an app called DSLR Controller which lets you do photo bracketing. You just use an OTG cable, and I think if your camera supports it you can connect over Wi-Fi too.


> there's an app called DSLR Controller

Link to said app: https://play.google.com/store/apps/details?id=eu.chainfire.d...

DSLR Controller is written by Chainfire, the same developer who created SuperSU, FlashFire, CF-Auto-Root, etc.


> This should be the future of DSLRs. Provide some sort of API so that I can create recipes for my photography project. Bonus if the hardware is powerful enough for me to process images the way I want to on it (as opposed to the built-in, mostly useless, features).

Most DSLRs already have something like that. Both Nikon and Canon, at least, have free SDKs downloadable from their websites, and on Linux gphoto supports a ton of cameras.

As far as doing everything on camera, most higher end DSLRs have built-in controls for bracketing, multiple exposures, time lapse, etc.


Sadly, Nikon's SDKs are quite limited, especially in terms of which devices they support.


Limited in what ways?

I haven't used it directly myself, but Capture One uses the Nikon SDK, and it's able to fully control my camera with live view. In fact, they even have an iPad app for controlling the camera, with live view on the iPad.


DSLR cameras should integrate Android and a phone SoC inside.

Get the best of both worlds by combining Android as the GUI/API layer with a big DSLR sensor and custom image-processing hardware magic.


I love this idea. I'm imagining a user interface something like Apple Automator, where the user selects from a library of inputs (bracketing, long/short exposures, delays), then stacks some operators (HDR merge, noise-cancelling algorithms), and then outputs (tone mapping, other post-processing). It could be a whole new sandbox to play in before an image even touches Lightroom.


Once you know some basic principles of photography, this actually shouldn't be all that surprising. The Google Nexus 6P uses a 1/2.3 inch type sensor that measures 6.17 x 4.55 mm. That gives us ~28 square mm of sensor area. The initial example image was taken with a full-frame camera with a sensor that measures 24 x 36 mm, yielding 864 square mm. That gives the DSLR ~30x the sensor area of the Nexus. Then with the same amount of light per square mm per second (measured by the f-stop of the lens) the Nexus needs to expose the image for 30x longer than the full-frame camera to gather the equivalent amount of light. It just so happens that this approach used 32 exposures - it makes sense that the results look comparable to a full-frame camera because the phone gathered just as much light.
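
A quick check of that arithmetic:

  phone_area = 6.17 * 4.55           # ~28.1 mm^2 (1/2.3" type sensor)
  full_frame = 24 * 36               # 864 mm^2
  ratio = full_frame / phone_area    # ~30.8x
  # At the same f-stop, the phone needs ~30x the total exposure time to collect
  # the same amount of light, which is roughly what 32 stacked frames provide.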


My Nexus 6p takes amazingly good photos - I rarely use an SLR now.

The main problem I have is that it shuts down after 10-30 minutes of use - no help from Google. https://code.google.com/p/android/issues/detail?id=227849 Makes me want an iPhone. (See below - Huawei might be fixing it.)


There's actually a class action lawsuit related to this issue [1]. I first read about it here [2]. I've got a Nexus 6P too and it's great except for the battery and shutdown issues. Mine will shut down at like 20% or so, so it's still very usable, but it's often at night when I most need to use my phone (calls, rides, etc.).

[1] - https://chimicles.com/google-nexus-6p-battery-early-shutoff-...

[2] - https://www.engadget.com/2017/04/21/lawsuit-takes-aim-at-goo...


The 5x too. It may well be the same design flaw as the 6p.

https://issuetracker.google.com/issues/37117345

Despite the comments there about software, it is most likely about heat and a component that degrades over time.

I've had a 5x just go dead. Bought a Pixel; it died after a week (probably unrelated to this fault). The replacement is fine though; the phone is great overall and the camera is fantastic.


I really cannot understand why consumers persist in buying smartphones with non-removable batteries. All of these battery degradation problems are reduced to minor annoyances if you can buy a replacement battery for ~$35.


I had this issue but less severe.

I called Google (I bought it from the Play Store), and they asked me to try a couple of things in order to repair the phone (completely useless, but the operator has to follow the script).

Of course it made no difference, but right after confirming that, they sent me a replacement device.


Google replaced my 6P a few weeks ago, it was also shutting down early. It was months out of warranty (bought at launch) and they made me take lots of pointless steps like running in safe mode and doing a factory reset, but the process was straightforward.


Huawei replaced mine. They have something like a 15 month warranty from date of manufacture so check with them and see if you're still in that range.


Also, if you're in the EU, remember you have a statutory 24-month warranty for non-consumables. (The standard example of the sort of thing not covered is an oil filter, which is expected to have a shorter life.)


Contact the Google Store about this -- this is something they should replace.


Thanks - I actually bought it from Newegg so the Google Store didn't want to help. I rang Huawei today, and I think they're about to replace or repair it, so hopefully I'll have a working 6P again soon. Maybe the lawsuit helped, as previous comments I'd read gave me little hope. Huawei support is 1 888 548 2934 in case anyone else needs it.


I've been pretty happy with my Pixel XL's camera (easily keeps up with the incremental improvements in cell phone cameras over the years) but I also appreciate how they didn't make this into too much of a puff piece where they "hand wave" away the amount of work still needed in Photoshop or similar.

The takeaway is that with some effort and intelligent use of other software tools, you can put together a nice image with all sorts of lower-end cameras. The bit at the end about hopefully adding some of this functionality to software available on the phone was a nice touch, as I imagine all of the big players are always working on that sort of thing.


The photos are impressive. But considering the introduction, it would have been nice to have some actual side-by-side comparisons with a DSLR. And then, just for fun, the same exposure stacking applied to the DSLR at max ISO.



The author really should have cut down on the blue saturation in the phone examples; they're pretty ridiculous IMO. But regardless, it's an incredible result considering the sensor capabilities.


"The camera cannot handle exposure times longer than two seconds."

What's the reason for this? Is it a hardware restriction? I suppose an artificial software restriction could be removed by using root / other camera software.


Mostly hardware.

Shortest explanation: The sensor exposure time register has a maximum value.

Next shortest: But it's actually in units of row readout time, on many sensors, which is also configurable, so the exposure time can be made longer at the cost of slower image readout. In normal operation, readout has to happen at 30fps at least, so extra code is needed to switch to slower readout for extended exposure values. This code then needs validation, the image processing tuning tables need to be updated and verified for the new long exposure durations, and any preview glitches, etc, from resetting base sensor configurations need to be addressed. So a lot of extra work, for a relatively niche feature on a smartphone.

Even longer: Many sensors also have an external shutter trigger signal pin, for unlimited exposure duration. But that needs to be wired to the CPU, and all the SW considerations above also apply.
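
A toy illustration of the "exposure counted in line times" point, using made-up numbers rather than any specific sensor's register map:

  line_time_us  = 10.0      # time to read out one row at the normal readout rate
  max_exp_lines = 0xFFFF    # a hypothetical 16-bit integration-time register
  max_exposure_s = max_exp_lines * line_time_us * 1e-6            # ~0.66 s ceiling

  # Slowing the readout (longer line time) raises the ceiling, at the cost of
  # frame rate and all the retuning work described above:
  slow_line_time_us = 40.0
  max_exposure_slow_s = max_exp_lines * slow_line_time_us * 1e-6  # ~2.6 s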


Couldn't it just be a heat dissipation issue? These sensors pack a lot of photo sites into a tiny package.


To first order, long exposures are probably less power-hungry - much of the sensor power is burned in the image readout, and longer exposures mean you're reading images less often.

When collecting light, an image sensor pixel isn't really using up any active power (each pixel is basically a capacitor collecting electrons generated by light hitting the silicon).


I don't believe this is entirely correct. DSLR sensors get very hot during long exposures - to the point where excessively long exposures can introduce noise into the output from this heat. I don't see why this wouldn't apply even more so to phone sensors, with their incredibly high photosite density.

However, really long exposures are going to suffer from star trails - the Earth rotates relative to the stars, so a long exposure turns each star from a point into a short line, which _usually_ isn't what you want. On a 35mm camera with a fairly wide lens you can get away with ~30s of exposure time.

On a pixel phone I think you'd be able to get away with ~ 3s exposure time, but as it's going to get pretty hot over this time I'm not sure how much extra image quality you'd actually end up with.
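
For reference, the usual rule of thumb here is the "500 rule": roughly 500 divided by the 35mm-equivalent focal length. It dates from the film era, so it's optimistic for dense phone pixels, which is why a few seconds is a more realistic phone limit than the formula suggests:

  def max_star_exposure_s(focal_length_mm, crop_factor=1.0, rule=500):
      # Longest exposure (seconds) before stars visibly trail; very approximate.
      return rule / (focal_length_mm * crop_factor)

  max_star_exposure_s(24)   # ~21 s on full frame with a 24mm lens
  max_star_exposure_s(26)   # ~19 s by the rule for a ~26mm-equivalent phone lens,
                            # but in practice the tiny pixels show trailing much sooner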


It's interesting to note that the Nexus 5, released in 2013, could take exposures up to 30 seconds, iirc. Different sensor hardware.


The article states that the author wrote his own camera app for the experiment, so it's almost certainly not a restriction that can be worked around by using different camera software.


It's almost certainly not something you can work around with a user space Android app, but "software" covers far more than just that. By the time you're triggering the Android camera APIs you're already very far removed from the software actually talking to the hardware, which is probably where this restriction lies.


Reminds me of the low-light video camera that was on here earlier: http://kottke.org/17/04/incredible-low-light-camera-turns-ni...

Not as fast, but the results look fairly similar.


FWIW Olympus already provides in-camera shot stacking for night shots in its Micro Four Thirds cameras via their "live composite" mode. See http://www.duford.com/2016/08/explaining-olympus-live-compos... for example shots.


Incredible results!! I would really love to be able to do this on a phone.

I've tried with iPhone to take long exposures and crank up the brightness/HDR to bring out as much signal as possible. This is the best that I could do on a fully moonlit night: http://imgur.com/a/km1D9


Here's an example of what you can get on an iPhone with a 15 second exposure I just shot out my window: http://imgur.com/a/r2YKX

Look at the detail added to the mountains in the background.

I used Camera+, which stitches together images similarly to what's described in the article.


You could probably get better results by taking multiple independent shots and merging them later.


The app Hydra does this. It's not specifically meant for super dark photos so I don't think they extend exposures to the max.


400 bad request


What I don't get is why JPEG is still so commonly used when it introduces significant artifacting (even at high quality settings), while a more advanced format like WebP can deliver higher-quality images at the same file size.


From the article: "The app saves the raw frames captured from the sensor as DNG files, which can later be downloaded onto a PC for processing."


What does this have to do with the article?


The images in the article are JPEGs with visible compression artifacts, which isn't great if they want to demonstrate the quality of the photos they can make with the phone.


I think the time-lapse-style stars are a feature and not a defect, although it goes against the original challenge.

Although this article is about the challenge - decent photography with a mobile phone - it does outline how easy it is to layer up lots of images in Photoshop, median everything out and get a long exposure image. Taking out the sensor 'median' was clever too.

So you could use this with DSLR images too, to take better long exposure images whatever the sensor, so long as everything is fully HDR and manual.

I think I might just give it a go, with PHP and ImageMagick so that I can automate the Photoshop part and tweak settings easily.


By "taking out the median" you're talking about what's known as dark frame subtraction. I believe most DSLRs already do this internally for long exposures - try setting your camera for 10s exposure and time how long it takes before it's ready again; I'll bet you dollars to donuts it's about 20s. Smartphones can't do this because they lack an internal shutter.

On advanced models (and cameras hacked with Magic Lantern/CHDK), you can turn this off and do it manually, e.g. shoot a dark frame for only the first image in a series so you can get ~95% duty cycle rather than ~45%. Especially useful if you're trying to capture a rare event, e.g. lightning.

What's weird about the technique in TFA is he takes N bright frames and then N dark frames, computes both medians and then subtracts. By interleaving bright and dark frames, and doing the subtraction before stacking, I'm pretty sure they'd get better results.
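
For clarity, here is a sketch of the two pipelines being compared, assuming aligned float frames. With means the two are algebraically identical, as a later reply points out; with medians they can differ, and the capture order matters separately:

  import numpy as np

  def subtract_then_stack(brights, darks):
      return np.median(np.stack([b - d for b, d in zip(brights, darks)]), axis=0)

  def stack_then_subtract(brights, darks):
      return np.median(np.stack(brights), axis=0) - np.median(np.stack(darks), axis=0)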

As for image stacking, there's tons of ways to do it, and quite a few turnkey apps for image stacking for astrophotography as well, e.g. here's a review from last year of a Mac app doing just this:

https://petapixel.com/2016/02/20/stack-photos-epic-milky-way...


Doing all the bright frames together and all the dark frames together is surely suboptimal, but it's hardly weird: there's an obvious reason for it. (Namely, that the transition between bright and dark requires covering the camera lens with black tape and it's easier to do and undo that once than 64 times.)


If the sensor is stable over the period when the frames are shot, why would subtracting the mean of the black frames from the mean of the exposures be worse than subtracting individual black frames from individual exposures? Aside from rounding errors and possible overflows the two procedures should be equivalent:

  (e1 + e2 + ... en) / n - (b1 + b2 + ... bn) / n = (e1 - b1 + e2 - b2 + ... en - bn) / n


The suggestion isn't about what order you do the arithmetic in, but about what order you do the captures in. ++++---- versus +-+-+-+-. The former has the disadvantage that if something that affects the images (temperature, say) is varying gradually, or changes abruptly at a particular point, it's easier for it to have substantially different effects on the light and the dark frames.

It's possible that something like Thue-Morse (+--+-++--++-+--+ etc.; one way to define this is to look at the parity of the number of 1-bits in the binary representation of the frame number) might be better than alternating. If whatever disturbances you might worry about are smooth in the right sort of way, it gives you more exact cancellation than alternating.
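
A tiny sketch of generating those two capture orders ('+' for a light frame, '-' for a dark frame):

  def thue_morse(n_frames):
      # Parity of the number of 1-bits in the frame index.
      return ''.join('-' if bin(i).count('1') % 2 else '+' for i in range(n_frames))

  def alternating(n_frames):
      return '+-' * (n_frames // 2)

  print(thue_morse(16))   # +--+-++--++-+--+
  print(alternating(16))  # +-+-+-+-+-+-+-+-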


Well, at night you just need to cover the camera with some thick black cloth, which takes <1 sec - no need for tape.


If this could become an app, it'd be great.


I'm even more interested in mentioned SeeInTheDark app (that's unfortunately not available yet as it's part of Google research). The video demo is so impressive. I don't really care so much about photography, but decent sensors and optics combined with innovative processing in real-time is really starting to pick up.

I've already started using my phone's camera app to enhance my vision for text that's too far away to read. Advances in post-processing will extend our vision even more, as well as allow for night vision and understanding text in other languages. We are transitioning into cyborgs by means of the Android platform...


Seems like quite a cool app and simple to implement as well. Aligning the images if the phone is not held still could be a problem.


It's great that they are investing so much in the camera. In modern smartphones I don't feel any difference in performance between my Moto Play and my S7 Edge. It's just the camera that makes me keep coming back to the S7 Edge, despite the $300 price difference. It's just much more convenient, and I have stopped bringing my DSLR on trips now since the phone camera works really well.


I think it would be interesting to compare an image shot in the daylight with the one created with this processing scheme in the same daylight. I expect the normal exposure shot to be better than the processed one, but it would be interesting to observe the difference in quality.


For the stars shot, the next obvious step would be to automatically segment the image with optical flow. Could also try solving the hand-held problem with ORB feature matching (see the sketch below).

Given the previous posts from the Google blog, I kind of expected a bit more algorithmic involvement beyond image stacking.
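
A rough sketch of the ORB idea with OpenCV, for anyone curious - warp each hand-held frame onto a reference frame before stacking. It assumes the frames have enough texture for feature matching, which very dark sky frames may not:

  import cv2
  import numpy as np

  def align_to_reference(ref_gray, frame_gray):
      orb = cv2.ORB_create(2000)
      kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
      kp_frm, des_frm = orb.detectAndCompute(frame_gray, None)
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
      matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:500]
      dst = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
      src = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
      # Estimate the homography that maps the frame onto the reference, then warp.
      H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
      h, w = ref_gray.shape
      return cv2.warpPerspective(frame_gray, H, (w, h))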


I know on the XDA forums, at least for the 5x, there is an app someone threw together to do long exposure times. I played with it some and got a pixelated mess, but I didn't try a tripod. Some people on the thread produced some pretty good pictures.


Interesting work. However, in order to reproduce the Marin Headlands DSLR image, the smartphone would have needed to also stitch a panorama from multiple frames (because the lens used in the DSLR shot has roughly 2x the FOV angle).


>Still, this may be the lowest-light cellphone photo ever taken.

Not so fast buddy. Some of us have been playing with this stuff for a really long time :)


Fascinating article. Thanks for all the work.


Why take the mean instead of the median of the images? Wouldn't the median be less sensitive to outliers?


Cameras, like so many things, have been increasingly software limited during recent years.



