2bit Astrophotography with the Game Boy Camera (leidenuniv.nl)
222 points by cremno on July 4, 2017 | hide | past | favorite | 55 comments



I doubt the CMOS is only 2 bits; otherwise we wouldn't get such a nice dithering pattern. Somewhere in RAM there is probably, at some point, a representation of the image with at least 3 bits per pixel.


The camera outputs analog values. I don't know for sure but I bet the dithering is applied right after the analog pixel is converted to digital...


Yep, here's the technical data sheet for the chip:

https://people.ece.cornell.edu/land/courses/ece4760/FinalPro...


Looks like the maximum supported exposure time is ~1 second. That, combined with its fairly shallow well depth, probably means the CMOS would be best suited for lucky imaging. Adding a cooler to the back of the Game Boy Camera may also help reduce amp glow. Anyone have the specs on the ADC chip?


There is some reverse-engineered documentation here:

https://github.com/AntonioND/gbcam-rev-engineer/tree/master/...

The ADC appears to be part of the ASIC which contains most of the circuitry to interface the sensor to the GB.


"Artificial Retina LSI"... beautiful. :D


It's an analog output. You can very easily add an ADC to get higher bits per pixel and bypass the whole digital Gameboy interface. I did this once when the game boy camera was about the cheapest digital camera available. I even converted it to color by taking successive photos with colored filters and combining them in software.

Curiously, the camera has hardware edge detection which might have helped with very early machine vision.


It is the Game Boy screen that has only two bits.


That's worthy of the term 'hack'. Very neat :)

There should be some silly way of doing this with just a single photo diode and a speaker to provide one axis of motion and to let the Earth's rotation provide the other axis.


You can do this even without any motion and possibly even with no focusing optics, just with a single sensor, diffraction grating and a lot of math. This is called single-pixel imaging and is an ongoing research topic in computational imaging.


Could you explain in layman's terms how that would work? If you're only obtaining a single value for a single pixel and there's no movement, how does the pixel value change even if there's a grating in front? Or do you mean the grating can still move?


The pattern still needs to be different at each sampling, yes. Moving the grating is one way to do it, or it can be done with a micromirror array, for example.
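As a toy illustration of the idea (nothing to do with any specific hardware; the scene, mask count and sizes are all made up): a single detector behind a sequence of random on/off masks gives one scalar reading per mask, and with enough masks the image falls out of a plain linear solve, no compressed sensing required:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 "scene": two stars on a dark background.
n = 8
scene = np.zeros((n, n))
scene[2, 5] = 1.0
scene[5, 3] = 0.5
x = scene.ravel()

# Each measurement: the single detector sums the scene through one
# random on/off mask (e.g. set by a micromirror array).
m = 4 * n * n                      # oversample so least squares is well-posed
masks = rng.integers(0, 2, size=(m, n * n)).astype(float)
y = masks @ x                      # one scalar reading per mask

# Recover the image by solving the (noiseless, overdetermined) system.
x_hat, *_ = np.linalg.lstsq(masks, y, rcond=None)
recovered = x_hat.reshape(n, n)

print(np.allclose(recovered, scene, atol=1e-6))  # → True
```

With noisy readings or far fewer masks than pixels, this is where the "lot of math" (sparse reconstruction) comes in.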


Gotcha, cheers! A while ago I saw a single-mirror MEMS device that could be useful for something like this; I believe you could control the horizontal/vertical angle.


What about noise? I used to do long-exposure photos (~2 h) of building interiors on film, but digital cameras seem challenged by that to me.


That's very interesting re. film; I'd be curious why that is. I've only ever done exposures of up to around a couple of minutes with film & digital. Out of curiosity, do you have any examples that you shot for 2 hours? Was that with ISO 50 or less, or did you use a darkening filter too?

Edit: I wonder if it is due to reciprocity failure with film - https://en.wikipedia.org/wiki/Reciprocity_(photography) ?


125 ASA FP4 roll film (all monochrome and hand developed, needless to say; I used a Rolleiflex camera on a tripod when I was playing with this stuff decades ago).

I found the correction chart below on t'web [1] just now.

I recollect using 2.5x per stop over 1 second which gives slightly longer exposures than [1], but then I did tend to 'pull' the development a bit (contrast goes up), so a 2 minute metered exposure would be 2.5^7 seconds (10 minutes) instead of 2^7 seconds roughly. A 10 minute exposure on the meter corresponds to two hours and a very significant contrast hike - I recollect printing on grade 1 and grade half filter settings (multigrade paper). I only used extreme exposure times in old dark interiors e.g. churches in the UK in November or something. Inside so no need for any filters - not astro.

I'll dig a few prints out and scan them over the weekend. Nowt astounding. See if you can find a book about Edwin Smith if you are near a library with a very good photography collection.

I have a fantasy of using 10x8 film with a pinhole camera and just doing contact prints... but then I remember the darkroom days and the amount of water that got used up...

[1] http://home.earthlink.net/~kitathome/LunarLight/moonlight_ga...
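The correction described above can be written out explicitly. This is just a sketch of the commenter's own 2.5x-per-stop rule of thumb, not a general reciprocity formula (real correction curves are film-specific, as the linked chart shows):

```python
import math

def corrected_exposure(metered_seconds):
    """Commenter's rule of thumb: 2.5x per stop beyond 1 second,
    instead of the nominal 2x, to compensate for reciprocity failure."""
    if metered_seconds <= 1:
        return metered_seconds  # no correction needed at short exposures
    stops_over_1s = math.log2(metered_seconds)
    return 2.5 ** stops_over_1s

# A ~2 minute metered exposure is 7 stops over 1 s (2^7 = 128 s);
# corrected it becomes 2.5^7 ≈ 610 s, i.e. roughly 10 minutes.
print(round(corrected_exposure(128)))  # → 610
```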


That chart looks very interesting, thanks!

Cheers, I'd be very curious to see your prints.

I've just had a little look on Google Images at some of Edwin Smith's photos; they look stunning! The light in them looks amazing.

I still have an old film camera, a Pentax K1000 I think; I'll have to dig it out again some time :) I'd really like to get a medium format camera, but they're still not super cheap. I was looking at the Mamiya RZ67 (I'd actually like to try shooting landscape with it, which could be impractical due to the weight, but would be fun nonetheless ;).

10x8 film would be awesome to play with!


Well I did say fantasy - a sheet of 10 x 8 film is the same cost as a roll of 35mm (similar area if you think of a contact sheet).

A 35mm camera on a tripod with shutter release would allow experimentation and allow you to decide about weight and carrying for hiking. I used the long exposures mainly in buildings in a city.

My images are not on Edwin Smith's level by any means, but the movement of the Sun direction over an hour does make a sort of smoothing of the light.


That's why most actual astrophotography is done with Peltier-cooled CCDs: firstly, a lower temperature means less noise, and more importantly, more consistent noise. You can then shoot dark, bias and flat frames, average each set, then subtract the darks and biases from (and divide the flats out of) your stacked light frames (averaged using any number of algorithms).

No reason it couldn't be applied to regular long exposure photography using a DSLR, either.
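A minimal numpy sketch of that calibration recipe. This is simplified: real pipelines match exposure times and temperatures between frames and often use median rather than mean stacking. Darks and biases are subtracted; flats are divided out.

```python
import numpy as np

def calibrate(lights, darks, biases, flats):
    """Toy CCD frame calibration: each argument is a list of 2-D arrays
    of the same shape. Returns the calibrated, stacked light frame."""
    master_bias = np.mean(biases, axis=0)
    # Dark frames include the bias signal, so remove it first.
    master_dark = np.mean(darks, axis=0) - master_bias
    # Flats are normalised so they only encode relative pixel sensitivity.
    flat = np.mean(flats, axis=0) - master_bias
    master_flat = flat / flat.mean()
    # Calibrate each light frame, then stack by averaging.
    calibrated = [(f - master_bias - master_dark) / master_flat for f in lights]
    return np.mean(calibrated, axis=0)
```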



This is incredibly cool! I wonder what the simplest possible non-mechanical modulator is. (LCD?)


Heh that'd be very cool! Do it!


Can you explain what you mean?


To make an image you need two axes, one horizontal, one vertical. Because the Earth is already rotating, a single photodiode looking at the stars will scan out a line if you wait a while. By adding a second axis of motion, such as a speaker driven with a sawtooth wave, you get the other axis. If you then sample over time, you can reconstitute the image by plotting the dots with the intensity captured by the diode, taking into account the driving voltage of the speaker and the elapsed time.
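A toy version of this reconstruction, with made-up dimensions and the sawtooth folded into integer arithmetic: the slow axis is elapsed time (Earth's rotation), the fast axis is the phase of the speaker's sweep.

```python
import numpy as np

# Hypothetical scene: two "stars" on a dark sky.
W, H = 32, 16
scene = np.zeros((H, W))
scene[4, 10] = scene[11, 20] = 1.0

fast_period = H                          # samples per sawtooth sweep
samples = []
for t in range(W * fast_period):
    x = t // fast_period                 # slow axis: Earth's rotation
    y = t % fast_period                  # fast axis: sawtooth phase
    samples.append((t, scene[y, x]))     # photodiode reading at time t

# Reconstruction: recover (x, y) for each sample from its timestamp alone.
img = np.zeros((H, W))
for t, value in samples:
    img[t % fast_period, t // fast_period] = value

print(np.array_equal(img, scene))  # → True
```

In practice you'd map the speaker's measured drive voltage, not an ideal phase, back to the fast-axis coordinate.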


I get that. I guess I was just confused as to why your first thought was to use a speaker to move the photo diode as opposed to an Arduino with a stepper motor.


It's an exercise in simplicity. Adding an Arduino is a little against the spirit.


Can you explain how a speaker with a square wave would move the diode? I'm so confused.


Sawtooth, not a square wave. Hook the speaker up to some kind of frame, glue a small hinge to the cone, hinge to an arm that's anchored relatively close to the speaker, diode at the far end of the arm in a tube so the light is directional, instant scanner :)


Now you're thinking with superresolution!


Oh, I just had an even nicer idea that I really will have to try. I won't say what it is yet to not spoil the surprise but I'm sure that it will work :)


In case anyone here knows - where can one buy an "old-school" low-res / monochrome / dithered camera module that produces images like this one, for easy use with a Raspberry Pi or Arduino or something? All I get is weird electronics company websites with "request more info" buttons - nothing that gives me pricing and a "buy" button.

Or alternatively, what's a way to transform high-res images to look like that?


Bin the image from a raspberry pi camera by a factor of.. 8 or so (maybe 16 if you need smaller)? Load the image as grayscale and divide the counts by 64 to get 2-bit values.

Kinda difficult to buy crappy low resolution detectors these days because nobody makes them and you'd need to build the readout electronics yourself. You might as well just use a decent detector and make it look bad.
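That binning-plus-quantising step might look like this in numpy. The factor and the divide-by-64 follow the suggestion above; `to_2bit` is just an illustrative name:

```python
import numpy as np

def to_2bit(img8, factor=8):
    """Bin an 8-bit grayscale array by `factor` and quantise to 2 bits."""
    h, w = img8.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    binned = img8[:h, :w].reshape(h // factor, factor,
                                  w // factor, factor).mean(axis=(1, 3))
    return (binned // 64).astype(np.uint8)         # 256 levels -> 4 levels
```

Follow it with a dithering pass (see the Floyd-Steinberg discussion below in the thread) if you want the Game Boy Camera look rather than flat posterisation.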


Interestingly, you can also get the raw sensor data from the Pi camera, i.e. before the demosaicing is applied.

I wonder, this may be silly, but could you not get a slightly higher resolution image if you just created a pixel for every sensel, than if you were to apply the demosaicing algorithm and then convert to grayscale?


> I wonder, this may be silly, but could you not get a slightly higher resolution image if you just created a pixel for every sensel, than if you were to apply the demosaicing algorithm and then convert to grayscale?

That's along the lines of what the Leica Monochrom promises. One drawback of doing it with a regular Bayer sensor is that the colour filters eat light, so while the number of pixels will increase significantly, the spatial resolution may not be that much better.


You'd get the same resolution, because colour cameras interpolate (demosaicing). If you converted to grayscale you might get problems, because each pixel has a different colour response with the Bayer filter; you'd still need to do some intensity correction.

The performance of the Leica Monochrom is due to this - you don't get more pixels per sensor, but you get proper intensity values at each pixel. This corresponds to better effective spatial resolution in most cases. All they're doing is selling you an identical sensor and pretending it's something fancy and new...

When you buy industrial cameras, the colour and mono versions are identical, but one is filtered (they use the same detector). We use mono in industry often because image quality tends to be much better than colour, and you have a third of the bandwidth requirements (or you want to filter a specific waveband).


Cheers, that Leica looks very cool.

I just had a little play with the raw data from the Pi camera yesterday using the excellent PiCamera lib.


Don't older cameras have a lower SNR? How would you simulate that? Lower the bit depth?


Use a normal camera and post-process with ImageMagick [1].

Like so: http://imgur.com/a/YpHzZ

[1] http://www.imagemagick.org/Usage/quantize/


Dithering is something done in software, not in the camera (well, the camera firmware could do it for you). I'd just get a modern camera and apply filters to get the aesthetic you want. You can take a look at the output of several dithering algorithms on the Wikipedia page:

https://en.wikipedia.org/wiki/Dither#Algorithms

Floyd-Steinberg is a reasonable first choice if you want to implement one. There's pseudocode on its Wikipedia page too: https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dither...
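A straightforward (unoptimised) Floyd-Steinberg pass down to 4 gray levels might look like the sketch below; the 7/16, 3/16, 5/16, 1/16 weights are the standard ones from the algorithm:

```python
import numpy as np

def floyd_steinberg_2bit(img):
    """Floyd-Steinberg dither a float image in [0, 1] down to 4 gray
    levels (2 bits per pixel). Returns values in {0, 1/3, 2/3, 1}."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.clip(np.round(old * 3), 0, 3) / 3   # nearest of 4 levels
            out[y, x] = new
            err = old - new
            # Diffuse the quantisation error onto unprocessed neighbours.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

A mid-gray input comes out as a checkered mix of the two middle levels whose average stays close to the input, which is exactly the Game Boy Camera look.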


There's an iOS app called Retrospecs that has some cool lo-fi filters.


8Bit Photo Lab on Android is pretty good.


Very cool.

I'd be very curious to see what happens if you do exposure stacking -- in principle, you could get a lot more bit depth!


Came here to say exactly this! In addition, you could stack them and, once you have something that's at least 8 bits per pixel, use Photoshop to clean things up.

I guess instead of making an HDR image from SDR images you'd be making an SDR image from LDR images. :-)
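A quick numerical sketch of why stacking buys bit depth: if the sensor noise is comparable to the quantisation step, it acts as natural dither, and averaging many 2-bit frames converges on the true sub-level intensity. All the numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

true_level = 0.4          # true intensity, between two of the 4 levels
n_frames = 5000           # hypothetical number of stacked exposures

# Each 2-bit exposure: sensor noise dithers the reading, which is then
# quantised to one of the 4 levels (0, 1/3, 2/3, 1).
readings = true_level + rng.normal(0.0, 0.15, n_frames)
frames = np.clip(np.round(readings * 3), 0, 3) / 3

stacked = frames.mean()   # averaging recovers sub-level precision
print(stacked)            # close to 0.4, far better than any single frame
```

Without the noise (or deliberate dithering between frames) every frame would quantise identically and stacking would gain nothing.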


The images are remarkably good, considering the general quality of GB Camera images. I suppose this demonstrates that if you stuff enough glass in front of a camera, you can get half-decent images even from the worst of sensors.


Really fun read, thanks

Next step: find a Game Boy Printer and print out your shots!


The Casio WQV-1 would be a much better choice for 2-bit astrophotography. [0][1]

[0] ftp://ftp.casio.co.jp/pub/watch/wc/WQVLink_Manual_K.pdf

[1] http://www.3wheelers.com/elvis/body_wqv-1.html


Man, I loved that watch. I still have it in my watch box somewhere, but it misbehaves now. I regret not buying the black plastic versions for spares when they were about £12 each. Ho hum!


In 2007 a friend of mine got a working WQV-1. I tried it, and to me it felt like a "007" gadget. If I could ever buy one, I'd buy it immediately!


They still come up on ebay, but the prices are higher now. But still much less than the £200 it cost in 2001 or whenever I got it.


Digital camera watch from the early 2000s: 6 months of battery life on a CR2032 coin battery.

Current-day smartwatch (without a camera in most cases): 24 hours to 1 week of battery life using a lithium-ion battery.



Super-resolution [1] using the Game Boy Camera: any takers?

[1] https://en.wikipedia.org/wiki/Super-resolution_imaging


If you guys don't have the original Game Boy Camera and would like to try, I wrote an iPhone app a while ago to get a similar effect. Sorry for the shameless plug :P

https://itunes.apple.com/us/app/pixl8r/id981531620


Neat stuff! I wouldn't have thought a digital sensor that old would have the sensitivity to see distant dim objects like this. And being able to see the same thing in Stellarium by matching up the coordinate view with the actual photo is also cool.


Is this the most Hacker News title of the year so far?



