I saw a website of someone who either painted or provided a printable file of photos so that you can put them on an actual sphere. I was quite amazed by the result.
Wow, I haven't used or seen panorama stitching with Hugin for over a decade. I got the impression that ultra-wide-angle cameras/lenses are quite widespread nowadays, especially on smartphones. Furthermore, they do "online" live stitching while capturing panoramas, instead of stitching them manually "offline" in postprocessing. (Of course the quality of the manual stitches is way better.)
> Furthermore, they do "online" live-stitching while doing panoramas
This way it could even be used to create video "on the go" from a single stationary camera position (at low FPS it could be useful for landscape timelapse capture or a panoramic webcam), and as a result it could be a cheaper alternative to things like Facebook's Surround 360[0,1] DIY 3D-360 video capture system.
In addition to the software part, it would require designing some sort of automatically rotating "panoramic tripod head" hardware, synchronized with the camera's capture control so it can trigger the photos.
I was actually looking for a way to generate such panoramas on Android, using the accelerometer and magnetic sensor as support. Essentially what Google does with PhotoSphere.
Does anyone know of any libraries or implementations of such a project?
I'm actually about to embark on exactly such a project. My plan is to use OpenCV's "stitching_detailed.cpp" example, and improve it from there. OpenCV's stitching is quite good out of the box, but I think I can improve it by adding knowledge from the phone's IMU. What do you need this for?
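If it helps anyone starting from the same place: the quickest baseline is probably OpenCV's high-level Stitcher API, and stitching_detailed.cpp then exposes the same pipeline (feature matching, bundle adjustment, warping, seam finding, blending) step by step, which is where IMU priors could be injected. A minimal sketch, assuming OpenCV 4.x and image paths passed on the command line:

    // Minimal baseline: OpenCV's high-level stitcher in PANORAMA mode,
    // which assumes a (mostly) rotating camera.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main(int argc, char** argv) {
        std::vector<cv::Mat> images;
        for (int i = 1; i < argc; ++i)
            images.push_back(cv::imread(argv[i]));

        cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
        cv::Mat pano;
        if (stitcher->stitch(images, pano) != cv::Stitcher::OK)
            return 1;  // failed, e.g. not enough images or overlap
        cv::imwrite("pano.jpg", pano);  // output filename is arbitrary
        return 0;
    }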
I don't know specifically about Android, but Dermandar[1] has an SDK which claims Android compatibility (I use their iOS app, which takes horizontal panoramas as the phone rotates).
Are spherical panoramas usually stored as a single rectangular bitmap? Are there any compressed image formats that account for the distortion and optimize for the final projected output?
Just as a point of interest: if you're willing to pay the cost of a really good resampling algorithm, you can use a Gall-Peters projection. It's equal-area instead of equal-angle, so each pixel holds the same amount of surface area from the spherical image. Doing this, the uncompressed image only needs 2/Pi of the pixels of the equivalent equirectangular one.
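For the curious, here's a rough sketch of that remap with OpenCV, assuming a Lambert cylindrical equal-area target (Gall-Peters is the same projection up to an aspect rescale). The 2/Pi factor falls out of replacing latitude phi (which spans Pi) with sin(phi) (which spans 2) along the vertical axis:

    // Remap an equirectangular pano (width = 2 * height) into a
    // cylindrical equal-area projection; the result has 2/Pi as many pixels.
    #include <opencv2/opencv.hpp>
    #include <cmath>

    int main(int argc, char** argv) {
        cv::Mat equirect = cv::imread(argc > 1 ? argv[1] : "pano_equirect.jpg");
        if (equirect.empty()) return 1;

        const int srcW = equirect.cols, srcH = equirect.rows;
        const int dstW = srcW;  // same equatorial resolution
        const int dstH = static_cast<int>(std::round(srcH * 2.0 / CV_PI));

        cv::Mat mapX(dstH, dstW, CV_32F), mapY(dstH, dstW, CV_32F);
        for (int y = 0; y < dstH; ++y) {
            double s = 1.0 - 2.0 * (y + 0.5) / dstH;   // sin(phi) in [-1, 1]
            double phi = std::asin(s);                 // latitude
            float srcY = static_cast<float>((0.5 - phi / CV_PI) * srcH);
            for (int x = 0; x < dstW; ++x) {
                mapX.at<float>(y, x) = static_cast<float>(x);  // longitude is unchanged
                mapY.at<float>(y, x) = srcY;
            }
        }
        cv::Mat equalArea;
        cv::remap(equirect, equalArea, mapX, mapY, cv::INTER_LANCZOS4);
        cv::imwrite("pano_equal_area.png", equalArea);
        return 0;
    }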
It splits your single 360 pano into 2 images, one for each eye, but it doesn't do any stereo depth stuff. It's like being in the middle of a sphere with everything painted on it all the same distance away.
When something is 3D-rendering the scene it can do stereoscopy too, but if there are actual cameras trying to create stereo images in 360, you always end up with the "what does it do when you're looking directly at the other eye's camera?" paradox.
It does stereo. You have to hold the device at arm's length, and it reconstructs each eye by picking different rays from the sides of the image, i.e. the left-eye pano is made of rays looking to the right in the original frames.
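If anyone wants the gist of that ray-picking, here's a very rough sketch (not Cardboard Camera's actual algorithm, just the idea, with made-up strip widths and offsets and none of the alignment/blending the real thing needs):

    // Toy omnidirectional-stereo sketch: each frame of a 360-degree sweep
    // contributes a narrow vertical strip, taken slightly right of centre
    // for the left-eye pano and slightly left of centre for the right eye.
    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    cv::Mat buildEyePano(const std::vector<cv::Mat>& frames, int offsetPx, int stripPx) {
        // Assumes frames are the same size and evenly spaced in yaw.
        std::vector<cv::Mat> strips;
        for (const cv::Mat& f : frames) {
            int x = f.cols / 2 + offsetPx - stripPx / 2;
            strips.push_back(f(cv::Rect(x, 0, stripPx, f.rows)).clone());
        }
        cv::Mat pano;
        cv::hconcat(strips, pano);
        return pano;
    }

    int main() {
        std::vector<cv::Mat> frames;  // fill with frames from the arm's-length sweep
        if (frames.empty()) return 0;
        int offset = frames[0].cols / 8;  // hypothetical offset, tied to the capture radius
        int strip  = std::max(1, frames[0].cols / static_cast<int>(frames.size()));
        cv::Mat leftEye  = buildEyePano(frames, +offset, strip);  // rays looking right
        cv::Mat rightEye = buildEyePano(frames, -offset, strip);  // rays looking left
        cv::imwrite("left_eye.png", leftEye);
        cv::imwrite("right_eye.png", rightEye);
        return 0;
    }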
I played a lot with it a few years back, even wrote a library to read the files in C#.