That camera had an MSRP of $899 when it was new in 2006 [0]. Amazing that they now sell for only £50, and still take fantastic pictures even by today's standards [1].
Technological progress is nothing short of remarkable.
It's a great moment for people who want to get into photography: the market is moving to mirrorless, so there's plenty of old high-end DSLR gear being sold off that can still produce excellent pictures. The important part of the gear is the lenses, and those can be had used for a fraction of the cost of new ones. As for bodies, sensor quality has improved consistently over the years, but the quality of 35mm film at ISO 100 was matched and surpassed long ago. As a first approximation, about 10 megapixels are enough to match film[1]; subjectively, digital has matched and exceeded film since around the mid-2000s[2]. Anything newer than the D2 will be more than enough for any hobbyist. The subsets of photography that benefit from the improvements of the past two decades are low-light photography (birding), sports (if you can shoot 9fps and the pal next to you can shoot 11fps, they're the one more likely to get paid for the picture), and videography (both the quality of the recorded video and ease of use, like better autofocus).
Why do you think the market is moving to mirrorless? I remember there was a craze when they first came out, but died out in a couple years. There still are a lot of good and "good for the price" mirrorless cameras available, but they certainly don't seem to be taking over the market.
As an active photographer, I still see most people using DSLRs or their phone. In fact, everyone I know who bought into mirrorless eventually gave it up.
Back when they were popular I did look into getting one. But there were too many competing lens mounts, and not enough good lenses. So you could buy in to one mount only to have a different mount take over the market, or have yours go away.
I’m sitting in a hotel room before a wedding I’m going to photograph and I just laughed at this.
Back when they were popular? (Practically?) Every new camera coming out is mirrorless. Every pro is moving to mirrorless. It won’t be all that long before you can’t even buy a new DSLR.
Canon, Sony, and Nikon all have their own mount just as they always have. Canon and Nikon took the opportunity to make theirs larger/better/shorter distance, but you can use adapters to use old glass. As a matter of fact, I don’t even have any Z lenses for my mirrorless body.
Mirrorless is better, full stop. The only real disadvantage is battery life. Some people prefer an OVF, but personally I have a much easier time seeing through an EVF. The jump in focus accuracy is an absolute game changer, and it’s worth switching for that reason alone.
I have a Z8 on the way and then I’ll be fully mirrorless myself, and I can’t wait to ditch what’s largely considered the best DSLR ever made - the D850.
I haven't used one lately, but ten years ago the delay between seeing the image and capturing it was so great that it made them useless for people photography (unless people are posing, but I like to capture real emotions as they are happening).
Are they better now? In my experience, the shutter lag must be close to 0 to be able to capture the right facial expression. Lag ruins pictures.
They are much better now. The pro I know went from a Canon 5D Mk III to mirrorless (an R5, I think) for classical performance photos. It's quite fast, and the new lenses are better/smaller. The lack of a mirror means it's faster, as it doesn't have to physically flip one up to take the picture. They've also embedded the focus sensors on the image sensor, so it focuses much faster than previous mirrorless cameras[1].
Even the best mechanical shutters and mirrors in DSLRs have lag: there is a delay between button press and capture, since the relatively large mirror can't defeat physics and move in an instant. The latest mirrorless models are now generally competitive with the ultra-short lag of the best DSLRs.
DSLRs are worse for capturing the right facial expression in some ways too, because the mirror flip temporarily blocks the viewfinder. With an electronic shutter enabled, a wedding photographer on mirrorless will never have the viewfinder blackout during burst shots when trying to capture "the moment". The mirror/shutterbox puts the "real" world lag on many DSLRs in the 60-120ms range, which is not so hard for an electronic viewfinder to get competitive with.
Mirrorless models with electronic shutters in theory also have the option to continually "precapture" and then use an image from, say, 100ms before the button press - the iPhone does this to reduce lag even more and try to eliminate human response time from the process. A DSLR could only do this if the mirror were up, blocking the viewfinder.
While people assume there is a degree of "WYSIWYG" with the mirror, this isn't always 100% true either. Cheap pentaprisms or pentamirrors result in much dimmer and smaller viewfinders than a good electronic viewfinder, especially on APS-C sensors, and 100% view coverage is rare outside of high-end models. AFAIK all electronic viewfinders to date offer 100% coverage, and they're often bigger.
One enormous benefit for wedding/event photographers that mirror elimination accomplishes is making the camera silent. A high-end DSLR operating at its highest burst speed is very noisy as the mirror slaps back and forth, which genuinely can be problematic in quiet or intimate settings. Mirror boxes are also typically only rated for 100-150k actuations, which lasts a long time, but they can wear out.
The Canon R5 has no lag at all. The EVF (a 1600x1200 120Hz OLED display with 100% coverage) feels as fast as an optical viewfinder, and the R5 can shoot 20fps with full autofocus tracking and without blacking out the viewfinder. Shutter lag is actually less than on DSLRs as there's no mirror to flip and electronics have gotten way faster in the last 10 years.
The newer ones like the Canon R3 are even faster and the Sony A1 can do 240fps in the EVF.
Another perk of EVFs: they're much brighter indoors as they simulate the actual exposure.
Ten years ago is a long time when it comes to camera tech.
I'm very sensitive to lag to the point where I couldn't play Smash Bros on a friend's TV that he insisted was fine. I'm not sure what the actual numbers are but the EVF latency is for all practical purposes imperceptible, and the cameras can shoot much faster bursts to boot.
Plus, people tend to change their expressions when a shutter/mirror is clacking away and mirrorless cameras can be silent.
I'm a working pro photographer with over 30 years of experience. I frequently buy older camera tech for hazardous duty. The optics and sensors are quite good and the current software can upres and denoise the images without looking weird. I've been photographing wild mountain lions in the central Oregon high-desert using unmanned cameras and sensor arrays. I'll leave them deployed for months in the field. I recently suspended a camera just above the waterline for capturing unique images of wild beavers at night. This is hard on cameras, but they are almost disposable because of the form rapidly becoming obsolete. Yet a very high level of optical quality is available with this old tech.
Editors never ask me "what camera did you use?". They evaluate images based upon impact.
This moment in the camera market seems really interesting to me: phone cameras are becoming more and more capable, rapidly shrinking the DSLR/mirrorless market, while high-end cameras are getting more expensive. Where is it all going?
For anyone who is interested in older cameras I use a Canon Rebel T3i as a webcam and it works fantastically. In 2020 because of you-know-what Canon released a webcam driver for a bunch of their cameras and the T3i is one of the oldest ones supported (the most recent version of their driver drops support for this camera but older drivers work just fine).
Paired with an AC power adapter and a mount for my monitor it's a nice webcam. I bought a 24mm lens for it that was reasonably affordable and I get a nice "real" bokeh effect in my background (it's not as blurry as the computational one but it's a nice to have). (FWIW I also push it through OBS to crop it).
Mileage varies between cameras, but a small warning for anyone considering this who spends even a couple of hours a day in meetings with the camera on: the sensors in some DSLRs/mirrorless models can overheat while live streaming, and your video will cut out.
Most of them were originally only specced to record clips up to 30 minutes at a time and are passively cooled, so running for an hour or more can push the sensors beyond their original specifications. I've had this happen occasionally with some m43 cameras I use as webcams via an HDMI capture card. There are a ton of YouTube videos on this topic with all sorts of homemade cooling setups.
For anyone that might also already have a Nikon D5100 or D7000 to repurpose as a webcam, you can use a custom firmware[1] to clean up the HDMI output, an HDMI capture card and OBS to fix the aspect ratio. That's the setup I use.
I have a Canon R6 and I cannot for the life of me get the webcam utility to work. The camera utilities work just fine; I can even remote shoot from my PC. The fix I keep seeing suggested is to turn off the camera utility software, but I've tried that and many other things and can't seem to crack the code.
How do you use this with an AC power adapter? I have one and I've used it, but I run out of battery quickly. Is there a way to power the camera from the mains?
Most cameras can run off an AC adapter via a dummy-battery insert that provides a barrel power jack, and a quick search suggests that's the case for this one.
There’s a kind of power cord that has a dummy battery that plugs into the mains. The camera’s battery door also has a hole to run the cable through. Searching for “dummy battery $CAMERA_MODEL” would get you what you want
I use several EF lenses on my Canon mirrorless. They tend to focus faster than on the old SLRs (and this is on the EOS R, which has the least sophisticated focusing).
This camera setup for personal social events is a great idea, I wonder if it could be done with some sort of phone app so among a group of friends or at a party you could use everybody's cell phone.
In watching the video, though, I found it a little disappointing. I think people need help with choreography: what actions and timing will make for good shifts in perspective. For example, you want grandma to rotate in mid-air, not something she throws up. Another example: I think you want the action to start from the periphery and then bring the characters in the shot "into the spotlight", so the clip delivers a climax or "keeps getting better". (Although he did use the "move away from the action" technique to good effect to create transition points for stitching a series of clips into a sequence: https://there.oughta.be/assets/images/2023-05-26/demo-blog.m... )
And could it be done using video instead of still shots? It seems like turning raw footage into something interesting would be easier. Or is that how it works anyway?
Make the equivalent of an LED strip light that is just a bunch of cell-phone cameras at regular intervals, with a controller that can trigger and capture from each of them, treating them all like a single camera with independent frames.
The strip can then be bent into any flexible pattern you want and strung pretty much anywhere, or taped to almost any surface. The whole thing could probably have almost the same electronics footprint as one of those common color-changing LED strips.
I'm eager for the sequel: divorce bullet time. It may not be appreciated by any of the subjects, but it's hard to deny that mid-air tears in bullet time would be quite a sight to see.
That subtends a much lower angle. Also the "acceleration" and alternating mirroring in TFA to give the illusion of a 180 degree spin is a brilliant idea.
In the original video, he says he needs the wall to achieve the looping effect. But in this case, what's the reason it seems to need to be indoor, or within some kind of booth?
Pinnacle Effects in Spokane, WA did this for an ABC Sports Baseball promo in the late 80s using 60 point-and-shoot film cameras arranged in a 360° ring. Gerry Cook was the director and the guy who came up with the idea and physically created the “ring cam” as he called it. It was used to create a 2 second, 60 frame, scene of a pitch being thrown.
I really don't know. There are a few pieces that Pinnacle did on YouTube — the "pinball machine" ABC Monday Night Football open, for example — but it seems most did not survive or are not publicly available.
EDIT: Turns out that the director, Gerry Cook, has a youtube channel with quite a bit of his work shown there. In particular, there is a Heck Yes Productions demo reel, where at 1:50, there is a ring cam shot of a girl jumping rope. It is very similar to the ABC Baseball ring shot but done in the weeks following the baseball shot.
Nice, but aren't the cameras supposed to be triggered in sequence instead of all at once? I thought there was still a little bit of motion in the movies, but in these the subjects all freeze in place.
Yes. In the special edition DVD, the special features show the bullet-time scene in the studio, and you can hear the camera shutters clacking like dry fire from an automatic rifle.
A nice compromise (since a wedding photo booth isn't going to be super choreographed) could be to group the cameras into, say, two groups. Trigger them periodically, 180 degrees out of phase with their neighbors; assuming the virtual camera moves by one real camera per frame, you've got double the frame rate but can still choose a path in post-processing. And of course it could easily be generalized to n camera sets.
This is getting pretty complicated though, he “hard-coded” the camera motion into the shape of the frame. Physical things are hard, haha.
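The interleaved-trigger idea above is easy to sketch in code. This is a minimal sketch, not any real trigger controller's API; the function name and timing model are my own:

```python
def trigger_schedule(num_cameras, num_groups, period_s, duration_s):
    """Return sorted (time_s, camera_index) trigger events.

    Camera i belongs to group i % num_groups, and group g fires with a
    phase offset of g * period_s / num_groups, so the array as a whole
    yields num_groups times the per-camera frame rate."""
    events = []
    for shot in range(int(duration_s / period_s)):
        for cam in range(num_cameras):
            offset = (cam % num_groups) * period_s / num_groups
            events.append((round(shot * period_s + offset, 6), cam))
    events.sort()
    return events
```

With 4 cameras in 2 groups and a 100ms period, cameras 0 and 2 fire at t=0 and the odd cameras fire 50ms later, so any instant has a recent frame from an adjacent camera to route the virtual path through.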
Depends on the setup you're using. The trouble with video is that, for the effect to be smooth, you need frames at very particular times from each location, which is tough to ensure with video equipment, whereas with a bank of still cameras you can trigger each one to take a single frame in sequence. To do the same with video you need a system to send a slightly out-of-phase sync signal to each camera in the array, or just wing it and fudge the jittery motion and ramping with some VFX in post-production.
He's using digital single-lens reflex cameras with moving mirrors in an application where you don't need that, but do need a good camera. What do you use today when you need a good camera with no user interface?
It's a bit weird, because it often turns out that cameras with user interfaces are cheaper thanks to economies of scale. That said, a market for almost-barebones sensors on PCBs seems to be building, led by the Raspberry Pi camera modules; the HQ module has a reasonably sized sensor, though still a far cry from MFT, APS-C, and full-frame sizes, and is almost on par with most phones. From there it pretty much moves straight to cameras with UIs that you control through some kind of vendor SDK or libgphoto. I imagine it's a matter of economies of scale, and it's almost impossible to buy a good almost-barebones sensor. Where it is possible (e.g. the 61MP full-frame sensor used in the Sony A7R V is available almost barebones for astrophotography), it's still very expensive. I imagine there's a pretty big market for all kinds of industrial cameras, but from tiptoeing into that space, it seems very, very expensive.
A Pi HQ camera with a lens of your choice. It's easy to synchronise any number of Pi cameras together, and with the RAM on them each camera can store a long sequence of frames (add an RF-trigger flash using BBC microbits to each one to control motion blur).
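One simple way to synchronise the cameras is to broadcast a fire-at timestamp a little in the future over UDP and have each Pi sleep until that moment. A minimal sketch, assuming the Pis' clocks are already NTP-synced; the function names are hypothetical and the actual capture call is elided:

```python
import json
import socket
import time

def send_trigger(addr, fire_in_s=0.5):
    """Send a fire-at timestamp slightly in the future so every listening
    camera can arm itself and capture at the same wall-clock time."""
    msg = json.dumps({"fire_at": time.time() + fire_in_s}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, addr)

def wait_for_trigger(sock):
    """Run on each Pi: block for the trigger, sleep until the shared fire
    time, then return it (the camera capture call would go right here)."""
    data, _ = sock.recvfrom(1024)
    fire_at = json.loads(data)["fire_at"]
    time.sleep(max(0.0, fire_at - time.time()))
    return fire_at
```

The "fire in the future" scheme matters because network delivery jitter would otherwise smear the capture times; by agreeing on an absolute timestamp, the jitter only has to be smaller than the lead time, not the exposure window.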
That's a component. It needs packaging.[1] That bulky lens needs support that's reasonably rigid and resistant to vibration. Four 2.5mm screws on the PC board of the sensor are not enough. Support needs to be near the center of gravity.
In the original article, the author describes his camera mounting problems.
The speed the camera arm would need to move to nearly freeze time is astonishingly fast—if you were to try doing the entire shot on one camera. But I could imagine a setup with a static beginning and end camera, with a motion controlled camera timed to whiz by both so it could ramp up the acceleration.
But much cheaper to put a camera on a rail/track instead of complex motion control.
I'm usually a big critic of AI, and I remember people complaining years ago about increasing framerates with AI[0], but maybe this could be a good use case. You do, say, half the cameras, and then some crafty solution to interpolate (filling in the edges with AI, some sort of angle adjustment).
The project already used frame interpolation (through DaVinci Resolve) to double the number of frames in the video. Plus a low-tech "inside the wall" panned and blurred frame. Given the set-up, fancier scene reconstruction to build a 3D model of the environment could be combined with texture reprojection to improve the interpolation and allow the "inside the wall" frame to be perspective corrected, but that would be much more involved on the software side than what was done here (and beyond what I could do too).
The tech to look out for in this space is called NeRF (Neural Radiance Fields).
Just do a quick search on that term and you will find a whole bunch of videos that reconstruct 3d scenes using digital stills. I just saw a paper a couple of days ago showing advances in reconstructing human expressions that change over time using advanced techniques.
The creator of this is also the person behind Phyphox, an app that turns your smartphone into a tool for conducting physics experiments and collecting data from them, as well as a Game Boy WiFi cartridge and the GB Interceptor, which lets you record video of Game Boy games, among other things. Amazing what some people do, isn't it?
Phyphox is such a cool tool. The acceleration spectrogram is fascinating to look at: you can put your phone on top of your PC/laptop and measure the RPM of fans and spinning drives.
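The fan-RPM trick boils down to finding the dominant peak in the vibration spectrum. A minimal sketch of that idea, assuming a plain 1-D accelerometer trace (not Phyphox's actual implementation):

```python
import numpy as np

def dominant_rpm(signal, sample_rate_hz):
    """Estimate rotation speed from a vibration trace: FFT the signal,
    find the strongest non-DC peak, and convert its frequency to RPM."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    peak = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return freqs[peak] * 60.0
```

Note that phone accelerometers typically sample at only a few hundred Hz, so by Nyquist you can only resolve fundamentals up to roughly half that, i.e. a few thousand RPM, though fast fans often still show up via harmonics and aliasing.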
Agreed, thank you and the parent comment here for the mention of it - looks great, works great with the little bit of experimentation I've done so far.
The challenge here is cost. You need 20-40 perspectives at roughly $1,000 each, including the camera body, lens, trigger system, remote download, and output pipeline, so a hand-built, cost-optimized system will run $20k-$40k in hardware. Operating it is also kind of painful: every camera must be recalibrated whenever the rig is moved or the exposure is adjusted, and all shots must be designed ahead of time. It's just a limited market due to the complexity and cost.
Neural Holography should arrive to market within the next 6 months, which will allow any frame to be seen at any angle with as few as 20 cameras for 360° x 180° coverage, allowing bullet-time shots to be designed after the fact, with the added bonus of having cinematic-quality 3D live-action content for use in VR and other 3D environments.
During the Pride festival in Toronto last year I swear they actually had something like this!
I distinctly remember leaving for the bathroom and coming back to find my girlfriend and her bestie doing it. It was like you stood in the center of the platform and the cameras rotated around you, or something?
I wish I could remember it more specifically, and it wasn’t perfect; but it struck me as strange that this was the first time I’d actually seen anyone even try!
Wish now I’d written down the company who was putting it on. It looked neat. :)
And there was a lens for this kind of effect, it allowed capturing three images at once [1] [2].
There were also dedicated cameras [3].
> Using its four lenses, four images from slightly different viewing angles were taken simultaneously. With the individual images half the size of the usual 35mm image frames, each 3D photograph taken used the space of 2 full 35mm exposures on the film. So a roll labeled as "36 exposures" would yield 18 3D pictures with four images each.
If you want to obtain the same effect with far fewer cameras (maybe 4), check the paper 'Real-Time Depth Video-Based Rendering for 6-DoF HMD Navigation and Light Field Displays' (Bonatto 2021)[0]; the supplementary material contains the view-synthesis video [1], and some results can be found on YouTube [2]. And finally...
You can do the view synthesis for free using the (MPEG) free software: RVS (reference view synthesizer) [3].
These actually exist. My kids’ local grammar school rented one out for a sort of field-day event where parents could come too. A staff person from the rental outfit operated it. It was a single camera that rapidly spun around a small central platform, and at the end you entered your cell # and received a text message to view & download the result.
Just saw a service this weekend that rents out a setup like this for events, for taking fun shots. I assume it's more common than this thread might suggest. Kind of an update to the "photo booth" trend.
This is a bonkers idea! I fucking love it, man! I bet I could do this with a single device on a rail and use NeRFs to actually make the effect. Could even do more complex camera flight. Love the idea.
I went to the Monza F1 GP in 2019 and they had a bullet-time video tent sort of thing, with a trampoline in the middle if I remember correctly, it was free to use in the fan zone.
The Insta360 can do bullet-time video with minimal effort. OK, it's not exactly the same as the one known from The Matrix, but considering the simplicity it's pretty cool.
I didn't know what "simplicity" meant in this context, so I looked at a tutorial [1] and saw instructions to swing the camera on a string. That's indeed surprisingly simple!
Is there a way to make a cheapskate version using multiple mirrors and only one camera looking at all of them at once?
Then chop the single capture into smaller frames, one per mirror, each with a different view of the subject, and stitch them into a video.
I can imagine a 3x1 layout but trying to arrange the mirrors in a curve so the camera sees a 3x3 or 4x4 or 5x5 arrangement might need some optical cleverness.
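The chopping-and-stitching step itself is simple. A minimal sketch, assuming the mirrors form a regular grid whose reading order matches the sweep; each tile is flipped back horizontally because every mirror reverses the image:

```python
import numpy as np

def slice_mirror_grid(frame, rows, cols):
    """Chop one capture into rows*cols tiles, one per mirror, returned in
    reading order as the frames of the sweep, each un-mirrored (fliplr)."""
    h, w = frame.shape[0] // rows, frame.shape[1] // cols
    return [np.fliplr(frame[r * h:(r + 1) * h, c * w:(c + 1) * w])
            for r in range(rows) for c in range(cols)]
```

The hard part is entirely optical, not computational: each mirror's view must be framed identically, which is where the curved-arrangement cleverness comes in.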
It's not terribly hard to bang together something functional, really. It's one of those projects where you think about the components you have (camera, monitor), make an enclosure (or find 3D-print/laser-cutter files), and then grab some code that's surely on GitHub to sync them together.
A small printer, though, would turn it from a novelty into something you could e.g. install in a store you owned, and even if you just charged a buck a go it could still be a fun draw.
Got me thinking now what a tiny printing solution like that would/could look like!
EDIT: unsurprisingly, there's a bunch of iOS/iPadOS solutions. You could theoretically base it on a custom app on an iPad with one of these iThing printers stuffed away, and use Square as a POS?
There was a similar installation at the Australian Centre for the Moving Image, Melbourne in 2015. Not sure if it’s still there, at least when I was there last it emailed your video in FLV so I gather the setup was pretty dated.
I just looked them up and you are right. It varies from 5 cameras to hundreds of cameras. Interestingly, though, I haven't seen any of them demo the continuous rotation-through-a-wall effect that Sebastian shows here, which kind of makes the professional solutions feel like they have some room for improvement.
[0] https://kenrockwell.com/canon/rebel-xti.htm
[1] https://onfotolife.com/camera_sample_photos?camera_id=30&pag...