Absolutely fascinating! I have always wanted to visit sites like this in real life, but the political climate in the past decade or so has put me off. Shame - this is a great alternative though.
I really wish more places would do this sort of thing! Google does some, but I like the quality of these.
However, as someone who doesn't know much about ancient Egypt, it would be spectacular to have some kind of annotation or guide to explain what we are seeing.
Awesome - thanks for this and your work! I'd love to hear a bit more about the production of this sort of thing (any "behind the scenes" info etc.) if you have any interesting stories and time to share them here ... I am sure I would not be alone :-)
Thank you for this, and all of your work. There are many places you've shot and produced like this that we will never be able to get to, and this is a true joy to spend time in.
For some more about what we do in general in the digital humanities for preserving cultural heritage, there's some info at my startup's website also! https://archimedes.digital/
> guards would allow you to go off the pedestrian walkway for a tip
Actually, no. The guards (in the Valley anyway) take their jobs very seriously and have seen it all... you'll get unceremoniously removed for trying to break a rule.
Spectacular. It's well lit, and the experience is smooth.
And in one way, this virtual tour is better than the live one: at the end, you can step past the guardrails! Check it out (deep linking doesn't seem to be possible, so just walk or click to the end of the long hall to see what I mean).
Matterport's camera uses a mixture of structured light, deep learning, and other techniques to build a 3D point cloud from each capture. Then, the app aligns the different captures together to produce a 3D mesh.
It's also possible to use consumer 360 cameras with Matterport's app, though accuracy is lower. In this case, the 3D-from-2D estimation primarily happens using deep learning.
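To give a feel for what the capture-alignment step involves, here's a minimal sketch using ICP registration in Open3D. To be clear, this is a generic stand-in for illustration - Matterport's actual pipeline is proprietary - and the filenames are placeholders:

    # Sketch: align two overlapping 3D captures with ICP (Open3D).
    # "capture_01.ply" / "capture_02.ply" are placeholder files.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("capture_01.ply")
    target = o3d.io.read_point_cloud("capture_02.ply")

    # Downsample for speed; estimate normals for point-to-plane ICP.
    source_down = source.voxel_down_sample(0.02)  # 2 cm voxels
    target_down = target.voxel_down_sample(0.02)
    for pc in (source_down, target_down):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

    # Refine alignment from an initial guess (identity here; a real
    # pipeline would seed this from odometry or feature matches).
    result = o3d.pipelines.registration.registration_icp(
        source_down, target_down,
        max_correspondence_distance=0.05,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPlane())

    # Apply the recovered rigid transform and merge the captures.
    merged = source.transform(result.transformation) + target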
Seems like a combination of 360 photography and off-the-shelf photogrammetry algorithms. Don't know if there is some secret sauce needed (not obvious from the content that there would be).
Well, do you know where these shelves are? I'd be interested in photogrammetry software to reconstruct 3D models of environments, mostly to plan construction or aesthetic changes to buildings or surroundings. This is for home use, so I don't really need fancy texturing, etc. But having it reconstructed from a smartphone video feed would be a must.
I looked for open source software a while back, but didn't really turn up anything interesting :/
Edit: there seem to be a few interesting resources. It looks like MicMac could be quite simple to use and doesn't have a CUDA dependency, for instance.
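On the smartphone-video requirement: the photogrammetry tools I've seen take still images as input, so the usual trick is to extract well-spaced frames from the video first. A minimal sketch with OpenCV (the path and frame spacing are arbitrary examples):

    # Extract every Nth frame from a video as stills for photogrammetry.
    # "walkthrough.mp4" and the step size are placeholder examples.
    import os
    import cv2

    os.makedirs("frames", exist_ok=True)
    cap = cv2.VideoCapture("walkthrough.mp4")
    step = 15  # keep ~2 frames/sec from 30 fps footage
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if index % step == 0:
            cv2.imwrite(f"frames/frame_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    print(f"wrote {saved} frames")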
It's been a few years, but in my photogrammetry days the open source options were very unfriendly to use -- either a command-line utility lacking documentation or an extension for an outdated version of something else -- and I never got a result I was happy with. It can take a couple of hours to process a large photo set, and it's really frustrating to get a bad result.
For $180 (non commercial license) Agisoft Photoscan worked the first time and gave me lots of tools to get a good, meshed and textured result that I could export to other software for viewing.
Smartphone cameras are actually ideal because their tiny sensors give huge depth of field -- everything is in focus in every picture == happy stitching
EDIT: you'll be waiting a long time for alignment without a GPU!
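To make the depth-of-field point concrete: the hyperfocal distance is H = f²/(N·c) + f, and while the circle of confusion c shrinks with sensor size, the focal length f for the same field of view shrinks faster - and it enters squared. A back-of-the-envelope sketch in Python (device numbers are typical examples, not measurements):

    # Hyperfocal distance: focus here and everything from H/2 to infinity
    # is acceptably sharp. c is approximated as sensor diagonal / 1500,
    # a common rule of thumb.
    def hyperfocal_mm(focal_mm, f_number, sensor_diag_mm):
        c = sensor_diag_mm / 1500.0  # circle of confusion
        return focal_mm ** 2 / (f_number * c) + focal_mm

    phone = hyperfocal_mm(focal_mm=4.3, f_number=1.8, sensor_diag_mm=7.0)
    apsc = hyperfocal_mm(focal_mm=35.0, f_number=1.8, sensor_diag_mm=28.2)
    print(f"phone: sharp from ~{phone / 2 / 1000:.1f} m to infinity")  # ~1.1 m
    print(f"APS-C: sharp from ~{apsc / 2 / 1000:.1f} m to infinity")   # ~18 m
    # The phone has essentially the whole scene in focus in every shot,
    # which is exactly what feature matching wants.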
If you want to investigate free software options then may I suggest you look into Visual SFM (http://ccwu.me/vsfm/) or AliceVision MeshRoom (https://alicevision.org/). I haven't used MicMac, might be good as well.
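All of these tools start with the same first step: detect local features in each image, match them across pairs, and recover the relative camera pose, then triangulate and densify from there. If you want to see that first step in isolation, here's a bare-bones two-view sketch with OpenCV - the filenames and intrinsics are made-up placeholders (a real pipeline would pull the focal length from EXIF or calibration):

    # Sketch of the first step of any SfM pipeline: match features
    # between two views and recover the relative camera pose.
    import cv2
    import numpy as np

    img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match with a ratio test to discard ambiguous correspondences.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Assumed pinhole intrinsics: focal length in px, principal point.
    K = np.array([[3000.0, 0.0, 2000.0],
                  [0.0, 3000.0, 1500.0],
                  [0.0, 0.0, 1.0]])

    # Estimate the essential matrix with RANSAC, then the relative pose.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    print("rotation:\n", R, "\ntranslation direction:", t.ravel())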
For several years I tried cheap approaches like the ones you describe, with no luck. Then I upgraded my gear.
I've used Agisoft's Metashape with great success (https://www.agisoft.com/). The cheaper license offers really good functionality as is.
You do want a good camera for good results. The 12 MP camera on my Mavic Pro (a drone) is barely tolerable. With a Sony Alpha 6000 (24 MP) and a good lens, the results are fantastic. A camera phone can work, depending on the capabilities of the camera, but I would shoot still photos rather than video - photographs seem to be better quality than frames extracted from video (YMMV).
If you have the patience to collect the image material, the results can be really good.
For example, as a hobby I've been collecting photogrammetry models from an office building being built near my home:
So what you see there is the model as presented by Sketchfab. The textures and model come from Agisoft Metashape; Sketchfab is just the platform used to display the model for public viewing.
The data closer to the ground was captured with my Sony Alpha 6000, while the data from above is from the drone. I'm happy with the portions of the model based on the 24 MP camera images, but the drone-based material does look "melted" occasionally.
As a reference, the source data for that model was roughly 800 images.
The photogrammetry algorithms work purely from the pixel data: the more pixels, the better the outcome. Roughly, the resolution and precision you can expect from the resulting mesh is equivalent to the pixel density of your source material. The algorithms don't invent anything they can't see in the pictures. This means, for example, that columns need tens of images taken in a full 360° around each one to appear OK in the model.
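To put a rough number on "mesh precision ≈ pixel density": ground sample distance (GSD) - how much real-world surface one pixel covers - is a decent upper bound on achievable mesh detail. A quick sketch in Python (the formula is the standard one; the camera numbers are ballpark figures for an APS-C camera and a small drone, not measurements from my rig):

    # GSD = (sensor_width / image_width_px) * distance / focal_length.
    # Surface detail finer than the GSD simply isn't in the data.
    def gsd_mm(sensor_width_mm, image_width_px, distance_m, focal_mm):
        pixel_pitch_mm = sensor_width_mm / image_width_px
        return pixel_pitch_mm * (distance_m * 1000.0) / focal_mm

    # ~24 MP APS-C (6000 px wide, 23.5 mm sensor) at 5 m with a 30 mm lens:
    print(f"APS-C at 5 m:  ~{gsd_mm(23.5, 6000, 5.0, 30.0):.2f} mm/pixel")
    # ~12 MP drone camera (4000 px wide, 6.3 mm sensor) at 30 m, 4.7 mm lens:
    print(f"drone at 30 m: ~{gsd_mm(6.3, 4000, 30.0, 4.7):.1f} mm/pixel")
    # Roughly 0.65 mm/pixel vs 10 mm/pixel -- which is why distant drone
    # shots come out looking "melted" next to ground-level camera shots.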
I think it would depend on what sort of penetration you're after.
It was not possible to work with robots in/around the reactor in the immediate aftermath of the accident -- the robots would fail due to exposure to the radiation. It's quite possible that technology still hasn't advanced far enough to allow for the in-depth exploration that would be required to achieve anything like the linked example.
I'd be curious to know if anyone knows more about recent attempts and the state of the art of hardening robots, drones, computers, etc. against radiation.
Striking how _different_ this is from the pyramid of Giza. Here, every inch is covered with illustrations and hieroglyphs; the Great Pyramid is completely bare, devoid of images.
To be fair, it may well have been flashy on the outside, but erosion of surfaces exposed to the elements (and sandstorms) is very high. We already know that it was covered in limestone, and some sources even mention the limestone being looted to build forts.
The dollhouse view is a nice way to zoom out without occluding what you want to see. Very impressed by how smooth and detailed the tour is - so much better than Street View-based tours.
High Fidelity missed the coronavirus WFH opportunity by only months. They claim to still be working on something new but, wow, it's hard to think of a case of worse timing.
Beautiful! I was shown this same system for a couple of houses I was looking at buying last year; the real estate agent had photos taken in the house and stitched together this way, with a doll's house view too. Very useful for that kind of application.
Combined with VR glasses it would be a great way to preview things like furniture/new kitchen layouts in somewhere like Ikea or paint colours in a hardware store.
The picture/texture quality is mind-blowingly amazing.
Imagine if video games had textures this high-definition! They'd look insanely good. It's not that present-day GPUs can't handle it; it's probably more that game makers don't want to spend a huge amount of money on the artists and graphics people needed to create ultra-high-definition textures...
I visited the valley of the kings ten years ago. This is a really good recreation: before I went I had no idea how well preserved the paintings in some of the tombs were. It's an amazing place to visit.
Anyone able to access this via Firefox and VR? It seems to launch SteamVR with my WMR headset active, but no content - and no VR button on the web UI (which the help mentions).
Why are parts of the tunnel blurred out? Censorship? Surely it isn't anything objectionable we wouldn't see on Wikipedia or Google Image search.
Censorship of educational material is unnecessary.
Edit: it appears to blur the textures immediately above the camera on the y-axis. Perhaps the software isn't correctly interpolating those values even though it has the data. I hope this is the case rather than the censorship I posited.
This is really cool, and I hope more things become accessible in this fashion.
Probably just a hardware blind spot; these are quite common in panoramic captures (Google Street View, etc.). Interesting that it sits above rather than below the capturing apparatus, where it usually is because the tripod gets in the way. Maybe they suspended the rig from above?
https://my.matterport.com/show/?m=d42fuVA21To
https://my.matterport.com/show/?m=QaGBAsT6yg4&mls=1
https://my.matterport.com/show/?m=ui3dfrQDqB2&mls=1
https://my.matterport.com/show/?m=xmDbt2rfa82&help=1&brand=1...
https://my.matterport.com/show/?m=PKxweZaPG3P&help=1&brand=1...
https://my.matterport.com/show/?m=zBpDdPqxTKz
https://my.matterport.com/show/?m=o5Ex5Xo7UkE