I'm disappointed that the illumination is not uniform in the stitch. Artifacts from the sampling and stitching leave subtle bands in the output that I'm fairly confident are not in the original. This is likely a byproduct of using a non-uniform scope-attached illuminator without sufficient overlap between samples.
Nonetheless, when zoomed in, the detail is impressive.
I worked on this project. The images were indeed captured by a 3D microscope in about 9000 separate captures with non-uniform illumination, so this isn't so much a stitching artifact as it is an illumination artifact. The specular component of the varnish and the different illumination angles of the light sources make it almost impossible to capture a uniformly-colored field without the use of polarization filters, and the best stitching algorithms don't do well with different opinions from multiple images about the color of a pixel (they do fine with different opinions about brightness).
For purposes of browsing, you could source the top-level "zoomed out" layer from a single photograph and blend it into the lower tiles of the quadtree, or normalize the downsampled capture data against a reference.
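A minimal sketch of that second idea, purely illustrative (the helper name and parameters are mine, not something the project actually ran): low-pass both the downsampled tile and the matching crop of the reference photograph, then apply the per-pixel gain that maps one onto the other, so only the slowly varying illumination is corrected and the fine paint detail is untouched.

    import numpy as np
    import cv2  # opencv-python

    def normalize_tile_to_reference(tile, reference_crop, blur_sigma=51):
        """Match a tile's low-frequency color to the corresponding,
        same-size crop of a single reference photograph."""
        tile = tile.astype(np.float32) + 1e-6
        ref = reference_crop.astype(np.float32) + 1e-6
        # Low-pass both images so only the slowly varying illumination
        # (the banding) is corrected, not the fine paint detail.
        tile_lp = cv2.GaussianBlur(tile, (0, 0), blur_sigma)
        ref_lp = cv2.GaussianBlur(ref, (0, 0), blur_sigma)
        # Per-pixel, per-channel gain mapping the tile's low
        # frequencies onto the reference's.
        gain = ref_lp / tile_lp
        return np.clip(tile * gain, 0, 255).astype(np.uint8)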
Nonetheless, thanks for doing this. It's an amazing piece of work.
Have you tried to extract the vignetting pattern from the captures and use it to normalize them? My first try would be to calculate the median grayscale image of all 9000 captures and then use that to normalize the intensity.
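In case anyone wants to play with that idea, a minimal sketch (assuming the captures are same-size grayscale arrays; with ~9000 large captures you would stream or subsample rather than stack them all in memory):

    import numpy as np

    def flat_field_from_captures(captures):
        """Estimate the shared illumination pattern as the per-pixel
        median over all captures, then divide it out."""
        stack = np.stack([c.astype(np.float32) for c in captures])
        flat = np.median(stack, axis=0)
        flat /= flat.mean()  # keep overall brightness unchanged
        return [c / np.maximum(flat, 1e-6) for c in stack]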
Yes, and the vignetting pattern isn't so much the problem. It's a color distortion problem (spatially varying chromaticity rather than brightness, and one that depends not just on the position within the field of view but also on the underlying material, unfortunately). So there isn't a nice way to correct each capture in a predictable way that ensures overlapping pixels have consistent colors.
I've worked on similar problems in agricultural mapping from drone images. You would need to build a BRDF model per colour channel for the various types of materials, assign each region the best-fitting material model, and then re-render under uniform, normal-incidence lighting.
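A toy sketch of what a per-channel fit could look like, under strong assumptions I'm making up for illustration (known light directions, known surface normals, a fixed Blinn-Phong specular lobe); a real pipeline would be far more involved:

    import numpy as np

    def fit_reflectance(intensities, light_dirs, normal, view_dir, shininess=50.0):
        """Least-squares fit of per-channel diffuse albedo plus a
        per-channel specular weight for one surface point observed
        under several lights.

        intensities: (n_lights, 3) observed RGB values
        light_dirs:  (n_lights, 3) unit vectors toward each light
        normal, view_dir: unit vectors (e.g. from the heightmap)
        """
        n_dot_l = np.clip(light_dirs @ normal, 0.0, None)       # diffuse term
        half = light_dirs + view_dir                            # halfway vectors
        half /= np.linalg.norm(half, axis=1, keepdims=True)
        spec = np.clip(half @ normal, 0.0, None) ** shininess   # Blinn-Phong term
        A = np.column_stack([n_dot_l, spec])                    # (n_lights, 2)
        coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
        albedo, k_spec = coeffs                                 # each shape (3,)
        # Re-render under uniform lighting as albedo * max(n.l, 0).
        return albedo, k_spec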
They are talking about a bidirectional reflectance distribution function, a now-common modeling technique for practically representing how complex surfaces reflect light.
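Formally (the standard textbook definition), the BRDF is the ratio of reflected radiance to incident irradiance:

    f_r(w_i, w_o) = dL_o(w_o) / (L_i(w_i) cos(theta_i) dw_i)

where w_i and w_o are the incoming and outgoing directions and theta_i is the angle between w_i and the surface normal.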
That said, a full BRDF is not the only way to approach this problem, especially at reduced resolution where the artifacts are more apparent.
If you're an ACM member, there is a wealth of information in SIGGRAPH publications. Having been in graphics since the 90s, I started with Foley and van Dam (Computer Graphics: Principles and Practice) and Watt and Watt (Advanced Animation and Rendering Techniques), and have stayed on top of SIGGRAPH (attending regularly, though not annually) since then.
For a more whimsical survey from the pen of a straight-up genius, Jim Blinn's books (e.g. Dirty Pixels) are fantastic reads.
The very limited depth of field of a microscope necessitates doing depth stacking for every field of view. This is done automatically by the apparatus. As a side effect, it gives a heightmap, but with the lens used for the whole-painting scan (what HIROX calls "35x", corresponding to ~5 µm sampling resolution), the elevations are not reliable, especially near the edge of the field. For selected areas, a higher magnification was used that gives much more reliable elevations. Sadly, only a small fraction of the painting was imaged using this higher resolution, so the 3D data is spotty.
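For anyone curious how a heightmap falls out of focus stacking, here is the textbook idea in toy form (this is not the Hirox's actual algorithm, and the function and focus measure are my own illustration): pick, per pixel, the z-slice where the image is locally sharpest, and the winning z is the height estimate.

    import numpy as np
    from scipy import ndimage

    def stack_to_heightmap(slices, z_positions, window=9):
        """slices: list of same-size grayscale arrays at known heights.
        Returns a heightmap and an all-in-focus composite."""
        sharpness = []
        for s in slices:
            lap = ndimage.laplace(s.astype(np.float32))
            # Local energy of the Laplacian as a focus measure.
            sharpness.append(ndimage.uniform_filter(lap * lap, size=window))
        best = np.argmax(np.stack(sharpness), axis=0)       # (H, W) slice indices
        heightmap = np.asarray(z_positions)[best]
        stack = np.stack([s.astype(np.float32) for s in slices])
        all_in_focus = np.take_along_axis(stack, best[None], axis=0)[0]
        return heightmap, all_in_focus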
Yes, because the surface isn't flat. At these magnifications, any given field of view may be tilted toward one of the light sources, changing the relative contribution of the specular reflection off of the varnish as well as the reflected color from the paint surface. The apparatus doesn't have the ability to tilt to maintain a constant angle to the surface; it can only pan in the x-y plane and do focus stacking in the z-direction. Additionally, the left and right light sources were hand-positioned and there is no way to calibrate their exact geometry and relative brightness and color.
This is incredible work. The 3D component is superb. The effort is clearly apparent. I spent half an hour fiddling with the thing before I realized what had happened. This must have significant value to archivists.
I want to plug a documentary in which someone invents a tool in an attempt to replicate a Vermeer. The documentary supposes that Vermeer may have used such a tool. It's an interesting cross section of art, engineering, and history.
I wonder whether, for some of the heavy-brushstroke Impressionists, a lidar scan would almost be appropriate. In some paintings the paint sits in physical clumps on the canvas, giving it a 3D quality of its own that looks different from different angles. I don't think that will be captured in a single image taken straight on.
Good post to plug the documentary "Tim's Vermeer" ( https://www.imdb.com/title/tt3089388/), which is a joy to watch as they try to reverse-engineer the particular style Vermeer painted in.
Yes, because the scan is intended to study and document the painter's technique as well as the state of conservation of the painting. Looking into the abraded painting near cracks, for example, we can see a layer structure that is like taking a virtual cross-section of the painting. (I worked on this project and also made the image of Rembrandt's Nightwatch at http://hyper-resolution.org/Nightwatch).
The painting has also aged & cracked significantly. Does it make sense to look at the painting since it does not look like what the painter originally made?
These are part of the artifact we have, and there are just lovely, amazing details. They capture technique and process wonderfully. I could not be more thrilled to look around these craters & cracks & little particles of paint.
I wouldn't sell Vermeer's eyesight so short. The human eye has roughly 500 Mpixels' worth of resolution. In a healthy individual, the central yellow spot (worth maybe 100 Mpixels) can be brought to perfect focus on a patch no larger than a few square centimeters. Sequentially examining the painting in small patches from very close, you get a similar order of magnitude of resolution.
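As a rough back-of-envelope check: the eye resolves about one arcminute, which at the ~25 cm near point works out to 0.000291 rad x 250 mm, or roughly 70 µm on the canvas, within an order of magnitude or so of the scan's ~5 µm sampling mentioned elsewhere in the thread.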
This certainly has little if anything to do with enjoying a piece of art; it's more an exercise in microscopy. Imagine looking at the magnified pores in the skin of a lovely face...
I don't understand why they chose to present these parts as .jpg. Why go through the effort of creating a 10-billion-pixel masterpiece, only to compress it with a lossy format like JPEG? Or am I missing something?
> I don't understand why they chose to present these parts as .jpg
I suspect that this page is primarily for getting publicity and 'buzz'. If you want the data for actual research I'm sure they have it in much better formats.
It'd be fun to edge-detect the close-in shots at a couple of different frequencies & use the result to build a terrain map. Go run around the cracks in the painting.
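Something like this could get you started (OpenCV, with made-up thresholds and scales; the "terrain" here is just distance-from-edges, not real elevation):

    import numpy as np
    import cv2  # opencv-python

    def pseudo_terrain(gray, sigmas=(2.0, 8.0)):
        """Combine Canny edges at a couple of blur scales into a rough
        terrain map: the fine scale catches cracks, the coarse scale
        catches brushstroke ridges."""
        gray = gray.astype(np.uint8)
        terrain = np.zeros(gray.shape, np.float32)
        for sigma in sigmas:
            blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
            edges = cv2.Canny(blurred, 50, 150)
            # Distance to the nearest edge, so cracks sit at the
            # bottoms of valleys.
            terrain += cv2.distanceTransform(
                (edges == 0).astype(np.uint8), cv2.DIST_L2, 5)
        return terrain / len(sigmas)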