I was amazed until they got to the cat (about 3:00 in). At that point I was blown away. The inside surfaces of the "outside" legs were also accounted for in the mapping, and it was spot on.
Sure seems like something that could be easily remedied in future iterations. They've already figured out the hard part; alignment is a solved problem for all intents and purposes.
If anyone is interested in the other papers being presented at SIGGRAPH 2015, Ke-Sen Huang usually maintains a complete list with links to preprints and videos. You can find the SIGGRAPH 2015 list here:
I love how this is a complementary technology to 3D printing by way of a traditional printer. Simple. Effective. The biggest obstacle to widespread adoption might be software, in other words, a robust toolkit for designing "wraps" of these imprints for your own 3D models.
What kind of surface doesn't fit this computational model? The elephant one, for example: how do you map the region occluded by the elephant's trunk onto the PVA surface? Anyway, really nice work. Looking forward to reading the paper and learning more!
The failure modes listed in the paper are models with a lot of concavities or self-occlusions. From the video, it looks like the elephant cup succeeds because of the surface tension of the water, which lets the surface on each side of the trunk "snap together" as the trunk becomes completely immersed. It's that wrapping and stretching that makes this more than a simple light projection, which would fail due to self-occlusion.
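To make the self-occlusion point concrete, here's a toy sketch (my own illustration, not the paper's algorithm) of a purely straight-line projection: each film cell takes the color of the first surface it meets along the dipping direction, so any surface hidden behind another never receives ink.

```python
import numpy as np

def naive_projection(points, colors, grid=64):
    """points: (N, 3) surface samples with x, y in [0, 1); colors: (N,) labels.
    Each (x, y) film cell keeps only the sample nearest the film along z,
    i.e. the first surface a straight vertical ray would hit."""
    film = np.full((grid, grid), -1)        # -1 = no ink assigned
    depth = np.full((grid, grid), np.inf)   # nearest hit so far per cell
    for (x, y, z), c in zip(points, colors):
        i, j = int(x * grid), int(y * grid)
        if z < depth[i, j]:                 # first hit along the ray wins
            depth[i, j] = z
            film[i, j] = c
    return film

# Two surfaces stacked along z (think: trunk in front of the cup body).
# The nearer one shadows the farther one completely.
pts = np.array([[0.5, 0.5, 0.2], [0.5, 0.5, 0.8]])
cols = np.array([1, 2])
print(naive_projection(pts, cols)[32, 32])  # -> 1; surface 2 gets no ink
```

The paper's fluid simulation is what replaces this straight-line mapping with one that follows the film as it stretches and wraps around such occluded regions.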
Very cool. Anyone know how a sphere (the globe at the end) was printed in a single immersion? Based on what I saw in the video, it seems like it would have to be a different process involving rotating the sphere as it was dipped, rather than the linear motion used for the other objects.
From the paper[1], it sounds like the same linear motion with the film wrapping around to the top:
> The sphere is dipped with its north pole pointing downward. The maximum error on the northern hemisphere is within 2mm. However, near its south pole the error is much larger (about 5mm). This is because after the water surface passes the sphere’s equator, the film gets stretched largely, and near the south pole the relative angle between the water surface and the object surface approaches to 180°, leading to an ill-posed boundary condition for our simulation (recall Equation (1), when θ ≈ 180°).
You can see the potential for a similar wraparound even on e.g. the mask dips.
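For intuition on that quote, a quick back-of-the-envelope check (my own geometry, not from the paper, taking θ as the angle at the contact line between the horizontal water plane and the still-dry surface): parameterize a pole-first dipped sphere by the polar angle φ measured from the pole that enters the water first. The meridian tangent at the contact circle makes exactly that angle with the horizontal, so

$$\theta(\varphi) = \varphi, \qquad \varphi \in [0, \pi],$$

i.e. θ passes 90° at the equator and approaches 180° at the trailing pole, which is precisely where the authors report the boundary condition becoming ill-posed.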
Was the sphere inside the globe framework when it was printed? If it was merely a sphere being immersed, then perhaps a polar projection map would be able to cover all of it.
Don't believe so; a single projection with Antarctica on the outside and the north pole at the centre would likely work just fine, and they showed it top-down.
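For anyone wondering what such a polar map looks like in practice, here's a minimal sketch (my own illustration; the paper may use a different projection) of an azimuthal equidistant layout with the north pole at the film's center and the south pole on the rim:

```python
import math

def polar_map(lat_deg, lon_deg, film_radius=1.0):
    """Map latitude/longitude (degrees) to (x, y) on the film.
    Radius grows linearly with angular distance from the north pole,
    so lat=90 lands at the center and lat=-90 on the outer rim."""
    colat = math.radians(90.0 - lat_deg)   # 0 at N pole, pi at S pole
    r = film_radius * colat / math.pi
    lon = math.radians(lon_deg)
    return (r * math.cos(lon), r * math.sin(lon))

print(polar_map(90, 0))    # north pole -> (0.0, 0.0), film center
print(polar_map(-90, 0))   # south pole -> radius 1.0, on the rim
```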
Do you know what those cloverleaf-shaped metal beams are called, and how you can learn more about building hardware prototypes with those kinds of products?
I know very little about mechanical engineering and hardware prototyping, but I saw those metal thingies about a year ago in a DIY tinkerer community (they were used in a DIY 3D printer), and I have been wondering about the topic ever since.
Thanks! What other items often go together with these aluminum extrusions? Is there a place or a book to learn about this topic, or is it something that people only learn through experimentation and mimicking?
Primarily they are used for building structures quickly and easily - a saw and a wrench are the only tools you need. The standardized brackets for each beam type allow you to make 90 and 45 degree angles.
But they are often used for more than just framing. The 3D printing community has embraced extrusions because you can also use them as bearing surfaces, mount motors and servos, limit switches, etc. Basically anything that has a hole big enough for a machine screw can be mounted to a beam either directly or through an easily made mount (usually to get the angle that you want - all it takes is some sheet metal).
The quickest way to learn is to look at examples. The OpenBeam website has lots of examples. The system is so simple that you can understand exactly what is going on just by seeing a picture.
Wow, this is very cool. They invented a new printing method that takes advantage of the fluid nature of liquid to reach every surface of a 3D object.
FYI, hydrographic printing isn't new... nor is the computational part, to be honest. I assume both of these techniques have been used together before, but what we are seeing here is both very well done and built from off-the-shelf components.
Bonus the-future-is-now moment: "3D vision systems" are "off-the-shelf components".
Around 2:10 he mentions "you can actually set it for project as well", which is a lot closer to what's being done in this case.
Blender has to solve nearly the exact same problem for projections; the only differences are that the mapping has to run backwards, from the surface to a flat texture, and that you have to account for how the film clings to the surface and stretches. Topology and topography aren't an issue, though; that part is well covered. (The material physics, however, is something you would have to build a model for, so if you wanted to solve this precise problem in Blender you might have better results with the physics engine.)
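For reference, here is roughly how that kind of projection is set up through Blender's Python API. This is an untested sketch, and the object names ("Mask", "Projector") are placeholders:

```python
import bpy

obj = bpy.data.objects["Mask"]        # the mesh being textured
cam = bpy.data.objects["Projector"]   # a camera acting as the projector

# A UV Project modifier pushes the camera's view onto the mesh's UVs.
mod = obj.modifiers.new(name="FilmProjection", type='UV_PROJECT')
mod.projector_count = 1
mod.projectors[0].object = cam
mod.uv_layer = "UVMap"                # existing UV layer to overwrite

# Note: this is a pure straight-line projection; it has no notion of
# the film stretching or clinging, which is exactly the part the
# paper's fluid simulation adds.
```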
I have seen some variation on this available in even low-end 3D modelers since the 90s. IIRC Truespace's version of the feature did shrink-wrapping by running a simulation, much like they do here, but with different physics.
I thought I had even seen this used in printing before but I could be mistaken.
This is seriously cool.