Hacker News
Show HN: ThreadTone – Halftone images made of thread (thevelop.nl)
303 points by theveloped on Dec 25, 2016 | 44 comments



A couple thoughts:

Unless your image library is way fancier than I imagine, you would get a much less biased result if you convert to a linear color space. This code:

  # Invert grayscale image
  def invertImage(image):
      return (255-image)
doesn't accurately compute the amount of thread cover desired, because "image" isn't linear in brightness (sRGB values are gamma-encoded).
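A minimal sketch of that linearization step, assuming 8-bit sRGB input (the function names here are mine, not from the project's code):

```python
import numpy as np

def srgb_to_linear(image):
    """Convert 8-bit sRGB values to linear light in [0, 1]
    using the standard sRGB transfer function."""
    c = np.asarray(image, dtype=np.float64) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def invert_image(image):
    """Invert in linear light, so darkness is proportional
    to the desired thread cover."""
    return 1.0 - srgb_to_linear(image)
```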

For this particular use case, though, you probably want to further transform the output. Suppose that one thread locally reduces light transmission by a factor of k. Then two reduce by k^2 (assuming there's enough blurring, everything is purely backlit, no reflections etc), and so on. So n reduces the log of the brightness by n*log(k). I would try calculating -log(luminosity) and fitting threads to that.
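Sketched out, under the multiplicative-attenuation assumption above (a small epsilon guards against log(0); this is my illustration, not the project's code):

```python
import numpy as np

def thread_target(luminosity, eps=1e-3):
    """Target thread-cover map: n overlapping threads cut brightness
    to k**n, so the cover needed at each pixel is proportional
    to -log(brightness)."""
    return -np.log(np.clip(luminosity, eps, 1.0))
```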

Finally, this sounds like a wonderful application of compressed sensing. I bet you could get an asymptotically fast algorithm that is reasonably close to optimal.


I like the assumption you make on the transmission factor. One can implement it fairly easily by replacing:

    imgMasked = np.subtract(imgMasked, lineMask)
with the following lines:

    lineMask = np.subtract(np.ones((height, width)), lineMask)
    imgMasked = np.multiply(imgMasked, lineMask)
lineWeight is now equal to the k you propose.
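Put together, the multiplicative update would look something like this (variable names follow the snippet above; lineMask is assumed to hold lineWeight, i.e. k, on the line's pixels and 0 elsewhere):

```python
import numpy as np

def apply_line(imgMasked, lineMask):
    """Multiplicative thread model: each thread scales the remaining
    darkness budget along its path instead of subtracting from it."""
    transmission = 1.0 - lineMask      # 1 off the line, 1 - k on it
    return imgMasked * transmission
```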


I would look at tomographic reconstruction (https://en.wikipedia.org/wiki/Tomographic_reconstruction) algorithms for this for inspiration. This is a very similar problem: what projections best approximate a given image?


This is a good idea. Each thread is basically a single point in the 2D Fourier transform of the image, so an approximate image would sample that space accurately. Not sure what "accurately" means exactly, since the Fourier space isn't 1 or 0 (thread or no thread). Maybe include dithering to approximate?


I think you're thinking of the Hough transform, not the Fourier transform. https://en.wikipedia.org/wiki/Hough_transform



How do you figure that each thread is a point in the 2D Fourier transform? A thread looks nothing like the ifft of a single non-zero pixel in Fourier space.


Sweet! I had not thought of tackling the problem from this angle. This might indeed give some very interesting results!


I had the same thought - VERY cool project and fantastic that you took it all the way through the automated fabrication. Impressive already, but I bet you can improve the optimization to get even better results.


This is so weird, I just decided to make one of these thread images last night and came up with the same algorithm which runs in about 10 minutes. But I think the loom robot is the real piece of work here. It takes me about an hour to string up 3000 passes manually, but on the other hand, I like my design of nails on a painted piece of plywood better. :) Nice work!


Very interesting! The inspiration behind this post, artist Petros Vrellis, uses a computer algorithm too: http://artof01.com/vrellis/works/knit.html


I did something like this with a pen plotter and CMYK pens. I biased the random walk to follow/bounce off edges.

http://imgur.com/a/n7OOd


OMG, thank you! I am the creator of https://comments.network/ which you are using on the bottom of the article, and it's the first time I spot it in the wild! If you have any problem/suggestion/anything just write me: public@francisco.io

BTW I love what you made, have you considered selling those? It looks like it could have a similar business model as instapainting: http://instapainting.com/


    I am the creator of https://comments.network/ which
    you are using on the bottom of the article, and it's
    the first time I spot it in the wild!
That's pretty cool! I made my own version of this a few years ago, and if anyone is interested it's open source: https://github.com/jeffkaufman/webscripts/blob/master/commen...

(Mine can also do facebook and google plus, though to do facebook it needs an API key.)


That's amazing. I came across your project a while back when you posted it on HN and love the idea. Would be great to see it get some wider adoption and can't wait for version 1.0!

Selling is not going to be my intent. I feel this should be left to the artist behind the original idea. But indeed instapainting's model seems like a nice fit.


Very cool. I wonder if there could be an improvement in image quality if you added some form of lookahead, and chose the path that gives the best results. The branching factor would be horrendous, but I suspect that it could be pruned significantly using some A*-ish cost/ordering criterion.


You most definitely could! While experimenting, however, it turned out that the quality of the image is far more sensitive to the number of pins on the loom, and it is this that quickly escalates the branching factor.

Increasing the number of pins thus gave a far better result for the same amount of computational power than the improved algorithm you propose.


True, but an increased number of pins is slower on the plotter and making the scaffolding is also harder. Plus it is easier to rewrap the loom better than to make a finer loom.

I suspect that a look-ahead of 2 or 3 will give a noticeably better result, particularly if you combine it with the length-normalised score suggested by another commenter. Using Dijkstra's algorithm and taking the first path that reaches length N should also keep the branching reasonable.
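A rough sketch of such a brute-force look-ahead (the `score_line` function and pin list are stand-ins; real code would share these with the greedy loop):

```python
import itertools

def best_next_pin(current, pins, score_line, depth=2):
    """Pick the next pin by scoring every pin sequence of length
    `depth` starting from `current`, then taking the first step of
    the best-scoring sequence."""
    best, best_total = None, float("-inf")
    candidates = [p for p in pins if p != current]
    for path in itertools.permutations(candidates, depth):
        total, prev = 0.0, current
        for pin in path:
            total += score_line(prev, pin)
            prev = pin
        if total > best_total:
            best, best_total = path[0], total
    return best
```

Pruning with an A*-style bound, as suggested above, would replace the exhaustive `permutations` loop with a priority queue over partial paths.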

If you haven't seen it, there was something similar (algorithmically) for the FaceBull problem, which should still be kicking around online somewhere.


There seems to be one potential improvement: the original artist's algorithm appears to privilege detail in highly detailed areas over the overall correctness of line positioning.

(Might be that it tries to optimize for edge contours)


Agreed! I think there should be quite some improvement possible for the fine details. I'm playing with some ideas although any help is welcome!


Tobias I think it's extremely cool, nice work.

I am curious, how would the final quality of the result be affected by:

- Changing the shape to, say, a square. Or maybe a circle is optimal given a limited number of points.

- Increasing the number of endpoints. There must be some limit to this. For example, even an infinite number of endpoints would not look like a photo, but how much better could it look?


Thanks!

- For now I have only looked at circular looms, although various shapes would indeed give some interesting results. A circle, however, gives a nice even distribution of possible lines, and thus resolution.

- The number of endpoints greatly improves the resolution of the final image. The smaller the number of endpoints, the more prominent the actual spatial patterns of the loom become.


That's an interesting idea, though it'd give you less resolution in the center than at the corners, which is the opposite of what you'd want?

Might be cool to try including pin position as part of the optimization.


As a physicist, my first approach would be using the Radon transform [1]. It should be tweaked a little, as overlapping threads are not additive in color (for black threads it's multiplicative, though).

[1] https://en.m.wikipedia.org/wiki/Radon_transform


You can move between additive and multiplicative with logarithms and exponentials. After that, yeah, using the Radon transform seems like a great idea.
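The log trick in code form (my sketch, not from the project): work in -log(brightness) space, i.e. optical density, where overlapping black threads become additive and Radon-style reconstruction applies directly.

```python
import numpy as np

def to_density(brightness, eps=1e-3):
    """Multiplicative attenuation -> additive optical density."""
    return -np.log(np.clip(brightness, eps, 1.0))

def to_brightness(density):
    """Back from additive density to multiplicative brightness."""
    return np.exp(-density)
```

Two overlapping threads multiplying brightness by k each correspond to simply adding 2 * (-log k) in density space.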



Very cool! How much is it affected by the method of new pin selection after each time we add a line? I.e. currently you are doing oldPin = bestPin, and the first pin is selected at random... wouldn't it be better to just add lines based on a Hough transform (with rounding to the nearest pins) - starting from the line which covers the maximum number of points, and going down?


The current implementation is very similar to a Hough transform limited to one dimension (the angle). Your approach would allow a little more variation and thus a better image. In practice, however, this would mean one would have to retie the thread at both the start and the end of every line.
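The greedy scheme being discussed, roughly (the scoring function and line rasterization are simplified stand-ins for the project's actual code):

```python
def greedy_thread_path(start_pin, pins, score_line, n_lines):
    """Greedy string-art loop: from the current pin, pick the pin whose
    connecting line scores best, then continue from there
    (the oldPin = bestPin step in the original code)."""
    path = [start_pin]
    old_pin = start_pin
    for _ in range(n_lines):
        best_pin = max((p for p in pins if p != old_pin),
                       key=lambda p: score_line(old_pin, p))
        path.append(best_pin)
        old_pin = best_pin
    return path
```

Because each line starts where the previous one ended, the thread never needs to be cut and retied, which is exactly the practical constraint mentioned above.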


You are right, I didn't really think about the practical implications. oldPin = bestPin makes a lot more sense now :) Thanks.


You could maybe guide the thread around neighboring pins to reach the next optimal start pin instead, at least over short distances.


Really nice! In your fitness function, wouldn't you prefer to normalize somehow for the line's length? I.e., longer lines (going between two far-away pins) will cover more dark pixels, and are more likely to be picked, even though they might also cover many bright pixels. I wonder if the average darkness of pixels along the line would work better?


Normalizing would most likely only change the order of chosen paths. I do feel there would be something to be gained from penalizing the crossing of light areas. I dare say this would increase the quality of finer details.
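A hedged sketch of the two scores being compared (extracting the pixel values along a candidate line is a stand-in for the project's actual masking code):

```python
import numpy as np

def line_score_sum(darkness_along_line):
    """Original-style score: total darkness covered. Favors long lines,
    even ones that also cross many bright pixels."""
    return float(np.sum(darkness_along_line))

def line_score_mean(darkness_along_line):
    """Length-normalized score: average darkness along the line, so a
    short line through a dark area can beat a long line through gray."""
    return float(np.mean(darkness_along_line))
```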



Did you make the 200-pin loom or buy it? I can't seem to find such a product online.


The loom was lasercut using the facilities at my uni. I believe, however, the original artist used a bicycle rim as a base.


I did something like this myself:

https://github.com/danielvarga/string-art

My algorithm uses a bit fancier math: I reformulate the approximation problem as a sparse least squares problem over positive integers, solve the relaxation, and truncate and quantize the solution. It works quite well in practice, check out the images.
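That formulation in rough code form (my reading of the approach; here A is a matrix whose columns are hypothetical rasterized chord masks, b is the flattened target darkness, and a dense solver stands in for the sparse one a real implementation would use):

```python
import numpy as np

def solve_string_art(A, b, max_threads=5):
    """Relax 'how many times is each chord used' to real numbers,
    solve the least squares problem, then clip and round back to
    small nonnegative integer thread counts."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # continuous relaxation
    return np.clip(np.round(x), 0, max_threads).astype(int)
```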


That's awesome, nice work!

Personally, I can't say I understand image pre-processing well, but did you notice that the original Petros Vrellis images have deeper black in some parts, like face edges or hair, and much lighter parts in cheeks and foreheads, which creates a more detailed portrait?

Also, this name reminds me of the Jason Bourne movies, as they have Operation Treadstone there.


I wonder if anyone still has some automated wire wrap equipment that could be programmed to do this kind of thing.


An interesting modification could be to have a small straight wall instead of each pin. A ball, whose trail remained visible, could bounce off each wall and create the image. The orientation of all the walls would determine what image was produced.


I worked on a script to make a similar effect for my avatar picture, but never got it working, so hats off to the author!


This is very, very cool. Thanks for sharing!


Where could I get IRL versions of this? (Actually spun, but printed is OK.)


If you are looking for an actual spun portrait I would contact Petros Vrellis the artist behind the original idea. A printed version could be made yourself using the script on GitHub https://github.com/theveloped/ThreadTone.


Like what you made here.



