You're right, and sorry if my other comment didn't make sense. I was talking about both the data itself and the YCAdvice discovery and search interaction layer built on top of that data.

We did construct the spreadsheet first (a lot of the categorization was automatic), but the real value, IMO, is being able to understand and make use of that multi-dimensional data really easily. The goal is to explore efficiently and develop intuition quickly about any topic. The actual spreadsheet is here so you can see the difference: https://docs.google.com/spreadsheets/d/1xTMF_t_EDG34IjnXo-ho...

We're also improving Polymer's stack so it can auto-convert spreadsheets with numerical data and generate good visualizations, so it can augment human intelligence beyond the purely categorical data in this case.
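
To make the column-level part concrete, here's a rough sketch of the kind of inference involved (illustrative only, not Polymer's actual pipeline; the thresholds and names are made up):

    # Illustrative sketch only, not Polymer's actual code: guess a default
    # presentation for each spreadsheet column from its inferred type.
    import pandas as pd

    def suggest_views(csv_path):
        df = pd.read_csv(csv_path)
        views = {}
        for col in df.columns:
            series = df[col].dropna()
            if pd.api.types.is_numeric_dtype(series):
                # Numerical columns lend themselves to histograms / range filters.
                views[col] = "histogram"
            elif series.nunique() <= 30:
                # Low-cardinality text works well as categorical facets.
                views[col] = "facet_filter"
            else:
                # Everything else falls back to free-text search.
                views[col] = "text_search"
        return views

The real signals are richer than this, of course, but that's the shape of the problem.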

Let me know if I can clarify anything. Thank you for checking it out.




So the visual/interaction layer is generated somehow based on the underlying spreadsheet? How are those interface decisions made, and how does the UX/UI change from one spreadsheet to another?


Hmmm, good questions. When I was at Google, I saw a lot of different kinds of datasets, but I also saw commonalities, not just in data types but in data abstractions, i.e. the kinds of things people like to do with that data. The goal (a difficult one, for sure) is: can we automate a lot of that for any dataset, so the result is still unique and powerful for that particular dataset, yet can be built by anyone without writing code or a ton of manual customization?
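
To give a flavor of what that automation might look like (a hypothetical sketch, not our internal logic; the role names and widget labels are made up):

    # Hypothetical sketch, not our internal logic: map inferred column roles
    # to the interface widgets a generated view would expose.
    ROLE_TO_WIDGET = {
        "categorical": "facet filter with counts",
        "numeric":     "range slider plus distribution chart",
        "datetime":    "timeline filter",
        "free_text":   "keyword search box",
        "url":         "clickable link column",
    }

    def plan_interface(column_roles):
        # column_roles: {column_name: role}, as inferred from the data.
        return {col: ROLE_TO_WIDGET.get(role, "plain table column")
                for col, role in column_roles.items()}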

In summary, there's a fairly involved process behind those decisions. Feel free to email me at ash [at] polymersearch.com if you want to dig deeper.


Thanks for the additional info.



