
To me the most interesting bit in the article is that they managed to scrape Google results at scale.


This is sad. If the report is correct, Robinhood might be negligent here. I don't know if there's a securities-law requirement, but Schwab has 4 levels of option trading [0], and you'd need to apply and get approved. I've been investing/trading for more than 10 years now, but they've only given me level 0, and rejected my level 1 application. It seems this guy's trade was at level 2.

[0] https://help.streetsmart.schwab.com/pro/4.36/Content/Option_...


Everything is factored into the real estate price: the weather, the school district, the nightlife, the ethnic grocery stores, the crime rate, etc., and last but not least, the opportunity to change to a new job with much higher comp.

Real estate, while not as liquid as the stock market, is quite efficient in my opinion.


It's not that hard to DIY. You can start with: https://facebook.github.io/planout/
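For a taste, a minimal experiment with PlanOut's Python API looks roughly like this (the class and parameter names here are made up for illustration):

    # Minimal PlanOut experiment: each user is deterministically hashed into an arm.
    from planout.experiment import SimpleExperiment
    from planout.ops.random import UniformChoice

    class ButtonColorExperiment(SimpleExperiment):
        def assign(self, params, userid):
            # Assignment is stable per user across calls.
            params.button_color = UniformChoice(
                choices=['blue', 'green'], unit=userid)

    exp = ButtonColorExperiment(userid=42)
    print(exp.get('button_color'))  # the same user always gets the same arm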


> startups have tremendously high variance. Most startups are st. There's a few of them that are really exceptional, and if you land at an exceptional one, you can do really well. If you randomly pick something, you'll probably have a bad time.

Great words of wisdom. How does one find a great one though? Even for YC with their expertise and experience, most of the startups they fund are not great.


Are they trying to solve hard problems? Do they have talented engineers? (Though I guess you only find out after working there for some time)


That just goes to show that Pearl's approach to causal reasoning is solid. Instead of looking for patterns and correlations in data, one could consider building causal graphs from domain knowledge.
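As a toy illustration of what encoding domain knowledge as a causal graph can look like (a sketch only; the variables and edges are made up):

    # Write down the assumed cause-and-effect structure as a DAG, rather than
    # mining the data for correlations.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ('ad_spend', 'site_visits'),
        ('site_visits', 'sales'),
        ('seasonality', 'ad_spend'),   # confounder: drives both spend and sales
        ('seasonality', 'sales'),
    ])

    assert nx.is_directed_acyclic_graph(g)
    # With the graph fixed, you can reason about which variables to adjust for
    # (here, seasonality) before estimating the effect of ad_spend on sales.
    print(list(nx.topological_sort(g)))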



ok, I'll be the one to ask. How much is this check worth now?


Probably $3,000 and $12,000, respectively?


GP is asking how much this investment translated into.


2007 Dollars != 2018 Dollars.


Assuming that's what was meant, and not what the other comments did:

$3000 -> $3614.05 according to https://www.bls.gov/data/inflation_calculator.htm
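(Applying the same implied factor to the other check, as a quick arithmetic sketch:)

    # Implied inflation factor from the BLS figure above, applied to both checks.
    factor = 3614.05 / 3000            # ~1.205, i.e. about 20.5% from 2007 to 2018
    print(round(12000 * factor, 2))    # the $12,000 check: roughly $14,456 in 2018 dollars
    print(round(15000 * factor, 2))    # the combined $15,000: roughly $18,070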


Probably around 0.06 * 10,000,000,000

Assuming no subsequent investments, dilution, selling of shares, etc.


So $600M is the upper bound. Drew Houston owns roughly 25% of the company now, and he probably owned ~75% when YC invested, for a dilution rate of 3. I'd say the YC investment probably has a dilution rate of 10, putting the real amount around $60M. Not bad, but not WhatsApp scale either.
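Spelling that back-of-the-envelope math out (all of the inputs are the rough guesses above, not real figures):

    # Rough value of YC's Dropbox stake, using the guesses in this thread.
    initial_stake = 0.06                        # ~6% for the YC check
    valuation = 10_000_000_000                  # ~$10B
    upper_bound = initial_stake * valuation     # $600M with zero dilution
    dilution = 10                               # guessed dilution factor for YC's stake
    estimate = upper_bound / dilution           # ~$60M
    print(upper_bound, estimate)                # 600000000.0 60000000.0
    print(estimate / 15_000)                    # ~4000x on the original $15k check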


I know. Sucks to be them, right?

Helping other people realise their dreams and only making 8/9 figures in the bargain. Can't imagine why they would bother.


Dumb question: Why would YC get diluted so much more than DH? I've started to read up on venture financing and this doesn't match my mental model for any of the mechanisms I've learned about. (Like if anything I'd expect YC to have less dilution due to follow-on/pro rata investments, if they did any.)


I believe that when founders' stock is fully vested, additional grants are an option to retain the founders beyond 4 years (or whatever the founders' stock vesting period is). 11 years as CEO likely led to additional grants. For a much bigger example, look at the additional grant Musk got this past week for Tesla.


Using that math, the initial investment of $15,000 returned ~4,000x.


Hahaha. This comment is hilarious.


The gap between the levels of abstraction at which humans and machines operate is much bigger than most AI researchers think. No amount of computation for various kinds of gradients can compensate for that. The next AI breakthrough will be a radical development in knowledge modeling and representation.


I suspect perhaps that the AI community were used to (for decades) solving _no_ problems. Now that _some_ problems are solved in their field (e.g. facial recognition for social networking purposes, playing pong, playing chess) the thinking is that now all problems are going to be solvable. I think we are going to learn that this isn't the case. Perhaps there is just a threshold of problem hardness beyond which we can't get yet at any particular point in technology-time, or perhaps there's a hard barrier waiting to be discovered beyond which current approaches just can never take us regardless of cleverness or hardware speed/density?


>> I suspect perhaps that the AI community were used to (for decades) solving _no_ problems.

Where's that coming from? There have certainly been some important advances recently, but to claim that no progress was made is strange.

Just to give one blatant example, Deep Blue defeated Kasparov in 1997; Chinook had fully solved draughts (checkers) by 2007; TD-Gammon played backgammon consistently at world-champion level by 1995; two computer programs, JACK and WBRIDGE5, won five bridge championships between 2001 and 2007. All of those are 10 years older than AlphaGo/Zero, and each has a very long history, going back to the 1950s in the case of draughts AI [1].

You probably haven't heard about most of them because they were not advertised by a company with the media clout of Google or Facebook, but they were important successes that solved hard problems. There are many, many more results all over the AI bibliography.

And, just to settle this once and for all: this bibliography starts a lot earlier than 2012.

_______________

[1] All of that is in "AI: A Modern Approach", 3rd ed.


Wasn't it sort of the same way during the first AI boom? Expert systems provided some real value in limited domains, and sexy prototypes working with a simplified blocks world hinted at something powerfully world-changing. Then...

> the thinking is that now all problems are going to be solvable

...and then failure and the "AI winter" for a generation after the initial promise had been discredited.


Either humans will drop various language variations to accommodate AI or it will take a looooooong time.


Probably some convergence between the two. You're probably used to changing some syntax around to get Siri/Alexa/* to understand you. That puts some mental tax on you, but you get used to it. Devices will seek to lower that mental tax, but ultimately you'll probably get used to it enough that the tax will feel negligible, and devices won't need to evolve the syntax much past a certain point.

What seems missing in a lot of these threads is the idea of "context", and I think that's where there's lots of room for innovation. Current voice-assistants work "okay" for single-sentence queries, but if the device doesn't understand (or if I fail to phrase things in a way that it's expecting), it doesn't ask clarifying questions, and it doesn't use past exchanges to inform future ones (beyond perhaps some voice training data). It also limits the kinds of things it can do by requiring that all of the necessary information be presented in one utterance. It also raises the "mental tax" on doing "real things" because I know I have to say a long phrase just right or start over, and that's sure to raise anyone's anxiety-levels...
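To make the "context" point concrete, here's a rough sketch of what carrying state across turns could look like (entirely hypothetical; not how any shipping assistant actually works):

    # Hypothetical multi-turn assistant: keep partial requests in a context dict
    # and ask a clarifying question instead of failing on one long utterance.
    context = {}  # intent -> slots collected so far

    def parse(utterance):
        # Stand-in for a real NLU step; returns (intent, slots).
        if 'alarm' in utterance:
            return 'set_alarm', {'time': '7am' if '7' in utterance else None}
        return 'unknown', {}

    def handle(utterance):
        intent, slots = parse(utterance)
        for k, v in context.get(intent, {}).items():
            if slots.get(k) is None:
                slots[k] = v                      # fill gaps from earlier turns
        missing = [k for k, v in slots.items() if v is None]
        if missing:
            context[intent] = slots               # remember what we have so far
            return "What %s should I use?" % missing[0]
        context.pop(intent, None)
        return "OK: %s with %s" % (intent, slots)

    print(handle('set an alarm'))          # asks for the time
    print(handle('set an alarm for 7'))    # uses the remembered context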


> Current voice-assistants work "okay" for single-sentence queries

They might work "okay" if you're a native speaker. As an ESL speaker with an accent, it's nowhere near close. It's a total PITA beyond "what time is it".


Not only do you have to be a native speaker, you also can't speak with an accent. Most speech recognition currently only works with very "clean" language, free of regional expressions or accents.


Good point. I changed Siri to Spanish to practice my speaking and comprehension, and it was a huge challenge. I assumed it was because Spanish support wasn't as developed as English, but I guess it's that there's not enough training for accents in any language.


Your Google search queries use English words, but do not resemble English. (Or at least, I hope they don't.) Humans adapt to tech just fine.


My money is on humans adapting to AI.


I'd say the choice really depends on what you are trying to do. Python, Java (and even Golang) all have their pros and cons. If you are doing personal projects that involve data processing, and maybe a simple webapp, nothing beats Python and its ecosystem. If you are planning to grow a team, and the projects are enterprise-ish, then Java is a good contender. In my case, while I wrote some 30K lines of Golang in my last job a few years back (and still get compliments from the current maintainers to this day), I just tolerated the language and never enjoyed it. Before that, I did a lot of Java, and before that C and Perl. I don't program daily anymore and only play with data science / ML ideas these days, so I do Python (mostly in Jupyter notebooks) and run the scripts on real datasets on remote servers, with a simple Flask app to display the results/charts.
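(For what it's worth, the "simple Flask app" part really can be tiny; here's a sketch, with the route and file names made up:)

    # Minimal Flask app to serve results/charts produced by offline scripts.
    from flask import Flask, send_file

    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'Latest results: <img src="/chart">'

    @app.route('/chart')
    def chart():
        # Assumes the analysis script already wrote a chart image to disk.
        return send_file('results/latest_chart.png', mimetype='image/png')

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)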

