A.I. Has Arrived in Investing, Humans Are Still Dominating (nytimes.com)
215 points by smollett on Feb 2, 2018 | 124 comments



Everything is always painted in such an adversarial light; it makes you despair sometimes.

I think The Atlantic's recent article on this topic offers a more nuanced take[1]; human-machine cooperation is probably where the big money will be. Companies that seek to cut people out of the loop will probably run into a lot of problems, as will those that smash the looms. Trying to smooth the interface between AI/ML conclusions and human oversight is probably going to see the most success.

[1]: https://www.theatlantic.com/education/archive/2018/02/employ...


> human-machine cooperation is probably where the big money will be.

As it has been for as long as machines have existed, really. This reminds me of Douglas Engelbart and his vision for computers. I'll cite the section of his wikipedia page that paraphrases an interview with him from 2002[0][1].

> [Douglas Engelbart] reasoned that because the complexity of the world's problems was increasing, and that any effort to improve the world would require the coordination of groups of people, the most effective way to solve problems was to augment human intelligence and develop ways of building collective intelligence. He believed that the computer, which was at the time thought of only as a tool for automation, would be an essential tool for future knowledge workers to solve such problems.

He was right, of course, and his work led to "The Mother of All Demos"[2].

Machine learning is the next step in using computers as thought enhancement tools. What we still need to figure out is an appropriate interface that is not as "black-boxy" as "we trained a neural net, and now we can put X in and get Y out".

EDIT: Now that I read that quoted section of wikipedia again, it's funny to note that computers were "only seen as tools of automation", and how modern fears of AI are also about automation. Automation of thinking.

[0] https://en.wikipedia.org/wiki/Douglas_Engelbart

[1] https://www.youtube.com/watch?v=VeSgaJt27PM

[2] https://www.youtube.com/watch?v=yJDv-zdhzMY


It's funny that you bring that up - it does seem like the concept of 'extended cognition' is one of the biggest benefits that we've collectively realized from computers (and other relatively nonvolatile communication mediums like books.)

This is a computer-oriented analogy, but most fields have their own tables and charts and maths that are tedious to keep on the tip of your mind. Still, for example, I don't need to remember the details of every API that I use; I can just remember that there is a 'do X' call available, and refer to the documentation when and if I need to actually use it.

In the same vein, I can quickly get a feel for whether an idea is possible by stringing together a bunch of abstract mental models. "Can I do X?" becomes, "are there good tools available for doing A, B, C, and D?", and that information is only a quick search away. Actually using those tools involves an enormous amount of detail, but it's detail that I can ignore when putting an idea together.

And in most cases, that 'detail' is a library or part that already abstracts a broad range of deeper complexities into something that I don't have to think about.

The question becomes something like: how do we expose people to enough information that they are aware of how much they can learn if they need to, without drowning them in trivia that they will never be interested in?


Your example also relates to my experience briefly working in a P&G chemicals R&D lab; the ChemEs around me routinely used Google to look up reaction kinetics of different compounds (as well as other similar queries) rather than rely on their memory. I was attending a local university at the time (mostly for calculus, mathematical modeling using Mathematica, and French), but I'd say this experience is largely what started me questioning the value of attending university in general (I dropped out of an Ivy about two years later, for this reason among others).

I suspect that the concept of 'extended cognition', as people actually use computers day to day to get work done, is in conflict with how most of us are taught: rote memorization, then application of that information. It should naturally follow that those who are heavily invested in or exposed to 'non extended' cognition services have relatively more to lose, and that any currently realistic answer to this:

>The question becomes something like: how do we expose people to enough information that they are aware of how much they can learn if they need to, without drowning them in trivia that they will never be interested in?

will bring cognitive dissonance to those who need the answer most (those with heavy exposure to relatively 'non extended' cognition services).


When you're looking at effects, I think you need to dig down into what exactly is being extended.

Are more data sources being made available? Is data being preprocessed? Is an initial task being automated?

Because the truth for any worker (at anything less than a ruthlessly specialized huge company) is that they may be an "extended cognition" worker, but still perform many "non-extended cognition" activities as part of their job, because there was previously no alternative and the work needs to get done.

Fast forward that, and you're never going to fully automate a goal. But you will automate sections of the process that are amenable to machines.

Advice? Recognize which type of work you spend most of your time in, and don't get caught being the "non-extended cognition" person...


Ironically enough, Engelbart was often derided by his colleagues at the time who thought hard AI was just around the corner and so all of this intelligence augmentation stuff would be obsolete soon enough. Today we are closer than ever (always just 20 years away!), but still IA rather than AI is very much the way to go.


(in case you ever check back on an old comment)

Do you have any sources for this? I find "sentiment at the time" especially hard to find, historically speaking.

And I'd be fascinated to read something about this.


This is a fantastic point, and I think a lot of AI development that goes in the direction of trying to replace human beings is essentially absurd. We already have humans, why would we want something that can do what humans can already do? Rather, we want something that extends the capabilities of humans into areas where they aren't proficient. For instance, why do we put all this effort into natural language processing when humans are already totally optimized for it? What we need is a solution to scaling, not a solution to NLP itself.

EDIT: To expand; one way to do something like Siri would be to have a system that routed requests to human operators. The human operator would give the correct answer to the request, and then the system would use that as training data. If the system was reasonably confident it already knew the answer to the request from previous training data, it would answer right away, but if it was below a certain confidence it would route to a human. This seems like the smartest way to leverage machine learning in these kinds of scenarios, and I'd be surprised if someone hasn't already tried it or something similar in the past.
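(Not any shipping assistant's actual design — just a minimal sketch of the confidence-gated routing described above. The classifier, threshold, and human-operator hook are hypothetical stand-ins.)

    # Minimal sketch of confidence-gated routing: answer when sure, escalate to a
    # human otherwise, and keep the human's answer as new training data.
    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    @dataclass
    class HumanInTheLoopAssistant:
        classify: Callable[[str], Tuple[str, float]]   # request -> (answer, confidence)
        ask_human: Callable[[str], str]                # request -> operator's answer
        threshold: float = 0.9
        training_data: List[Tuple[str, str]] = field(default_factory=list)

        def handle(self, request: str) -> str:
            answer, confidence = self.classify(request)
            if confidence >= self.threshold:
                return answer                                    # confident: answer directly
            human_answer = self.ask_human(request)               # otherwise route to a person
            self.training_data.append((request, human_answer))   # retrain on this later
            return human_answer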


Some of the most successful investors, like Warren Buffett, are old school and also operate on a completely different level than most investors. Buffett can force companies to change course, and after the 2008 crash he was able to come in and offer deals to big banks in return for preferred shares.

It would be difficult to see how to apply narrow AI to this kind of thing. It seems really good for routine tasks, like high frequency trading, but maybe not so great for these big one-off deals which constitute many of the best investments.

Of course, Buffett might still benefit from AI analysis of broad market trends and the like. If I'm wrong, I'd be interested to know.


> It seems really good for routine tasks, like high frequency trading, but maybe not so great for these big one-off deals which constitute many of the best investments.

I understand what you mean from an opportunity identification perspective, but you have to keep in mind that even the "big one-off deals" require routine tasks at a lower level to verify the merit of such deals. If you think about tasks like reviewing financial statements, AI could provide faster evaluation and potentially identify trends that would elude a human analyst. In any case, Buffett is known for avoiding investments in companies he doesn't deeply understand and I would bet the same stance holds for employing new technology in his investment process.


I don't see any reason to believe this isn't happening already. As old school as Buffett is (and the man is a legend in my eyes), it just doesn't seem a stretch to think that they're running at least some deep-learning-based analysis over the deals they consider.

Whether or not they give any weight to the output is quite something else, but it would be such a (relatively) trivial cost to get that answer that it seems less likely they'd not do it.


I'm pretty sure Buffett doesn't use that stuff. Although he reads the papers so he may pick up on things other people have done.


Buffett's investing algorithm hasn't changed in 65 years. Every day he reads financial reports. If he likes a company's business model, he makes a mental estimate of the company's value, and then looks up their trading price. If there is a big enough discrepancy in his favor he will start buying.
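(A toy restatement of that decision rule; the 30% margin of safety and the numbers are made-up placeholders, not Buffett's actual parameters.)

    # Value-investing decision rule as described above, with placeholder numbers.
    def should_buy(estimated_value: float, market_price: float, margin_of_safety: float = 0.3) -> bool:
        """Buy only if the price sits at a big enough discount to estimated value."""
        return market_price <= estimated_value * (1 - margin_of_safety)

    print(should_buy(estimated_value=100.0, market_price=65.0))  # True: big discount
    print(should_buy(estimated_value=100.0, market_price=90.0))  # False: not cheap enough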


Buffett was a small-time investor at some point. He just made a lot of good investments over a long time.


He also had a very very good mentor who wrote the original book on securities analysis.


This.

Also check out The Intelligent Investor, which is arguably Ben Graham's most accessible work on the topic.


Absolutely. The modern copy I have includes some commentary comparing Graham's analysis of problems in the '50s and '60s with problems in the dot-com crash, and noting how strikingly similar they were.


[flagged]


Given his level of success, even if his talents are considerable, he must've also been extremely lucky and thus even starting a day later or earlier would've probably crushed his entire life trajectory.


I agree that he was lucky in starting at the general time he did, but I think that claiming the exact day was critical is going too far. It's like the person who supposedly said that if the earth was an inch farther away from the sun it would freeze and an inch closer it would cook.


Even a couple of inches, extrapolated over time, can have a massive effect on the final trajectory.


Buffett started out as an expert in the kind of arbitrage deals that are much more available in bad markets than good. Still, he was able to average 40% returns in his first decade despite the lack of a good bear market.


LOL, no. Buffett's success has been built year by year for over 60 years. Take his best year out and his success is basically the same.

Note also that he's intentionally hamstrung himself for the last 46 years or so by investing through Berkshire Hathaway. Had he kept investing through his partnerships he would have made another $80 Billion from performance fees. Had he just invested his own money his returns would have been much higher, given he'd have far more investment opportunities.


He's still doing well in the current world.


It's easier to do well if you have more money than everyone else.


Yes, not sure why you are called out. You can influence the system to your benefit when you have that kind of wealth


It's actually way harder to earn high returns on $400 Billion than it is on $4B. Buffett's returns have been declining for 50 years because of that. As his portfolio size grows, it increasingly limits his investment options.


At a certain point humans and AI can also play off one another predictably.

Everyone has played games where the AI can beat you in a straight shot, but you can lead the AI into situations that are predictable where you can gain a predictable advantage, and vice versa with humans.

Example: the buy-on-the-dip strategy and technical strategies. Big players could drive down the market, and HFT can buy the dip based on fundamentals. Bad news floods the market and HFT reacts.

Humans can predict what AI would do, and then AI will reactively start to predict what humans will do when this happens; but humans are always one step ahead with new techniques, and AI will then be built to defend against them.

Regarding defending against a buy-on-the-dip strategy: AI can start to learn player specifics and not react, or react differently (preemption) if those players return; however, this can also eventually be played.

Humans and AI will be playing a cat and mouse game for eternity; microcosms of this can be seen in gaming AI. I think of it more like a game that will be fun to play: yes, sometimes you will lose, other times you will predictably win. Bots will be challenging bots unexpectedly and predictably, but they will almost always originate from human programming.


This is sort of tangential.. I think it's curious that in a lot of writing AI is almost being defined as that which replaces human labour. The context of technological unemployment and whatnot.

In that frame, I think it's natural that discomfort is linked to autonomy. Autonomous taxis and cruise control may be points on a continuum technically, but economically, no human involvement is different. Autonomy separates the PCs from the Looms. Cooperation, where a human is involved, makes the machine recognizably a tool. The human's labour gets more efficient with tools. More trinkets per human.

Maybe the Luddites thought of looms as autonomous, with humans in a supporting role.

Anyway, I think it's hard to predict where this goes on a 25y scale.


This is kind of my beef with most SciFi movies. They always seem to paint an antagonistic relationship between man and machine, when the reality will probably be something in the middle.


That's because science fiction movies are mostly not projecting a predicted future; they are projecting and exploring our fears and aspirations about the future.


I actually find it really funny that a lot of SciFi predicts a future where we are actually talented and intelligent enough to create a human-like, sentient being. The reality seems so distinctly far off from that - though I guess that is the point of fiction.


> human-machine cooperation is probably where the big money will be

The sceptical counterargument to that, which I go back and forth on, is "that's what they said about chess". There was a transitional period when this was true, then the engines disappeared into the middle distance.

I work on the hunch that the middle-ground of tasks where humans improve on, or with, machines is both small and unpredictable; computers will tend towards being either useless or strongly superhuman for each problem.


This is still true in chess. Humans use chess computers to play the game of chess. The tournament money still goes to humans, everybody still cares about the world's best players, and so forth. Even as chess engines have vastly surpassed humans in technical capability, they haven't somehow sidelined humans in all aspects of the domain. Not even literally in competitive games.

It's not like everybody has somehow switched to watching engine games. That is in fact just a niche market of the chess world. We are humans and as such we still enjoy seeing real humans thrive and compete in chess more than we care about machines.

If anything chess is the perfect example that the pessimism is misplaced, chess engines have not killed chess as a human endeavour.


I don't agree. I feel that chess engines have killed chess as a pastime that people play. I know growing up in the 80s I played it during lunch at school, and schools had chess clubs which really weren't on the order of forming neophyte pros, but just students who liked the game.

But now it's rather futile to play it, because at some point you'll run up against a machine that can outplay you perfectly. I mean, before, you could play against a grandmaster or something; but the moment you (or someone else on your behalf) buy a ten-dollar chess program, you realize that no matter how much you improve, the computer will always defeat you.

I don't really see chess as that big any more in the public eye. The kids might play Minecraft, which rewards human creativity and doesn't really force you to compete against an A.I. optimized to beat 99% of all Minecraft players.


As a sport, yes. We do the same with the 100m dash, and there is no one anywhere who would employ humans based on the business utility of being able to run 100 metres fast, outside of entertainment. Pro chess is in the entertainment business; the strength of the players isn't really material to that.

You're confusing the economics of recreation and entertainment with the economics of efficiently making things. They both exist, but they're very different.


But the purpose of chess is taking part and having fun. The purpose of finance is making money. The comparison is weird. No one will give their money to someone to manage for fun if an AI could make more money.


>it makes you despair sometimes.

On the other hand, if you are optimistic and excited in a world where everyone else is in despair, you have some distinct advantages. :)


As Alistair Cooke said on Black Tuesday back in '29, tomorrow might be a good day to buy some shares.


That's not entirely comforting


AI arrived in investing a long, long time ago. If you limit AI to deep learning, as in deep neural networks, maybe only 3-5 years ago. Strategies based on news have been around for decades. Figuring out what the news means isn't necessarily as helpful as it seems, because it's hard to put much size on in the limited time available, even if you are first. However, various patterns around news are much easier to exploit, and to do that all you needed to know was that some important news had arrived, not necessarily whether it was good or bad; the goodness or badness was plainly visible in the price action. Figuring out the magnitude, but not the sign, of the importance of a news item has not been difficult for a long time. Yet somehow we keep getting articles about how AI has arrived in investing.

As for the return-forecastability deniers out there, particularly the ones who claim to be arguing on some sort of empirical basis: if you can't be bothered to actually look at the data or even read the academic literature on the subject, I can't be bothered to educate you.


If you can reliably predict the magnitude, even without the sign, you can still trade very profitably on the volatility of a stock.


Not so sure about that.

I've literally missed the sign on a trade before, and it was 7-figure disastrous. (I've missed the direction of movement on individual symbols a number of times, but this one time I literally went the wrong way on everything by accident.)

Markets adjust too quickly to flip your position and profit in any reliable way. On planned or anticipated events, people are all locked and loaded waiting for something to happen.

However, I'd much rather know the sign because at least I can put on some position and guess a little at the magnitude.


I have no professional experience in finance, and I wouldn't try to go long volatility because it's too expensive and risky, but why can't you buy puts and calls at the same time? Or, I have read that there are options on the VIX.
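(For reference, buying a put and a call at the same strike is a straddle, which is the standard way to bet on magnitude without sign. A toy payoff-at-expiry calculation with made-up numbers, ignoring spreads, margin, and the implied-volatility premium discussed downthread:)

    # Payoff at expiry for a long straddle (one call + one put at the same strike).
    def straddle_pnl(spot_at_expiry: float, strike: float, call_premium: float, put_premium: float) -> float:
        call_payoff = max(spot_at_expiry - strike, 0.0)
        put_payoff = max(strike - spot_at_expiry, 0.0)
        return call_payoff + put_payoff - (call_premium + put_premium)

    # A big move in either direction pays off; a small move loses both premiums.
    for spot in (80.0, 95.0, 100.0, 105.0, 120.0):
        print(spot, straddle_pnl(spot, strike=100.0, call_premium=4.0, put_premium=4.0))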


Obviously not every strategy will be sign independent, but you can design strategies to be that way.


Yes, but there are easier ways to make money trading volatility than forecasting single asset volatility. While you can fairly easily forecast volatility with R^2 higher than 60% for most assets vs 5-7% for the best models for returns, that's not the important bit. The important bit is whether you are better than the rest of the market. I would argue that implied volatility is harder to trade off a forecast than straight return because a greater proportion of the participants in the vol market are professionals, and also more likely to be highly quantitative geeks. Also my comment wasn't about being able to forecast large moves but being able to determine how much a news item was going to move an asset. As far as handling important news events and getting out of the way is concerned, options market participants are very good at it and have been for a while.


Only if the vol mkt is mispriced. Spreads on most single names are wide enough to make this difficult or impossible, and deeper/tighter mkts tend to be much more efficient anyway.


Breaking news: robots and humans both equally unable to predict the next digit in a random sequence. Obviously an incredible oversimplification of what's happening in finance and in this article.


The entirety of the article is summed up with this statement hidden inside.

> Mr. Amador attributed the underperformance to a normal variability in returns. The fund’s programming beat the market when tested against historical data, he said, and he expects the same in real life as time passes.

Backfitting in all its forms is known to give false confidence and usually fails. It may work for a moment, but then other traders exploit whatever the backfitting had noticed, causing the backfit to no longer work.
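(To illustrate with made-up data: tune a simple moving-average rule on pure noise, keep whichever lookback happened to do best, and the in-sample "edge" typically evaporates out of sample. Everything below is synthetic; it is not a real strategy.)

    # Backfitting demo: optimize a moving-average rule on random-walk "prices",
    # then check the chosen rule out of sample. The data contains no real signal.
    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal(0.0, 0.01, 2000)            # pure noise
    prices = 100 * np.exp(np.cumsum(returns))
    split = 1000

    def rule_returns(px, rets, lookback):
        ma = np.convolve(px, np.ones(lookback) / lookback, mode="full")[:len(px)]
        signal = (px > ma).astype(float)             # long when price is above its trailing average
        return signal[:-1] * rets[1:]                # yesterday's signal earns today's return

    best = max(range(5, 100), key=lambda lb: rule_returns(prices[:split], returns[:split], lb).sum())
    in_sample = rule_returns(prices[:split], returns[:split], best).sum()
    out_sample = rule_returns(prices[split:], returns[split:], best).sum()
    print(f"best lookback {best}: in-sample {in_sample:.3f}, out-of-sample {out_sample:.3f}")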


Probably not that much of an oversimplification.

Side note: Why is it that we need something so physical to attach these concepts to?

The photo of the monolithic POWER7 rig that houses Watson, with its translucent logo, is akin to all of the Bitcoin articles with shiny gold coins bearing an icon. I understand the need to have some kind of image, but it's just so detached from the reality of what's going on in practice.

Getting back on topic, I do wonder how much data they're feeding in - it's one thing to pass masses of historical trades into the algorithm, quite another to have it watch for relevant news events that affect the asset prices.


> Side note: Why is it that we need something so physical to attach these concepts to?

Posts with images get more clicks.


It's compression, not noise. Markets compress corporate reality. A perfectly compressed stream is by definition perfectly random (because if it had a predictable pattern, you could use that to compress it further), but the getting there is different - markets are not intrinsically but mechanically random; they approach randomness because of how they operate. The hope is that humans are not all that good at compression, so AI may be able to pick up on patterns that we can't.


That's just not true. There are whole companies based on algorithmic trading. Jane Street, for instance, has a tech talk on how they take advantage of Caml in automated trading.

https://www.youtube.com/watch?v=hKcOkWzj0_s


The problem is not that the sequence is random (it isn’t), but that the seed changes on a regular basis.


Since our reality is based on randomness at a very low level (eg radiation patterns), what makes you believe that this randomness is somehow lost on a higher level?


> randomness at a very low level (eg radiation patterns)

Sometimes I wonder if one couldn't actually decipher these background radiation patterns, given enough resolution in sensors and enough calculation power to crunch possible models fitting the patterns.

Imho not even the roll of a die is random, and a seed is nothing but a pre-defined set of variables.

But at that point, we could probably just simulate our own realities.


I'm no expert and all, but AFAIK there is true randomness going on in quantum effects. Here is a nice discussion: https://physics.stackexchange.com/a/210609


Because there is true signal in the way markets move (after all, fundamentally the movements are based on macro- and microeconomic events). People find patterns in the data that occur at a far higher rate than chance, even once you account for data mining. The problem is a system that works in one era doesn't work in another.


I agree, there are patterns (as everywhere in nature), but they are highly probabilistic because they are noisy due to randomness.


I run a similar experiment, with real money and allow my robot to trade on my behalf. For long-term investments, I continue to follow the indexed-only ETF-based couch potato model, but I'm happy to let this run. I view it as a risky investment, akin to investing in any startup, and have invested accordingly.

The other reality is that over the long term it's highly unlikely to beat the market. Realistically, (almost) nothing beats the market over a long enough period. At the same time, in my testbed, with real data and real 'money where your mouth is', it worked. It's no crazier than any other idea.

Ultimately, whether humans or AI drive investment is immaterial if you believe in an indexed portfolio. Should those investment approaches succeed, they'll join the indexes in some way. Similarly, should they fail, they won't.


I'd also really love to create a trader bot for part of my money. Any chance you could give a few pointers on how to get started in this field? (good resources to read, frameworks to use, etc...)


Sure. I use a variety of free data sources - including Alphavantage and the nightly Nasdaq dumps - to collect a bunch of data nightly, in addition to real-time. My robot is based on errbot, which I integrate with a private Slack organization/channel so that I can interact and have all the logging infrastructure I need.

The database is MySQL, and communicated with via SQLAlchemy (through errbot of course), with a series of commands and crons (errcron) set up, in order to both notify myself and execute on various data gathering activities. The rest of the processing code is likewise - in python. I don't rely on scipy, numpy, or anything else, given that I don't see the need.
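(Not the parent's actual code — just a minimal sketch of the "nightly collection into MySQL via SQLAlchemy" piece, assuming an Alpha Vantage API key, a pymysql connection string, and a daily_bars table you create yourself; you would schedule it from cron or errcron.)

    # Sketch: pull daily bars from Alpha Vantage and upsert them into MySQL.
    import requests
    from sqlalchemy import create_engine, text

    API_KEY = "YOUR_ALPHAVANTAGE_KEY"  # assumed: free key from alphavantage.co
    ENGINE = create_engine("mysql+pymysql://user:password@localhost/quotes")  # assumed DSN

    def fetch_daily_bars(symbol):
        resp = requests.get(
            "https://www.alphavantage.co/query",
            params={"function": "TIME_SERIES_DAILY", "symbol": symbol, "apikey": API_KEY},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("Time Series (Daily)", {})

    def store_daily_bars(symbol):
        with ENGINE.begin() as conn:
            for day, bar in fetch_daily_bars(symbol).items():
                conn.execute(
                    text("REPLACE INTO daily_bars (symbol, trade_date, close, volume) "
                         "VALUES (:symbol, :trade_date, :close, :volume)"),
                    {"symbol": symbol, "trade_date": day,
                     "close": float(bar["4. close"]), "volume": int(bar["5. volume"])},
                )

    for ticker in ("AAPL", "MSFT"):  # whatever universe you track
        store_daily_bars(ticker)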

The reality is that there is a series of activities that are profitable at the micro level in the geography in which I trade, which is why my robot currently integrates with Questrade - specifically so that I can execute from Slack while I work at my 'regular' job. All passwords and reusable tokens are stored in an ansible-vault, so that I can commit and push my repository around.

I'm running two different experiments actively: one does an arbitrage based on data I'm looking into, the other specifically tries to eke out a $0.10 gain per share, closed daily. Going into Jan 1 2018, I'd made ~57% from August 31 (first day of trading). This year, I'm down ~8% overall so far. Passively, the return has been great.

Now, I'm changing my focus - enough people I know are generally interested and willing to light the same amount of money that I am on fire. So, I'll keep experimenting, but I'm taking 1% of the overall return for the 'bank' (i.e. my corp).

This will all clearly catch fire.


can you provide details on your setup?


> the E.T.F. runs most of its calculations on I.B.M.’s Watson supercomputer

Every time I read an article that mentions Watson, it's sprouted a new thing the name is applied to. Previously it was a question-answering system, which famously won Jeopardy. Then it became a general NLP platform. Then it became a brand name for basically all IBM machine learning offerings. Now it's also a supercomputer?

If what this really means is that they built a bot that plugs a bunch of data into IBM's cloud ML platform and trades on that basis, I'm not really surprised it's not beating the market. Building an auto-trading bot using off the shelf ML techniques is actually a pretty popular university project that's worth trying if you're curious, though (at least with simulated money, or money you can afford to lose). They can probably do better than a typical university project, because I assume they have more extensive financial data feeds. But everyone else serious about automated trading (which lots of people are) also has those data feeds plus the same off-the-shelf ML, so unless they have something else...
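(A toy version of that university project, assuming scikit-learn and synthetic data in place of a real price feed; on anything close to an efficient market, expect directional accuracy near a coin flip.)

    # Predict next-day direction from lagged daily returns with off-the-shelf ML.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    returns = rng.normal(0.0005, 0.01, 3000)         # synthetic daily returns

    LAGS = 5
    X = np.column_stack([returns[i:len(returns) - LAGS + i] for i in range(LAGS)])
    y = (returns[LAGS:] > 0).astype(int)             # 1 = next day closed up

    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.25)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("directional accuracy:", model.score(X_test, y_test))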


Watson is a marketing term, and a division of IBM.

Think of it as similar to "Amazon Cloud", which really consists of over 100 different types of services/products, some of them very different and built by different teams; "Amazon Cloud" is more of an umbrella.


and one that hasn't been terribly successful in a lot of areas! It's often sold as almost a software/business consulting effort, which requires a ton of money and time to get up and running

MD Anderson Cancer Center wasted $62 million on it: https://www.healthnewsreview.org/2017/02/md-anderson-cancer-...


I don't like bringing politics into HN, but I like this quote: "IBM Watson is the Donald Trump of the AI industry—outlandish claims that aren’t backed by credible data."


But would Bob Dylan really lie to me? ;_;


It's amazing how a little bit of light insider trading can trump all the algorithms...


To take your comment a little deeper despite me knowing you are being facetious, I think that's exactly it: the algorithms cannot communicate to facilitate these types of advantages. They cannot, in essence, be human.

In a world run and dominated by humans, there will always be an inherent advantage to being part of the race that creates the game. If algorithms perfect a system in such a way that there stands no gain to be made by those at the top, people will simply create a new game to play.


until they can. And at that point it gets really weird. I have heard reports (but cannot confirm them obviously) that machine learning techniques are already creating trading strategies that exploit weaknesses in other trading system algorithms. At what point does the algorithm correlate what it can see in email inboxes on a connected cloud service with advantageous stock trades ...


> I have heard reports (but cannot confirm them obviously) that machine learning techniques are already creating trading strategies that exploit weaknesses in other trading system algorithms.

This is true, but does not require machine learning.


> I have heard reports (but cannot confirm them obviously) that machine learning techniques are already creating trading strategies that exploit weaknesses in other trading system algorithms.

That was being done in HFT long before ML came about. In fact, it's supposed to be the primary source of profit.


This is where the real money is to be made in AI trading. Of course this sets off a very interesting series of countermeasure/measure battles.


Is money being made? Seems to me that all trading does is just redistribute existing money, and no wealth is created.

What a waste to have all these computational resources engaging in a continual 'series of countermeasure/measure battles' instead of calculating something useful.


Trading results in price discovery. Accurate prices allow more informed investment decisions and the development of more real wealth. The alternative is something like a centrally planned economy, which has generally been unsuccessful.


It may seem pointless, but trading like this is actually incredibly useful. It (mostly) removes emotion from the equation, thus lowering the chances of market shocks and decreasing volatility.


I wasn't being facetious -- I meant exactly what you said. Two humans having coffee and trading secrets "they heard around town" will beat an algo any day of the week.


My impression was that humans are still routinely bested by indexes in the long run, so being "dominated" by humans sounds downright scathing.

> Those programs may be useful, but they are not A.I. because they are static; they do the same thing over and over until someone changes them.

Oh, I see. It's better because it's AI. My mistake, then.


Indeed! How could all humans beat the index?


In college in the 70's, a fellow student was developing a stock trading program on the institute's PDP-11. He figured it was going to make him rich. I asked him what the algorithm was, but he was very secretive about it.

It was likely some form of technical analysis.

I wonder sometimes if it ever worked out for him.


> artificial intelligence has an edge over the natural kind because of the inherent emotional and psychological weaknesses that encumber human reasoning.

It's Mr Spock's problem. He always produced inferior decisions because he failed to take into account the emotions of others.


Mr Spock's problem is fictional, designed to make for an interesting plot, not to reflect reality.

For example, in humans, an innate lack of empathy (the ability to feel the emotions of others) and being unemotional yourself are factors correlated with being a better, more effective detector and manipulator of emotions; taking into account the emotions of others can be done better if it's done in an analytical way (however, it requires attention; it's not an "always active" skill then), and a lack of emotionality allows you to express the emotion that's most beneficial for your goals in the current situation instead of whatever you actually think.

If anything, a realistic advanced AI / Spock should be expected to have the communication skills of a good hostage negotiator combined with a charismatic politician combined with a wise psychotherapist combined with a sleazy car salesman. Having and feeling emotions is not required to understand them in others and to show them yourself. For normal humans (excepting e.g. some cases of sociopathy) it's hard to fake emotions, because we evolved to have emotional expressions as a somewhat trustworthy, hard-to-fake signal; it's a limitation built into Homo sapiens, not an inherent limitation.


> not to reflect reality.

Oh, I know that well. I just find it amusing. Spock is actually the most illogical character in the show, and the most emotional.

I'm not convinced this is intentional on the part of the scriptwriters. For example, how does a scriptwriter write a character who is more intelligent than the writer is? Most "advanced intellects" in scifi seem remarkably average in their intelligence, reflecting the intelligence of the writer.


This is in the context of a certain work of Harry Potter fanfiction, but you may find this set of notes for how to write intelligent characters interesting. I specifically direct your attention towards the section "Level 2 intelligent characters", which goes into how to write a character that appears smarter than the author.

http://yudkowsky.tumblr.com/writing

Also on Spock in particular, there's a good talk by Julia Galef, The Straw Vulcan, about how irrational Spock really is and what a rational Vulcan should look like. https://www.youtube.com/watch?v=Fv1nMc-k0N4


It seems that both of my observations are well-trod territory!

Anyhow, the book "Brainwave" by Poul Anderson has the best description of what more intelligent characters would be like - they spoke with fewer words, as the rest of the information was more obvious from context.


Taking account of others' emotions and being affected by emotions and biases yourself are two different things.

That's exactly why I never believed Spock-like characters in fiction. Human psychology isn't that complicated. If you're logical and smart, how on Earth wouldn't you be capable of understanding human emotions?


AI and humans have arrived in investing. S&P is dominating for now.

I just pulled out of my "intelligent" portfolio from a 401k rollover into the S&P. Using that portfolio tool was unintelligent for me :(


The more people that follow index funds, the larger my portfolio grows. Definitely follow this advice; nothing can go wrong and the price can only go up. Unless, of course, there is a large withdrawal event looming around the corner that will dramatically impact the current market price of every stock.

When were baby boomers set to retire again?


> When were baby boomers set to retire again?

They've been retiring for years. 1945 births are 73 now, well into retirement age. Boomers will be retiring over the span of 2007 to 2034, depending on when they were born and the age they choose to retire at. They'll then be drawing down their retirement funds for decades.

Are you trying to suggest that this ongoing multidecadal process will constitute a large "withdrawal event"?


What does it look like when a large group of people start selling a large amounts of stock directly into a buy wall?

Fear of an insolvent retirement can trigger this behaviour, which can then compound on itself as other retirement plans are jeopardized. An entire new generation of wealth giving up on prior security and stock distributions in favor of new markets can also trigger this, such as what almost happened in South Korea with cryptocurrencies.

Hope none of this happens of course, but please be aware of the risks you are implicitly taking.


My retirement is primarily S&P at this time. Of course, a 20% correction could always be around the corner, especially with the market being so high for so long.

South Korea has a much different demographic than the United States. Samsung alone makes up a large part of the whole country's GDP.

Insolvent retirement is actually a fear for anyone, primarily because you don't know when you will die. So, how long do you accept the inherent risk before you start making your assets more liquid?

I really think that baby boomers retiring isn't as big an issue as the consumer credit market and student loan credit. It seems right now that some of the S&P's upside rides on the backs of people putting their new toys on credit cards and finance plans. I don't think that this can last forever, especially since the things they put on those plans keep on lasting longer and longer.

But, for my retirement I'm pretty long on the S&P (I'm only 30 years old). I am not going to pull out at the moment, and timing the market for things like that is hard for me to fathom. Taking defensive positions is more for actual baby boomers and people who are day trading. As for your point that this correction will be triggered by baby boomers retiring, the only thing that actually counters that is medical science. I have a few coworkers in their 70s and they look and act like 50-year-olds.


Here is my belief, may be different from yours:

There is a massive generational theft that's been happening over many centuries. Property prices inflating along with the rising cost of education and loans are further rigging the system towards the older, wealthy and established.

Instead of this trend slowing, it's accelerating at the expense of class mobility for the young, poor and intelligent. This disillusions these individuals en masse.

Where have disillusioned intelligent people recently been life-changingly rewarded for their efforts? Cryptocurrencies have done so, loudly. In fact, there are developer celebrities in many of these communities.

The choice for the young and intelligent: seemingly immediate power, prestige, and potential class mobility versus a stressful period of self-improvement that causes extreme debt (college).

The game needs to be better for the young and intelligent or they are going to play a different one. Many already are.

I'm extremely long on cryptocurrency for this (and other) reasons. For a sense of time scale, I have an IOTA retirement plan that begins distribution in 10 years and lasts 35.


Can I ask how old you are?

Why do you think that cryptocurrencies are fundamentally different from the internet boom, which made 20-somethings like Larry Page/Sergey Brin/Mark Zuckerberg some of the richest people on earth in only 10 to 20 years?

I'm sure plenty will get rich on crypto. I'm unconvinced that there is some fundamental reason that makes it different this time.


Sure, I'm 35.

The ease of access to capital for good ideas without any of the bullshit involved in startup fund raising is what has convinced me of this. It really doesn't matter what ivy league school the CEO went to, it's outweighed by the idea, the ability to execute and the ability to convince others to contribute resources.

Crypto is like the internet boom if the boom was more distributed, as anyone could take part in investment from the seed round.


This is so true. With all the chaos that wall street likes to drum up about investing, the index just keeps going up up up. Is it the same ego feeling you get for thinking you can "beat the system" when gambling that makes people feel like they can "beat the system" when speculating in the stock market?


Market capitalization based weighting, the basis of the Nasdaq-100 index and $QQQ ETF, probably constitutes a baseline for what can be considered "unbiased". Any AI agent that measures market "sentiment" can only be conditioned upon the quality of the data it is fed. Which will vary across companies.

An example of one of the best algorithmic strategies I have seen is the following: during secular bull market eras, simply buy every IPO that comes down the pike, regardless of sector, and hold for 24 months. Backtesting this strategy yields annualized rates of return of 50%, which beats $FB's performance over the last four years :) No doubt, ML could further optimize selectivity, weightings, hold duration, etc. The central thesis is that growth in market cap is strongest during the growth phase of a company.
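(A skeleton of the rule as described - buy each IPO at listing, hold 24 months, sell - with placeholder tickers and prices; the 50% annualized figure is the parent's claim and is not reproduced by this toy.)

    # Buy at the first close on or after the IPO date, sell at the last close
    # on or before the 24-month mark. Data below is entirely made up.
    from datetime import date

    def ipo_hold_return(ipo, hold_months=24):
        start = ipo["ipo_date"]
        end = date(start.year + hold_months // 12, start.month, start.day)
        prices = ipo["prices"]
        buy = prices[min(d for d in prices if d >= start)]
        sell = prices[max(d for d in prices if d <= end)]
        return sell / buy - 1.0

    portfolio = {
        "FAKE1": {"ipo_date": date(2014, 3, 3),
                  "prices": {date(2014, 3, 3): 20.0, date(2016, 3, 3): 31.0}},
        "FAKE2": {"ipo_date": date(2015, 6, 1),
                  "prices": {date(2015, 6, 1): 10.0, date(2017, 6, 1): 9.0}},
    }
    for ticker, ipo in portfolio.items():
        print(ticker, f"{ipo_hold_return(ipo):+.1%}")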

Of course, today is the day another great algorithmic trading idea, fading volatility spikes, unwinds in most violent and consequential fashion. Be cautious out there!

Two Big Volatility Players May Be on the Loose as VIX Tops 15

https://www.bloomberg.com/news/articles/2018-02-02/two-big-v...


Most retail investors aren't able to participate in most IPOs. The majority of IPO shares are allocated to institutional investors or high net worth individuals. Your strategy doesn't work if you can't get a share allocation and have to buy on the secondary market at higher prices.


The other problem is identifying "secular bull market eras". Well, identifying them going forward, not retrospectively.


Ultimately markets serve humans, even if the number of beneficiaries is shrinking. And living in a world of limited resources I doubt that AI will have a long term future. It's so dependent on humans to provide: electricity, computer hardware, maintenance, and even purpose.

Humans, at present, also seem better equipped to adapt to irrational markets; especially when they are the source of irrational behavior.


>Between Oct. 18, when it began trading, and the end of the year, the E.T.F. rose 3.1 percent, compared with a 5.1 percent gain for the Standard & Poor’s 500-stock index.

A three month track record? "Dominating"? Come on. This article is either an advertisement or a nothing-burger to get that clickbait headline although I can't decide which.


> "It is to early to say whether the E.T.F., A.I. Powered Equity, will be a trendsetter or merely a curiosity."

The New York Times are now hiring people who don't know the difference between "to" and "too"? Well, that explains the sophomoric understanding of AI showcased throughout the rest of the article!


You can see the unregulated AI in the cryptocurrency markets.

I wonder what the profits have been so far. People were investing in faster internet trunks for trading ages ago, just to make trades a few ms quicker.

https://www.forbes.com/forbes/2010/0927/outfront-netscape-ji...

https://www.popularmechanics.com/technology/infrastructure/a...


"Investing" is more than public stocks and other securities. Humans will always be needed for investment in new tech or evaluation of a venture's potential. Show me the machine capable of pickinv between vhs or betamax ... before either hit the shelf.


Amazing they could write a whole article like this and not mention funds like 2Sigma which are entirely AI-focused. Those funds have been sucking cash out of the rest of the managed fund sector at an astonishing rate (2S alone have over $50Bn under mgmt).

No connection to these guys BTW


The title is misleading in suggesting that humans dominate investing.

Cats selecting stocks with their whiskers and monkeys throwing darts at a newspaper on average beat most professional human investors. Most amateur investors are better off with low-priced index funds tracking a stock index than buying more expensive managed products, as those have higher fees.

Book: A Random Walk Down Wall Street. https://en.wikipedia.org/wiki/A_Random_Walk_Down_Wall_Street


Massive concentration into index funds has its own problems which will lead to tears at some point.

One problem is that it pours money into all stocks in an index, which is great in a long bull market - but it does cause problems with being overconcentrated in stocks, which will cause more losses when the downturn happens.

check out https://seekingalpha.com/article/4081504-invest-high-flying-...

which basically says:

"The top five holdings of the S&P 500 and the NASDAQ 100 indices have generated the bulk of the market's returns in 2017. Increasing fund flows from actively managed to passive index funds has contributed to the market outperformance of these issues and has resulted in concentration risk for index investors as well as investors in the individual stocks. The next bear market or market correction could result in disproportionately large outflows from these stocks."


This is false. The weights of the largest constituents of the index are set by active traders, not passive investors.


Did you not read the line with "increasing fund flows from actively managed to passive index funds"

Buying the index forces you to buy all stocks in the index, good and bad, over- and undervalued.

Some of my active funds in the UK got out of banks before the crash; a FTSE 100 tracker could not - so someone investing the same amount in a tracker is almost never going to catch up to that active fund.


I did read it. This statement is false: "but it does cause problems with being over concentrated in stocks which will cause more losses when the down turn happens."

Total market index funds do not overweight or overconcentrate stocks. As I correctly stated, the weights of each stock in the index and in the passive fund are set by the consensus of all active traders.


Someone who is blindly putting all their investments into an index is not an active trader - it's when those investors panic and pull investments that the problem will occur. Bit of a problem for those with FANG options as well.


So you agree that total market index funds do not overconcentrate stocks relative to the market consensus? You stated the opposite in your initial post.


The market consensus is wrong in this case - this is the obvious inference.


If machines are better than humans at investing, are there people who use machines to select investments? I don't mean high frequency, but long-term.


Since it's a trade secret, it's hard to know exactly what they do and how much human input there is - but a couple well-known quant hedge funds are Renaissance Technologies and Two Sigma. They were both started by mathematicians/computer scientists and they manage 10s of billions.


Most of the trading done by index funds is now fully automated.


Considering that today the press can label any piece of software AI, one could say it happened in 1990s.


The benchmark that matters: return over 10 years, after fees, compared with a minimum-expense index fund.

Everything that loses to that is a con (98-99% of actively managed funds). It matters little whether you were ripped off by a human picking the losers, an AI, both, or neither.
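(The fee point is easy to quantify. An illustrative-only compounding example, assuming 7% gross annual returns and 0.05% vs 1% expense ratios:)

    # How a higher expense ratio compounds over 10 years (numbers illustrative only).
    def final_value(gross_return, expense_ratio, years=10, start=10_000.0):
        return start * (1 + gross_return - expense_ratio) ** years

    index_fund = final_value(0.07, 0.0005)   # ~5 bps index fund
    active_fund = final_value(0.07, 0.0100)  # 1% actively managed fund
    print(f"index: {index_fund:,.0f}  active: {active_fund:,.0f}  drag: {index_fund - active_fund:,.0f}")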


If humans are still dominating, AI has not arrived.


A.I. doesn't have insider trading expertise.


you don't buy on a dip, you buy after the dip is finished and it starts going up again


Good morning NYT, I have been doing this for years with my stock trading robots, and my inspiration was not some obscure SV ETF or other fintech gimmick, but the leaders on Wall Street, period: RenTec, 2Sigma, etc...


Stopped reading after their first example of an investing model was "high frequency trading" ...


"It is to early"... I know it's picky of me, but when the NYT can't edit their work, what is the point of even trying to educate children on grammar. Surely this was just a typo, but that is no excuse.


Let those who have never released a bug into production throw the first stone ;)



