If you're going to quote an article, you shouldn't leave out context that changes the meaning of the sentence you're quoting -
"The 75th anniversary commemorations of the dastardly Japanese attack on Pearl Harbor
seem to be a great gift to the Alt-Right’s xenophobic nationalists."
It's the 75th anniversary commemorations that seem to be a gift - not the attack itself (obviously).
Ah yes, you're quite right! Apologies to all. I'd misread that, although I think my point still stands: I'd like to read about the story and not hear about politics in a foreign (to me) country.
If you own $1 billion of equities on swap, you can guarantee that your broker has actually bought the equities (or something close to them) as a hedge. There is no way that your broker has an unhedged $1 billion equity swap with you. They collect a spread on the swap transaction of a few basis points, which is hopefully more than the cost of hedging the swap. They don't want to take the risk of the position moving against them by more than a few basis points (typically the equity market moves hundreds of basis points per day), so they almost always hedge.
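To put rough numbers on that (back-of-the-envelope Python; the notional, spread, and daily move are made-up illustrative figures):

    notional = 1_000_000_000           # $1B equity swap
    spread_bp = 3                      # broker collects a few basis points
    daily_move_bp = 150                # equities routinely move 100+ bp/day

    spread_income = notional * spread_bp / 10_000      # ~$300k, earned once
    one_day_swing = notional * daily_move_bp / 10_000  # ~$15M, every day

    # One ordinary unhedged day can swing ~50x the entire spread income,
    # which is why the broker buys the underlying equities as a hedge.
    print(spread_income, one_day_swing, one_day_swing / spread_income)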
The Japanese FX broker, on the other hand, has the odds so firmly stacked in their favour that they have no need to actually do any FX trading (though I expect that they do some). If the margin is high enough, the statistical fluctuations don't matter that much. That's the difference between a brokerage and a bucket shop.
There's a thing called a "Garden of Eden" configuration that has no predecessors, which is impossible to get to from any other possible state.
For a rule like Life, there are many possible configurations that must have been created by God or somebody with a bitmap editor (or somebody who thinks he's God and uses Mathematica as a bitmap editor, like Stephen Wolfram ;), because it would have been impossible for the Life rule to evolve into those states. For example, with the "Life" rule, no possible configuration of cells could ever evolve into all cells with the value "1".
For a rule that simply sets the cell value to zero, all configurations other than pure zeros are garden of eden states, and they all lead in one step into the all-zeros attractor, which evolves back into itself forever, all zeros again and again (the shortest possible attractor loop: a state that maps directly to itself).
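You can verify that structure by brute force on a tiny lattice (a minimal Python sketch; the lattice size N is arbitrary):

    from itertools import product

    N = 4  # tiny 1-d lattice, so all 2**N global states fit in memory

    def zero_rule(state):
        # The rule that sets every cell to zero, regardless of input.
        return (0,) * len(state)

    states = list(product((0, 1), repeat=N))
    successors = {s: zero_rule(s) for s in states}
    reachable = set(successors.values())

    # Every state except all-zeros is a garden of eden state...
    goe = [s for s in states if s not in reachable]
    assert set(goe) == set(states) - {(0,) * N}

    # ...and all-zeros is the period-1 attractor, mapping to itself forever.
    assert successors[(0,) * N] == (0,) * N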
There is a way of graphically visualizing that global rule state space, which gives insight into the behavior of the rule and the texture and complexity of its state space!
Andrew Wuensche and Mike Lesser published a gorgeous coffee table book entitled "The Global Dynamics of Cellular Automata" that plots out the possible "Garden of Eden" states and the "Basins of Attraction" they lead into for many different one-dimensional cellular automata, like rule 30.
Those are not pictures of the 1-d cellular automata's cell states on a grid; they are graphs of the abstract global state space, showing merging and looping trajectories (but no branching, since the rules are deterministic). Time flows from the garden of eden leaf tips around the perimeter into (then around) the basin's attractor loops in the center, merging like springs (GOE) into tributaries, into rivers, into the ocean (BOA).
The rest of the book is an atlas of all possible 1-d rules of a particular rule numbering system (like rule 30, etc), and the last image is the legend.
Wuensche developed a technique of computing and plotting the topology network of all possible states a CA can get into -- tips are "garden of eden" states that no other states can lead to, and loops are the attractor cycles that anchor the "basins of attraction".
Here is the illustration of "rule 30" from page 144 (the legend explaining it is the last photo in the above album). [I am presuming it's using the same rule numbering system as Wolfram but I'm not sure -- EDIT: I visually checked the "space time pattern from a singleton seed" thumbnail against the illustration in the article, and yes it matches rule 30!]
"The Global Dynamics of Cellular Automata introduces a powerful new perspective for the study of discrete dynamical systems. After first looking at the unique trajectory of a system's future, an algoritm is presented that directly computes the multiple merging trajectories of the systems past. A given cellular automaton will "crystallize" state space into a set of basins of attraction that typically have a topology of trees rooted on attractor cycles. Portraits of these objects are made accessible through computer generated graphics. The "Atlas" presents a complete class of such objects, and is inteded , with the accompanying software, as an aid to navigation into the vast reaches of rule behaviour space. The book will appeal to students and researchers interested in cellular automata, complex systems, computational theory, artificial life, neural networks, and aspects of genetics."
"To achieve the global perspective. I
devised a general method for running
CA backwards in time to compute a
state's predecessors with a direct reverse
algorithm. So the predecessors of predecessors, and so on, can be computed, revealing the complete subtree including the "leaves," states without predecessors, the
so-called “garden-of-Eden" states.
Trajectories must lead to attractors in a finite CA, so a basin of attraction is
composed of merging trajectories, trees, rooted on the states making up the attractor
cycle with a period of one or more. State-space is organized by the "physics" underlying the dynamic behavior into a number of these basins of attraction, making
up the basin of attraction field."
(Ok-ok, I know it is just a fun comment, but just to be serious anyway for the sake of truth: you can estimate how likely it is that this is a GoL pattern: take the set of all imaginable GoL patterns at every timestep, and check how frequently this subpattern occurs compared to some combination of common/well-known patterns, like these: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Exampl... )
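Something like this rough Monte Carlo sketch in Python, say (the grid size, soup density, step count, and the "mystery" subpattern are all made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def life_step(g):
        # Conway's Life on a toroidal grid, via neighbor counting.
        n = sum(np.roll(np.roll(g, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        return ((n == 3) | ((g == 1) & (n == 2))).astype(np.int8)

    def count(g, pat):
        # Occurrences of pat as an exact sub-rectangle of g (no wrapping).
        ph, pw = pat.shape
        return sum(np.array_equal(g[r:r + ph, c:c + pw], pat)
                   for r in range(g.shape[0] - ph + 1)
                   for c in range(g.shape[1] - pw + 1))

    block = np.ones((2, 2), dtype=np.int8)  # a very common still life
    mystery = np.array([[1, 0, 1],           # stand-in for the pattern
                        [0, 1, 0],           # whose frequency you want
                        [1, 0, 1]], dtype=np.int8)

    hits = np.zeros(2)
    for _ in range(200):                      # 200 random soups
        g = (rng.random((32, 32)) < 0.35).astype(np.int8)
        for _ in range(50):                   # evolve each 50 steps
            g = life_step(g)
        hits += [count(g, block), count(g, mystery)]

    print("block:", hits[0], "mystery pattern:", hits[1])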
No they can't - or at least, this paper doesn't provide any compelling evidence that they can.
I read this paper when it first came out a few years ago, and produced an implementation of the signal. They have heavily overfitted to historical data - many plausible alternative assumptions for which keywords are predictive are not profitable in backtest at all, let alone useful as a basis for future trading.
This is an unfortunate example of non-finance domain experts, who I'm sure are more than capable in their respective fields, making egregious errors when they try to apply their knowledge in finance.
> I thought the common practice was using part of the historical data for creating the model, and another sizable, non overlapping chunk to validate it.
One problem is that too often, people break the data into a training set and a testing set. Then they train N algos on the training data, test them on the testing data, and then trade on the algo that tested best.
Once you use the testing set for more than one algo, it's really a meta-training set.
Really, you need a training set, a testing set, and a validation set. If you use the validation data set with more than one algo, it's no longer a validation set.
So, you train N algos, test N algos. Pick the best, and validate it. If validation fails, do you have enough discipline to wait for more data to come in and try again? Most people do not and will make hand-wavy arguments about why it's okay to re-shuffle the same data into 3 data sets and try again.
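A minimal sketch of that discipline (numpy only; the data, the ridge-regression "algos", and the split sizes are placeholders):

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder dataset: features and a target to predict.
    X, y = rng.normal(size=(3000, 5)), rng.normal(size=3000)

    # Three non-overlapping chunks: train, test, and final validation.
    X_tr, y_tr = X[:1800], y[:1800]
    X_te, y_te = X[1800:2400], y[1800:2400]
    X_va, y_va = X[2400:], y[2400:]

    def fit_ridge(X, y, lam):
        # One "algo" per regularization strength lam.
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    def mse(w, X, y):
        return np.mean((X @ w - y) ** 2)

    # Train N algos on the training set, score all N on the test set...
    algos = {lam: fit_ridge(X_tr, y_tr, lam) for lam in (0.1, 1.0, 10.0, 100.0)}
    best = min(algos, key=lambda lam: mse(algos[lam], X_te, y_te))

    # ...but touch the validation set exactly once, with the single winner.
    print("chosen lambda:", best)
    print("validation MSE:", mse(algos[best], X_va, y_va))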
It's an infinite regression. You keep needing more data to be completely 'fair'. If the data set is finite, eventually you use all of it. Then where do you go?
Another route is to model the data source, and train on the model (which you can run forever to get endless data). Then test on the real-world data. But that's only as good as the model.
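For instance (a toy sketch, assuming an AR(1) is a good enough model of the source -- which is exactly the caveat):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for your finite real dataset of daily returns.
    real = rng.normal(0.0002, 0.01, 1000)

    # Model the source: fit an AR(1) to the real returns.
    x, y = real[:-1], real[1:]
    phi = np.dot(x, y) / np.dot(x, x)
    resid_std = np.std(y - phi * x)

    def simulate(n):
        # The model can generate endless synthetic data to train on.
        out = np.empty(n)
        out[0] = real[-1]
        for t in range(1, n):
            out[t] = phi * out[t - 1] + rng.normal(0, resid_std)
        return out

    def pnl(series, thr):
        # Toy strategy: go long tomorrow when today's return > thr.
        return np.sum(series[1:][series[:-1] > thr])

    # Train (pick the threshold) on synthetic data only...
    synthetic = simulate(100_000)
    best = max(np.linspace(-0.01, 0.01, 41), key=lambda t: pnl(synthetic, t))

    # ...then test on the real-world data. Only as good as the AR(1) model.
    print("threshold:", best, "real-data PnL:", pnl(real, best))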
> This is an unfortunate example of non-finance domain experts, who I'm sure are more than capable in their respective fields, making egregious errors when they try to apply their knowledge in finance.
Full ACK to this statement. I remember this post from when it was written in 2013 (by the way, can that date be put in the title?), alongside a similar paper arguing that Twitter hashtags/likes/retweets could serve as a market signal - mostly for this excellent response:
Does the author define what they mean for a stereotype to be "accurate"?
For example, it is a stereotype that in the US, white people vote Republican. Is this an accurate stereotype?
On one level, it is - in recent elections around 55-60% of white voters voted Republican, so it is certainly true to say that most white people vote Republican.
On another level, given a random white voter from the US, there is only a 55-60% chance that they voted Republican, which is not much better than guessing. So it's not particularly accurate to say that white people vote Republican.
I expect that most stereotypes fall into this class - they are accurate in aggregate, but not particularly informative when dealing with individuals. And surely this is the problem with stereotypes? We take a characteristic which is true in aggregate for a group (white people vote Republican, black people listen to hip hop, old people are less open to new experiences) and assume that those characteristics are true of individual members of that group.
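To put a number on that last point (using the ~57% figure from above):

    import math

    p = 0.57  # rough share of white voters voting Republican

    # Always guessing "Republican" for a white voter is right 57% of the
    # time; a coin flip is right 50% of the time. The stereotype's edge:
    print(f"{p:.0%} accuracy vs 50% guessing: {p - 0.5:.0%} points of edge")

    # In information terms it resolves almost none of the uncertainty:
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    print(f"remaining uncertainty: {h:.3f} bits out of a possible 1.000")

The aggregate claim is true, but it barely narrows down the individual case.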