Thankfully, the GC offers productivity and safe code by default, while D still offers the language features needed to do all the low-level tricks from C, C++, Rust, or Objective-C, if one actually needs to go down to that level.
I like being able to write safe and GC-free code by default. Rust would be even better if it were possible to opt into GC, but I would still want it to be GC-free by default. It's good that the entire library ecosystem has grown up GC-free, and that should continue.
If I hadn’t already embraced C# for that sort of thing, I’d probably have turned to D years ago. Between C# and Rust, I rarely find myself needing other tools aside from Makefiles and shell scripts.
TL;DR: The title is misleading clickbait, but the semi-interesting part is:
> Backtesting results [of the backtesting strategy referenced] look absurd: 100% profitable. But if you change any of the many parameters in the Settings popup, they will turn into disaster. It means, the rules of this strategy are very fragile. Don't trade this! Remember about backtesting rule #1: past results do not guarantee success in the future.
I’ve always wondered about this. Of course back-testing a specific timeframe doesn’t mean anything. But if I backtest a strategy on multiple timeframes in a sliding window fashion and am always profitable - doesn’t that mean anything? Would you still say that past results do not guarantee success in the future?
It still just means you're finding parameters that match some subset of the relationships present in the data as a whole.
All sliding the window does is discover, chunk by chunk, the parameters that work for the whole data set - the distinction is artificial. It still regresses to: you've found some number space generated by some function that matches some percentage of the numerical relationships (correlations) present in the data.
It's circular reasoning, because while choosing the parameters you're testing them on the "future" data. It only "guarantees" success in the "future" because you discarded all the parameters that didn't work in that "future". No different from writing a model whose S&P 500 price "parameters" sit between 250 and 1000 and back-testing it on data from 1950-1996.
The only way to prove your algorithm's robustness is to generate random data and test it on that. Only once you've tested against every one of the infinite possible realities for a given time window could you rightly assert that past results guarantee success in the future. Hint: that's impossible, but random-data testing is still the correct technique for testing algorithms at scale.
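A minimal sketch of what testing against many generated realities might look like, assuming a toy moving-average-crossover strategy and a simple multiplicative random walk (the strategy and every number below are illustrative assumptions, not anything from the article):

    import random

    def simulated_prices(n_days=1000, s0=1.10, drift=0.0, vol=0.006):
        """One plausible price path: a multiplicative random walk
        seeded with an opening price s0 (all numbers are placeholders)."""
        prices = [s0]
        for _ in range(n_days):
            prices.append(prices[-1] * (1 + random.gauss(drift, vol)))
        return prices

    def ma_crossover_return(prices, fast=10, slow=50):
        """Toy strategy: long when the fast moving average is above
        the slow one, flat otherwise. Returns total return."""
        total = 1.0
        for t in range(slow, len(prices) - 1):
            fast_ma = sum(prices[t - fast:t]) / fast
            slow_ma = sum(prices[t - slow:t]) / slow
            if fast_ma > slow_ma:
                total *= prices[t + 1] / prices[t]
        return total - 1.0

    # Same fixed parameters, many possible "histories" - not just the
    # one history that actually happened.
    results = [ma_crossover_return(simulated_prices()) for _ in range(1000)]
    print(f"profitable on {sum(r > 0 for r in results) / len(results):.0%} of paths")

If a strategy was "always profitable" on the one real history but is a coin flip across simulated paths, you've fit the history, not found an edge.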
Back-testing on historical data is a footnote compared to what simulation can tell you - the only value it adds is correlating market data with external variables not present in the numbers themselves. Back-testing to tune an algorithm purely on the numbers in the data is just an exercise in quantified hindsight bias.
> The only way to prove your algorithm's robustness is to generate random data and test it on that.
I would never do that. This algorithm appears to have worked well on daily EURUSD candlestick data. It would be ridiculous to assume it could work well on a random data set, like daily global average temperatures or worldwide birth rates - or even prices of oil or another currency pair.
"Random data" didn't mean a random data set from a different domain. It meant random data from the same domain - simulated price/volume data within a reasonable range. If it can't work well on that, then it isn't a trading algorithm, it is a glorified fit curve.
Yes. You know the lower bound on price is 0, and the upper bound of infinity is of no practical value, so you can pick something like 10-100x the all-time max. Volume is the same - 0 to infinity, but again you can pick a distribution much (10-100x) wider than the real one. The wider the better, as it will better uncover the tail risks and payoffs of highly unusual or atypical events (see Taleb, Black Swan, etc.)
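A rough sketch of what "much wider than the real one" might mean in code (the bounds and the log-normal choice are my own illustrative assumptions):

    import random

    REAL_ALL_TIME_HIGH = 1.60  # placeholder: roughly where EURUSD has peaked

    def simulated_tick():
        # Price: anywhere from 0 up to ~100x the all-time high.
        price = random.uniform(0.0, 100 * REAL_ALL_TIME_HIGH)
        # Volume: log-normal gives the heavy right tail that freak days need.
        volume = int(random.lognormvariate(10, 2))
        return price, volume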
I'm not making this up - this is how model testing is actually done, in multiple domains. Simulation is one reason banks, HFTs, hedge funds, etc. use massive compute infrastructure: doing it properly, with many millions of plausible data sets, requires orders of magnitude more computing resources than back-testing on the one data set that happens to represent the one way things actually played out (i.e. reality).
Thinking that the one historical data set is somehow special (in itself, without context) is largely a delusion. In fact, you can generate near-perfect, historically plausible price charts using nothing but a random walk seeded with an opening price.
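For instance, a sketch of such a generator that emits daily OHLC candles from nothing but a seed price and a volatility guess (both numbers below are arbitrary assumptions):

    import random

    def random_candles(open_price, n_days=252, step_vol=0.0015):
        """Random-walk daily OHLC candles seeded with a real opening price."""
        candles, price = [], open_price
        for _ in range(n_days):
            path = [price]
            for _ in range(24):  # coarse intraday path
                path.append(path[-1] * (1 + random.gauss(0, step_vol)))
            candles.append((path[0], max(path), min(path), path[-1]))
            price = path[-1]
        return candles

Plot the output as candlesticks next to a real chart and it's hard to say which is which - which is the point: the shape of a chart alone carries far less information than it appears to.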
Consider a trading algorithm that simply searched history for that sliding window of data and presented the data that followed as its "prediction" - it would be utterly useless. That's the function of a compressor, not a predictor.
Lovely, it was a bit of a hack I came up with that I was trying out, but your cleaned up version makes it look wonderful so into my init.vim it goes! Thanks :)
The extension also translates words and text on any web page when you double-click or highlight with the mouse. There's no way to do that without that permission.
How would I go about using it with a Pi-Hole? Do I just add https://raw.githubusercontent.com/bcye/Hello-Goodbye/master/... to the list of block lists? I don't see a redirect IP on your filter list (something like 0.0.0.0). Do I have to edit it manually?
What about going the other way - gaining weight? Does anyone have tips or recommendations on how to gain weight? I'm 6'3" and 160 lbs and have always wanted to gain weight, but I struggle with consistently eating 5 meals a day.
Gaining weight is simple: it's just losing weight in reverse. You need to eat more calories than you burn. So, just as with losing weight, track everything with a scale and a food log (like MyFitnessPal). Work out how many calories you need to start gaining weight, and track against that - you may well notice you're not actually eating at a surplus for your age, activity level, etc. And just like losing weight, it will take a while, depending on how big your surplus is. You might gain 1-2 lbs a week.
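To put rough numbers on it (the maintenance figure is just an example): if your maintenance is ~2,800 kcal/day and you consistently eat ~3,300, that's a ~500 kcal/day surplus, or ~3,500 kcal/week (commonly estimated at about 1 lb of body weight), so roughly 1 lb/week of gain.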
// Show tomorrow's date, e.g. "Friday, March 6th" (requires moment.js)
document.getElementById('pressureDate').innerHTML = moment().add(1, 'days').format('dddd, MMMM Do');