Faulty Strategy Tester Optimization Results - Anyone else experiencing this? Is there a hidden setting I am not aware of?

 

I have just gone through extensive testing of 6 pairs to tune a new EA.

When I plug the resulting optimization parameters back into the Strategy Tester, the results don't come close to those of the optimization runs. In fact, they seem to be the opposite.

BTW, I never liked that the Optimization Report throws away results... but it turns out that is a settable feature: right-click on the Optimization Results tab and select or deselect "Skip Useless Results".

After an hour's optimization I run the best result in the tester, and rather than a profit factor of 5 I end up with a PF of 0.5. As a goof I try the settings that optimized to a PF of 0.5, and lo and behold, the run is profitable.

For the simulations I am doing nothing fancy: I am only varying one parameter across several runs to find the value with the best payout.

I have been using the ST for four years without incident. These results simply Do Not Compute, Will Robinson.

 
Hm, interesting. So far I haven't had this problem.
 
forexCoder:
Hm, interesting. So far I haven't had this problem.

Yes, typically you can plug the results back into the tester, run it, and get roughly the same results, usually within 5%.

I should mention that this is a six-week H4 run on a platform that has collected raw data during that period. (Actually, the system has been running pretty much non-stop since January, collecting data on all pairs.)

 

The most common reason is that the spread has changed. That means fewer take-profits are hit and more stop-losses. Depending on the strategy this can be a killer, and it is the main reason a tester terminal should be kept offline.

If you start at a bad moment, the tester can pick up a very high spread and use it for the whole testing or optimization run.
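A quick way to check whether spread is the culprit (just a minimal sketch, assuming an old-style MQL4 EA): print the spread the tester picks up at the start of a run and compare it between the two terminals.

int init()
{
   // Log the spread the tester picked up; compare this value between the
   // optimization terminal and the verification terminal.
   double spreadPoints = MarketInfo(Symbol(), MODE_SPREAD); // spread in points
   Print("Spread for ", Symbol(), ": ", spreadPoints, " points");
   return(0);
}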

 

Excellent point,

I will retry the tests (6 pairs x 2 hours each) and see how that fares. I will post the results of the offline testing.

 

Side question

Is it possible to safely copy the contents of history from one platform to another in order to share raw data?

This would be needed if the ST platforms are offline.

 

I would recommend not starting testing during off hours (or do exactly that if you want a worst-case test).

History data can be exported and imported without needing to connect to the internet.
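For example, bar data can be moved through the History Center (F2), or dumped to CSV with a small script like the sketch below. Just a sketch: the file name and date format are my assumptions and may need adjusting to what the importing terminal expects; the file lands in experts\files.

// Sketch: dump the chart's closed bars to CSV for import on
// another terminal via the History Center (F2).
int start()
{
   int handle = FileOpen(Symbol() + "_history.csv", FILE_CSV|FILE_WRITE, ',');
   if(handle < 0)
      return(-1);
   for(int i = Bars - 1; i >= 1; i--)   // oldest first, closed bars only
      FileWrite(handle, TimeToStr(Time[i], TIME_DATE|TIME_MINUTES),
                Open[i], High[i], Low[i], Close[i], Volume[i]);
   FileClose(handle);
   return(0);
}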

 

Since we are talking about testing on H4, spread really does not come into play, as we are not scalping.

Entries are made on a new candle; exits are made on target hits or on conditions at candle close.
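(For illustration only, not the actual EA code: the standard MQL4 pattern for acting once per new candle is to remember the current bar's open time.)

datetime lastBarTime = 0;

int start()
{
   if(Time[0] == lastBarTime)
      return(0);              // still inside the same candle
   lastBarTime = Time[0];     // a new candle has just opened
   // ...evaluate entry conditions on the freshly closed bar (shift 1)...
   return(0);
}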

The strategy is so simple it is dumb: literally near black-and-white conditional trading.

I would be curious to hear if either of you experience the same result with your EAs.

I ran the optimization on one offline platform, and as results came in, ran the settings on another offline ST.

I am still getting results that are not even close. Looking at the resulting chart, the EA's trades in the ST are absolutely correct.

I have perhaps one idea for the discrepancy. I should knock my head against the wall a few times if this is true.

Is the ST optimization capable of reading iMA() on different timeframes? Maybe a bug in the optimization engine?

The strategy looks at lower-timeframe MAs to determine the go signal. In the tester journal I see (and have double-checked many times during development) the verification of trade acceptance and rejection based on lower-timeframe indicator signals. If the optimization engine has a bug that fails to create the correct stimuli, this would account for the huge discrepancies.
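To make the question concrete, the calls in question look roughly like this (the timeframe and MA parameters here are made up for illustration, not my actual settings):

int start()
{
   // An H4 EA reading moving averages from the last CLOSED M15 bar.
   double fastM15 = iMA(NULL, PERIOD_M15, 20, 0, MODE_EMA, PRICE_CLOSE, 1);
   double slowM15 = iMA(NULL, PERIOD_M15, 50, 0, MODE_EMA, PRICE_CLOSE, 1);
   if(fastM15 > slowM15)
      Print("Lower-timeframe go signal");
   return(0);
}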

 

I am not sure about using lower timeframes; I normally hardcode the timeframes and use M1 for testing purposes. That way I always read higher timeframes. (AFAIK you cannot get zero-bar values from any timeframe but the one you are simulating.) A minimal sketch of what I mean is below.
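(Parameters are illustrative: attach the EA to an M1 chart in the tester and read only closed bars, shift 1 or more, from the higher timeframes, so the zero-bar limitation never comes into play.)

int start()
{
   // Attached to an M1 chart in the tester; read only the last CLOSED
   // bar (shift 1) of each higher timeframe.
   double h1Ma = iMA(NULL, PERIOD_H1, 20, 0, MODE_SMA, PRICE_CLOSE, 1);
   double h4Ma = iMA(NULL, PERIOD_H4, 20, 0, MODE_SMA, PRICE_CLOSE, 1);
   Print("H1 MA = ", h1Ma, "   H4 MA = ", h4Ma);
   // Avoid shift 0 on foreign timeframes: it may not update in the tester.
   return(0);
}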

Since we are talking about testing on H4, spread really does not come into play, as we are not scalping.

Two days ago I had a spread of at least 40 pips on USDCHF (https://www.mql5.com/en/forum/134307), which would affect even a non-scalping strategy based on H4. Never say never...

On my offline terminal I see no differences at all between optimization and normal testing. But tomorrow I can check whether looking at lower timeframes causes the problem.

Just for information, what build do you use?

//z

 
No, so far I haven't experienced what you're describing, using vaguely similar :P strategies. I had an optimization (genetic algorithm) run for a week max, so nowhere near yours, but I ran it multiple times, and while the results were not exactly equal, the differences were a matter of tenths of a percent.
 

Pooh,

A reboot of the machine did not help.

I at least have a back door: I have access to about 30 VMs which I can leverage into STs.

ZZ, if I grok the test strategy: take the test platform offline when the currency pair under test is at a tolerable spread. It would then be important not to log back on until testing is completed, so as not to change the spread the tester uses.

Thanks for your input. I am at quite a loss as to what might be causing these differences.
