Backtest not matching Optimization



    In the past, when developing a strategy, I would first run the optimizer on a range of historical data to find the best parameter settings. Then I would enter the settings the optimizer found as best and run a backtest on the same time period, confirming that I get the value the optimizer reported (a sanity check that my settings are entered correctly). Next I would run backtests on out-of-sample time periods, and only after that move on to sim trading.
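The sanity-check workflow above can be sketched generically. NinjaTrader strategies are written in NinjaScript (C#), but this toy Python version uses a stand-in `backtest` function and a grid-search `optimize` helper (both hypothetical, not NinjaTrader APIs) just to show the invariant being checked: re-running the backtest with the optimizer's winning parameters must reproduce the optimizer's reported result.

```python
# Toy sketch of the sanity-check workflow: optimize over a parameter
# grid, then re-run the backtest with the winning parameters and
# confirm it reproduces the optimizer's reported result exactly.
# backtest() and optimize() are stand-ins, not NinjaTrader APIs.
from itertools import product

def backtest(fast, slow, prices):
    """Toy moving-average crossover P&L; fully deterministic."""
    pnl, position, entry = 0.0, 0, 0.0
    for i in range(slow, len(prices)):
        fast_ma = sum(prices[i - fast:i]) / fast
        slow_ma = sum(prices[i - slow:i]) / slow
        want = 1 if fast_ma > slow_ma else -1
        if want != position:
            if position:                      # close the open trade
                pnl += position * (prices[i] - entry)
            position, entry = want, prices[i]
    return pnl

def optimize(prices, fast_range, slow_range):
    """Exhaustive grid search; returns best params and their P&L."""
    best = max(product(fast_range, slow_range),
               key=lambda p: backtest(*p, prices))
    return best, backtest(*best, prices)

prices = [100 + (i % 7) - (i % 11) * 0.5 for i in range(200)]
best_params, best_pnl = optimize(prices, range(2, 6), range(10, 20, 5))

# The sanity check: a backtest with the winning settings must match.
assert backtest(*best_params, prices) == best_pnl
```

With a deterministic backtest and identical inputs this assertion cannot fail, which is exactly why a mismatch in the real platform points to some hidden input (session template, fill assumptions, bars required, and so on) differing between the two runs.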

    Now I have a strategy where, when I optimize and then enter the best settings it found into a backtest, the backtest does not match the results the optimizer claimed for those same settings. I have meticulously checked every setting, and they are identical between the backtest and the optimizer's best run. The results aren't even close.

    Any ideas? I can send more details and data if needed.

    Thanks!

    #2
    d1g1talfr3ak, screenshots and more details, if possible, would help us investigate further. If you don't wish to post them here, you can always email us at support at ninjatrader dot com.

    The 'usual suspects' would be all parameters including session template, Date Range, Commissions, Slippage, FillType and BarsRequired.
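When chasing down which of those "usual suspects" differs, it can help to dump both configurations and diff them mechanically rather than eyeballing each field. A generic sketch follows; the key names are invented for the example and are not actual NinjaTrader property names.

```python
def diff_settings(optimizer_run, backtest_run):
    """Return {key: (optimizer_value, backtest_value)} for every
    field that differs between the two settings snapshots."""
    keys = optimizer_run.keys() | backtest_run.keys()
    return {k: (optimizer_run.get(k), backtest_run.get(k))
            for k in sorted(keys)
            if optimizer_run.get(k) != backtest_run.get(k)}

# Illustrative snapshots; key names are hypothetical.
opt = {"session": "Default 24/7", "slippage": 0, "fill_type": "Liquid",
       "bars_required": 20, "commission": 2.05}
bt  = {"session": "Default 24/7", "slippage": 1, "fill_type": "Liquid",
       "bars_required": 20, "commission": 2.05}

print(diff_settings(opt, bt))  # → {'slippage': (0, 1)}
```

An empty dict means the two runs were configured identically, so any remaining difference must come from something outside the visible settings.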

    If you backtest the settings found as 'best' directly from the chart interface, do the results change compared to the Strategy Analyzer run?



      #3
      This happens to me a lot when I optimize over a range of instruments. The optimizer aggregates certain parameters differently from the single set of parameters used in a backtest.



        #4
        First of all, to Pretender: I appreciate all suggestions and ideas, but I am not running on multiple instruments, just the ES continuous contract.

        Now, attached are some screenshots. First, I ran the optimizer on 1/1/2009 to 1/1/2012.
        I did not use slippage here, simply because I am trying to debug, not get valid results.

        Attached are screenshots of the results and the settings the optimizer used for its best result. It said the best result made $42,547.50. Running the backtest with those same settings in the Strategy Analyzer, however, showed a loss of $11,702.50. It is interesting to note that this backtest took 2018 trades, while the optimizer, over the same period with the same settings, took 558. I then loaded a chart of ES### over the same exact time period and loaded the same strategy (I had reset the settings to default before running the backtest in the Strategy Analyzer), and it produced a result of -$56,315.00... even worse than in the Strategy Analyzer. It did, however, take the same number of trades as the Strategy Analyzer backtest: 2018. All runs had commissions turned on at the same rate, used the default instrument session template, and used the default fill type.

        I have repeated this experiment several times with different ranges and step sizes for the optimizer, and the results always differ.

        As one final test, I ran the optimizer without setting any ranges for the parameters, so it went from the settings used in the backtest to those same settings... essentially a single iteration. It exactly matched the backtest from the Strategy Analyzer. A screenshot of that run is attached as "OptimizerNoVaryParameters.png".

        Any help would be appreciated.
        Attached Files



          #5
          d1g1talfr3ak, thanks for the details. To further isolate the issue:

          Would you see the same for another instrument?
          Would you see the same if the default optimizer were used instead of the GA?
          Would you see the same outcome with our SampleMACrossOver strategy as well?

          It's also important to keep in mind that the GA is not an exhaustive search, but rather a smart one that eliminates likely non-performing parameter sets from evaluation. Depending on how big the search space is, multiple runs with the GA could yield very different 'best' results.
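That point can be seen with a toy example. The sketch below is a generic genetic-algorithm loop over a made-up fitness function, nothing drawn from NinjaTrader's actual GA implementation; it just shows that different runs (here, different seeds) of a stochastic search can report different 'best' parameter sets, whereas an exhaustive grid search over the same space would always return the same answer.

```python
import random

def fitness(x, y):
    # Deterministic toy objective with several comparable optima.
    return -(x - 3) ** 2 - (y - 7) ** 2 + 5 * ((x + y) % 4)

def mutate(v, rng):
    # Nudge a gene by at most 1, clamped to the search range [0, 10].
    return max(0, min(10, v + rng.randint(-1, 1)))

def ga_best(seed, generations=20, pop_size=12):
    """Minimal GA: keep the fittest half, breed mutated children."""
    rng = random.Random(seed)
    pop = [(rng.randint(0, 10), rng.randint(0, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[:pop_size // 2]          # keep the fittest half
        children = [(mutate(rng.choice(parents)[0], rng),
                     mutate(rng.choice(parents)[1], rng))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

# Each seed stands in for a separate optimizer run; the set of answers
# across runs need not collapse to a single parameter pair.
print({ga_best(seed) for seed in range(5)})
```

Because the search is stochastic, treating any single GA run's 'best' as the global optimum is unwarranted; re-running the optimizer, or following up with a narrower exhaustive search around the GA's answer, is the usual precaution.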
