Idea 1:
I realize we could just increase the generation size and number of generations, among other variables, but... I had a thought.
What I am thinking: I'd like to be able to say, "for a given date range, time frame, strategy, and instrument, we ran [x, y, z, a, b, c, d, e, f] GO criteria with parameters [par a, par b, par c, etc.]" and, while we were able to achieve an appropriate result, ask: is it the best?
Question: Is there a logical way to record the permutations of parameters, along with the above criteria (date, time, instrument, etc.), that were fed into one GO run (in a list or array, perhaps?) and then ingest that record into a new GO? Basically, tell it: this is what has been run; where do you want to go from here? It's not easy, but if we ingest the history and it's all "complete," we'll run one generation and be done. If not, we'll know more generations are needed.
We could change certain flexible GO parameters on the second round (mutation rate, etc.) but not the core dates/strategy between rounds. This isn't necessary, but it could be used for robustness...
I'm looking for conceptual help here.
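To make Idea 1 concrete, here is a minimal Python sketch of the bookkeeping side, assuming the GO's parameter space is a simple grid and that you can log each evaluated combination externally. The file name `go_history.json`, the grid values, and the run context are all hypothetical placeholders, not anything from a real optimizer's API:

```python
import itertools
import json

# Hypothetical parameter grid for one GO run (illustrative values only).
param_grid = {
    "fast_ma": [5, 10, 20],
    "slow_ma": [50, 100, 200],
    "stop_ticks": [4, 8],
}

def all_permutations(grid):
    """Yield every parameter combination in the grid as a dict."""
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def load_tested(path="go_history.json"):
    """Load previously tested combinations; empty set if no history yet."""
    try:
        with open(path) as f:
            return {tuple(sorted(d.items())) for d in json.load(f)}
    except FileNotFoundError:
        return set()

def untested(grid, tested):
    """Combinations the next GO round still needs to cover."""
    return [p for p in all_permutations(grid)
            if tuple(sorted(p.items())) not in tested]

tested = load_tested()
remaining = untested(param_grid, tested)
print(f"{len(remaining)} of {3 * 3 * 2} permutations still untested")
```

The "where do you want to go from here?" step then reduces to seeding the next run (or round of generations) only with `remaining`; if `remaining` is empty, the history is "complete" and you're done after one pass, which matches the early-stop behavior described above.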
Idea 2:
Alternatively: given the genetic optimizer's opacity about what is happening in-process, there would appear to be a need for some sort of "calculator" that predicts how many tests would have to be run to sufficiently cover [xxxxx] permutations.
Asked another way: is there a formula for deriving Genetic Optimizer settings that predicts its likelihood of success, with the intent of driving fewer runs for time/resource efficiency?
Please don't say "just run more from the beginning," because we can't really know it's "done" unless it stops early...
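For Idea 2, a rough back-of-the-envelope calculator is possible if we model the GO's evaluations as uniform random sampling of the permutation space. That is a deliberately crude assumption (a real GA biases sampling toward fit regions, so it revisits points and explores unevenly), but it gives a defensible upper bound on the budget needed for a given coverage target. All numbers below are illustrative:

```python
import math

def search_space_size(value_counts):
    # Total permutations = product of the number of values per parameter.
    return math.prod(value_counts)

def expected_unique(space_size, evaluations):
    # Under uniform random sampling with replacement, the expected number
    # of distinct combinations seen after n evaluations is
    # N * (1 - (1 - 1/N)^n). A real GA will usually cover fewer.
    return space_size * (1 - (1 - 1 / space_size) ** evaluations)

def evaluations_for_coverage(space_size, coverage):
    # Invert the formula above: n = ln(1 - c) / ln(1 - 1/N),
    # i.e. the sampling budget needed to expect coverage fraction c.
    return math.ceil(math.log(1 - coverage) / math.log(1 - 1 / space_size))

N = search_space_size([3, 3, 2])      # e.g. 3 x 3 x 2 = 18 permutations
budget = 25 * 4                       # e.g. population 25 x 4 generations
print(f"expected distinct combos after {budget} evals: "
      f"{expected_unique(N, budget):.1f} of {N}")
print(f"evals needed for 95% expected coverage: "
      f"{evaluations_for_coverage(N, 0.95)}")
```

Since population size times generations is the GO's total evaluation budget, comparing that product against `evaluations_for_coverage` gives a principled way to argue "this run was large enough" without waiting for an early stop, at least under the random-sampling assumption.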
Thanks in Advance