Like this, for example....
phaseTemp = atanRadians*(180/Math.PI); Phase.Set(phaseTemp);
Value.Set(((Math.Min(CurrentBar + 1, Period) - 1 ) * Value[1] + trueRange) / Math.Min(CurrentBar + 1, Period));
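For context, that Value.Set line is an expanding-window (Wilder-style) running average of true range: the divisor Math.Min(CurrentBar + 1, Period) grows with the bar count until the window fills, after which it stays at Period. A rough stand-alone sketch of the same recurrence, written in plain Java purely for illustration (made-up inputs, not the actual indicator code):

```java
import java.util.Arrays;

public class RunningAverage {
    // Stand-in for: Value.Set(((min(bar+1, period) - 1) * Value[1] + TR) / min(bar+1, period))
    public static double[] smooth(double[] trueRange, int period) {
        double[] value = new double[trueRange.length];
        for (int bar = 0; bar < trueRange.length; bar++) {
            int n = Math.min(bar + 1, period);          // divisor expands until the window fills
            double prev = bar > 0 ? value[bar - 1] : 0.0; // Value[1] in NinjaScript terms
            value[bar] = ((n - 1) * prev + trueRange[bar]) / n;
        }
        return value;
    }

    public static void main(String[] args) {
        // bar 0: (0*0 + 1)/1 = 1.0, bar 1: (1*1 + 2)/2 = 1.5,
        // bar 2: (2*1.5 + 3)/3 = 2.0, bar 3: (2*2.0 + 4)/3 ≈ 2.667
        System.out.println(Arrays.toString(smooth(new double[] {1.0, 2.0, 3.0, 4.0}, 3)));
    }
}
```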
So today, I was working on my old indicator, where I did my calculations outside the Set method, stored the result in a double, and then stored the double into the DataSeries with the Set method.... here's the code section...
if (Math.Abs(InPhase[0] + InPhase[1]) > 0)
{
tangent = Math.Abs((Quadrature[0]+Quadrature[1])/(InPhase[0]+InPhase[1]));
atanRadians = Math.Atan(tangent);
phaseTemp = atanRadians*(180/Math.PI);
Phase.Set(phaseTemp);
}
Which I am really happy with, because it exactly matches the chart published in Ehlers' article below:
But thinking that it was a redundant step to first store the calculation result in a double, I refactored the code to this....
if (Math.Abs(InPhase[0] + InPhase[1]) > 0)
{
tangent = Math.Abs((Quadrature[0]+Quadrature[1])/(InPhase[0]+InPhase[1]));
atanRadians = Math.Atan(tangent);
Phase.Set(atanRadians*(180/Math.PI));
}
To my shock, this code plots very differently... the result is way off...
The max reading is much higher than before, especially on a percentage basis, since this indicator can only range between 6 and 50.... At the max reading on this chart, it's a difference of 15.83 points...
Moving the atanRadians*(180/Math.PI) calculation inside the Set parentheses is the only change I made, yet it produces this drastic a difference.
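For what it's worth, in isolation the two forms are arithmetically identical: storing atanRadians*(180/Math.PI) in a double and then passing the double, versus passing the expression directly, produces the exact same value. A quick stand-alone check (plain Java here just for illustration, with a made-up sample input) confirms this, which suggests the plot difference comes from something stateful in the script or in how Set is called, not from the expression itself:

```java
public class PhaseCheck {
    public static void main(String[] args) {
        // Made-up sample sums standing in for InPhase[0]+InPhase[1] and Quadrature[0]+Quadrature[1]
        double inPhaseSum = 0.75, quadSum = 1.25;

        double tangent = Math.abs(quadSum / inPhaseSum);
        double atanRadians = Math.atan(tangent);

        // Form 1: intermediate double, then pass the double
        double phaseTemp = atanRadians * (180 / Math.PI);

        // Form 2: pass the expression directly
        double inline = atanRadians * (180 / Math.PI);

        System.out.println(phaseTemp == inline); // prints true: bit-for-bit identical
    }
}
```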
So now I am wondering why?
I know an easy answer would be "so don't do it like that then".... but I want to know why it's happening, because we often see calculations done inside the Set method call....