Something I do regularly is calculate a number on each bar, take the MAX of that value over the last 10 bars using the MAX indicator, then divide the current bar's value by that max to see whether the current value is relatively large. (I do the exact same thing for negative values using the MIN indicator.)
This gives me the correct result unless I use division to calculate the number I'm feeding into MAX or MIN. For example, if I calculate the number as (variable[0] - variable[1]) (where the bracketed index is bars back), or if I add instead, it works. But if I use ((variable[0] / variable[1]) * 100), I have constant problems. The numbers this formula produces look fine on their own, but when I apply my MAX/MIN methodology the results come out wrong. I do not know where the problem is; it does not seem to be a divide-by-zero issue.
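To make sure I'm describing the methodology clearly, here is a minimal Python sketch of what I'm doing. All names (pct_change, rolling_max, the sample closes) are my own illustration, not code from any platform; the platform-specific MAX indicator is stood in for by a plain rolling maximum.

```python
# Hypothetical sketch of the MAX-then-normalize methodology described above.
# values are made up for illustration; the last element is the current bar.

def pct_change(curr, prev):
    # The problematic formula: ((variable[0] / variable[1]) * 100).
    # Note: if curr and prev were integer types in a compiled language,
    # curr / prev could truncate to an integer before the * 100 -- one
    # possible source of "looks fine but MAX/MIN comes out wrong" behavior.
    return (curr / prev) * 100.0

def rolling_max(series, period):
    # Stand-in for the MAX indicator: max over the last `period` bars,
    # including the current one.
    return max(series[-period:])

closes = [100.0, 102.0, 101.0, 105.0, 103.0, 108.0,
          107.0, 110.0, 109.0, 112.0, 111.0]

# Per-bar number computed with the division formula.
ratios = [pct_change(closes[i], closes[i - 1]) for i in range(1, len(closes))]

# Divide the current bar's value by the 10-bar max to gauge relative size.
# Even without an exact divide-by-zero, this normalization is sensitive to
# small or sign-flipping denominators when the series can hover near zero.
ten_bar_max = rolling_max(ratios, 10)
relative = ratios[-1] / ten_bar_max
```

If this matches what you intend, the question becomes whether the platform evaluates the division formula with the precision and sign behavior this sketch assumes.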
Does anyone know what is happening?
Thank you.
