So for example, I set the last 12 values of a DataSeries to be 100.0 and then pass it into an EMA.
Here's the example code:
for (int i = 0; i < 12; i++)
{
    testSeries.Set(i, 100.0);   // overwrite the 12 most recent values (barsAgo = 0..11)
}

IDataSeries emaTest = EMA(testSeries, 5);

for (int i = 0; i < 5; i++)
{
    Print("VALUE: " + testSeries[i].ToString());
    Print("EMA: " + emaTest[i].ToString());
}
VALUE: 100 EMA: 109.097235188783
VALUE: 100 EMA: 113.645852783175
VALUE: 100 EMA: 120.468779174763
VALUE: 100 EMA: 130.703168762144
VALUE: 100 EMA: 146.054753143216
What am I doing wrong here? Is there a bug in the EMA method?

Those differences are far too large to be floating-point round-off; double-precision error shows up around the 15th significant digit, not tens of points. What you're seeing is the EMA's memory. The EMA is recursive, so each value still depends on whatever was in the series before the 12 bars you overwrote. With a period of 5, each new bar of 100 only closes 2/(5+1) = 1/3 of the gap between the previous EMA and 100, so the deviation from 100 shrinks by a factor of 2/3 per bar. That is exactly the pattern in your output reading from oldest to newest: 146.05, 130.70, 120.47, 113.65, 109.10. Twelve bars of 100 simply isn't enough for the EMA to fully converge.
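You can reproduce the pattern with the standard EMA recursion. This is a Python sketch, not NinjaTrader's actual implementation, and the seed value here is a hypothetical stand-in for whatever was in the series before the twelve 100s:

```python
def ema_series(values, period, seed):
    """Standard EMA recursion: each bar closes 2/(period+1) of the
    gap between the new value and the previous EMA."""
    k = 2.0 / (period + 1)      # smoothing constant; 1/3 for period 5
    ema = seed
    out = []
    for v in values:
        ema += k * (v - ema)    # deviation from v shrinks by (1 - k) each bar
        out.append(ema)
    return out

# Hypothetical seed well above 100, then twelve bars of exactly 100.0:
ema = ema_series([100.0] * 12, period=5, seed=1280.0)
for v in ema[-5:]:
    print(round(v, 2))
# → values near 146.04, 130.69, 120.46, 113.64, 109.09
```

The last five values land close to the numbers in the question, and the ratio between successive deviations from 100 is exactly 2/3, which is deterministic decay rather than random rounding error.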