I was running some machine learning models and struggling to find a metric that could capture errors relative to the volatility of the series I was trying to predict. Commonly used metrics like Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Mean Squared Error (MSE), and R-squared (R2) all have limitations that failed to capture my model's performance.
I created a new metric called Mean Absolute Relative Difference Error (MARDE), which addresses some of the shortcomings of the existing metrics. MARDE measures the average absolute error between the true and predicted values, relative to the consecutive differences in the true values. By considering the relative differences, MARDE offers a more nuanced evaluation of model performance, particularly in situations with varying volatility or fluctuating data patterns.
MARDE = (1/n) * Σ(|y_true_i – y_pred_i| / Δy_true_i) * 100, for i=1 to n
Here, Δy_true_i is the absolute difference between consecutive elements of y_true (with the first difference reused for i=1 so the arrays align), and |y_true_i – y_pred_i| is the absolute error between the true and predicted values.
from typing import List, Union

import numpy as np

def marde(y_true: Union[List[float], np.ndarray], y_pred: Union[List[float], np.ndarray]) -> float:
    """
    Calculate the Mean Absolute Relative Difference Error (MARDE) for the
    given true and predicted values.

    :param y_true: List or numpy array of true values
    :param y_pred: List or numpy array of predicted values
    :return: MARDE (in percent)
    """
    # Convert inputs to numpy arrays of floats
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    # At least two true values are needed to form a consecutive difference
    if len(y_true) < 2:
        return np.nan
    # Absolute differences between consecutive elements of y_true
    abs_diff_y_true = np.abs(np.diff(y_true))
    # Repeat the first difference at the front so the array aligns with y_true
    abs_diff_y_true = np.insert(abs_diff_y_true, 0, abs_diff_y_true[0])
    # Absolute errors between y_true and y_pred
    abs_y_error = np.abs(y_true - y_pred)
    # Mean absolute error relative to the local volatility, in percent.
    # Note: a zero consecutive difference (a flat segment) causes division by zero.
    return np.mean(abs_y_error / abs_diff_y_true * 100)
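To make the formula concrete, here is a small worked example on made-up numbers (illustrative only, not data from the article), computing MARDE step by step exactly as the function above does:

```python
import numpy as np

# Hypothetical example data (illustrative values only)
y_true = np.array([10.0, 12.0, 11.0, 15.0, 14.0])
y_pred = np.array([10.5, 11.5, 11.5, 14.0, 14.5])

# Per-step volatility: |12-10|, |11-12|, |15-11|, |14-15| -> [2, 1, 4, 1],
# with the first difference repeated to align lengths -> [2, 2, 1, 4, 1]
abs_diff = np.abs(np.diff(y_true))
abs_diff = np.insert(abs_diff, 0, abs_diff[0])

errors = np.abs(y_true - y_pred)            # [0.5, 0.5, 0.5, 1.0, 0.5]
marde = np.mean(errors / abs_diff * 100)    # mean of [25, 25, 50, 25, 50]
print(round(marde, 2))                      # -> 35.0
```

A MARDE of 35% says that, on average, the prediction error is about a third of the size of the true series' step-to-step movement.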
Comparing MARDE to existing metrics:
- Mean Absolute Error (MAE): MAE measures the average absolute difference between the true and predicted values. However, it does not account for the inherent volatility or fluctuations in the data. MARDE considers this aspect by normalizing the error with respect to the consecutive differences in the true values.
- MAE = (1/n) * Σ|y_true_i – y_pred_i|, for i=1 to n
- Mean Absolute Percentage Error (MAPE): MAPE is similar to MAE, but it normalizes the error by dividing it by the true value. While MAPE is useful for expressing errors in percentage terms, it can lead to biased results when dealing with very small or zero true values. MARDE overcomes this issue by using consecutive differences as the normalization factor.
- MAPE = (1/n) * Σ(|(y_true_i – y_pred_i) / y_true_i|) * 100, for i=1 to n
- Mean Squared Error (MSE): MSE measures the average squared difference between the true and predicted values. It tends to penalize larger errors more heavily, which can be useful in some cases but might also overemphasize outliers. MARDE, on the other hand, focuses on the absolute differences and is less sensitive to extreme values.
- MSE = (1/n) * Σ(y_true_i – y_pred_i)^2, for i=1 to n
- R-squared (R2): R2 measures the proportion of variance in the true values that can be explained by the model. While R2 is useful for assessing the overall goodness-of-fit, it does not provide a direct measure of the prediction error. MARDE, as a metric focused on errors, offers a complementary perspective to R2 in evaluating model performance.
- R2 = 1 – (SSR / SST)
- SSR (Sum of Squared Residuals) is the sum of the squared differences between the actual dependent variable values (y) and the predicted values (ŷ) from the model;
- SST (Total Sum of Squares) is the sum of the squared differences between the actual dependent variable values (y) and the mean of the dependent variable (ȳ).
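As a sketch of how the metrics above can disagree (the numbers are made up for illustration), consider a slow-moving series with a constant prediction error of 1. MAE and MSE look small relative to the scale of the data, MAPE reports roughly 1%, but MARDE flags that the error is as large as the series' typical step:

```python
import numpy as np

# Hypothetical data: a low-volatility series with a constant error of 1
y_true = np.array([100.0, 101.0, 100.0, 101.0, 100.0])
y_pred = y_true + 1.0

mae  = np.mean(np.abs(y_true - y_pred))                    # 1.0
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100   # roughly 1 %
mse  = np.mean((y_true - y_pred) ** 2)                     # 1.0

# MARDE: normalize the same errors by the consecutive differences
diffs = np.abs(np.diff(y_true))
diffs = np.insert(diffs, 0, diffs[0])                      # every step is 1.0
marde = np.mean(np.abs(y_true - y_pred) / diffs * 100)     # 100.0
```

Here MAE and MAPE suggest the model is nearly perfect, while MARDE = 100% shows the error is exactly as big as the movement the model was supposed to capture.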
The Mean Absolute Relative Difference Error (MARDE) is a valuable addition to the set of evaluation metrics used in forecasting and prediction tasks. By taking into account the consecutive differences in the true values, MARDE provides a more granular assessment of model performance, particularly in situations with fluctuating data patterns. It overcomes some of the limitations of existing metrics and can be employed alongside them for a comprehensive evaluation of prediction models.