
Investigation of Marketing Mix Models' Business Error using KL Divergence and Chebyshev's Inequality
R Venkat Raman, Ridhima Kumar, Pranav Krishna
Aryma Labs Pvt. Ltd

Corresponding Author: [email protected]

Abstract

This report investigates the trade-off between predictive accuracy and business impact in Robyn, which uses the Nevergrad algorithm to optimize Prediction Error (Normalized RMSE) and Business Error (Decomposed Residual Sum of Squares, Decomp.RSSD) as independent objectives. We examined the models with the best and the worst Decomp.RSSD on the Pareto frontier, analyzing their performance through the lenses of Kullback-Leibler (KL) Divergence and Chebyshev's inequality. The aim is to explore how these models balance the dual objectives of error minimization and business impact, highlighting the complexity of selecting the "best" model when both statistical alignment and business relevance are considered. The analysis revealed unexpected trends between the best and worst models in terms of KL Divergence and error clustering. These results highlight a trade-off between minimizing business error and maintaining predictive accuracy, and point to the need for a nuanced approach to model evaluation and care in choosing the final model.
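The abstract frames the analysis around two ingredients: KL Divergence between the error distributions of two candidate models, and Chebyshev's inequality as a bound on how tightly errors cluster around their mean. The following is a minimal Python sketch of those two computations, not the authors' code; the residual arrays, bin count, and the choice of k are hypothetical stand-ins for illustration.

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)

# Hypothetical residuals from two candidate models (stand-ins for the
# best- and worst-Decomp.RSSD models on the Pareto frontier).
resid_best = rng.normal(loc=0.0, scale=1.0, size=1000)
resid_worst = rng.normal(loc=0.5, scale=1.5, size=1000)

# Estimate KL(best || worst) by binning both residual samples on a
# common equal-width grid; scipy's entropy(p, q) normalizes the bin
# masses and computes sum(p * log(p / q)).
bins = np.histogram_bin_edges(
    np.concatenate([resid_best, resid_worst]), bins=30
)
p, _ = np.histogram(resid_best, bins=bins, density=True)
q, _ = np.histogram(resid_worst, bins=bins, density=True)
eps = 1e-12  # guard against empty bins
kl = entropy(p + eps, q + eps)

# Chebyshev's inequality: at most 1/k^2 of any distribution lies more
# than k standard deviations from its mean. Comparing the bound with
# the observed tail fraction gauges how tightly the errors cluster.
k = 2.0
mu, sigma = resid_best.mean(), resid_best.std()
observed_tail = np.mean(np.abs(resid_best - mu) >= k * sigma)
bound = 1.0 / k**2

print(f"KL(best || worst) = {kl:.4f}")
print(f"Chebyshev bound at k={k}: {bound:.3f}, "
      f"observed tail mass: {observed_tail:.3f}")
```

An observed tail mass far below the Chebyshev bound indicates errors clustering tighter than the worst case the inequality allows, which is the kind of comparison the report uses when contrasting the two models.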
04 May 2024: Submitted to TechRxiv
07 May 2024: Published in TechRxiv