Many conventional methods fall short when confronted with heavy-tailed data distributions. In this talk, we discuss our recent research on adaptive robust estimators. Our key insight is that the robustification parameter should adapt to the sample size, the dimensionality, and the moments of the error distribution. This adaptation strikes an optimal balance between bias and robustness in the presence of heavy-tailed errors. We focus on mean estimation and regression, examining the performance of these estimators through both theoretical analysis and numerical experiments, and we explore potential applications beyond these specific settings.
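To make the adaptation concrete, the following is a minimal sketch of a Huber-type mean estimator whose robustification parameter scales like sigma * sqrt(n / log n). The MAD scale estimate, the rate constant, and the reweighting solver are illustrative assumptions for this sketch, not the exact recipe from the talk.

```python
import numpy as np

def huber_mean(x, tau, n_iter=50):
    """Huber M-estimator of location for a fixed robustification
    parameter tau, computed by iteratively reweighted averaging."""
    mu = np.median(x)  # robust starting point
    for _ in range(n_iter):
        r = x - mu
        # Huber weights: 1 inside [-tau, tau], tau/|r| outside
        w = np.where(np.abs(r) <= tau, 1.0, tau / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu

def adaptive_huber_mean(x):
    """Adaptive choice tau ~ sigma * sqrt(n / log n), with sigma
    estimated by the MAD. Rate and constant are assumptions here."""
    n = len(x)
    sigma = 1.4826 * np.median(np.abs(x - np.median(x)))  # MAD scale
    tau = sigma * np.sqrt(n / np.log(n))
    return huber_mean(x, tau)

rng = np.random.default_rng(0)
# heavy-tailed sample: Student-t with 2.5 degrees of freedom, true mean 2
sample = 2.0 + rng.standard_t(df=2.5, size=2000)
print(adaptive_huber_mean(sample))
```

Because tau grows with the sample size, the estimator truncates less aggressively as more data arrive, trading robustness for lower bias exactly as the theory prescribes.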
Additionally, we tackle a practical and computational challenge associated with adaptive robust estimators: the robustification parameter must be carefully tuned, typically via cross-validation or Lepski's method. To address this issue, we introduce a novel objective function that automates the tuning, yielding self-tuned robust estimators. Our numerical studies demonstrate that this approach outperforms other state-of-the-art methods.
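One way such self-tuning can play out in practice is an alternating scheme that updates the location estimate and the robustification parameter together. The moment-matching fixed point and the target level log(n)/n below are assumptions chosen for illustration, not necessarily the objective proposed in the talk.

```python
import numpy as np

def self_tuned_huber_mean(x, n_outer=30):
    """Alternate between (a) a fixed-point update of tau that matches
    the truncated second moment of the residuals to a target level z,
    and (b) one reweighted-average step for the location mu given tau.
    Illustrative sketch only; z = log(n)/n is an assumed choice."""
    n = len(x)
    z = np.log(n) / n              # assumed target level
    mu = np.median(x)              # robust initialization
    tau = np.std(x) + 1e-12
    for _ in range(n_outer):
        r = x - mu
        # fixed point for tau: mean(min(r^2, tau^2)) / tau^2 = z
        for _ in range(50):
            tau = np.sqrt(np.mean(np.minimum(r**2, tau**2)) / z)
        # Huber reweighting step for mu given the current tau
        w = np.where(np.abs(r) <= tau, 1.0,
                     tau / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu, tau

rng = np.random.default_rng(1)
# heavy-tailed sample: Student-t with 3 degrees of freedom, true mean -1
sample = -1.0 + rng.standard_t(df=3, size=5000)
mu_hat, tau_hat = self_tuned_huber_mean(sample)
print(mu_hat, tau_hat)
```

The appeal of this style of scheme is that no grid search or data splitting is needed: the parameter and the estimate are obtained jointly from a single pass of alternating updates.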