Out of Distribution Robustness in Deep Learning
In this video I discuss the paper The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning. Abstract: Although machine learning models typically experience a drop in performance on out-of-distribution data, accuracies on in- versus out-of-distribution data are widely observed to follow a single linear trend when evaluated across a testbed of models. Models that are more accurate on the out-of-distribution data relative to this baseline exhibit effective robustness and are exceedingly rare. Identifying such models, and understanding their properties, is key to improving out-of-distribution performance. We conduct a thorough empirical investigation of effective robustness during fine-tuning and surprisingly find that models pretrained on larger datasets exhibit effective robustness during training that vanishes at convergence. We study how properties of the data influence effective robustness, and we show that it increases with larger dataset size, greater diversity, and higher example difficulty.
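To make the notion of effective robustness concrete: it is the gap between a model's out-of-distribution accuracy and the accuracy predicted by the linear in- vs. out-of-distribution trend fitted across a testbed of models. The sketch below is a minimal illustration, assuming a plain least-squares fit on raw accuracies; the testbed numbers are hypothetical and only for demonstration.

```python
def fit_linear_trend(id_accs, ood_accs):
    """Ordinary least-squares fit of OOD accuracy against ID accuracy."""
    n = len(id_accs)
    mean_x = sum(id_accs) / n
    mean_y = sum(ood_accs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(id_accs, ood_accs))
    var = sum((x - mean_x) ** 2 for x in id_accs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

def effective_robustness(id_accs, ood_accs, model_id, model_ood):
    """A model's OOD accuracy minus what the baseline trend predicts for it.
    A positive value means the model sits above the trend line, i.e. it is
    'effectively robust' in the paper's sense."""
    slope, intercept = fit_linear_trend(id_accs, ood_accs)
    return model_ood - (slope * model_id + intercept)

# Hypothetical testbed accuracies (fractions), for illustration only.
testbed_id  = [0.60, 0.70, 0.80, 0.90]
testbed_ood = [0.40, 0.50, 0.60, 0.70]   # baseline trend: ood = id - 0.20

# A candidate model at 85% ID / 70% OOD sits ~0.05 above the trend.
print(effective_robustness(testbed_id, testbed_ood, 0.85, 0.70))
```

A model exactly on the trend line scores zero; the paper's observation is that fine-tuned models with positive scores early in training drift back toward zero by convergence.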