Apricot: A Weight-Adaptation Approach to Fixing Deep Learning Models
A deep learning (DL) model is inherently imprecise. To alleviate this problem, existing techniques retrain a given DL model over a larger training dataset, with the help of fault-injected models, or using insights from the failing test cases of the given DL model. In this paper, we present Apricot, a novel weight-adaptation approach to fixing DL models iteratively. Our key observation is that if the architecture of a DL model is trained over a subset of the original training dataset, the weights in the resultant reduced DL model (rDLM) can provide insights into the direction and magnitude of the adjustments needed by the weights in the original DL model to handle the test cases that the original DL model misclassifies. Apricot generates a set of such reduced DL models from the original DL model. In each iteration, for each weight of the input DL model (iDLM) of that iteration, Apricot adjusts the weight toward the average of the corresponding weights of those rDLMs that correctly classify the failing test cases experienced by the iDLM, and/or away from the average of those rDLMs that misclassify the same failing test cases; it then trains the weight-adjusted iDLM over the original training dataset to generate a new iDLM for the next iteration. An experiment using five state-of-the-art DL models shows that Apricot increases their accuracy by 0.35%-2.81%, with an average of 1.45%. The experiment also reveals the complementary nature of these rDLMs in Apricot.
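The per-weight adjustment described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the step size `eta`, the function name, and the exact update rule (a fixed fraction of the distance to each group's average) are all assumptions introduced here for clarity.

```python
import numpy as np

def adjust_weight(w, rdlm_weights, correct_mask, eta=0.1):
    """Hypothetical sketch of Apricot's per-weight adjustment.

    w            -- one weight of the input DL model (iDLM)
    rdlm_weights -- the corresponding weight in each reduced DL model (rDLM)
    correct_mask -- True where an rDLM correctly classifies the failing
                    test case, False where it misclassifies it
    eta          -- assumed step size (not specified in the abstract)
    """
    rdlm_weights = np.asarray(rdlm_weights, dtype=float)
    correct_mask = np.asarray(correct_mask, dtype=bool)

    # Move toward the average weight of the rDLMs that classify correctly...
    if correct_mask.any():
        w += eta * (rdlm_weights[correct_mask].mean() - w)
    # ...and/or away from the average weight of the rDLMs that misclassify.
    if (~correct_mask).any():
        w -= eta * (rdlm_weights[~correct_mask].mean() - w)
    return w
```

In Apricot proper this adjustment would be applied to every weight of the iDLM, after which the adjusted model is retrained on the original training dataset before the next iteration.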