Industry Showcase (Papers)
Wed 13 Sep 2023 14:47 - 15:00 at Room D - Open Source and Software Ecosystems 2. Chair(s): Paul Grünbacher

Designing and optimizing deep models requires managing large datasets and conducting carefully designed controlled experiments that depend on large sets of hyper-parameters and problem-dependent software and data configurations. These experiments are executed by training the model under study with varying configurations. Since a typical training run can take days even on proven acceleration hardware such as Graphics Processing Units (GPUs), avoiding human error in configuration preparation and ensuring the repeatability of experiments are of utmost importance. Failed training runs lead to lost time, wasted energy, and frustration, while unrepeatable or poorly monitored and logged training runs make it exceedingly hard to track performance and converge on a successful, well-generalizing deep model. Managing large datasets and automating training are therefore crucial for training deep models efficiently. In this paper, we present two open-source software tools that aim to achieve these goals: a dataset manager (DatumAid) and a training automation manager (OrchesTrain). DatumAid integrates with the Computer Vision Annotation Tool (CVAT) to facilitate the management of annotated datasets. It allows users to filter labeled data, manipulate datasets, and export them for training. The tool adopts a simple code structure while giving users flexibility through configuration files. OrchesTrain automates the model-training process by enabling rapid preparation and training of models in the desired style for the intended tasks. Users can seamlessly integrate models built with the PyTorch library and leverage the full capabilities of OrchesTrain.
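The configuration-file-driven pattern described above can be sketched roughly as follows. All names here are hypothetical illustrations, not the actual DatumAid or OrchesTrain API: a declarative experiment configuration selects label filters and hyper-parameters, and the tool derives the training dataset from it.

```python
from dataclasses import dataclass

# Hypothetical sketch of a config-driven experiment setup; the real
# DatumAid/OrchesTrain interfaces may differ.

@dataclass
class ExperimentConfig:
    learning_rate: float
    batch_size: int
    label_filter: list  # keep only samples annotated with these labels

def filter_dataset(samples, cfg):
    """Keep only annotated samples whose label passes the config filter."""
    return [s for s in samples if s["label"] in cfg.label_filter]

# A config like this would typically be loaded from a YAML/JSON file
# rather than constructed in code.
cfg = ExperimentConfig(learning_rate=1e-3, batch_size=32,
                       label_filter=["car", "person"])
data = [{"label": "car"}, {"label": "tree"}, {"label": "person"}]
kept = filter_dataset(data, cfg)
```

Keeping the filter in the configuration file rather than in code means the same script can produce different training datasets without edits, which is what makes experiment runs reproducible from the config alone.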
It enables the simultaneous or separate use of the Wandb, MLflow, and TensorBoard loggers. To ensure reproducibility of the conducted experiments, all configurations and code are saved to the selected logger in an appropriate structure within a YAML file, along with the serialized model files. Both software tools are publicly available on GitHub.
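The reproducibility scheme described above, persisting the exact experiment configuration next to the serialized model artifact so a run can be replayed later, can be sketched as follows. This is an illustrative assumption, not the tools' real code, and it uses JSON instead of YAML only to keep the sketch dependency-free.

```python
import hashlib
import json
import os
import tempfile

def save_run(config: dict, model_bytes: bytes, out_dir: str) -> str:
    """Write the config and model artifact together; return a run id
    derived deterministically from the configuration."""
    canonical = json.dumps(config, sort_keys=True)
    run_id = hashlib.sha256(canonical.encode()).hexdigest()[:8]
    run_dir = os.path.join(out_dir, run_id)
    os.makedirs(run_dir, exist_ok=True)
    with open(os.path.join(run_dir, "config.json"), "w") as f:
        f.write(canonical)
    with open(os.path.join(run_dir, "model.bin"), "wb") as f:
        f.write(model_bytes)
    return run_id

out = tempfile.mkdtemp()
rid = save_run({"lr": 1e-3, "epochs": 10}, b"fake-weights", out)
```

Because the run id is a hash of the canonicalized configuration, two runs with identical settings collide on the same directory, which makes accidental configuration drift immediately visible.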