Gas Estimation and Optimization for Smart Contracts on Ethereum
When users deploy or invoke smart contracts on Ethereum, a fee is charged to prevent resource abuse. The fee is metered in gas: it is the product of the amount of gas used and the gas price, so the more gas a transaction consumes, the higher its fee. In my doctoral research, we aim to investigate two widely studied gas-related issues: gas estimation and gas optimization. The former predicts the gas cost of executing a transaction to avoid out-of-gas exceptions; the latter modifies existing contracts to reduce transaction fees. We target problems that previous work has not solved: gas estimation for functions with loops, and gas optimization for storage usage and arrays. We expect this research to help Ethereum users avoid economic losses from out-of-gas exceptions and pay lower transaction fees.
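To make the fee model above concrete, here is a minimal Python sketch with purely hypothetical numbers (a 21,000-gas transfer at a 30 gwei gas price); it only restates the arithmetic fee = gas used × gas price and is not part of the proposed tooling:

    # Illustrative fee arithmetic only; the gas amount and price are made up.
    GWEI = 10 ** 9     # 1 gwei  = 10^9 wei
    ETHER = 10 ** 18   # 1 ether = 10^18 wei

    def transaction_fee_wei(gas_used: int, gas_price_wei: int) -> int:
        """Fee charged for a transaction: gas used multiplied by gas price."""
        return gas_used * gas_price_wei

    fee = transaction_fee_wei(21_000, 30 * GWEI)
    print(f"{fee} wei = {fee / ETHER:.6f} ETH")  # 630000000000000 wei = 0.000630 ETH

Underestimating the required gas instead causes an out-of-gas exception: the transaction reverts, yet the gas consumed up to that point is still paid for, which is the economic loss the estimation work addresses.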
Quality analysis of mobile applications with special focus on security aspects
Smartphones and mobile apps have become an essential part of our daily lives, so it is necessary to ensure the quality of these apps. Two important aspects of code quality are maintainability and security. The goals of my PhD project are (1) to study code smells, security issues and their evolution in iOS apps and frameworks, (2) to enhance training and teaching using visualisation support, and (3) to support developers in automatically detecting dependencies on vulnerable library elements in their apps. For each of the three goals, dedicated tool support will be provided, i.e., GraphifyEvolution, VisualiseEvolution, and DependencyEvolution, respectively. The tool GraphifyEvolution already exists and has been applied to analyse code smells in iOS apps written in Swift. The tool has a modular architecture and can be extended to add support for additional languages and external analysis tools. In the remaining two years of my PhD studies, I will complete the other two tools and apply them in case studies with developers in industry as well as in university teaching.
The supervision of modern IT systems brings new opportunities and challenges by making available big data streams that, if properly analysed, can support high standards of scalability, reliability and efficiency. Rule-based inference engines on streaming data are a key component of maintenance systems for detecting anomalies and automating their resolution, but they remain confined to simple and general rules, a lesson learned from the expert systems era. Artificial Intelligence for IT Operations (AIOps) proposes to take advantage of advanced analytics, such as machine learning and data mining on big data, to improve every step of supervision systems, such as incident management (detection, triage, root cause analysis, automated healing). However, the best AIOps techniques often rely on “opaque” models, strongly limiting their adoption. In this thesis, we aim to study how Subgroup Discovery can help AIOps. This data mining technique offers ways to extract hypotheses from data, and likewise from predictive models, helping experts understand the underlying processes that generate the data and the predictions, respectively. To ensure the relevance of our proposals, this project involves both data mining researchers and practitioners from Infologic, a French software vendor.
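As a concrete illustration of the kind of hypotheses Subgroup Discovery extracts, a standard quality measure used to rank candidate subgroups is Weighted Relative Accuracy (shown here only as an example of the technique; the thesis may rely on different measures):

    % Weighted Relative Accuracy of a subgroup s
    \[
      \mathrm{WRAcc}(s) = \frac{n_s}{N}\,\bigl(p_s - p_0\bigr)
    \]
    % N   : total number of records (e.g., monitored incidents)
    % n_s : records covered by the subgroup description s
    % p_s : share of the target (e.g., a given failure class) within s
    % p_0 : share of the target in the whole dataset

A subgroup with high WRAcc is both large and strongly deviating from the overall target distribution, which is what makes such patterns readable hypotheses for the experts mentioned above.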
Semi-automated Cross-Component Issue Management and Impact Analysis
Despite microservices and other component-based architecture styles having been state of the art in research for many years, issue management across the boundaries of a single component is still challenging. Components that were developed independently and can be used independently are joined together in the overall architecture, which results in dependencies between those components. Due to these dependencies, bugs can arise that propagate along call chains through the architecture. Other types of issues, such as violations of non-functional quality properties, can also impact other components. However, traditional issue management systems end at the boundaries of a component, making the tracking of issues across different components time-consuming and error-prone. Therefore, a need arises for automated cross-component issue management, which automatically puts issues of the independent components into their correct mutual context, creates new cross-component issues, and syncs cross-component issues between different components. This automation could enable developers to manage issues across components as efficiently as possible and increase the system’s quality. To solve this problem, we propose an initial approach for semi-automated cross-component issue management in conjunction with service-level objectives, based on our Gropius system. For example, relationships between issues of the same or different components can be predicted using classification to identify dependencies of issues across component boundaries. In addition, we are developing a system to model, monitor, and alert on service-level objectives. Based on this, the impact of such quality violations on the overall system and the business process will be analysed and explained through cross-component issues.
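As a sketch of how the classification step mentioned above could look, the following minimal Python example treats cross-component relation prediction as binary text classification over issue pairs; the features, training data, and model choice are hypothetical and not taken from the Gropius implementation:

    # Illustrative only: predict whether two issues from different components
    # are related, framed as binary classification over their joined texts.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled issue pairs: 1 = related, 0 = unrelated.
    pairs = [
        ("Checkout times out under load", "order-service: requests hang under load", 1),
        ("Typo on the login page", "order-service: requests hang under load", 0),
    ]
    texts = [a + " [SEP] " + b for a, b, _ in pairs]
    labels = [label for _, _, label in pairs]

    # Bag-of-words features plus a linear classifier as a simple baseline.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # Probability that a new cross-component issue pair is related.
    print(model.predict_proba(["Payment gateway returns 504 [SEP] checkout-service: upstream timeout"]))

In a real setting one would train on many labelled pairs and richer signals (components, stack traces, call-chain information), but the sketch shows the shape of the prediction task.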
Tackling Flaky Tests: Understanding the Problem and Providing Practical Solutions
Non-deterministically behaving tests impede software development as they hamper regression testing, destroy trust, and waste resources. This phenomenon, also called test flakiness, has received increasing attention over the past years. The multitude of both peer-reviewed literature and online blog articles addressing the issue illustrates that flaky tests are deemed both a relevant research topic and a serious problem in everyday business. A major shortcoming of existing work aiming to mitigate flaky tests is its limited applicability, since many of the proposed tools rely heavily on specific ecosystems. This issue also reflects on various attempts to investigate flaky tests: using mostly similar sets of open-source Java projects, many studies are unable to generalize their findings to projects lying beyond this scope. On top of that, a holistic understanding of flaky tests also suffers from a lack of analyses focusing on the developers’ perspective, with most existing studies taking a code-centric approach. With my work, I want to close these gaps: I plan to create an overarching and widely applicable framework that empowers developers to tackle flaky tests through existing and novel techniques and enables researchers to swiftly deploy and evaluate new approaches. As a starting point, I am studying test flakiness from previously unconsidered angles: I widen the scope of observation by investigating flakiness beyond the realm of the Java ecosystem while also capturing practitioners’ opinions. By adding to the understanding of the phenomenon, I hope not only to close existing research gaps, but also to gain a clear vision of how research on test flakiness can create value for developers working in the field.