Social
Mon 10 Oct 2022 10:00 - 10:15 at Gold B - Coffee Break
The coffee break is an intentional allocation of time for low-fidelity activities like chatting and brainstorming. The idea is to break the flow of the conference in a way that restores energy and creates engagement.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
Application development for the modern Web involves sophisticated engineering workflows that include user interface aspects. These involve Web elements typically created with HTML/CSS markup and JavaScript-like languages, yielding Web documents. WebMonitor leverages requirements formally specified in a logic able to capture both the layout of visual components and how they change over time as a user interacts with them. The requirements are then verified against arbitrary web pages, providing automated support for a wide set of use cases in interaction testing and simulation. We position WebMonitor within a developer workflow where, in case of a negative result, a visual counterexample is returned. The monitoring framework we present follows a black-box approach and, as such, is independent of the underlying technologies a Web application may be developed with, as well as of the browser and operating system used.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
This paper presents Quacky, a tool for quantifying the permissiveness of access control policies in the cloud. Given a policy, Quacky translates it into an SMT formula and uses a model counting constraint solver to quantify permissiveness. When given multiple policies, Quacky not only determines which policy is more permissive, but also quantifies the relative permissiveness between them. With Quacky, users can automatically analyze complex policies, helping them ensure that there is no unintended access to their data. Quacky supports access control policies written in Amazon's AWS Identity and Access Management (IAM), Microsoft's Azure, and Google Cloud Platform (GCP) policy languages. Quacky is open source and has both a command-line and a web interface. Video URL: https://youtu.be/YsiGOI_SCtg. The Quacky tool and benchmarks are available at https://github.com/vlab-cs-ucsb/quacky.
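To make the counting idea concrete, here is a minimal sketch of SMT-based permissiveness comparison using the z3 Python bindings. The toy boolean policy encodings and the three-action domain are invented for illustration; Quacky's actual translation and model-counting solver over IAM/Azure/GCP policies are its own.

```python
# A minimal sketch, assuming the z3 Python bindings (pip install z3-solver).
# Policies and the tiny action domain are invented for illustration.
from z3 import Bools, Or, Solver, sat

read, write, delete = Bools("read write delete")
actions = [read, write, delete]

# Toy encodings: policy A allows read or write; policy B allows read only.
policy_a = Or(read, write)
policy_b = read

def count_models(formula, variables):
    """Enumerate satisfying assignments by adding blocking clauses."""
    solver = Solver()
    solver.add(formula)
    count = 0
    while solver.check() == sat:
        model = solver.model()
        count += 1
        # Block the current assignment so the next check finds a new one.
        solver.add(Or([v != model.eval(v, model_completion=True)
                       for v in variables]))
    return count

perm_a = count_models(policy_a, actions)
perm_b = count_models(policy_b, actions)
print(f"policy A admits {perm_a} assignments, policy B admits {perm_b}")
print(f"relative permissiveness: {perm_a - perm_b}")
```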
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
Software metrics capture information about software development products and processes. These metrics support decision-making, e.g., in team management or dependency selection. However, existing metrics tools measure only a snapshot of a software project. Little attention has been given to enabling engineers to reason about metric trends over time: longitudinal metrics that give insight into process, not just product. In this work, we present PRIME (PRocess Internal MEtrics), a tool for computing and visualizing process metrics. The currently supported metrics include productivity, issue density, issue spoilage, and bus factor. We illustrate the value of longitudinal data and conclude with a research agenda. The demo video can be found at https://youtu.be/YigEHy3_JCo. The source code can be found at https://github.com/SoftwareSystemsLaboratory/prime.
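As one concrete example of a longitudinal process metric, the sketch below computes a common bus-factor heuristic from git history. The 50%-of-commits rule is an assumption for illustration, not necessarily the definition PRIME implements.

```python
# A hedged sketch of one process metric, bus factor, from git history.
import subprocess
from collections import Counter

def bus_factor(repo_path="."):
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    commits_per_author = Counter(log)
    total = sum(commits_per_author.values())
    covered, factor = 0, 0
    # Smallest set of authors responsible for at least half of all commits
    # (one common heuristic, assumed here).
    for _, n in commits_per_author.most_common():
        covered += n
        factor += 1
        if covered >= total / 2:
            break
    return factor

print(bus_factor())
```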
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
To reduce the attack surface of app source code, many tools focus on detecting vulnerabilities in Android apps. However, previous studies have highlighted some obvious weaknesses. For example, (1) most available tools, such as AndroBugs, MobSF, Qark, and Super, use pattern-based methods to detect vulnerabilities. Although they are effective in detecting some types, they introduce a large number of false positives, which inevitably increases the patching overhead for app developers. (2) Similarly, static taint analysis tools such as FlowDroid and IccTA present hundreds of data-leakage vulnerability candidates instead of confirmed vulnerabilities. (3) Last but not least, a relatively complete vulnerability taxonomy is missing, which can introduce many false negatives. In this paper, based on our prior knowledge of this research domain, we empirically propose a vulnerability taxonomy as the baseline and then extend AUSERA by augmenting its detection capability to 50 vulnerability types. Meanwhile, a new benchmark dataset covering all 50 vulnerability types is constructed to demonstrate the effectiveness of AUSERA. The tool and datasets are available at https://github.com/tjusenchen/AUSERA, and the demonstration video can be found at https://youtu.be/UCiGwVaFPpY.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
We present Trimmer, a state-of-the-art tool for code size reduction. Trimmer reduces code size by specializing a program for constant inputs provided by developers. These constant inputs can be provided as command-line options or configuration files, and are used to specify features that must be retained, which in turn identifies features that are unused in a specific deployment and can be removed. Trimmer includes sophisticated compiler transformations for input specialization, supports precise yet efficient context-sensitive inter-procedural constant propagation, and introduces a custom loop unroller. Trimmer is easy to use and highly configurable. In this paper, we discuss Trimmer's configurable knobs that allow developers to explicitly trade analysis precision against analysis time. We also discuss the high-level implementation of Trimmer's static analysis passes. The source code of Trimmer is publicly available at https://github.com/ashish-gehani/Trimmer. The video demonstration is available at https://www.youtube.com/watch?v=6pAuJ68INnI.
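Trimmer itself works on LLVM IR; purely to illustrate the specialization idea, the toy Python sketch below folds branches that a known constant input decides, the same way constant propagation removes deployment-dead features. The constant name and source snippet are invented.

```python
# A toy analogue of input specialization, not Trimmer's implementation.
import ast

KNOWN = {"ENABLE_LOGGING": False}  # constants supplied by the "deployment"

class Specializer(ast.NodeTransformer):
    def visit_If(self, node):
        self.generic_visit(node)
        test = node.test
        if isinstance(test, ast.Name) and test.id in KNOWN:
            taken = node.body if KNOWN[test.id] else node.orelse
            return taken or None  # drop the branch entirely if it is dead
        return node

source = """
if ENABLE_LOGGING:
    setup_logger()
do_work()
"""
tree = Specializer().visit(ast.parse(source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # only `do_work()` survives specialization
```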
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
The rapid expansion of robotics relies on properly configuring and testing hardware and software. Due to the expense and hazard of real-world testing on hardware, robot system testing increasingly utilizes extensive simulation. Creating robot simulation tests requires specialized skills in robot programming and simulation tools. While there are many platforms and toolkits for creating these simulations, they can be cumbersome when combined with automated testing. We present Maktub, a tool for creating tests using Unity and ROS. Maktub leverages the extensive 3D manipulation capabilities of Unity to lower the barrier to creating system tests for robots. A key idea of Maktub is to enable writing tests without robotic software development skills. A video demonstration of Maktub can be found at https://youtu.be/c0Bacy3DlEE, and the source code can be found at https://github.com/RobotCodeLab/Maktub.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
A key threat to the use of third-party dependencies is security vulnerabilities, which risk unwanted access to a user's application. As part of an ecosystem of dependencies, users of a library are exposed to both the direct and transitive dependencies adopted into their application. Recent work provides tool support for updating vulnerable dependencies but rarely conveys the complexity of transitive updates. In this paper, we introduce our solution to support vulnerability updating. V-Achilles is a prototype that uses dependency graphs to visualize which dependencies are affected by vulnerability attacks. In addition to the tool overview, we highlight three use cases to demonstrate its usefulness, along with an application of our prototype to real-world npm packages. The prototype is available at https://github.com/MUICT-SERU/V-Achilles, with an accompanying video demonstration at https://www.youtube.com/watch?v=tspiZfhMNcs.
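The transitive question the tool visualizes can be stated in a few lines. Below is a hedged sketch over an invented npm-style dependency map: which direct dependencies reach a package with a known vulnerability?

```python
# A minimal sketch with an invented dependency graph and advisory set.
DEPS = {                       # hypothetical npm dependency graph
    "my-app":  ["express", "lodash"],
    "express": ["qs", "cookie"],
    "lodash":  [],
    "qs":      [],
    "cookie":  [],
}
VULNERABLE = {"qs"}            # e.g., from an advisory database

def reaches_vulnerable(pkg, seen=None):
    seen = seen or set()
    if pkg in VULNERABLE:
        return True
    seen.add(pkg)
    return any(reaches_vulnerable(d, seen)
               for d in DEPS.get(pkg, []) if d not in seen)

for direct in DEPS["my-app"]:
    if reaches_vulnerable(direct):
        print(f"{direct} transitively depends on a vulnerable package")
```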
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
We present RoboSimVer, a tool for modeling and analyzing RoboSim models. It uses a graphical modeling approach to build RoboSim models, platform-independent simulation models for robotics. For model analysis, we implemented a model-transformation approach that translates RoboSim models into NTA (Networks of Timed Automata) and their stochastic version, SHA (Stochastic Hybrid Automata), based on a set of patterns and mapping rules. RoboSimVer can thus derive a simulation model, and it also provides different rigorous verification techniques to check whether the simulation models satisfy property constraints. For experimental demonstration, we adopt the Alpha algorithm for robotics as a case study. We use a robotic platform model of swarm robots in an uncertain environment to illustrate how our tool supports the verification of stochastic and hybrid systems. The demonstration video of the tool is available at https://youtu.be/mNe4q64GkmQ.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
Cross-chain bridges have become the most popular solution for supporting asset interoperability between heterogeneous blockchains. However, while providing efficient and flexible cross-chain asset transfer, the complex workflow involving both on-chain smart contracts and off-chain programs causes emerging security issues. In the past year, there have been more than ten severe attacks against cross-chain bridges, causing billions of dollars in losses. With few studies focusing on the security of cross-chain bridges, the community still lacks the knowledge and tools to mitigate this significant threat. To bridge the gap, we conduct the first study on the security of cross-chain bridges. We document three new classes of security bugs and propose a set of security properties and patterns to characterize them. Based on those patterns, we design Xscope, an automatic tool for finding security violations in cross-chain bridges and detecting real-world attacks. We evaluate Xscope on four popular cross-chain bridges. It successfully detects all known attacks and finds suspicious attacks not previously reported. A video of Xscope is available at https://youtu.be/vMRO_qOqtXY.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
Many organizations seek to increase their agility in order to deliver more timely and competitive products. However, in safety-critical systems such as medical devices, autonomous vehicles, or factory-floor robots, releasing new features can introduce hazards that lead to runtime failures impacting software safety. As a result, many projects suffer from a phenomenon referred to as the "big freeze". SAFA is designed to address this challenge. It uses cutting-edge deep-learning solutions to generate trees of requirements, design, code, tests, and other artifacts in order to visually depict how hazards are mitigated in the system, and it generates warnings when key artifacts are missing. It uses a combination of colors, annotations, and recommendations to dynamically visualize change across software versions, and it augments safety cases with visual annotations to aid users in detecting and analyzing potentially adverse impacts of change on system safety. A link to our tool demo can be found at https://www.youtube.com/watch?v=r-CwxerbSVA.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
Recommender systems (RSs) are increasingly being used to help in all sorts of software engineering tasks, including modelling. However, building an RS for a modelling notation is costly. This is especially detrimental for development paradigms that rely on domain-specific languages (DSLs), like model-driven engineering and low-code approaches. To alleviate this problem, we propose a DSL called Droid that facilitates the configuration and creation of RSs for particular modelling notations. Its tooling provides automation for all phases in the development of an RS: data preprocessing, system configuration for the modelling language, evaluation and selection of the best recommendation algorithm, and deployment of the RS into a modelling tool. A video of the tool is available at https://www.youtube.com/watch?v=VHiObfKUhS0.
Tool Demonstrations
Tue 11 Oct 2022 10:00 - 10:30 at Ballroom A - Tool Poster Session 1
Test-based generate-and-validate automated program repair (APR) systems generate many patches that pass the test suite without fixing the bug. The generated patches must be manually inspected by developers, a task that tends to be time-consuming, thereby diminishing the role of APR in reducing debugging costs. We present the design and implementation of a novel tool, named Shibboleth, for automatic assessment of the patches generated by test-based generate-and-validate APR systems. Shibboleth leverages lightweight static and dynamic heuristics from both test and production code to rank and classify patches. Shibboleth is based on the idea that the buggy program is almost correct and that bugs are small mistakes requiring small changes to fix; in particular, the fix should not remove code implementing the program's correct functionality. Thus, the tool measures the impact of patches on both production code (via syntactic and semantic similarity) and test code (via code coverage) to separate the patches that result in similar programs and that do not remove desired program elements. We have evaluated Shibboleth on 1,871 patches generated by 29 Java-based APR systems for Defects4J programs. The technique outperforms state-of-the-art ranking and classification techniques. Specifically, on our ranking data set, Shibboleth ranks the correct patch in the top-1 or top-2 positions in 66% of the cases, and on our classification data set, it achieves an accuracy and F1-score of 0.887 and 0.852, respectively. A demo video of the tool is available at https://bit.ly/3NvYJN8.
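A rough sketch of the production-code half of that idea: prefer patches that stay syntactically close to the almost-correct buggy program. The snippets are invented, and the real tool also combines semantic similarity with test-code coverage.

```python
# A hedged sketch of similarity-based patch ranking, not Shibboleth itself.
import difflib

buggy = "if (x > 0) return x; else return -x;"
patches = {
    "p1": "if (x >= 0) return x; else return -x;",  # small, plausible fix
    "p2": "return 0;",                               # deletes functionality
}

# Rank patches by how close they stay to the buggy (almost-correct) code.
ranked = sorted(
    patches.items(),
    key=lambda kv: difflib.SequenceMatcher(None, buggy, kv[1]).ratio(),
    reverse=True,
)
for name, _ in ranked:
    print(name)  # p1 ranks above p2
```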
Social
Mon 10 Oct 2022 12:00 - 13:30 at Gold B - Lunch
Student Research Competition
Tue 11 Oct 2022 14:00 - 15:30 at Ballroom A - Poster Session (for judges only)
no description available
During code reviews, software developers often raise security concerns if they find any. Ignoring such concerns can severely impact the performance of a software product. This risk can be reduced if we can automatically identify code reviews that trigger security concerns, so that they can receive additional scrutiny from security experts. The objective of this study, therefore, is to develop an automated tool to identify code reviews that trigger security concerns.
With this goal, I developed an approach named ASTOR, which combines two separate deep learning-based classifiers, (i) one using code review comments and (ii) one using the corresponding code context, and ensembles them using logistic regression. Based on stratified ten-fold cross-validation, the best ensemble model achieves an F1-score of 79.7% with an accuracy of 88.4% in automatically identifying code reviews that raise security concerns.
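A minimal sklearn sketch of the ensembling step described above: two base scores (stand-ins for the comment and code-context deep models) are combined by a logistic-regression meta-classifier. The probabilities below are random placeholders, not ASTOR's data.

```python
# A hedged sketch of probability-level ensembling with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p_comment = rng.random(200)   # placeholder P(security concern) per review
p_code = rng.random(200)      # from the code-context model
y = (0.6 * p_comment + 0.4 * p_code > 0.5).astype(int)  # synthetic labels

# Stack the two base-model probabilities as features for the meta-classifier.
X = np.column_stack([p_comment, p_code])
meta = LogisticRegression().fit(X, y)
print(meta.predict_proba(X[:3])[:, 1])  # ensembled scores
```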
Since toxicity during developers' interactions in open source software (OSS) projects negatively impacts developer relations, a toxicity detector for the Software Engineering (SE) domain is needed. However, prior studies found that contemporary toxicity detection tools are unreliable on SE texts, achieving poor performance. To address this challenge, I have developed ToxiCR, an SE-specific toxicity detector evaluated on 19,571 manually labeled code review comments. I evaluate ToxiCR with different combinations of ten supervised learning models, five text vectorizers, and eight preprocessing techniques (two of them SE domain-specific). After applying all possible combinations, I have found that ToxiCR significantly outperforms existing toxicity classifiers, with an accuracy of 95.8% and an F1-score of 88.9%.
Data science libraries are updated frequently, and new version releases commonly include breaking changes. However, developers often use older versions of libraries because it is challenging to update the source code of large projects. We propose CombyInferPy, a new tool for analyzing and fixing breaking changes in library APIs. CombyInferPy infers rules in the form of templates for Comby, a structural code search-and-replace tool that can automatically update source code. Preliminary results show that CombyInferPy can update Python code that uses the pandas library. Using the Comby rules inferred by CombyInferPy, we can automatically fix several failing tests and deprecation warnings. This shows that the approach is promising and can help developers update their libraries.
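For flavor, here is the kind of rule the tool aims to infer, expressed as a Comby match/rewrite pair and applied via the comby CLI from Python. The pandas keyword rename shown is one plausible example of a breaking change, not necessarily a rule CombyInferPy produces.

```python
# A hedged sketch: one example Comby rule (the :[hole] syntax is Comby's),
# applied from Python. Assumes the comby binary is on PATH.
import subprocess

match   = "df.to_csv(:[args], line_terminator=:[val])"
rewrite = "df.to_csv(:[args], lineterminator=:[val])"

# Prints the proposed diff for example.py without modifying the file.
subprocess.run(["comby", match, rewrite, "example.py"], check=True)
```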
Path coverage is the process of measuring the fraction of execution paths taken at run time in software for a given set of inputs. It is commonly used to assess the stability, security, and functionality of an application and is therefore closely associated with software testing. Path coverage requires knowledge of the software's source code (white-box testing), specifically the software's potential execution paths; however, the problem becomes more challenging when the source code is not available and path coverage must be done using only the software's binary code. This can occur if the software is a product or a legacy system, or if the source code is otherwise not available (e.g., contracted or permission-less software).
This paper investigates how black-box path detection and discovery can be achieved using execution fingerprints, concise frequency representations of a software's executed assembly instructions. Execution fingerprints can be used to identify which inputs exercised different sections of code, thus revealing execution paths. Experimental results show that clustering execution fingerprints can differentiate the execution paths of software and provides a method to detect these different paths entirely inside a true black-box testing environment.
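A small sketch of the fingerprint-and-cluster pipeline, assuming invented mnemonic traces: each execution becomes a frequency vector over instruction mnemonics, and KMeans separates inputs that exercised different paths.

```python
# A hedged sketch of frequency fingerprints plus clustering (sklearn).
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

traces = {  # hypothetical mnemonic traces, one per program input
    "input_a": ["mov", "cmp", "je", "mov", "ret"],
    "input_b": ["mov", "cmp", "je", "mov", "ret"],
    "input_c": ["mov", "cmp", "jne", "call", "add", "ret"],
}
mnemonics = sorted({m for t in traces.values() for m in t})

def fingerprint(trace):
    # Frequency vector over the observed instruction vocabulary.
    counts = Counter(trace)
    return [counts[m] for m in mnemonics]

X = np.array([fingerprint(t) for t in traces.values()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(traces, labels)))  # input_c lands in its own cluster
```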
Embedded systems are widely used for implementing diverse Internet-of-Things (IoT) applications. These applications often deal with secret/sensitive data and encryption keys that can potentially be leaked through timing side-channel analysis. Runtime-based timing side-channel attacks are performed by measuring the time code takes to execute and using that information to extract sensitive data. Effectively detecting such vulnerabilities with high precision and low false positives is a challenging task due to the runtime dependence of software code on the underlying hardware. Effectively fixing such vulnerabilities with low overhead is also non-trivial due to the diverse nature of embedded systems. In this article, we propose an automatic runtime side-channel vulnerability detection and mitigation framework that considers not only the software code but also the underlying hardware architecture, tuning the framework for more accurate vulnerability detection and system-specific tailored mitigation.
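A classic illustration of this vulnerability class: an early-exit comparison leaks, through its running time, how many prefix bytes of a guess are correct. Absolute timings depend on the hardware, which is exactly why the framework tunes detection to the underlying architecture. This is a toy example, not the framework's output.

```python
# A self-contained timing-leak demonstration.
import time

SECRET = b"hunter2!"

def leaky_equals(a, b):          # returns early on the first mismatch
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def measure(guess, runs=100_000):
    start = time.perf_counter()
    for _ in range(runs):
        leaky_equals(SECRET, guess)
    return time.perf_counter() - start

print(measure(b"xxxxxxx!"))   # fails at byte 0: fastest
print(measure(b"huntexx!"))   # matches 5 bytes: measurably slower
```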
The AAA pattern, i.e., Arrangement, Action, and Assertion, is a common and natural layout for creating a test case. Following this pattern in test cases may benefit comprehension, debugging, and maintenance. However, the AAA structure of real-life test cases may not be explicit due to their high complexity, and manually labeling AAA statements in test cases is tedious. Thus, an automated approach for labeling AAA statements in existing test cases could benefit new developers and projects that practice collective code ownership and test-driven development.
This study contributes an automatic approach based on machine learning models. The "secret sauce" of this approach is a set of three learning features based on the semantic, syntax, and context information in test cases, derived from the manual tagging process. Thus, our approach mimics how developers manually tag the AAA pattern of a test case. We assess the precision, recall, and F1-score of our approach on 449 test cases, containing about 16,612 statements, across 4 Apache open source projects. To achieve the best performance, we explore the use of six machine learning models, the contribution of the SMOTE data balancing technique, the comparison of the three learning features, and the comparison of five different methods for calculating the semantic feature. The results show our approach is able to identify Arrangement, Action, and Assertion statements with a precision upwards of 92% and recall up to 74%. Our experiments also provide empirical insights regarding how to best leverage machine learning for software engineering tasks.
Smart contracts are self-governed computer programs that run on blockchain to facilitate asset transfer between users within a trustless environment. The absence of contract specifications hinders routine tasks, such as program understanding, debugging, testing, and verification of smart contracts. In this work, we propose a unified specification mining framework to infer specification models from past transaction histories. These include access control models describing high-level authorization rules, program invariants capturing low-level program semantics, and behavior models characterizing interaction patterns allowed by contract implementations. The extracted specification models can be used to perform conformance checking on smart contracts, with the goal of eliminating unforeseen contract quality issues.
Being heavily dominated by men, software development organizations lack diversity. People from other groups often encounter sexist, misogynistic, and discriminatory (SMD) speech during communication. To identify SMD content, I aim to build an automatic misogyny identification (AMI) tool for the domain of software developers. Toward this goal, I built a dataset of 10,138 pull request comments mined from GitHub based on a keyword-based selection, followed by manual validation. Using ten-fold cross-validation, I evaluated ten machine learning algorithms for automatic identification. The best performing model achieved 67.7% precision, 55.9% recall, a 60.9% F-score, and 96.2% accuracy.
Developers use exceptions guarded by conditions to abort the execution when a program reaches an unexpected state. However, sometimes the condition and the raised exception do not imply the same stopping reason, in which case, we call them inconsistent if-condition-raise statements. The inconsistency can originate from a mistake in the condition or the message of the exception. This paper presents IICR-Finder, a deep learning-based approach to detect inconsistent if-condition-raise statements. The approach reasons both about the logic of the condition and the natural language of the exception message and raises a warning in case of an inconsistency. We present six techniques to automatically generate large numbers of inconsistent statements for training. Moreover, we implement two neural models, based on binary classification and triplet loss, to learn inconsistency detection. We apply the approach to 210K if-condition-raise statements extracted from 42 million lines of Python code. It achieves a precision of 72% at a recall of 60% on a dataset of past bug fixes. Running IICR-Finder on open-source projects reveals 30 previously unknown bugs, ten of which we reported, with eight confirmed by the developers.
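Two hand-written Python examples make the defect class concrete: in the first, the condition and the exception message agree; in the second they contradict each other, which is the kind of pair the approach is trained to flag.

```python
# Invented examples of the target defect class, not IICR-Finder output.
def set_timeout(seconds):
    if seconds < 0:
        raise ValueError("timeout must be non-negative")   # consistent

def set_retries(count):
    if count < 0:
        raise ValueError("retries must be negative")        # inconsistent:
        # the message contradicts the guard, so one of the two is a mistake
```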
Many techniques have been proposed to mine knowledge from software artefacts and solve software evolution management tasks. To promote effective reuse of that knowledge, we propose a unified format, differential facts, to represent software changes across versions as well as various relations within each version, such as call graphs. Thanks to their queryable format, differential facts can be manipulated to implement complex evolution management tasks. Since facts, once extracted, can be shared among different tasks, this reusability improves overall performance. We validate the technique and show its benefits of being efficient, flexible, and easy to implement through several applications, including semantic history slicing, regression test selection, documentation error detection, and discovery of client-specific usage patterns.
RESTful APIs have been applied by various notable companies to provide cloud services. The quality assurance of RESTful APIs is critical. Several automatic RESTful API testing techniques have been proposed to address this issue. By analyzing the crashes caused by each test case, developers can find potential bugs in cloud services. However, automatic tools usually generate a massive number of failed test cases. Since it is labor-intensive for developers to validate each test case, automated crash clustering is a promising solution to help debug cloud services.
In this paper, we propose RESTCluster, a two-phase crash clustering approach. Preliminary evaluation results show that RESTCluster achieves 100% precision with high recall on subjects of different sizes.
no description available
Bonita Sharif, Ph.D. is an Associate Professor in the Department of Computer Science and Engineering at the University of Nebraska - Lincoln, Lincoln, Nebraska, USA. She received her Ph.D. in 2010 and M.S. in 2003 in Computer Science from Kent State University, U.S.A., and her B.S. in Computer Science from Cyprus College, Nicosia, Cyprus. Her research interests are in eye tracking related to software engineering, program comprehension, empirical software engineering, emotional awareness, software traceability, and software visualization to support maintenance of large systems. She has authored over 40 refereed publications. She serves on numerous program committees including ICSME, VISSOFT, SANER, ICSE NIER, and ICPC. Sharif is a recipient of the NSF CAREER award and the NSF CRI award related to empowering software engineering with eye tracking. She directs the Software Engineering Research and Empirical Studies Lab in the Computer Science and Engineering department at UNL. Check out the iTrace infrastructure that supports eye tracking within developer work environments at http://www.i-trace.org
Tool Demonstrations
Wed 12 Oct 2022 09:30 - 10:00 at Ballroom A - Tool Poster Session 2
With the application of deep learning (DL) in signal detection, improving the robustness of classification models has received much attention, especially in automatic modulation classification (AMC) of electromagnetic signals. To obtain robust models, a large amount of electromagnetic signal data is required in the training and testing process. However, both the high cost of manual collection and the low quality of automatically generated data samples result in defects in AMC models. Therefore, it is important to generate electromagnetic data through data augmentation. In this paper, we propose a novel electromagnetic data augmentation tool, ElecDaug, which guides the metamorphic process using electromagnetic signal characteristics to achieve automatic data augmentation. Through electromagnetic data pre-processing and characteristic metamorphoses in the transmission or time-frequency domains, ElecDaug can augment data samples to build robust AMC models. Preliminary experiments show that ElecDaug can effectively augment available data samples for model repair. The video is at https://youtu.be/tqC0z5Sg1_k. Documentation and source code can be found here: https://github.com/ehhhhjw/tool_ElecDaug.git.
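A hedged numpy sketch in the spirit of the signal-level transformations described above (the actual metamorphic operators are ElecDaug's own): additive noise, a time shift, and a frequency offset applied to a toy complex baseband signal.

```python
# A minimal sketch of common signal augmentations; parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000                                  # sample rate (Hz), assumed
t = np.arange(1024) / fs
signal = np.exp(2j * np.pi * 50 * t)        # toy 50 Hz complex tone

def add_noise(x, snr_db=20):
    power = np.mean(np.abs(x) ** 2)
    noise_power = power / 10 ** (snr_db / 10)
    noise = rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape)
    return x + noise * np.sqrt(noise_power / 2)

def time_shift(x, samples=10):
    return np.roll(x, samples)              # circular shift in time

def freq_offset(x, df=5.0):
    return x * np.exp(2j * np.pi * df * t)  # small carrier offset

augmented = [add_noise(signal), time_shift(signal), freq_offset(signal)]
```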
Tool Demonstrations
Wed 12 Oct 2022 09:30 - 10:00 at Ballroom A - Tool Poster Session 2
Dynamic program analysis tools such as Eraser, Memcheck, or ThreadSanitizer abstract the contents of individual memory locations and store the abstraction results in a separate data structure called shadow memory. They then use this meta-information to efficiently implement the actual analyses. In this paper, we describe the implementation of an efficient symbolic shadow memory extension for the CBMC bounded model checker that can be accessed through an API, and sketch its use in the design of a new data race analyzer that is implemented by a code-to-code translation. Artifact/tool URL: shorturl.at/jzVZ0 Video URL: youtu.be/pqlbyiY5BLU
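A toy Python rendering of the shadow-memory idea, only to illustrate the data structure: every tracked address maps to analysis metadata kept apart from the program's own data. CBMC's extension is symbolic and lives inside the model checker; nothing below is its API.

```python
# A hedged sketch of a concrete (non-symbolic) shadow memory.
shadow = {}                      # address -> analysis metadata

def shadow_set(addr, meta):
    shadow[addr] = meta

def shadow_get(addr, default=None):
    return shadow.get(addr, default)

# e.g., a data-race analysis might record the last thread to write a cell
shadow_set(0x7F00, {"last_writer": "thread-1", "locked": False})
if not shadow_get(0x7F00)["locked"]:
    print("unsynchronized write: potential race")
```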
Tool Demonstrations
Tue 11 Oct 2022 11:10 - 11:20 at Banquet B - Technical Session 3 - Fuzzing I Chair(s): Aravind Machiry, Purdue University
With the rapid development of autonomous driving systems (ADS), especially the increasing adoption of deep neural networks (DNNs) as their core components, effective quality assurance methods for ADS have attracted growing interest in both academia and industry. In this paper, we report on ADEPT, a new testing platform we have developed to provide practically realistic and comprehensive testing facilities for DNN-based ADS. ADEPT is based on the virtual simulator CARLA and provides numerous testing facilities such as scene construction, ADS importation, and test execution and recording. In particular, ADEPT features two distinguished test scenario generation strategies designed for autonomous driving. First, we make use of real-life accident reports, from which we leverage natural language processing to fabricate abundant driving scenarios. Second, we synthesize physically robust adversarial attacks by taking the feedback of the ADS into consideration and are thus able to generate closed-loop test scenarios. The experiments confirm the efficacy of the platform. A video demonstrating the usage of ADEPT can be found at https://youtu.be/evMorf0uR_s.
Tool Demonstrations
Tue 11 Oct 2022 14:50 - 15:00 at Ballroom C East - Technical Session 5 - Code Analysis Chair(s): Vahid Alizadeh, DePaul University
Dynamic taint analysis (DTA) is a popular approach to help protect JavaScript applications against injection vulnerabilities. In 2016, the ECMAScript 7 JavaScript language standard introduced many language features that most existing DTA tools for JavaScript do not support, e.g., the async/await keywords for asynchronous programming. We present Augur, a high-performance dynamic taint analysis for ES7 JavaScript that leverages VM-supported instrumentation. Integrating directly with a public, stable instrumentation API gives Augur the ability to run with high performance inside the VM and remain resilient to language revisions. We extend the abstract-machine approach to DTA with semantics to handle asynchronous function calls. In addition to providing the classic DTA use case of injection vulnerability detection, Augur is highly configurable to support any type of taint analysis, making it useful outside of the security domain. We evaluated Augur on a set of 20 benchmarks and observed a median runtime overhead of only 1.77×. We note a median performance improvement of 298% compared to the previous state of the art, Ichnaea.
Tool demo: https://www.youtube.com/watch?v=GczQ-2A58LE
Link to open source code repository: https://github.com/nuprl/augur
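A drastically simplified, hand-rolled sketch of the abstract-machine view of DTA: values carry taint labels, operations join the labels of their operands, and sinks check them. Augur instruments the JavaScript VM; nothing below is its API.

```python
# A hedged, language-agnostic sketch of taint propagation.
class Tainted:
    def __init__(self, value, taint=frozenset()):
        self.value, self.taint = value, frozenset(taint)

    def __add__(self, other):    # propagation: result joins both taints
        o_val = getattr(other, "value", other)
        o_taint = getattr(other, "taint", frozenset())
        return Tainted(self.value + o_val, self.taint | o_taint)

def sink(x):
    # A sink (e.g., a query executor) rejects values carrying taint.
    if getattr(x, "taint", None):
        raise RuntimeError(f"tainted value reached sink: {set(x.taint)}")

user_input = Tainted("'; DROP TABLE--", {"source:http"})
query = Tainted("SELECT * FROM t WHERE id=") + user_input
try:
    sink(query)
except RuntimeError as e:
    print(e)                     # injection flow detected
```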
Tool Demonstrations
Tue 11 Oct 2022 15:00 - 15:10 at Ballroom C East - Technical Session 5 - Code Analysis Chair(s): Vahid Alizadeh, DePaul University
Types in TypeScript play an important role in the correct usage of variables and APIs. Type errors such as variable or function misuse can be avoided with explicit type annotations. In this work, we introduce FlexType, an IDE extension that can be used on both JavaScript and TypeScript to infer types in an interactive or automatic fashion. We perform experiments with FlexType in JavaScript to determine how many types FlexType could resolve if it were used to migrate top JavaScript projects to TypeScript. FlexType is able to annotate 56.69% of all types with high precision and confidence, including native and imported types from modules. In addition to automatic inference, we believe the interactive Visual Studio Code extension is inherently useful in both TypeScript and JavaScript, especially when resolving types is taxing for the developer.
The source code is available at GitHub and a video demonstration at https://youtu.be/4dPV05BWA8A.
Tool Demonstrations
Tue 11 Oct 2022 14:20 - 14:30 at Ballroom C East - Technical Session 5 - Code Analysis Chair(s): Vahid Alizadeh, DePaul University
Smart contracts are self-executing computer programs deployed on blockchains to enable the trustworthy exchange of value without the need for a central authority. In the absence of documentation and specifications, routine tasks such as program understanding, maintenance, verification, and validation remain challenging for smart contracts. In this paper, we propose a dynamic invariant detection tool, InvCon, for Ethereum smart contracts to mitigate this issue. The detected invariants can be used not only to support the reverse engineering of contract specifications, but also to enable standard-compliance checking for contract implementations. InvCon provides a Web-based interface, and a demonstration video is available at: https://youtu.be/Y1QBHjDSMYk.
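A minimal sketch of dynamic invariant detection over transaction records: propose candidate invariants per variable and keep those that every observed state satisfies. The record fields are invented; InvCon mines actual Ethereum transaction histories.

```python
# A hedged sketch of invariant mining over invented observations.
transactions = [
    {"totalSupply": 1_000_000, "balance": 400},
    {"totalSupply": 1_000_000, "balance": 250},
    {"totalSupply": 1_000_000, "balance": 0},
]

def mine_invariants(records):
    invariants = []
    for var in records[0]:
        values = [r[var] for r in records]
        # Keep a candidate only if it holds in every observed state.
        if len(set(values)) == 1:
            invariants.append(f"{var} == {values[0]}")      # constancy
        if all(v >= 0 for v in values):
            invariants.append(f"{var} >= 0")                # lower bound
    return invariants

print(mine_invariants(transactions))
```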
Tool Demonstrations
Wed 12 Oct 2022 09:30 - 10:00 at Ballroom A - Tool Poster Session 2
We developed a plugin for IntelliJ IDEA called AntiCopyPaster, which tracks the pasting of code fragments inside the IDE and suggests the appropriate Extract Method refactoring to combat the propagation of duplicates. Unlike existing approaches, our tool is integrated with the developer's workflow and proactively recommends refactorings. Since not all code fragments need to be extracted, we developed a classification model to make this decision. When a developer copies and pastes a code fragment, the plugin searches for duplicates in the currently opened file, waits for a short period of time to allow the developer to edit the code, and finally infers the refactoring decision based on a number of features.
Our experimental study on a large dataset of 18,942 code fragments mined from 13 Apache projects shows that AntiCopyPaster correctly recommends Extract Method refactorings with an F-score of 0.82. Furthermore, our survey of 59 developers reflects their satisfaction with the developed plugin’s operation. The plugin and its source code are publicly available on GitHub at https://github.com/JetBrains-Research/anti-copy-paster. The demonstration video can be found on YouTube: https://www.youtube.com/watch?v=_wwHg-qFjJY.
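A hedged sketch of the duplicate search that precedes the refactoring decision: similarity between the pasted fragment and same-sized windows of the current file. The threshold and tokenization are placeholders, not the plugin's trained classifier.

```python
# A minimal sketch of duplicate detection for a pasted fragment.
import difflib

def find_duplicates(pasted, file_lines, threshold=0.9):
    window = len(pasted)
    hits = []
    for i in range(len(file_lines) - window + 1):
        candidate = file_lines[i:i + window]
        ratio = difflib.SequenceMatcher(
            None, "\n".join(pasted), "\n".join(candidate)).ratio()
        if ratio >= threshold:
            hits.append((i, ratio))
    return hits  # locations worth an Extract Method suggestion

lines = ["a = 1", "b = 2", "a = 1", "b = 2"]
print(find_duplicates(["a = 1", "b = 2"], lines))  # windows at 0 and 2
```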
Tool Demonstrations
Tue 11 Oct 2022 15:00 - 15:10 at Gold A - Technical Session 8 - Mobile Apps II Chair(s): Wei Yang, University of Texas at Dallas
To face climate change, Android developers are urged to become green software developers. But how can carbon-efficient mobile apps be ensured at large? In this paper, we introduce ecoCode, a SonarQube plugin able to highlight code structures that are smelly from an energy perspective. It is based on a curated list of energy code smells likely to negatively impact the battery lifespan of Android-powered devices. The ecoCode plugin enables analysis of any native Android project written in Java in order to enforce green code.
Tool Demonstrations
Wed 12 Oct 2022 09:30 - 10:00 at Ballroom A - Tool Poster Session 2
As software systems continue to grow in complexity and scale, deploying and delivering them becomes increasingly difficult. In this work, we develop DeployQA, a novel QA bot that automatically answers software deployment questions over user manuals and Stack Overflow posts. DeployQA is built upon RoBERTa. To bridge the gap between natural language and the domain of software deployment, we propose three adaptations in terms of vocabulary, pre-training, and fine-tuning, respectively. We evaluate our approach on our constructed DeQuAD dataset. The results show that DeployQA remarkably outperforms baseline methods by leveraging the three domain adaptation strategies.
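A generic extractive-QA sketch with Hugging Face transformers, showing the reading-comprehension setup such a bot builds on. The public checkpoint named below is a stand-in; DeployQA uses its own domain-adapted RoBERTa.

```python
# A hedged sketch of extractive QA over a deployment manual snippet.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
context = ("To install the service, run `helm install myapp ./chart` and "
           "wait for all pods to reach the Ready state.")
answer = qa(question="How do I install the service?", context=context)
print(answer["answer"])  # a span extracted from the context
```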
Tool Demonstrations
Wed 12 Oct 2022 09:30 - 10:00 at Ballroom A - Tool Poster Session 2
Refactoring software can be hard and time-consuming. Several refactoring tools assist developers in reaching more readable and maintainable code. However, most of them are characterized by long feedback loops that impoverish the refactoring experience. We believe we can reduce this problem by focusing on the concept of Live Refactoring and its main principles: the live recommendation and continuous visualization of refactoring candidates, and the immediate visualization of the results of applying a refactoring to the code. Therefore, we implemented a Live Refactoring Environment that identifies, suggests, and applies Extract Method refactorings. To evaluate our approach, we carried out an empirical experiment. Early results show that our refactoring environment improves several code quality aspects and was well received, understood, and used by the experiment participants. The source code of our tool is available at https://github.com/saracouto1318/LiveRef. Its demonstration video can be found at https://youtu.be/_jxx21ZiQ0o.
Tool Demonstrations
Wed 12 Oct 2022 09:30 - 10:00 at Ballroom A - Tool Poster Session 2
Automatic vulnerability detection is of paramount importance to promoting the security of an application and should be exercised at the earliest stages of the software development life cycle (SDLC) to reduce the risk of exposure. Despite the advancements of state-of-the-art deep learning techniques in software vulnerability detection, development environments are not yet leveraging their performance. In this work, we integrate the Transformer architecture, one of the main highlights of advances in deep learning for Natural Language Processing, into a developer-friendly tool for code security. We introduce VDet for Java, a transformer-based VS Code extension that enables one to discover vulnerabilities in Java files. Our preliminary model evaluation presents an accuracy of 85.8% for multi-label classification and can detect up to 21 vulnerability types. A demonstration of our tool can be found at https://youtu.be/OjiUBQ6TdqE.
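A sketch of the multi-label decision rule such a detector needs: unlike single-label softmax classification, each vulnerability type gets an independent sigmoid score and threshold. The logits and label names below are invented.

```python
# A hedged sketch of multi-label thresholding over model logits.
import numpy as np

LABELS = ["sql_injection", "xss", "path_traversal"]  # illustrative subset

def predict_labels(logits, threshold=0.5):
    probs = 1 / (1 + np.exp(-np.asarray(logits)))    # element-wise sigmoid
    return [l for l, p in zip(LABELS, probs) if p >= threshold]

print(predict_labels([2.1, -0.7, 0.3]))  # ['sql_injection', 'path_traversal']
```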
no description available