The microservice paradigm is a popular software development pattern that breaks down a large application into smaller, independent services. While this approach offers several advantages, such as scalability, agility, and flexibility, it also introduces new security challenges. This paper presents a novel approach to securing microservice architectures using fuzz testing. Fuzz testing is known to find security vulnerabilities in software by feeding it unexpected or random inputs. We propose a zero-config fuzz test generation technique for microservices that maximizes coverage of internal states by mutating both the frontend requests and the backend responses from dependent services. We also present the results of our fuzz testing, which uncovered thousands of security vulnerabilities in real-world microservice applications that have since been reported and fixed.
Zero-Config Fuzzing for Microservices (camera-ready version) (ase23.pdf)
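The abstract's core idea, mutating frontend requests as well as the responses returned by dependent backend services, can be illustrated with a minimal sketch. The function names and the dict-based request/response shapes below are illustrative assumptions, not the paper's actual implementation:

```python
import random

def mutate_bytes(data: bytes, rate: float = 0.05) -> bytes:
    """Randomly replace bytes in a payload (classic mutation fuzzing)."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

def fuzz_request(request: dict) -> dict:
    """Mutate the body of a frontend request before sending it."""
    mutated = dict(request)
    mutated["body"] = mutate_bytes(request["body"])
    return mutated

def fuzz_backend_response(response: dict) -> dict:
    """Mutate the payload a dependent service would return, to exercise
    error-handling paths in the service under test."""
    mutated = dict(response)
    mutated["payload"] = mutate_bytes(response["payload"])
    return mutated
```

Mutating on both sides, the incoming request and the mocked dependency response, is what lets the fuzzer reach internal states that request-only fuzzing cannot.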
Adaptive REST API Testing with Reinforcement Learning
Modern web services increasingly rely on REST APIs. Effectively testing these APIs is challenging due to the vast search space to be explored, which involves selecting API operations for sequence creation, choosing parameters for each operation from a potentially large set, and sampling values from a virtually infinite parameter input space. Current testing tools lack efficient exploration mechanisms: they treat all operations and parameters equally (i.e., without considering their importance or complexity) and lack prioritization strategies. Furthermore, these tools struggle when response schemas are absent from the specification or exhibit variants. To address these limitations, we present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations and parameters during exploration. Our approach dynamically analyzes request and response data to inform dependent parameters and adopts a sampling-based strategy for efficient processing of dynamic API feedback. We evaluated our technique on ten RESTful services, comparing it against state-of-the-art REST testing tools with respect to code coverage achieved, requests generated, operations covered, and service failures triggered. Additionally, we performed an ablation study on prioritization, dynamic feedback analysis, and sampling to assess their individual effects. Our findings demonstrate that our approach outperforms existing REST API testing tools in effectiveness, efficiency, and fault-finding ability.
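The reinforcement-learning prioritization the abstract describes can be sketched as an epsilon-greedy value learner over API operations: operations that yield new coverage or failures earn reward and get selected more often. This is a generic sketch of that idea, not the paper's actual algorithm; the class name and reward scheme are assumptions:

```python
import random
from collections import defaultdict

class OperationPrioritizer:
    """Epsilon-greedy sketch: choose which API operation to exercise next,
    rewarding operations that produced new coverage or service failures."""

    def __init__(self, operations, epsilon=0.1, alpha=0.5):
        self.q = defaultdict(float)     # operation -> estimated value
        self.operations = list(operations)
        self.epsilon = epsilon          # exploration rate
        self.alpha = alpha              # learning rate

    def select(self):
        # Occasionally explore a random operation; otherwise exploit.
        if random.random() < self.epsilon:
            return random.choice(self.operations)
        return max(self.operations, key=lambda op: self.q[op])

    def update(self, op, reward):
        # Move the value estimate toward the observed reward.
        self.q[op] += self.alpha * (reward - self.q[op])
```

In a testing loop, `select()` picks the next operation to invoke and `update()` is called with, say, reward 1.0 when the response revealed new coverage or a 5xx failure, and 0.0 otherwise. The same scheme extends to prioritizing parameters within an operation.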
Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph through AI Chain
Recorded talk
API recommendation methods have evolved from literal and semantic keyword matching to query expansion and query clarification. The latest query clarification method is knowledge graph (KG)-based, but its limitations include out-of-vocabulary (OOV) failures and rigid question templates. To address these limitations, we propose a novel knowledge-guided query clarification approach for API recommendation that leverages a large language model (LLM) guided by the KG. We utilize the LLM as a neural knowledge base to overcome OOV failures, generating fluent and appropriate clarification questions and options. We also leverage the structured API knowledge and entity relationships stored in the KG to filter out noise, and transfer the optimal clarification path from the KG to the LLM, increasing the efficiency of the clarification process. Our approach is designed as an AI chain consisting of five steps, each handled by a separate LLM call, to improve the accuracy, efficiency, and fluency of query clarification in API recommendation. We verify the usefulness of each unit in our AI chain; all units received high scores close to a perfect 5. Compared to the baselines, our approach shows a significant improvement in MRR, with a maximum increase of 63.9% when the query statement is covered in the KG and 37.2% when it is not. Ablation experiments reveal that the guidance of knowledge in the KG and the knowledge-guided pathfinding strategy are crucial to our approach's performance, contributing increases of 19.0% and 22.2% in MAP, respectively. Our approach demonstrates a way to bridge the gap between KG and LLM, leveraging the strengths and compensating for the weaknesses of both.
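The "AI chain of five steps, each handled by a separate LLM call" can be sketched as a simple pipeline. The step prompts, the `call_llm` stub, and the way the KG path is injected are all hypothetical placeholders standing in for the paper's actual units, shown only to illustrate the chained-call structure:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; tags the prompt so the chain's
    structure can be traced without a model behind it."""
    return f"[LLM:{prompt}]"

def ai_chain(query: str, kg_path: list) -> str:
    """Sketch of a five-step AI chain: each step is a separate LLM call,
    guided by a clarification path taken from the knowledge graph."""
    steps = [
        "identify missing entities in: ",
        "generate a clarification question about: ",
        "generate answer options for: ",
        f"refine the query along the KG path {kg_path} for: ",
        "recommend APIs for: ",
    ]
    state = query
    for step in steps:
        state = call_llm(step + state)  # each unit consumes the prior output
    return state
```

Splitting the task into separate calls, rather than one monolithic prompt, is what lets each unit be evaluated (and scored) independently, as the ablation in the abstract does.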