Web applications play a crucial role in our daily lives, making it essential to employ testing methods that ensure their quality. Typically, Web testing automation frameworks rely on locators to interact with the graphical user interface, acting as connection points to the elements on a Web page. Nevertheless, locators are widely recognized as a major vulnerability in Web testing, as they are highly sensitive to the frequent changes in Web page structures caused by rapid software evolution. The adoption of the Page Object pattern to separate test logic from structural layout, supporting code reuse and maintainability, has generally led to more robust test cases. However, implementing Page Objects is a manually intensive task, and even automated support may require manual realignment effort. Although gamification strategies have recently been integrated into the Web testing process to boost user engagement, using tasks and rewards aligned with testing activities, they have not yet been employed to enhance the robustness of locators or to support the implementation of Page Objects. In this paper, we introduce TESTQUEST, a tool designed to improve test robustness by applying gamification to locators and Page Objects, boosting user engagement while guiding users toward the adoption of best practices.
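For readers unfamiliar with the pattern, here is a minimal Page Object sketch in Python with Selenium; the page URL, locators, and class name are hypothetical and are not taken from TESTQUEST.

```python
# Minimal Page Object sketch (hypothetical page and locators, not from TESTQUEST).
# The Page Object keeps locators in one place so tests express intent, not page structure.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Encapsulates locators and interactions for a hypothetical login page."""

    URL = "https://example.com/login"  # hypothetical URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username, password):
        # If the page layout changes, only this class needs realignment,
        # not every test case that logs in.
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()


if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().login("alice", "secret")
    finally:
        driver.quit()
```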
In today's digital society, personalization has become a crucial aspect of software applications, significantly impacting user experience and engagement. A new wave of intelligent user interfaces, such as AI-based conversational agents, has the potential to enable such personalization beyond what other types of interfaces could offer in the past. Personalization requires the ability to specify a complete user profile, covering as many dimensions as possible, such as potential accessibility constraints, interaction preferences, and even hobbies. To this end, this paper presents the concepts of a unified user modeling language, aimed at combining previous approaches into a single proposal. Additionally, a proof of concept has been developed that leverages user profiles modeled in our language to automatically adapt a conversational agent.
In recent years, large language models (LLMs) have showcased significant advancements in code generation. However, most evaluation benchmarks are primarily oriented towards Python, making it difficult to rigorously evaluate other programming languages, such as Swift. By examining widely established multilingual benchmarks like HumanEval-XL and MultiPL-E, we identified critical issues specific to their Swift components, making them insufficient or even irrelevant for assessing LLM coding capabilities on Swift. Unlike these existing approaches, which prioritize rapid scaling and generalization by automatically translating Python-centric benchmarks with LLMs, we adopt a quality-over-quantity methodology. We present SwiftEval, the first Swift-oriented benchmark, consisting of 28 carefully hand-crafted problems, and evaluate 44 popular Code LLMs on it. Our results show a significant drop in LLM scores on problems requiring language-specific features, most noticeably in smaller models.
To alleviate difficulties in writing smart contracts for distributed blockchain applications, we, like other research, propose transforming Business Process Model and Notation (BPMN) models into blockchain smart contracts. Unlike other research, we use Discrete Event Hierarchical State Machine (DE-HSM) multi-modal modeling to identify collaborative trade transactions that need to be supported by the smart contract, and we describe how the trade transactions, which may be nested, are supported by a transaction mechanism. We describe algorithms to (i) identify the nested trade transactions and (ii) transform the BPMN model into blockchain smart contracts that include a transaction mechanism to enforce the transactional properties of the identified trade transactions. The developed proof of concept shows that our approach to automated transformation of BPMN models into smart contracts with support for privacy and cross-chain interoperability is feasible. The thesis examines and evaluates automatically generated alternative transaction mechanisms to support such transactions using three use cases of varying degrees of complexity, namely order processing, supply chain management, and a multi-faceted trade use case. The research enriches the academic dialogue on blockchain technology and smart contracts and proposes potential avenues for future research.
In the pursuit of enhancing software reusability and developer productivity, code search has emerged as a key area, aimed at retrieving code snippets relevant to functionalities based on natural language queries. Despite significant progress in self-supervised code pre-training utilizing the vast amount of code data in repositories, existing methods have primarily focused on leveraging contrastive learning to align natural language with function-level code snippets. These studies have overlooked the abundance of fine-grained (such as block-level and statement-level) code snippets prevalent within function-level code snippets, which results in suboptimal performance across all levels of granularity. To address this problem, we first construct a multi-granularity code search dataset called MGCodeSearchNet, which contains 536K+ pairs of natural language and code snippets. Subsequently, we introduce a novel Multi-Granularity Self-Supervised contrastive learning code Search framework (MGS$^{3}$). First, MGS$^{3}$ features a Hierarchical Multi-Granularity Representation module (HMGR), which leverages syntactic structural relationships for hierarchical representation and aggregates fine-grained information into coarser-grained representations. Then, during the contrastive learning phase, we construct positive samples of the same granularity for fine-grained code and introduce in-function negative samples. Finally, we conduct extensive experiments on code search benchmarks across various granularities, demonstrating that the framework exhibits outstanding performance in code search tasks of multiple granularities. These experiments also showcase its model-agnostic nature and compatibility with existing pre-trained code representation models.
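As a rough illustration of the contrastive objective described above (not the authors' implementation), an InfoNCE-style loss with in-batch negatives can be sketched as follows; the embeddings are random stand-ins, and in MGS$^{3}$ the negatives for fine-grained code would additionally be drawn from the same enclosing function.

```python
# Sketch of an InfoNCE-style contrastive loss over (query, code) embedding pairs.
# Illustrative only: embeddings are random stand-ins, not MGS^3 outputs.
import torch
import torch.nn.functional as F


def info_nce(query_emb, code_emb, temperature=0.07):
    """query_emb, code_emb: (batch, dim); row i of each forms a positive pair,
    while every other row in the batch serves as a negative."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(code_emb, dim=-1)
    logits = q @ c.t() / temperature       # (batch, batch) similarity matrix
    targets = torch.arange(q.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    torch.manual_seed(0)
    nl = torch.randn(8, 256)    # stand-in natural-language embeddings
    code = torch.randn(8, 256)  # stand-in code-snippet embeddings
    print(info_nce(nl, code).item())
```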
Test cases are indispensable for conducting effective fault localization (FL). However, test cases in practice are severely class imbalanced, i.e., the number of failing test cases (the minority class) is much smaller than that of passing ones (the majority class). This severe class imbalance between failing and passing test cases has hindered FL effectiveness. To address this issue, we propose PCD-DAug: a Principal Context-aware Diffusion guided Data Augmentation approach that generates synthesized failing test cases for improving FL. PCD-DAug first combines program slicing with principal component analysis to construct a principal context that shows how a set of statements influences the faulty output via statistical program dependencies. Then, PCD-DAug devises a conditional diffusion model that learns from principal contexts to generate synthesized failing test cases and acquire a class-balanced dataset for FL. We conducted large-scale experiments on six state-of-the-art FL approaches and compared PCD-DAug with six data augmentation baselines. The results show that PCD-DAug significantly improves FL effectiveness, e.g., achieving average improvements of 383.83%, 227.08%, and 224.19% across the six FL approaches under the metrics Top-1, Top-3, and Top-5, respectively.
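As a loose illustration of the dimensionality-reduction step (a toy sketch only, not the PCD-DAug pipeline, which also relies on program slicing and a conditional diffusion model), one could rank statements by their loading on the principal components of a test-by-statement coverage matrix; the data below is synthetic.

```python
# Toy sketch: rank statements by their loading on the first principal component
# of a synthetic test-by-statement coverage matrix. Illustrative only; PCD-DAug
# combines program slicing with PCA over statistical program dependencies.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_tests, n_statements = 40, 12
coverage = rng.integers(0, 2, size=(n_tests, n_statements)).astype(float)

pca = PCA(n_components=2)
pca.fit(coverage)

# Statements with the largest absolute loading on the first component are those
# whose coverage pattern explains the most variance across test cases.
loadings = np.abs(pca.components_[0])
candidate_statements = np.argsort(loadings)[::-1][:5]
print("candidate principal-context statements:", candidate_statements)
```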
Automatic code generation has gained significant momentum with the advent of Large Language Models (LLMs) such as GPT-4. Although many studies focus on improving the effectiveness of LLMs for code generation, very limited work tries to understand the generated code's characteristics and leverage them to improve failed cases. In this paper, we investigate the relationship between code complexity, as the most straightforward characteristic of code, and the success of LLM-generated code. Using a large set of standard complexity metrics, we first conduct an empirical analysis to explore their correlation with LLM performance on code generation (i.e., Pass@1). Using logistic regression models, we identify which complexity metrics are most predictive of code correctness. Building on these findings, we propose an iterative feedback method, where LLMs are prompted to generate correct code based on complexity metrics from previous failed outputs. We validate our approach across multiple benchmarks (i.e., HumanEval, MBPP, LeetCode, and BigCodeBench) and various LLMs (i.e., GPT-4o, GPT-3.5 Turbo, Llama 3.1, and GPT-o3 mini), comparing the results with two baseline methods: (a) zero-shot generation, and (b) iterative execution-based feedback without our code complexity insights. Experiment results show that our approach makes notable improvements, particularly with a smaller LLM (GPT-3.5 Turbo), where, e.g., Pass@1 increased by 35.71% compared to the baseline's improvement of 12.5% on the HumanEval dataset. We further expand the experiments to BigCodeBench and integrate the method with the Reflexion code generation agent, leading to Pass@1 improvements of 20% (GPT-4o) and 23.07% (GPT-o3 mini). The results highlight that complexity-aware feedback enhances both direct LLM prompting and agent-based workflows.
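The shape of such a loop can be sketched as follows; `generate_code` and `run_tests` are hypothetical placeholders for an LLM call and a test harness, and the complexity proxy below simply counts branching constructs rather than using the paper's full metric set.

```python
# Sketch of a complexity-aware feedback loop: if generated code fails its tests,
# re-prompt the model with a complexity measurement taken from the failed attempt.
# `generate_code` and `run_tests` are hypothetical placeholders.
import ast


def branch_complexity(source: str) -> int:
    """Rough cyclomatic-complexity proxy: 1 + number of branching constructs."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))


def complexity_feedback_loop(task, generate_code, run_tests, max_rounds=3):
    feedback = ""
    for _ in range(max_rounds):
        code = generate_code(task, feedback)   # hypothetical LLM call
        if run_tests(code):                    # hypothetical test harness
            return code
        feedback = (f"The previous attempt failed its tests and had an estimated "
                    f"cyclomatic complexity of {branch_complexity(code)}; "
                    f"try a simpler, less deeply branched solution.")
    return None
```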
Recent studies show that LLMs possess different skills and specialize in different tasks. In fact, we observe that this varied performance occurs at several levels of granularity. For example, in the code optimization task, code LLMs excel at different optimization categories and no single model dominates the others. This observation prompts the question of how one can leverage multiple LLM agents to solve a coding problem without knowing their complementary strengths a priori. We argue that a team of agents can learn from each other's successes and failures so as to improve their own performance. Thus, a lesson is the knowledge produced by an agent and passed on to other agents in the collective solution process. We propose a lesson-based collaboration framework, design the lesson solicitation--banking--selection mechanism, and demonstrate that a team of small LLMs with lessons learned can outperform a much larger LLM and other multi-LLM collaboration methods.
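One way to picture the solicitation--banking--selection cycle is the sketch below; the agent interface, scoring, and data structures are invented for illustration and are not the paper's implementation.

```python
# Sketch of a lesson solicitation--banking--selection loop among LLM agents.
# The agent API (`solve`, `reflect`) and the quality scores are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Lesson:
    text: str       # natural-language takeaway solicited from an agent
    score: float    # how well the producing attempt performed


@dataclass
class LessonBank:
    lessons: list = field(default_factory=list)

    def deposit(self, lesson: Lesson):
        self.lessons.append(lesson)

    def select(self, k: int = 3):
        """Return the k highest-scoring lessons to share with every agent."""
        return sorted(self.lessons, key=lambda l: l.score, reverse=True)[:k]


def collaborative_round(agents, task, bank: LessonBank):
    hints = [lesson.text for lesson in bank.select()]
    attempts = []
    for agent in agents:
        solution, quality = agent.solve(task, hints)            # hypothetical call
        bank.deposit(Lesson(agent.reflect(task, solution), quality))
        attempts.append((quality, solution))
    return max(attempts, key=lambda a: a[0])   # best (quality, solution) this round
```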
As software systems grow increasingly complex, explainability has become a crucial non-functional requirement for transparency, user trust, and regulatory compliance. Eliciting explainability requirements is challenging, as different methods capture varying levels of detail and structure. This study examines the efficiency and effectiveness of three commonly used elicitation methods - focus groups, interviews, and online surveys - while also assessing the role of taxonomy usage in structuring and improving the elicitation process. We conducted a case study at a large German IT consulting company, using a web-based personnel management software system. A total of two focus groups, 18 interviews, and an online survey with 188 participants were analyzed. The results show that interviews were the most efficient method, capturing the highest number of distinct needs per participant per unit of time spent. Surveys collected the most explanation needs overall but had high redundancy. Delayed taxonomy introduction resulted in a greater number and diversity of needs, suggesting that a two-phase approach is beneficial. Based on our findings, we recommend a hybrid approach combining surveys and interviews to balance efficiency and coverage. Future research should explore how automation can support elicitation and how taxonomies can be better integrated into different methods.
Quantum computing has demonstrated potential for solving computationally intensive problems more efficiently than classical methods. Many software engineering tasks, such as test case selection, static analysis, code clone detection, and defect prediction, involve complex optimization, search, or classification, making them candidates for quantum enhancement. In this paper, we propose Quantum-Based Software Engineering (QBSE), a potential research direction for applying quantum computing to classical software engineering problems. We outline its scope, clarify its distinction from quantum software engineering (QSE), and identify key problem types that may benefit from quantum optimization, search, and learning techniques. We also summarize existing research efforts that remain fragmented. Finally, we sketch a preliminary research agenda that may help guide the future development of QBSE as a structured and meaningful direction within software engineering.
Developing high-performance software is a complex task that requires specialized expertise. We introduce GSO, a benchmark for evaluating language models' capabilities in developing high-performance software. We develop an automated pipeline that analyzes repository commit histories and generates and executes performance tests to identify 102 challenging optimization tasks across 10 codebases, spanning diverse domains and programming languages. An agent is provided with a codebase and a performance test as a precise specification, and is tasked with improving the runtime efficiency, which is measured against the expert developer's optimization. Our quantitative evaluation reveals that leading SWE-Agents struggle significantly, achieving less than a 5% success rate, with limited improvements even with inference-time scaling. Our qualitative analysis identifies key failure modes, including difficulty with low-level languages, reliance on lazy optimization strategies, and challenges in accurately localizing bottlenecks. We release the code and artifacts of our benchmark along with agent trajectories to enable future research.
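The core measurement such a performance test enables can be sketched as follows; the function names and workloads are placeholders rather than part of the GSO harness.

```python
# Sketch of measuring an agent's optimization against the expert developer's:
# time both implementations on the same workload and report the ratio.
# `agent_fn` and `expert_fn` are hypothetical placeholders, not GSO's harness.
import heapq
import timeit


def speedup_vs_expert(agent_fn, expert_fn, workload, repeat=5):
    agent_t = min(timeit.repeat(lambda: agent_fn(workload), number=1, repeat=repeat))
    expert_t = min(timeit.repeat(lambda: expert_fn(workload), number=1, repeat=repeat))
    return expert_t / agent_t   # >= 1.0 means the agent matched or beat the expert


if __name__ == "__main__":
    workload = list(range(100_000))
    agent = lambda xs: sorted(xs, reverse=True)[:10]   # stand-in "agent" patch: full sort
    expert = lambda xs: heapq.nlargest(10, xs)         # stand-in expert version: partial selection
    print(f"speedup vs expert: {speedup_vs_expert(agent, expert, workload):.2f}x")
```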
Language models (LMs) perform well on standardized coding benchmarks but struggle with real-world software engineering tasks such as resolving GitHub issues in SWE-Bench, especially for models with fewer than 100B parameters. While smaller models are preferable in practice due to their lower computational cost, improving their performance remains challenging. Existing approaches primarily rely on supervised fine-tuning (SFT) with high-quality data, which is expensive to curate at scale. An alternative is test-time scaling: generating multiple outputs, scoring them using a verifier, and selecting the best one. Although effective, this strategy often requires excessive sampling and costly scoring, limiting its practical application. We propose Evolutionary Test-Time Scaling (EvoScale), a sample-efficient method that treats generation as an evolutionary process. By iteratively refining outputs via selection and mutation, EvoScale shifts the output distribution toward higher-scoring regions, reducing the number of samples needed to find correct solutions. To reduce the overhead of repeated sampling and selection, we train the model to self-evolve using reinforcement learning (RL). Rather than relying on external verifiers at inference time, the model learns to self-improve the scores of its own generations across iterations. Evaluated on SWE-Bench-Verified, EvoScale enables our 32B model, Satori-SWE-32B, to match or exceed the performance of models with over 100B parameters while using only a few samples. Code, data, and models will be fully open-sourced.
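The generic selection-and-mutation loop underlying such test-time scaling can be sketched as follows; `generate`, `mutate`, and `score` are hypothetical callables, and EvoScale itself goes further by training the model with RL so that no external verifier is needed at inference time.

```python
# Generic sketch of evolutionary test-time scaling: sample a small population of
# candidate patches, keep the best under a verifier score, and mutate the survivors.
# `generate`, `mutate`, and `score` are hypothetical callables, not EvoScale's API.
def evolutionary_test_time_scaling(problem, generate, mutate, score,
                                   population=4, iterations=3, keep=2):
    candidates = [generate(problem) for _ in range(population)]
    for _ in range(iterations):
        survivors = sorted(candidates, key=score, reverse=True)[:keep]   # selection
        offspring = [mutate(problem, parent) for parent in survivors]    # mutation
        candidates = survivors + offspring
    return max(candidates, key=score)
```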
This paper investigates the ability of large language models (LLMs) to recognise and solve tasks that have been obfuscated beyond recognition. Focusing on competitive programming and benchmark tasks (LeetCode and MATH), we compare performance across multiple models and obfuscation methods, such as noise and redaction. We demonstrate that all evaluated LLMs can solve tasks obfuscated to a level where the text would be unintelligible to human readers and no longer contains key pieces of instruction or context. We introduce the concept of eager pattern matching to describe this behaviour, which is not observed in tasks published after the models' knowledge cutoff date, indicating strong memorisation or overfitting to training data rather than legitimate reasoning about the presented problem. We report empirical evidence of distinct performance decay patterns between contaminated and unseen datasets. We discuss the implications for benchmarking and evaluations of model behaviour, arguing for caution when designing experiments using standard datasets. We also propose measuring the decay of performance under obfuscation as a possible strategy for detecting dataset contamination, and we highlight potential safety risks and interpretability issues for automated software systems.
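Obfuscation transforms of the kind mentioned above (redaction and character-level noise) can be illustrated with the generic sketch below; the exact procedures and parameters used in the paper are not reproduced here.

```python
# Illustrative obfuscation transforms: word-level redaction and character-level
# noise. Generic examples only, not the paper's exact settings.
import random


def redact(text: str, fraction: float = 0.5, seed: int = 0) -> str:
    """Replace a random fraction of the words with a mask token."""
    rng = random.Random(seed)
    words = text.split()
    for i in rng.sample(range(len(words)), int(len(words) * fraction)):
        words[i] = "[REDACTED]"
    return " ".join(words)


def add_noise(text: str, fraction: float = 0.2, seed: int = 0) -> str:
    """Randomly drop a fraction of the alphabetic characters."""
    rng = random.Random(seed)
    return "".join(c for c in text
                   if not (c.isalpha() and rng.random() < fraction))


if __name__ == "__main__":
    prompt = ("Given an array of integers, return indices of the two numbers "
              "that add up to a given target.")
    print(redact(prompt))
    print(add_noise(prompt))
```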
Cyber-physical systems (CPSs) are complex systems that integrate physical, computational, and communication subsystems. The heterogeneous nature of these systems makes their safety assurance challenging. In this paper, we propose a novel automated approach for guardrailing cyber-physical systems using property-based tests (PBTs) generated by Large Language Models (LLMs). Our approach employs an LLM to extract properties from the code and documentation of CPSs. Next, we use the LLM to generate PBTs that verify the extracted properties on the CPS. The generated PBTs have two uses. First, they are used to test the CPS before it is deployed, i.e., at design time. Second, these PBTs can be used after deployment, i.e., at run time, to monitor the behavior of the system and guardrail it against unsafe states. We implement our approach in ChekProp and conduct preliminary experiments to evaluate the generated PBTs in terms of their relevance (how well they match manually crafted properties), executability (how many run with minimal manual modification), and effectiveness (coverage of the input space partitions). The results of our experiments and evaluation demonstrate a promising path forward for creating guardrails for CPSs using LLM-generated property-based tests.
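For illustration, a property-based test of the general kind described above could look like the following Hypothesis-based sketch; the controller function and its safety property are hypothetical and are not taken from ChekProp's output.

```python
# Sketch of a property-based test, written with the Hypothesis library.
# The actuator guard and its safety property are hypothetical examples.
from hypothesis import given, strategies as st


def clamp_throttle(command: float, max_throttle: float = 1.0) -> float:
    """Hypothetical CPS actuator guard: saturate the throttle command."""
    return max(0.0, min(command, max_throttle))


@given(st.floats(min_value=-1e6, max_value=1e6, allow_nan=False))
def test_throttle_stays_within_safe_limits(command):
    # Safety property: the actuated throttle always stays within [0, 1],
    # regardless of the (possibly unsafe) commanded value.
    actuated = clamp_throttle(command)
    assert 0.0 <= actuated <= 1.0
```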
We present the CASE framework, an open-source platform for adaptive, context-aware participatory research and pandemic preparedness. CASE implements an event-driven architecture that enables dynamic survey workflows, allowing real-time adaptation based on participant responses, external data, temporal conditions, and evolving user states. The framework supports a broad range of research needs, from simple one-time questionnaires to complex longitudinal studies with advanced conditional logic. Built on over a decade of practical experience, CASE underwent a major architectural rework in 2024, transitioning from a microservice-based design to a streamlined monolithic architecture. This evolution significantly improved maintainability, flexibility, and ease of deployment, particularly for institutions with limited technical capacity. CASE has been successfully deployed across diverse domains, powering national disease surveillance platforms, supporting post-COVID cohort studies, and enabling real-time sentiment analysis during political events. These applications, involving tens of thousands of participants, demonstrate the framework's scalability, versatility, and practical value. This paper describes the foundations of CASE, details its architectural evolution, and presents lessons learned from real-world deployments. We establish CASE as a mature and reusable research infrastructure that balances sophisticated functionality with practical implementation, addressing the critical global need for sustainable and institutionally controlled data collection systems.
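The flavor of an event-driven survey rule can be conveyed with the minimal sketch below; the event names, dispatcher, and follow-up logic are invented for illustration and do not reflect CASE's actual API.

```python
# Minimal sketch of an event-driven survey rule: a submitted response emits an
# event, and registered handlers decide whether to schedule a follow-up step.
# Event names and the dispatcher are invented, not CASE's API.
from collections import defaultdict

_handlers = defaultdict(list)


def on(event_type):
    def register(fn):
        _handlers[event_type].append(fn)
        return fn
    return register


def emit(event_type, payload):
    return [fn(payload) for fn in _handlers[event_type]]


@on("response_submitted")
def schedule_symptom_followup(payload):
    # Real-time adaptation: only symptomatic participants receive the follow-up.
    if payload.get("question") == "fever" and payload.get("answer") == "yes":
        return {"schedule": "symptom_detail_questionnaire"}
    return None


if __name__ == "__main__":
    print(emit("response_submitted", {"question": "fever", "answer": "yes"}))
```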
Software is an essential component of research. However, it has received little attention compared with research data. Recently, there has been an increase in efforts to acknowledge and highlight the importance of software in research activities. Structured metadata from platforms like bio.tools, Bioconductor, and Galaxy ToolShed offers valuable insights into research software in the Life Sciences. Although originally intended to support discovery and integration, this metadata can be repurposed for large-scale analysis of software practices. However, its quality and completeness vary across platforms, reflecting diverse documentation practices. To gain a comprehensive view of software development and sustainability, consolidating this metadata is necessary, but doing so requires robust mechanisms to address its heterogeneity and scale. This article presents an evaluation of instruction-tuned large language models for the task of software metadata identity resolution, a critical step in assembling a cohesive collection of research software. Such a collection is the reference component for the Software Observatory at OpenEBench, a platform that aggregates metadata to monitor the FAIRness of research software in the Life Sciences. We benchmarked multiple models against a human-annotated gold standard, examined their behavior on ambiguous cases, and introduced an agreement-based proxy for high-confidence automated decisions. The proxy achieved high precision and statistical robustness, while also highlighting the limitations of current models and the broader challenges of automating semantic judgment in FAIR-aligned software metadata across registries and repositories.
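An agreement-based proxy of the general sort described above can be sketched as follows; the voting rule and model outputs are illustrative stand-ins, not the benchmark's actual decision procedure.

```python
# Sketch of an agreement-based proxy for identity resolution: a pair of metadata
# records is auto-resolved only when the models' votes meet an agreement threshold;
# otherwise the case is deferred to a human. Predictions here are stand-ins.
from collections import Counter


def agreement_proxy(predictions, min_agreement=1.0):
    """predictions: 'same' / 'different' labels from several LLMs for one record pair.
    Returns the consensus label, or None when agreement falls below the threshold."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label if votes / len(predictions) >= min_agreement else None


if __name__ == "__main__":
    print(agreement_proxy(["same", "same", "same"]))        # auto-accepted: 'same'
    print(agreement_proxy(["same", "different", "same"]))   # deferred: None
```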
Large Language Models (LLMs) have been increasingly used to optimize code efficiency. Evaluating their effectiveness, and suggesting further optimization opportunities, often relies on high-quality tests that demonstrate the performance bottlenecks present in the program. However, existing approaches rely on a limited set of hand-curated inputs or on uninteresting, length-stressing tests generated by LLMs, failing to reveal more nuanced optimization opportunities. We present WEDGE, a framework for generating performance-stressing inputs for a given program under test. WEDGE synthesizes explicit performance-characterizing constraints in the form of branch conditions to partition the program's execution space into performance-specific regions. When integrated with a coverage-guided fuzzer, reaching different regions provides explicit rewards that steer test generation toward inefficient implementations. Our evaluation shows that WEDGE induces a significant slowdown compared to the tests in CodeContests and those claimed to be optimized by existing approaches. From a utility perspective, integrating our tests substantially improves existing code optimization approaches that rely on test-driven execution feedback. We release PERFFORGE, the performance tests generated by WEDGE, to benchmark future approaches for efficient code generation at https://github.com/UChiSeclab/perfforge.
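The idea of a branch condition that partitions inputs into performance-specific regions can be illustrated with the toy program below; the function and the constraint are invented, and WEDGE's constraint synthesis and fuzzer integration are not reproduced here.

```python
# Toy illustration of a performance-characterizing branch condition: inputs that
# satisfy the constraint fall into a quadratic-time region of this invented
# program, exactly the kind of region a fuzzer would be rewarded for reaching.
def deduplicate(items):
    if len(set(items)) < len(items) // 2:      # performance-characterizing constraint
        # Slow region: heavy duplication triggers the quadratic path.
        result = []
        for x in items:
            if x not in result:                # O(n^2) membership checks
                result.append(x)
        return result
    # Fast region: mostly-unique inputs take the linear path.
    return list(dict.fromkeys(items))


if __name__ == "__main__":
    fast_input = list(range(10_000))           # unique values -> fast region
    slow_input = list(range(2_500)) * 4        # many duplicates -> slow region
    deduplicate(fast_input)
    deduplicate(slow_input)                    # noticeably slower
```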
Opinion mining plays a vital role in analysing user feedback and extracting insights from textual data. While most research focuses on sentiment polarity (e.g., positive, negative, neutral), fine-grained emotion classification in app reviews remains underexplored. This paper addresses this gap by identifying the challenges and limitations of fine-grained emotion analysis in the context of app reviews. Our study adapts Plutchik's emotion taxonomy to app reviews by developing a structured annotation framework and dataset. Through an iterative human annotation process, we define clear annotation guidelines and document key challenges in emotion classification. Additionally, we evaluate the feasibility of automating emotion annotation using large language models, assessing their cost-effectiveness and agreement with human-labelled data. Our findings reveal that while large language models significantly reduce manual effort and maintain substantial agreement with human annotators, full automation remains challenging due to the complexity of emotional interpretation. This work contributes to opinion mining by providing structured guidelines, an annotated dataset, and insights for developing automated pipelines to capture the complexity of emotions in app reviews.
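One common way to quantify the LLM-human agreement mentioned above is Cohen's kappa; the labels in the sketch below are made up for illustration and are not drawn from the paper's dataset.

```python
# Minimal sketch of measuring agreement between human and LLM emotion labels
# with Cohen's kappa; the labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

human = ["joy", "anger", "trust", "joy", "sadness", "anger", "joy", "fear"]
llm   = ["joy", "anger", "joy",   "joy", "sadness", "anger", "trust", "fear"]

kappa = cohen_kappa_score(human, llm)
print(f"Cohen's kappa: {kappa:.2f}")  # values above ~0.6 are often read as substantial agreement
```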