Adults with Attention Deficit Hyperactivity Disorder (ADHD) experience challenges sustaining attention in the workplace. Body doubling, the practice of working alongside another person, has been proposed as a productivity aid for ADHD and other neurodivergent (ND) populations. However, prior work found no conclusive evidence of effectiveness and noted ND individuals' discomfort with social presence. This work investigates body doubling as an ADHD-centered productivity strategy in construction tasks. In Study 1, we explored the challenges ADHD workers face in construction and identified design insights. In Study 2, we implemented a virtual reality bricklaying task under three conditions: (C1) alone, (C2) with a human body double, and (C3) with an AI body double. Results from 12 participants show that they finished tasks faster and perceived greater accuracy and sustained attention in C2 and C3 than in C1. While body doubling was clearly preferred, opinions diverged between conditions. Our findings confirm its effect and offer design implications for future interventions.
Large Language Models (LLMs) such as ChatGPT can infer personal attributes from seemingly innocuous text, raising privacy risks beyond memorized data leakage. While prior work has demonstrated these risks, little is known about how users estimate and respond to them. We conducted a survey with 240 U.S. participants who judged text snippets for inference risks, reported concern levels, and attempted rewrites to block inference. We compared their rewrites with those generated by ChatGPT and Rescriber, a state-of-the-art sanitization tool. Results show that participants struggled to anticipate inference, performing only slightly better than chance. User rewrites were effective in just 28% of cases, better than Rescriber but worse than ChatGPT. Examining participants' rewriting strategies, we observed that paraphrasing, though the most common strategy, was also the least effective; abstraction and adding ambiguity were more successful. Our work highlights the importance of inference-aware design in LLM interactions.
Responsible AI (RAI) content work, such as annotation, moderation, or red teaming for AI safety, often exposes crowd workers to potentially harmful content. While prior work has underscored the importance of communicating well-being risks to employed content moderators, designing effective disclosure mechanisms for crowd workers, while balancing worker protection with the needs of task designers and platforms, remains largely unexamined. To address this gap, we conducted co-design sessions with 29 task designers, workers, and platform representatives. We investigated task designers' preferences for support in disclosing risks in tasks, workers' preferences for receiving risk disclosure warnings, and how platform stakeholders envision their role in shaping risk disclosure practices. We identify design tensions and map the sociotechnical tradeoffs that shape disclosure practices. We contribute design recommendations and feature concepts for risk disclosure mechanisms in the context of RAI content work.
Large language models (LLMs) typically generate direct answers, yet they are increasingly used as learning tools. Studying instructors' usage is critical, given their role in teaching and guiding AI adoption in education. We designed and evaluated TeaPT, an LLM for pedagogical purposes that supports instructors' professional development through two conversational approaches: a Socratic approach that uses guided questioning to foster reflection, and a Narrative approach that offers elaborated suggestions to extend externalized cognition. In a mixed-method study with 41 higher-education instructors, the Socratic version elicited greater engagement, while the Narrative version was preferred for actionable guidance. Subgroup analyses further revealed that less-experienced, AI-optimistic instructors favored the Socratic version, whereas more-experienced, AI-cautious instructors preferred the Narrative version. We contribute design implications for LLMs for pedagogical purposes, showing how adaptive conversational approaches can support instructors with varied profiles while highlighting how AI attitudes and experience shape interaction and learning.
Limited access to mental health care has motivated the use of digital tools and conversational agents powered by large language models (LLMs), yet their quality and reception remain unclear. We present a study comparing therapist-written responses to those generated by ChatGPT, Gemini, and Llama for real patient questions. Text analysis showed that LLMs produced longer, more readable, and lexically richer responses with a more positive tone, while therapist responses were more often written in the first person. In a survey with 150 users and 23 licensed therapists, participants rated LLM responses as clearer, more respectful, and more supportive than therapist-written answers. Yet, both groups of participants expressed a stronger preference for human therapist support. These findings highlight the promise and limitations of LLMs in mental health, underscoring the need for designs that balance their communicative strengths with concerns of trust, privacy, and accountability.
Although browser-using agents (BUAs) show promise for web tasks and automation, most BUAs terminate after executing a single instruction, failing to support users' complex, nonlinear browsing with ambiguous goals, iterative decision-making, and changing contexts. We present a human-in-the-loop (HITL) conceptual framework informed by theories of human web browsing behavior. The framework centers on an iterative loop in which the BUA proactively proposes next actions and the user steers the browsing process through feedback. It also distinguishes between exploration and exploitation actions, enabling users to control the breadth and depth of their browsing. Consequently, the framework aims to reduce users' physical and cognitive effort while preserving users' traditional browsing mental model and supporting users in achieving satisfactory outcomes. We illustrate how the framework operates with hypothetical use cases and discuss the shift from manual browsing to interaction-driven browsing. We contribute a theoretically informed conceptual framework for BUAs.
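The iterative loop at the heart of this framework lends itself to a compact sketch. Below is a minimal, hedged Python illustration of the propose-and-steer cycle with an exploration/exploitation split; all names here (`Action`, `propose_actions`, the user callbacks) are our assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the human-in-the-loop browsing cycle described
# above: the agent proposes next actions, the user steers via feedback.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "explore" (broaden) or "exploit" (deepen)
    description: str

def propose_actions(context: list) -> list:
    """Stand-in for the BUA's proposal step (an LLM call in practice)."""
    return [
        Action("explore", f"open a new result related to {context[-1]}"),
        Action("exploit", f"read {context[-1]} in more depth"),
    ]

def browse(goal: str, choose, satisfied, max_steps: int = 10) -> list:
    context = [goal]
    for _ in range(max_steps):
        if satisfied(context):      # the user decides when the outcome is good enough
            break
        options = propose_actions(context)
        picked = choose(options)    # user feedback steers the loop
        context.append(picked.description)
    return context

# Example run with trivial user callbacks.
trace = browse("laptop comparison",
               choose=lambda opts: opts[0],
               satisfied=lambda ctx: len(ctx) > 3)
print(trace)
```

The explore/exploit tag on each proposed action is what lets the user control the breadth versus depth of the session, per the framework's design.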
In virtual reality (VR) education, especially in creative fields like film production, avatar design and narrative style extend beyond appearance and aesthetics. This study explores how the interaction between avatar gender, the dominant narrative actor's gender, and the learner's gender influences film production learning in VR, focusing on gaze dynamics and gender perspectives. Using a 2×2×2 experimental design, 48 participants operated avatars of different genders and interacted with male- or female-dominant narratives. The results show that consistency between the avatar's gender and the learner's gender affects presence, and learners' sense of control over the avatar is also influenced by gender matching. Learners using avatars of the opposite gender reported stronger control, suggesting that gender incongruity prompted more focus on the avatar. Additionally, female participants with female avatars were more likely to adopt a "female gaze," favoring soft lighting and emotional shots, while male participants with male avatars were more likely to adopt a "male gaze," choosing dynamic shots and high contrast. When male participants used female avatars, they favored the "female gaze," while female participants with male avatars focused on the "male gaze." These findings advance our understanding of how avatar design and narrative style in VR-based education influence creativity and the cultivation of gender perspectives, and they offer insights for developing more inclusive and diverse VR teaching tools.
Generative AI is reshaping higher education, yet research has focused largely on students, while instructors remain understudied despite their central role in mediating adoption and modeling responsible use. We present the AI Academy, a faculty development program that combined AI exploration with pedagogical reflection and peer learning. Rather than a course evaluated for outcomes, the Academy provided a setting to study how instructors build AI literacies in relation to tools, policies, peer practices, and institutional supports. We studied 25 instructors through pre/post surveys, learning logs, and facilitator interviews. Findings show AI literacy gains alongside new insights. We position instructors as designers of responsible AI practices and contribute a replicable program model, a co-constructed survey instrument, and design insights for professional development that adapts to evolving tools and fosters ethical discussion.
Visual documentation is an effective tool for reducing the cognitive barrier developers face when understanding unfamiliar code, enabling more intuitive comprehension. Compared to textual documentation, it provides a higher-level understanding of the system structure and data flow. Developers usually prefer visual representations over lengthy textual descriptions for large software systems. However, visual documentation is both difficult to produce and challenging to evaluate. Manually creating it is time-consuming, and no existing approach can automatically generate high-level visual documentation directly from code. Its evaluation is often subjective, making it difficult to standardize and automate. To address these challenges, this paper presents the first exploration of using agentic LLM systems to automatically generate visual documentation. We introduce VisDocSketcher, the first agent-based approach that combines static analysis with LLM agents to identify key elements in the code and produce corresponding visual representations. We propose a novel evaluation framework, AutoSketchEval, for assessing the quality of generated visual documentation using code-level metrics. The experimental results show that our approach can generate valid visual documentation for 74.4% of the samples, an improvement of 26.7-39.8% over a simple template-based baseline. Our evaluation framework can reliably distinguish high-quality (code-aligned) visual documentation from low-quality (non-aligned) documentation, achieving an AUC exceeding 0.87. Our work lays the foundation for future research on automated visual documentation by introducing practical tools that not only generate valid visual representations but also reliably assess their quality.
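To make the AUC claim concrete, here is a hedged sketch of how one might measure AutoSketchEval-style separation between code-aligned and non-aligned documentation. The labels and alignment scores below are invented for illustration; the paper's actual code-level metrics are not reproduced here.

```python
# Hypothetical illustration: measuring how well a code-alignment score
# separates code-aligned from non-aligned visual documentation via AUC.
from sklearn.metrics import roc_auc_score

# 1 = code-aligned (high-quality), 0 = non-aligned (low-quality) samples
labels = [1, 1, 1, 1, 0, 0, 0, 0]
# Alignment scores produced by some code-level metric (assumed values)
scores = [0.92, 0.81, 0.77, 0.88, 0.35, 0.79, 0.18, 0.41]

auc = roc_auc_score(labels, scores)
print(f"AUC = {auc:.2f}")  # values near 1.0 indicate clean separation
```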
While web agents have gained popularity by automating web interactions, their requirement for interface access introduces significant privacy risks that remain understudied, particularly from users' perspective. Through a formative study (N=15), we found that users frequently misunderstand agents' data practices and desire unobtrusive, transparent data management. To achieve this, we designed and implemented PrivWeb, a trusted add-on for web agents that uses a locally run LLM to anonymize private information on interfaces according to user preferences. It features a privacy categorization schema and adaptive notifications that selectively pause tasks to give users control over the collection of highly sensitive information, while offering non-disruptive options for less sensitive information to minimize human oversight. A user study (N=14) across travel, information retrieval, shopping, and entertainment tasks compared PrivWeb with baselines offering no notification and no control over private information access: PrivWeb reduced perceived privacy risks without an associated increase in cognitive effort and resulted in higher overall satisfaction.
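A minimal sketch of the adaptive-notification behaviour described above, assuming a simple two-tier sensitivity split; the category names, the `redact` stub, and the control flow are illustrative assumptions, and PrivWeb's actual schema and local-LLM anonymizer are not reproduced here.

```python
# Hypothetical sketch of adaptive privacy handling: pause for highly
# sensitive fields, silently anonymize less sensitive ones.
HIGH_SENSITIVITY = {"ssn", "health_record", "bank_account"}  # assumed categories
LOW_SENSITIVITY = {"name", "email", "city"}

def redact(value: str) -> str:
    """Placeholder anonymizer; PrivWeb instead uses a local LLM."""
    return "[REDACTED]"

def handle_field(category: str, value: str, ask_user) -> str:
    if category in HIGH_SENSITIVITY:
        # Disruptive path: pause the agent's task and ask the user explicitly.
        return value if ask_user(category, value) else redact(value)
    if category in LOW_SENSITIVITY:
        # Non-disruptive path: anonymize without pausing the task.
        return redact(value)
    return value

# Example: a low-sensitivity field is anonymized silently,
# while a high-sensitivity field triggers an explicit prompt (declined here).
print(handle_field("email", "a@b.com", ask_user=lambda c, v: False))
print(handle_field("ssn", "123-45-6789", ask_user=lambda c, v: False))
```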
Large language models (LLMs) are increasingly used in everyday communication, including multilingual interactions across different cultural contexts. While LLMs can now generate near-perfect literal translations, it remains unclear whether LLMs support culturally appropriate communication. In this paper, we analyze the cultural sensitivity of different LLM designs when applied to English-Japanese translations of workplace e-mails. Here, we vary the prompting strategies: (1) naive "just translate" prompts, (2) audience-targeted prompts specifying the recipient's cultural background, and (3) instructional prompts with explicit guidance on Japanese communication norms. Using a mixed-methods study, we then analyze culture-specific language patterns to evaluate how well translations adapt to cultural norms. Further, we examine the appropriateness of the tone of the translations as perceived by native speakers. We find that culturally-tailored prompting can improve cultural fit, based on which we offer recommendations for designing culturally inclusive LLMs in multilingual settings.
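To make the three prompting strategies concrete, here is a minimal sketch using one hypothetical workplace e-mail; the template wording and the `call_llm` stub are our assumptions, not the prompts used in the study.

```python
# Hypothetical prompt templates for the three strategies compared above.
EMAIL = "Could you send me the report by Friday?"

naive_prompt = f"Translate the following e-mail into Japanese:\n{EMAIL}"

audience_prompt = (
    "Translate the following e-mail into Japanese. The recipient is a "
    f"Japanese colleague at a traditional company:\n{EMAIL}"
)

instructional_prompt = (
    "Translate the following e-mail into Japanese. Follow Japanese "
    "workplace norms: use keigo (honorific register), soften direct "
    f"requests, and open with a contextual greeting:\n{EMAIL}"
)

def call_llm(prompt: str) -> str:
    """Stub for an LLM API call; swap in a real client here."""
    raise NotImplementedError

for p in (naive_prompt, audience_prompt, instructional_prompt):
    print(p, end="\n---\n")  # inspect how much cultural guidance each carries
```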
Generative Artificial Intelligence (GenAI) has had a tremendous impact on game production and promises lasting transformations. In the five years since GenAI's emergence, several studies, typically using qualitative methods, have explored its impact on game production across different settings and demographic angles. However, these studies often only weakly contextualise and consolidate their findings with related work, and a big-picture view is still missing. Here, we aim to provide such a view of GenAI's impact on game production in the form of a qualitative research synthesis via meta-ethnography. We followed PRISMA-S to systematically search the relevant literature from 2020-2025, including major HCI and games research databases. We then synthesised the 10 eligible studies, conducting reciprocal translation and line-of-argument synthesis guided by eMERGe and informed by CASP quality appraisal. We identified nine overarching themes, provide recommendations, and contextualise our insights within wider game production trends.
As the ageing population grows, older adults increasingly rely on wearable devices to monitor chronic conditions. However, conventional health data representations (HDRs) often present accessibility challenges, particularly for critical health parameters like blood pressure and sleep data. This study explores how older adults interact with these representations, identifying key barriers such as semantic inconsistency and difficulties in understanding. While research has primarily focused on data collection, less attention has been given to how information is presented to and understood by end-users. To address this, an end-user evaluation was conducted with 16 older adults (65+) in a structured workshop, using think-aloud protocols and participatory design activities. The findings highlight the importance of affordance and familiarity in improving accessibility, emphasising the potential of familiar, multimodal cues. This study bridges the gap between domain experts and end-users, providing a replicable methodological approach for designing intuitive, multisensory HDRs that better align with older adults' needs and abilities.
Language and embodied perspective taking are essential for human collaboration, yet few computational models address both simultaneously. This work investigates the PerspAct system [1], which integrates the ReAct (Reason and Act) paradigm with Large Language Models (LLMs) to simulate developmental stages of perspective taking, grounded in Selman's theory [2]. Using an extended director task, we evaluate GPT's ability to generate internal narratives aligned with specified developmental stages, and assess how these influence collaborative performance both qualitatively (action selection) and quantitatively (task efficiency). Results show that GPT reliably produces developmentally consistent narratives before task execution but often shifts towards more advanced stages during interaction, suggesting that language exchanges help refine internal representations. Higher developmental stages generally enhance collaborative effectiveness, while earlier stages yield more variable outcomes in complex contexts. These findings highlight the potential of integrating embodied perspective taking and language in LLMs to better model developmental dynamics and stress the importance of evaluating internal speech during combined linguistic and embodied tasks.
As large language models (LLMs) become embedded in interactive text generation, disclosure of AI as a source depends on people remembering which ideas or texts came from themselves and which were created with AI. We investigate how accurately people remember the source of content when using AI. In a pre-registered experiment, 184 participants generated and elaborated on ideas both unaided and with an LLM-based chatbot. One week later, they were asked to identify the source (noAI vs withAI) of these ideas and texts. Our findings reveal a significant gap in memory: After AI use, the odds of correct attribution dropped, with the steepest decline in mixed human-AI workflows, where either the idea or elaboration was created with AI. We validated our results using a computational model of source memory. Discussing broader implications, we highlight the importance of considering source confusion in the design and use of interactive text generation technologies.
Current AI writing support tools are largely designed for individuals, complicating collaboration when co-writers must leave the shared workspace to use AI and then communicate and reintegrate the results. We propose integrating AI agents directly into collaborative writing environments. Our prototype makes AI use transparent and customisable through two new shared objects: agent profiles and tasks. Agent responses appear in the familiar comment feature. In a user study (N=30), 14 teams worked on writing projects over the course of one week. Interaction logs and interviews show that teams incorporated agents into existing norms of authorship, control, and coordination, rather than treating them as team members. Agent profiles were viewed as personal territory, while created agents and their outputs became shared resources. We discuss implications for team-based AI interaction, highlighting opportunities and boundaries for treating AI as a shared resource in collaborative work.
We develop a rigorous measure-theoretic framework for the analysis of fixed points of nonexpansive maps in the space $L^1(\mu)$, with explicit consideration of quantization errors arising in fixed-point arithmetic. Our central result shows that every bounded, closed, convex subset of $L^1(\mu)$ that is compact in the topology of local convergence in measure (a property we refer to as measure-compactness) enjoys the fixed point property for nonexpansive mappings. The proof relies on techniques from uniform integrability, convexity in measure, and normal structure theory, including an application of Kirk's theorem. We further analyze the effect of quantization by modeling fixed-point arithmetic as a perturbation of a nonexpansive map, establishing the existence of approximate fixed points under measure-compactness conditions. We also present counterexamples that illustrate the optimality of our assumptions. Beyond the theoretical development, we apply this framework to a human-in-the-loop co-editing system. By formulating the interaction between an AI-generated proposal, a human editor, and a quantizer as a composition of nonexpansive maps on a measure-compact set, we demonstrate the existence of a "stable consensus artefact". We prove that such a consensus state remains an approximate fixed point even under bounded quantization errors, and we provide a concrete example of a human-AI editing loop that fits this framework. Our results underscore the value of measure-theoretic compactness in the design and verification of reliable collaborative systems involving humans and artificial agents.
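A sketch of the co-editing formalisation in our own notation (the maps $A$, $H$, the quantizer $Q$, and the bound $\varepsilon$ are labels we attach to the abstract's AI proposal, human editor, and fixed-point arithmetic; the paper's precise statement may differ). Model one editing round as

$$T = Q \circ H \circ A, \qquad \|Q(f) - f\|_{L^1(\mu)} \le \varepsilon \quad \text{for all } f \in C,$$

with $A, H : C \to C$ nonexpansive on a bounded, closed, convex, measure-compact set $C \subseteq L^1(\mu)$. By the main theorem, the nonexpansive composition $H \circ A$ has a fixed point $f^* \in C$, and then

$$\|T(f^*) - f^*\|_{L^1(\mu)} = \|Q(H(A(f^*))) - H(A(f^*))\|_{L^1(\mu)} \le \varepsilon,$$

so the consensus artefact $f^*$ is an $\varepsilon$-approximate fixed point of the quantized loop.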
Image-based scene understanding allows Augmented Reality systems to provide contextual visual guidance in unprepared, real-world environments. While effective on video see-through (VST) head-mounted displays (HMDs), such methods suffer on optical see-through (OST) HMDs due to misregistration between the world-facing camera and the user's eye perspective. To approximate the user's true eye view, we implement and evaluate three software-based eye-perspective rendering (EPR) techniques on a commercially available, untethered OST HMD (Microsoft HoloLens 2): (1) Plane-Proxy EPR, projecting onto a fixed-distance plane; (2) Mesh-Proxy EPR, using SLAM-based reconstruction for projection; and (3) Gaze-Proxy EPR, a novel eye-tracking-based method that aligns the projection with the user's gaze depth. A user study on real-world tasks underscores the importance of accurate EPR and demonstrates gaze-proxy as a lightweight alternative to geometry-based methods. We release our EPR framework as open source.
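As a hedged sketch of the geometry behind Plane-Proxy EPR (our notation, not necessarily the paper's): projecting the world-facing camera image onto a plane at distance $d$ with normal $n$ and re-rendering it from the eye induces the standard plane-induced homography between the two views,

$$H = K_{\text{eye}} \left( R - \frac{t\, n^\top}{d} \right) K_{\text{cam}}^{-1},$$

where $R$ and $t$ are the rotation and translation from the camera frame to the eye frame, and $K_{\text{cam}}$, $K_{\text{eye}}$ are the respective intrinsic matrices. Misregistration grows as the true scene depth departs from $d$, which is the error that the Mesh-Proxy (reconstructed geometry) and Gaze-Proxy (gaze-depth-adjusted plane) variants aim to reduce.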