Browse, search and filter the latest cybersecurity research papers from arXiv
Sophisticated malware families exploit the openness of the Android platform to infiltrate IoT networks, enabling large-scale disruption, data exfiltration, and denial-of-service attacks. This systematic literature review (SLR) examines cutting-edge approaches to Android malware analysis with direct implications for securing IoT infrastructures. We analyze feature extraction techniques across static, dynamic, hybrid, and graph-based methods, highlighting their trade-offs: static analysis offers efficiency but is easily evaded through obfuscation; dynamic analysis provides stronger resistance to evasive behaviors but incurs high computational costs, often unsuitable for lightweight IoT devices; hybrid approaches balance accuracy with resource considerations; and graph-based methods deliver superior semantic modeling and adversarial robustness. This survey contributes a structured comparison of existing methods, exposes research gaps, and outlines a roadmap for future directions to enhance scalability, adaptability, and long-term security in IoT-driven Android malware detection.
Decentralized federated learning faces privacy risks because model updates can leak data through inference attacks and membership inference, a concern that grows over many client exchanges. Differential privacy offers principled protection by injecting calibrated noise so confidential information remains secure on resource-limited IoT devices. Yet without transparency, black-box training cannot track noise already injected by previous clients and rounds, which forces worst-case additions and harms accuracy. We propose PrivateDFL, an explainable framework that joins hyperdimensional computing with differential privacy and keeps an auditable account of cumulative noise so each client adds only the difference between the required noise and what has already been accumulated. We evaluate on MNIST, ISOLET, and UCI-HAR to span image, signal, and tabular modalities, and we benchmark against transformer-based and deep learning-based baselines trained centrally with Differentially Private Stochastic Gradient Descent (DP-SGD) and Renyi Differential Privacy (RDP). PrivateDFL delivers higher accuracy, lower latency, and lower energy across IID and non-IID partitions while preserving formal (epsilon, delta) guarantees and operating without a central server. For example, under non-IID partitions, PrivateDFL achieves 24.42% higher accuracy than the Vision Transformer on MNIST while using about 10x less training time, 76x lower inference latency, and 11x less energy, and on ISOLET it exceeds Transformer accuracy by more than 80% with roughly 10x less training time, 40x lower inference latency, and 36x less training energy. Future work will extend the explainable accounting to adversarial clients and adaptive topologies with heterogeneous privacy budgets.
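A minimal sketch of the cumulative-noise accounting idea described above (not the paper's PrivateDFL implementation): under a Gaussian mechanism, noise variances add across rounds, so a client that can audit the variance already injected only needs to top up the difference to meet its (epsilon, delta) target. The calibration formula, ledger structure, and sizes below are illustrative assumptions.

```python
# Hedged sketch: auditable noise ledger, each client adds only the variance gap.
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Classical Gaussian-mechanism calibration (illustrative, not RDP accounting)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def top_up_noise(update, target_sigma, ledger):
    """Add only the variance gap between the target and what the ledger records."""
    accumulated_var = sum(s**2 for s in ledger)        # variances add across rounds
    gap_var = max(target_sigma**2 - accumulated_var, 0.0)
    if gap_var > 0:
        update = update + np.random.normal(0.0, np.sqrt(gap_var), size=update.shape)
        ledger.append(np.sqrt(gap_var))                # record what this client injected
    return update, ledger

# Toy usage: two clients perturb the same model update in sequence.
ledger = []
update = np.zeros(10)
update, ledger = top_up_noise(update, gaussian_sigma(1.0, 1e-5), ledger)
update, ledger = top_up_noise(update, gaussian_sigma(1.0, 1e-5), ledger)  # adds no extra noise
```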
Software-based memory-erasure protocols are two-party communication protocols where a verifier instructs a computational device to erase its memory and send a proof of erasure. They aim at guaranteeing that low-cost IoT devices are free of malware by putting them back into a safe state without requiring secure hardware or physical manipulation of the device. Several software-based memory-erasure protocols have been introduced and theoretically analysed. Yet, many of them have not been tested for their feasibility, performance and security on real devices, which hinders their industry adoption. This article reports on the first empirical analysis of software-based memory-erasure protocols with respect to their security, erasure guarantees, and performance. The experimental setup consists of 3 modern IoT devices with different computational capabilities, 7 protocols, 6 hash-function implementations, and various performance and security criteria. Our results indicate that existing software-based memory-erasure protocols are feasible, although slow devices may take several seconds to erase their memory and generate a proof of erasure. We found that no protocol dominates across all empirical settings, defined by the computational power and memory size of the device, the network speed, and the required level of security. Interestingly, network speed and hidden constants within the protocol specification played a more prominent role in the performance of these protocols than anticipated based on the related literature. We provide an evaluation framework that, given a desired level of security, determines which protocols offer the best trade-off between performance and erasure guarantees.
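To make the protocol shape concrete, here is a minimal nonce-based "fill memory and prove it" exchange. It is only an illustration of the general idea; the seven protocols evaluated in the paper use more sophisticated constructions, and the names and memory size below are hypothetical.

```python
# Toy proof-of-erasure exchange: verifier sends a fresh nonce, the device
# overwrites its memory with nonce-derived data and returns a MAC as proof.
import hashlib, hmac, os

MEMORY_SIZE = 64 * 1024  # stand-in for the device's writable memory

def prover_erase_and_prove(nonce: bytes) -> bytes:
    # Overwrite memory with data the device cannot precompute or compress.
    filler = bytearray()
    counter = 0
    while len(filler) < MEMORY_SIZE:
        filler += hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    memory = bytes(filler[:MEMORY_SIZE])
    # Proof of erasure: a MAC over the (now overwritten) memory contents.
    return hmac.new(nonce, memory, hashlib.sha256).digest()

def verifier_check(nonce: bytes, proof: bytes) -> bool:
    # The verifier recomputes the expected memory image and compares proofs.
    return hmac.compare_digest(proof, prover_erase_and_prove(nonce))

nonce = os.urandom(32)
assert verifier_check(nonce, prover_erase_and_prove(nonce))
```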
The classification of network traffic using machine learning (ML) models is one of the primary mechanisms for addressing security issues in IoT networks and devices. However, ML models often act as black boxes, which hinders critical decision-making based on their output. To address this problem, we design and develop a system, called rCamInspector, that employs Explainable AI (XAI) to provide reliable and trustworthy explanations of model output. rCamInspector adopts two classifiers: a Flow Classifier, which categorizes a flow into one of four classes (IoTCam, Conf, Share, and Others), and a SmartCam Classifier, which classifies an IoTCam flow into one of six classes (Netatmo, Spy Clock, Canary, D3D, Ezviz, and V380 Spy Bulb); both are IP-address and transport-port agnostic. rCamInspector is evaluated using 38GB of network traffic, and our results show that, among eight supervised ML models, XGB achieves the highest accuracy of 92% and 99% in the Flow and SmartCam classifiers, respectively. We analytically show that traditional mutual information (MI) based feature importance cannot provide enough reliability for the model output of XGB in either classifier. Using SHAP and LIME, we show that a separate set of features can be picked to explain a correct prediction of XGB. For example, the feature Init Bwd Win Byts turns out to have the highest SHAP values supporting the correct prediction of both the IoTCam class in the Flow Classifier and the Netatmo class in the SmartCam Classifier. To evaluate the faithfulness of the explainers on our dataset, we show that both SHAP and LIME have a consistency of more than 0.7 and a sufficiency of 1.0. Compared with existing work, we show that rCamInspector achieves better accuracy (99%), precision (99%), and false negative rate (0.7%).
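A hedged sketch of how SHAP values can be attached to an XGBoost flow classifier. The feature names mirror CICFlowMeter-style columns (including Init Bwd Win Byts), but the data is synthetic, not the paper's 38GB capture, and the model settings are placeholders.

```python
# Illustrative SHAP explanation of an XGBoost flow classifier on synthetic data.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Init Bwd Win Byts": rng.integers(0, 65535, 500),
    "Flow Duration": rng.integers(1, 10_000, 500),
    "Fwd Pkt Len Mean": rng.uniform(0, 1500, 500),
})
y = rng.integers(0, 4, 500)  # stand-ins for IoTCam / Conf / Share / Others labels

model = XGBClassifier(n_estimators=50).fit(X, y)

# TreeExplainer gives per-feature contributions for each prediction, which is
# what lets one argue *why* a particular flow was labelled IoTCam.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))
```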
In modern smart systems, the convergence of the Internet of Things (IoT) and the Wireless of Things (WoT) has revolutionized connectivity by offering broad wireless communication among various devices. However, this greater interconnectivity poses important security problems, including how to securely interconnect different networks, preserve secure communication channels, and maintain data integrity. Traditional cryptographic methods and frequency-hopping techniques, although they provide some protection, are not sufficient to defend against man-in-the-middle, jamming, and replay attacks. In addition, synchronization issues in multi-channel communication systems result in increased latency and energy consumption, making them unsuitable for resource-constrained IoT and WoT devices. This work presents the Multi-Channel Secure Communication (MCSC) framework, which integrates advanced cryptographic protocols with dynamic channel-hopping strategies to enhance security while reducing synchronization overhead. The MCSC framework optimizes critical performance metrics, such as packet delivery ratio, latency, throughput, and energy efficiency, and fulfills the specific requirements of IoT and WoT networks. A comprehensive comparison of MCSC with well-established methods, including Frequency-Hopping Spread Spectrum, single-channel Advanced Encryption Standard, and various Elliptic Curve Cryptography-based schemes, indicates that MCSC has lower error rates and is more resilient to a wider range of cyber attacks. The efficiency of the proposed solution in securing IoT and WoT networks without compromising operational performance is validated under various interference conditions.
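An illustrative sketch (not the MCSC specification) of key-derived channel hopping: both endpoints derive the next channel from a shared key and a time-slot counter, so a jammer without the key cannot predict the hop sequence and no per-hop synchronization messages are needed beyond a loosely agreed slot clock. The channel count, slot length, and key-establishment step are assumptions.

```python
# Key-derived channel hopping: the next channel is an HMAC of the slot index.
import hmac, hashlib, time

NUM_CHANNELS = 16
SLOT_SECONDS = 2

def channel_for_slot(shared_key: bytes, slot: int) -> int:
    digest = hmac.new(shared_key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CHANNELS

key = b"pre-shared or ECDH-derived key"      # assumption: established out of band
slot = int(time.time()) // SLOT_SECONDS
print("transmit on channel", channel_for_slot(key, slot))
```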
Protocol fuzzing is a scalable and cost-effective technique for identifying security vulnerabilities in deployed Internet of Things devices. During their operational phase, IoT devices often run lightweight servers to handle user interactions, such as video streaming or image capture in smart cameras. Implementation flaws in transport- or application-layer security mechanisms can expose IoT devices to a range of threats, including unauthorized access and data leakage. This paper addresses the challenge of uncovering such vulnerabilities by leveraging protocol fuzzing techniques that inject crafted transport- and application-layer packets into IoT communications. We present a mutation-based fuzzing tool, named IoTFuzzSentry, to identify specific non-trivial vulnerabilities in commercial IoT devices. We further demonstrate how these vulnerabilities can be exploited in real-world scenarios. We integrated our fuzzing tool into the well-known testing tool Cotopaxi and evaluated it with commercial off-the-shelf IoT devices such as IP cameras and smart plugs. Our evaluation revealed vulnerabilities in four categories (IoT Access Credential Leakage, Sneak IoT Live Video Stream, Creep IoT Live Image, and IoT Command Injection), and we show their exploits using three IoT devices. We have responsibly disclosed all these vulnerabilities to the respective vendors. So far, two CVEs have been published, CVE-2024-41623 and CVE-2024-42531, and one is pending. To extend the applicability, we investigated the traffic of six additional IoT devices; our analysis shows that these devices can have similar vulnerabilities, owing to the presence of a similar set of application protocols. We believe that IoTFuzzSentry has the potential to discover unconventional security threats and allow IoT vendors to strengthen the security of their commercialized IoT devices automatically with negligible overhead.
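A toy mutation-based fuzzing loop in the spirit of the approach above (the actual tool plugs into Cotopaxi): byte-flip a recorded application-layer request, replay it at the device in a lab setup, and watch for crashes or unexpected replies. The target address, port, and seed packet are placeholders.

```python
# Toy mutation-based protocol fuzzer: bit-flip a seed request and replay it.
import random, socket

SEED_PACKET = b"GET /live/stream HTTP/1.1\r\nHost: camera\r\n\r\n"
TARGET = ("192.0.2.10", 80)   # documentation-range address; replace in a lab setup

def mutate(packet: bytes, n_flips: int = 4) -> bytes:
    data = bytearray(packet)
    for _ in range(n_flips):
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)   # flip one random bit
    return bytes(data)

def send_once(payload: bytes) -> bytes:
    with socket.create_connection(TARGET, timeout=2) as s:
        s.sendall(payload)
        try:
            return s.recv(4096)
        except socket.timeout:
            return b""          # silence may indicate a hang worth investigating

for _ in range(100):
    try:
        response = send_once(mutate(SEED_PACKET))
    except OSError:
        print("connection error: possible crash or reset")  # candidate finding
```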
The majority of consumer IoT devices lack mechanisms for administrators to monitor and control them, hindering tailored security policies. A key challenge is identifying whether a new device, especially a streaming IoT camera, has joined the network. We present zCamInspector, a system for identifying known IoT cameras with supervised classifiers (zCamClassifier) and detecting zero-day cameras with one-class classifiers (zCamDetector). We analyzed ~40GB of traffic across three datasets: Set I (six commercial IoT cameras), Set II (five open-source IoT cameras, ~1.5GB), and Set III (four conferencing and two video-sharing applications as non-IoT traffic). From each, 62 flow-based features were extracted using CICFlowmeter. zCamInspector employs seven supervised models (ET, DT, RF, KNN, XGB, LKSVM, GNB) and four one-class models (OCSVM, SGDOCSVM, IF, DeepSVDD). Results show that XGB identifies IoT cameras with >99% accuracy and false negatives as low as 0.3%, outperforming state-of-the-art methods. For zero-day detection, accuracies reached 93.20% (OCSVM), 96.55% (SGDOCSVM), 78.65% (IF), and 92.16% (DeepSVDD). When all devices were treated as zero-day, DeepSVDD performed best with mean training/testing accuracies of 96.03%/74.51%. zCamInspector also achieved >95% accuracy for specific devices, such as Spy Clock cameras, demonstrating its robustness for identifying and detecting zero-day IoT cameras in diverse network environments.
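A hedged sketch of the zero-day detection idea: train a one-class model only on flows from known IoT cameras, then flag anything it scores as an outlier as a potential unseen camera or non-camera stream. The features below are random stand-ins for the 62 CICFlowmeter flow features, and the kernel and nu settings are illustrative.

```python
# One-class novelty detection on flow features (synthetic stand-in data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
known_camera_flows = rng.normal(0.0, 1.0, size=(1000, 62))
unseen_flows = rng.normal(3.0, 1.0, size=(50, 62))

scaler = StandardScaler().fit(known_camera_flows)
detector = OneClassSVM(kernel="rbf", nu=0.05).fit(scaler.transform(known_camera_flows))

pred = detector.predict(scaler.transform(unseen_flows))   # +1 = known-like, -1 = novel
print("flagged as zero-day:", int((pred == -1).sum()), "of", len(pred))
```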
The integration of the Internet of Things (IoT) in healthcare has revolutionized patient monitoring and data collection, allowing real-time tracking of vital signs, remote diagnostics, and automated medical responses. However, the transmission and storage of sensitive medical data introduce significant security and privacy challenges. To address these concerns, blockchain technology provides a decentralized and immutable ledger that ensures data integrity and transparency. Unlike public blockchains, private blockchains are permissioned: access is granted only to authorized participants, making them more suitable for handling confidential healthcare data. Although blockchain ensures security and trust, it lacks built-in mechanisms to support flexible and controlled data sharing; this is where Proxy Re-Encryption (PRE) comes into play. PRE is a cryptographic technique that allows encrypted data to be re-encrypted for a new recipient without exposing it to intermediaries. We propose an architecture integrating a private blockchain and PRE to enable secure, traceable, and privacy-preserving data sharing in IoT-based healthcare systems. Blockchain guarantees tamper-proof record-keeping, while PRE enables fine-grained access control, allowing medical professionals to securely share patient data without compromising confidentiality. This combination creates a robust security framework that enhances trust and efficiency in digital healthcare ecosystems.
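To make the PRE data flow concrete, here is a toy ElGamal-style proxy re-encryption in the spirit of the classic BBS98 scheme. It is not the paper's construction, and the parameters are deliberately toy-sized and insecure (a single small prime group, no padding, no CCA protection); it only shows how a proxy can transform a ciphertext for a new recipient without seeing the plaintext.

```python
# Toy BBS98-style proxy re-encryption over Z_P^* (NOT secure parameters).
import secrets
from math import gcd

P = 2**127 - 1          # Mersenne prime; toy multiplicative group
G = 3

def keygen():
    while True:
        sk = secrets.randbelow(P - 1)
        if sk > 1 and gcd(sk, P - 1) == 1:          # sk must be invertible mod P-1
            return sk, pow(G, sk, P)

def encrypt(pk, m):
    r = secrets.randbelow(P - 1)
    return (m * pow(G, r, P)) % P, pow(pk, r, P)    # (m*g^r, g^{a*r})

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, P - 1)) % (P - 1)  # rk = b/a mod (P-1)

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, P)                       # g^{a*r} -> g^{b*r}

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, P - 1), P)            # recover g^r
    return (c1 * pow(g_r, -1, P)) % P

# Doctor A's record is re-encrypted by the proxy for specialist B,
# without the proxy ever learning the plaintext.
sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
record = 424242
ct_a = encrypt(pk_a, record)
ct_b = reencrypt(rekey(sk_a, sk_b), ct_a)
assert decrypt(sk_b, ct_b) == record
```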
Propelled by the fourth industrial revolution (Industry 4.0), IoT devices and solutions are widely adopted everywhere, ranging from home applications to industrial use, crossing through transportation, healthcare, energy, and so on. This wide use of IoT has not gone unnoticed: attackers continuously probe the weaknesses of the technology and threaten it. Its security at various levels has become an important concern for professionals and researchers. The risk is even higher for the IoT variants IIoT (Industrial IoT) and MIoT (Medical IoT). Many existing security solutions have been adapted and proposed to address IoT security. In this paper, we explore blockchain technology and compare three free blockchain platforms, namely Ethereum, Hyperledger Fabric, and Corda, with respect to their applicability in the MIoT context. In general, blockchain technology provides a decentralized, autonomous, trustless, and distributed environment. It is challenging to find a blockchain platform that fits the MIoT context and performs well in terms of security. The retained platform should be deployed carefully to avoid its practical drawbacks related to energy consumption and excessive computing.
The rapid proliferation of the Internet of Things (IoT) continues to expose critical security vulnerabilities, necessitating the development of efficient and robust intrusion detection systems (IDS). Machine learning-based intrusion detection systems (ML-IDS) have significantly improved threat detection capabilities; however, they remain highly susceptible to adversarial attacks. While numerous defense mechanisms have been proposed to enhance ML-IDS resilience, a systematic approach for selecting the most effective defense against a specific adversarial attack remains absent. To address this challenge, we previously proposed DYNAMITE, a dynamic defense selection approach that identifies the most suitable defense against adversarial attacks through an ML-driven selection mechanism. Building on this foundation, we propose SAGE (Sample-Aware Guarding Engine), a substantially improved defense algorithm that integrates active learning with targeted data reduction. It employs an active learning mechanism to selectively identify the most informative input samples and their corresponding optimal defense labels, which are then used to train a second-level learner responsible for selecting the most effective defense. This targeted sampling improves computational efficiency, exposes the model to diverse adversarial strategies during training, and enhances robustness, stability, and generalizability. As a result, SAGE demonstrates strong predictive performance across multiple intrusion detection datasets, achieving an average F1-score improvement of 201% over the state-of-the-art defenses. Notably, SAGE narrows the performance gap to the Oracle to just 3.8%, while reducing computational overhead by up to 29x.
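A hedged sketch of the second-level "defense selector" with uncertainty-based active learning, loosely following the description above (not the SAGE implementation). The feature vectors, oracle defense labels, learner choice, and query budget are synthetic placeholders.

```python
# Active-learning loop: query the samples the defense selector is least sure
# about, label them with the best defense, and retrain the second-level learner.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
pool_X = rng.normal(size=(5000, 20))      # candidate (possibly adversarial) samples
pool_y = rng.integers(0, 4, 5000)         # oracle: index of the best defense per sample

labeled = list(range(100))                # small seed set
selector = RandomForestClassifier(n_estimators=100)

for _ in range(5):                        # five active-learning rounds
    selector.fit(pool_X[labeled], pool_y[labeled])
    proba = selector.predict_proba(pool_X)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    entropy[labeled] = -np.inf            # never re-query already-labeled samples
    labeled.extend(np.argsort(entropy)[-100:])   # 100 most uncertain samples per round

print("defense chosen for a new sample:", selector.predict(pool_X[:1])[0])
```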
The rapid expansion of Internet of Things (IoT) systems across various domains such as industry, smart cities, healthcare, manufacturing, and government services has led to a significant increase in security risks, threatening data integrity, confidentiality, and availability. Consequently, ensuring the security and resilience of IoT systems has become a critical requirement. In this paper, we propose a lightweight and efficient intrusion detection system (IDS) for IoT environments, leveraging a hybrid model of CNN and ConvNeXt-Tiny. The proposed method is designed to detect and classify different types of network attacks, particularly botnet and malicious traffic, while the lightweight ConvNeXt-Tiny architecture enables effective deployment in resource-constrained devices and networks. A real-world dataset comprising both benign and malicious network packets collected from practical IoT scenarios was employed in the experiments. The results demonstrate that the proposed method achieves high accuracy while significantly reducing training and inference time compared to more complex models. Specifically, the system attained 99.63% accuracy in the testing phase, 99.67% accuracy in the training phase, and an error rate of 0.0107 across eight classes, while maintaining short response times and low resource consumption. These findings highlight the effectiveness of the proposed method in detecting and classifying attacks in real-world IoT environments, indicating that the lightweight architecture can serve as a practical alternative to complex and resource-intensive approaches in IoT network security.
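A hedged sketch of adapting ConvNeXt-Tiny to an eight-class traffic classifier. It assumes packets or flows have already been rendered as 3x224x224 tensors, which is an illustrative preprocessing choice rather than the paper's exact pipeline, and it trains from scratch rather than from ImageNet weights.

```python
# Swap the ImageNet head of ConvNeXt-Tiny for an 8-class traffic classifier.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny

NUM_CLASSES = 8

model = convnext_tiny(weights=None)                        # train from scratch on traffic "images"
in_features = model.classifier[2].in_features              # final Linear layer of ConvNeXt-Tiny
model.classifier[2] = nn.Linear(in_features, NUM_CLASSES)  # replace the 1000-way ImageNet head

dummy_batch = torch.randn(4, 3, 224, 224)                  # four flow-images
logits = model(dummy_batch)
print(logits.shape)                                        # torch.Size([4, 8])
```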
Coverage-guided fuzzing has been widely applied to address zero-day vulnerabilities in general-purpose software and operating systems. This approach relies on instrumenting the target code at compile time. However, applying it to industrial systems remains challenging, due to proprietary and closed-source compiler toolchains and lack of access to source code. FuzzBox addresses these limitations by integrating emulation with fuzzing: it dynamically instruments code during execution in a virtualized environment, for the injection of fuzz inputs, failure detection, and coverage analysis, without requiring source code recompilation and hardware-specific dependencies. We show the effectiveness of FuzzBox through experiments in the context of a proprietary MILS (Multiple Independent Levels of Security) hypervisor for industrial applications. Additionally, we analyze the applicability of FuzzBox across commercial IoT firmware, showcasing its broad portability.
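A minimal sketch of the emulate-and-hook idea behind this style of fuzzing (not the FuzzBox implementation): run a binary blob under the Unicorn CPU emulator and record basic-block coverage without recompiling or instrumenting source code. The code bytes, load address, and register seeding are placeholders.

```python
# Emulation-based coverage collection with Unicorn (illustrative only).
from unicorn import Uc, UC_ARCH_X86, UC_MODE_32, UC_HOOK_BLOCK
from unicorn.x86_const import UC_X86_REG_ECX

BASE = 0x1000
CODE = b"\x41\x49\x90"        # inc ecx; dec ecx; nop  (stand-in firmware snippet)
coverage = set()

def on_block(uc, address, size, user_data):
    coverage.add(address)      # each basic block reached feeds the fuzzer's feedback loop

emu = Uc(UC_ARCH_X86, UC_MODE_32)
emu.mem_map(BASE, 0x1000)
emu.mem_write(BASE, CODE)
emu.reg_write(UC_X86_REG_ECX, 5)          # fuzz inputs would be injected into registers/memory here
emu.hook_add(UC_HOOK_BLOCK, on_block)
emu.emu_start(BASE, BASE + len(CODE))
print("blocks covered:", [hex(a) for a in coverage])
```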
IPv4 NAT has limited the spread of IoT botnets considerably by default-denying bots' incoming connection requests to in-home devices unless the owner has explicitly allowed them. As the Internet transitions to majority IPv6, however, residential connections no longer require the use of NAT. This paper therefore asks: has the transition from IPv4 to IPv6 ultimately made residential networks more vulnerable to attack, thereby empowering the next generation of IPv6-based IoT botnets? To answer this question, we introduce a large-scale IPv6 scanning methodology that, unlike those that rely on AI, can be run on low-resource devices common in IoT botnets. We use this methodology to perform the largest-scale measurement of IPv6 residential networks to date, and compare which devices are publicly accessible to comparable IPv4 networks. We were able to receive responses from 14.0M distinct IPv6 addresses inside of residential networks (i.e., not the external-facing gateway), in 2,436 ASes across 118 countries. These responses come from protocols commonly exploited by IoT botnets (including telnet and FTP), as well as protocols typically associated with end-user devices (including iPhone-Sync and IPP). Comparing to IPv4, we show that we are able to reach more printers, iPhones, and smart lights over IPv6 than full IPv4-wide scans could. Collectively, our results show that NAT has indeed acted as the de facto firewall of the Internet, and the v4-to-v6 transition of residential networks is opening up new devices to attack.
The increasing use of Internet of Things (IoT) devices has led to a rise in security-related concerns regarding IoT networks. Surveillance cameras in IoT networks are vulnerable to security threats such as brute-force and zero-day attacks, which can lead to unauthorized access by hackers and potential spying on users' activities. Moreover, these cameras can be targeted by Denial-of-Service (DoS) attacks, rendering them unavailable to the user. The proposed AI-based framework will leverage machine learning algorithms to analyze network traffic and detect anomalous behavior, allowing for quick detection of and response to potential intrusions. The framework will be trained and evaluated using real-world datasets to learn from past security incidents and improve its ability to detect potential intrusions.
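A hedged sketch of the anomaly-detection core such a framework describes: fit an unsupervised model on benign camera traffic features and flag deviations such as brute-force or DoS bursts. The features and values below are synthetic, and Isolation Forest is one possible detector, not necessarily the framework's choice.

```python
# Unsupervised anomaly detection on camera traffic features (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
benign = np.column_stack([
    rng.normal(200, 30, 2000),    # packets per second
    rng.normal(800, 100, 2000),   # mean packet size (bytes)
    rng.normal(3, 1, 2000),       # new connections per second
])
suspicious = np.array([[5000, 60, 400]])   # flood-like burst

model = IsolationForest(contamination=0.01, random_state=0).fit(benign)
print(model.predict(suspicious))           # -1 => anomalous, trigger an alert
```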
The rapid expansion of the Internet of Things (IoT) and Wireless Sensor Networks (WSNs) has significantly increased the attack surface of such systems, making them vulnerable to a wide range of cyber threats. Traditional Intrusion Detection Systems (IDS) often fail to meet the stringent requirements of resource-constrained IoT environments due to their high computational cost and reliance on large labeled datasets. To address these challenges, this paper proposes a novel hybrid Intrusion Detection System that integrates a Quantum Genetic Algorithm (QGA) with Self-Supervised Learning (SSL). The QGA leverages quantum-inspired evolutionary operators to optimize feature selection and fine-tune model parameters, ensuring lightweight yet efficient detection in resource-limited networks. Meanwhile, SSL enables the system to learn robust representations from unlabeled data, thereby reducing dependency on manually labeled training sets. The proposed framework is evaluated on benchmark IoT intrusion datasets, demonstrating superior performance in terms of detection accuracy, false positive rate, and computational efficiency compared to conventional evolutionary and deep learning-based IDS models. The results highlight the potential of combining quantum-inspired optimization with self-supervised paradigms to design next-generation intrusion detection solutions for IoT and WSN environments.
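A simplified quantum-inspired GA for binary feature selection, hedged as an illustration of the QGA idea only: qubit angles encode selection probabilities, "measurement" yields feature masks, and rotation pulls the angles toward the best mask found so far. It is not the paper's algorithm, and the fitness function uses a plain k-NN classifier as a stand-in for the SSL component.

```python
# Quantum-inspired GA sketch: probabilistic qubits -> feature masks -> rotation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

n_features, pop_size, generations = X.shape[1], 10, 15
theta = np.full((pop_size, n_features), np.pi / 4)   # 50/50 superposition per "qubit"

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()                  # reward accuracy, penalize large subsets

best_mask, best_fit = None, -np.inf
for _ in range(generations):
    masks = rng.random((pop_size, n_features)) < np.sin(theta) ** 2   # "measure" each qubit
    for mask in masks:
        f = fitness(mask)
        if f > best_fit:
            best_fit, best_mask = f, mask.copy()
    # Rotate each qubit slightly toward the best mask found so far.
    theta += 0.05 * np.where(best_mask, 1.0, -1.0)
    theta = np.clip(theta, 0.05, np.pi / 2 - 0.05)

print("selected features:", np.flatnonzero(best_mask), "fitness:", round(best_fit, 3))
```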
As IoT devices continue to proliferate, their reliability is increasingly constrained by security concerns. In response, researchers have developed diverse malware analysis techniques to detect and classify IoT malware. These techniques typically rely on extracting features at different levels from IoT applications, giving rise to a wide range of feature extraction methods. However, current approaches still face significant challenges when applied in practice. This survey provides a comprehensive review of feature extraction techniques for IoT malware analysis from multiple perspectives. We first examine static and dynamic feature extraction methods, followed by hybrid approaches. We then explore feature representation strategies based on graph learning. Finally, we compare the strengths and limitations of existing techniques, highlight open challenges, and outline promising directions for future research.
The Internet of Things (IoT) has emerged as a foundational paradigm supporting a range of applications, including healthcare, education, agriculture, smart homes, and, more recently, enterprise systems. However, significant advancements in IoT networks have been impeded by security vulnerabilities and threats that, if left unaddressed, could hinder the deployment and operation of IoT-based systems. Detecting unwanted activities within the IoT is crucial, as it directly impacts confidentiality, integrity, and availability. Consequently, intrusion detection has become a fundamental research area and the focus of numerous studies. An intrusion detection system (IDS) is essential to the IoT's alarm mechanisms, enabling effective security management. This paper examines IoT security and introduces an intelligent two-layer intrusion detection system for IoT. Machine learning techniques power the system's intelligence, with the two-layer structure enhancing intrusion detection. By selecting essential features, the system maintains detection accuracy while minimizing processing overhead. The proposed method for intrusion detection in IoT is implemented in two phases. In the first phase, the Grasshopper Optimization Algorithm (GOA) is applied for feature selection. In the second phase, the Support Vector Machine (SVM) algorithm is used to detect intrusions. The method was implemented in MATLAB, and the NSL-KDD dataset was used for evaluation. Simulation results show that the proposed method improves accuracy compared to other approaches.
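A hedged sketch of the two-phase pipeline in Python (the paper's implementation is in MATLAB): a wrapper-style feature-selection loop, here a simple random search standing in for the Grasshopper Optimization Algorithm, followed by an SVM intrusion classifier. The dataset below is a synthetic stand-in for NSL-KDD.

```python
# Phase 1: wrapper feature selection (random search standing in for GOA).
# Phase 2: SVM intrusion detection on the selected features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=41, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

best_mask, best_score = None, -np.inf
for _ in range(30):                                  # GOA would drive this search instead
    mask = rng.random(X.shape[1]) < 0.5
    if mask.sum() == 0:
        continue
    score = cross_val_score(SVC(), X_train[:, mask], y_train, cv=3).mean()
    if score > best_score:
        best_mask, best_score = mask, score

detector = SVC(kernel="rbf").fit(X_train[:, best_mask], y_train)
print("features kept:", int(best_mask.sum()),
      "test accuracy:", round(detector.score(X_test[:, best_mask], y_test), 3))
```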
This paper presents a novel multi-layered hybrid security approach aimed at enhancing lightweight encryption for IoT-Cloud systems. The primary goal is to overcome limitations inherent in conventional solutions such as TPA, Blockchain, ECDSA, and ZSS, which often fall short in terms of data protection, computational efficiency, and scalability. Our proposed method strategically refines and integrates these technologies to address their shortcomings while maximizing their individual strengths. By doing so, we create a more reliable and high-performance framework for secure data exchange across heterogeneous environments. The model leverages the combined potential of emerging technologies, particularly Blockchain, IoT, and Cloud computing, which, when effectively coordinated, offer significant advancements in security architecture. The proposed framework consists of three core layers: (1) the H.E.EZ Layer, which integrates improved versions of Hyperledger Fabric, Enc-Block, and a hybrid ECDSA-ZSS scheme to improve encryption speed and scalability and reduce computational cost; (2) the Credential Management Layer, which independently verifies data integrity and authenticity; and (3) the Time and Auditing Layer, designed to reduce traffic overhead and optimize performance across dynamic workloads. Evaluation results highlight that the proposed solution not only strengthens security but also significantly improves execution time, communication efficiency, and system responsiveness, offering a robust path forward for next-generation IoT-Cloud infrastructures.