Browse, search and filter the latest cybersecurity research papers from arXiv
Fragmentation is a routine part of communication in 6LoWPAN-based IoT networks, designed to accommodate small frame sizes on constrained wireless links. However, this process introduces a critical vulnerability: fragments are typically stored and processed before their legitimacy is confirmed, allowing attackers to exploit this gap with minimal effort. In this work, we explore a defense strategy that takes a more adaptive, behavior-aware approach to this problem. Our system, called Predictive-CSM, introduces a combination of two lightweight mechanisms. The first tracks how each node behaves over time, rewarding consistent and successful interactions while quickly penalizing suspicious or failing patterns. The second checks the integrity of packet fragments using a chained hash, allowing incomplete or manipulated sequences to be caught early, before they can occupy memory or waste processing time. We put this system to the test using a set of targeted attack simulations, including early fragment injection, replayed headers, and flooding with fake data. Across all scenarios, Predictive-CSM preserved network delivery and maintained energy efficiency, even under pressure. Rather than relying on heavyweight cryptography or rigid filters, this approach allows constrained devices to adapt their defenses in real time based on what they observe, not just what they're told. In that way, it offers a step forward for securing fragmented communication in real-world IoT systems.
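The abstract does not spell out the chained-hash construction, but the idea can be illustrated with a minimal sketch in which each fragment carries a SHA-256 tag that commits to the previous fragment's tag, so a broken or injected fragment is rejected before the rest of the datagram is buffered. The fragment size, the per-datagram seed handling, and the hash choice here are assumptions, not the paper's actual design.

```python
import hashlib
import os

def chain_fragments(payload: bytes, seed: bytes, frag_size: int = 64):
    """Split a datagram and attach a chained SHA-256 tag to each fragment."""
    frags = [payload[i:i + frag_size] for i in range(0, len(payload), frag_size)]
    tagged, prev = [], seed
    for frag in frags:
        tag = hashlib.sha256(prev + frag).digest()
        tagged.append((frag, tag))
        prev = tag  # each tag commits to all earlier fragments
    return tagged

def verify_stream(tagged, seed: bytes) -> bool:
    """Reject the reassembly buffer as soon as one link in the chain breaks."""
    prev = seed
    for frag, tag in tagged:
        if hashlib.sha256(prev + frag).digest() != tag:
            return False  # drop early, before wasting memory on the rest
        prev = tag
    return True

seed = os.urandom(16)  # assumed to be shared out of band per datagram
assert verify_stream(chain_fragments(b"A" * 300, seed), seed)
```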
This study addresses critical challenges in managing the transportation of spent nuclear fuel, including inadequate data transparency, stringent confidentiality requirements, and a lack of trust among collaborating parties, issues prevalent in traditional centralized management systems. Given the high risks involved, balancing data confidentiality with regulatory transparency is imperative. To overcome these limitations, a prototype system integrating blockchain technology and the Internet of Things (IoT) is proposed, featuring a multi-tiered consortium chain architecture. This system utilizes IoT sensors for real-time data collection, which is immutably recorded on the blockchain, while a hierarchical data structure (operational, supervisory, and public layers) manages access for diverse stakeholders. The results demonstrate that this approach significantly enhances data immutability, enables real-time multi-sensor data integration, improves decentralized transparency, and increases resilience compared to traditional systems. Ultimately, this blockchain-IoT framework improves the safety, transparency, and efficiency of spent fuel transportation, effectively resolving the conflict between confidentiality and transparency in nuclear data management and offering significant practical implications.
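The abstract describes the consortium-chain layers only at a high level; the toy sketch below shows the underlying idea of an append-only, hash-chained record of sensor readings tagged with one of the three layers. The class name, the layer-to-clearance mapping, and the use of SHA-256 are illustrative assumptions, not the prototype's actual implementation.

```python
import hashlib, json, time

LAYERS = {"operational", "supervisory", "public"}

class SensorLedger:
    """Toy append-only log: each record commits to the previous one by hash."""
    def __init__(self):
        self.blocks = []

    def append(self, sensor_id: str, reading: dict, layer: str) -> str:
        assert layer in LAYERS
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"sensor": sensor_id, "reading": reading, "layer": layer,
                "ts": time.time(), "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append({**body, "hash": digest})
        return digest

    def view(self, clearance: str):
        """Assumed hierarchy: operational clearance sees everything, public sees least."""
        order = ["public", "supervisory", "operational"]
        allowed = set(order[: order.index(clearance) + 1])
        return [b for b in self.blocks if b["layer"] in allowed]

ledger = SensorLedger()
ledger.append("cask-07-temp", {"celsius": 41.2}, layer="operational")
ledger.append("route-status", {"segment": "B3", "ok": True}, layer="public")
print(len(ledger.view("public")), len(ledger.view("operational")))
```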
The widespread adoption of the Internet of Things (IoT) has raised a new challenge for developers, since IoT is prone to known and unknown cyberattacks due to its heterogeneity, flexibility, and close connectivity. To defend against such security breaches, researchers have focused on building sophisticated intrusion detection systems (IDSs) using machine learning (ML) techniques. Although these algorithms notably improve detection performance, they require excessive computing power and resources, which are crucial issues in IoT networks considering the recent trends of decentralized data processing and computing systems. Consequently, many optimization techniques have been incorporated with these ML models. Specifically, a special category of optimizers adopted from the behavior of living creatures and different aspects of natural phenomena, known as metaheuristic algorithms, has been a central focus in recent years and has brought about remarkable results. Considering this vital significance, we present a comprehensive and systematic review of various applications of metaheuristic algorithms in developing machine learning-based IDSs, especially for IoT. A significant contribution of this study is the discovery of hidden correlations between these optimization techniques and the machine learning models integrated with state-of-the-art IoT-IDSs. In addition, the effectiveness of these metaheuristic algorithms in different applications, such as feature selection, parameter or hyperparameter tuning, and hybrid usage, is analyzed separately. Moreover, a taxonomy of existing IoT-IDSs is proposed. Furthermore, we investigate several critical issues related to such integration. Our extensive exploration ends with a discussion of promising optimization algorithms and technologies that can enhance the efficiency of IoT-IDSs.
Data remanence in NAND flash complicates complete deletion on IoT SSDs. We design an adaptive architecture offering four privacy levels (PL0-PL3) that select among address, data, and parity deletion techniques. Quantitative analysis balances efficacy, latency, endurance, and cost. Machine learning adjusts the privacy level contextually, boosting privacy with negligible performance overhead and complexity.
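The abstract names the four levels and the three deletion techniques but not how they are combined; a plausible level-to-action mapping and a stand-in for the learned selection policy might look like the sketch below. The specific assignments and thresholds are assumptions for illustration only.

```python
from enum import IntEnum

class PrivacyLevel(IntEnum):
    PL0 = 0  # no sanitization
    PL1 = 1  # address (mapping-table) deletion only
    PL2 = 2  # address deletion + data-block erase
    PL3 = 3  # address + data + parity deletion

# Hypothetical mapping from level to the deletion techniques applied.
ACTIONS = {
    PrivacyLevel.PL0: [],
    PrivacyLevel.PL1: ["unmap_address"],
    PrivacyLevel.PL2: ["unmap_address", "erase_data"],
    PrivacyLevel.PL3: ["unmap_address", "erase_data", "scrub_parity"],
}

def select_level(sensitivity: float, wear_budget: float) -> PrivacyLevel:
    """Stand-in for the learned policy: trade deletion strength against endurance."""
    if sensitivity > 0.8:
        return PrivacyLevel.PL3
    if sensitivity > 0.5:
        return PrivacyLevel.PL2 if wear_budget > 0.2 else PrivacyLevel.PL1
    return PrivacyLevel.PL1 if sensitivity > 0.2 else PrivacyLevel.PL0

print(ACTIONS[select_level(sensitivity=0.9, wear_budget=0.5)])
```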
To safeguard user data privacy, on-device inference has emerged as a prominent paradigm on mobile and Internet of Things (IoT) devices. This paradigm involves deploying a model provided by a third party on local devices to perform inference tasks. However, it exposes the private model to two primary security threats: model stealing (MS) and membership inference attacks (MIA). To mitigate these risks, existing wisdom deploys models within Trusted Execution Environments (TEEs), which are secure, isolated execution spaces. Nonetheless, the constrained secure memory capacity in TEEs makes it challenging to achieve full model security with low inference latency. This paper fills the gap with TensorShield, the first efficient on-device inference work that shields partial tensors of the model while still fully defending against MS and MIA. The key enabling techniques in TensorShield include: (i) a novel eXplainable AI (XAI) technique that exploits the model's attention transition to assess critical tensors and shields them in the TEE to achieve secure inference, and (ii) two meticulous designs with critical feature identification and latency-aware placement to accelerate inference while maintaining security. Extensive evaluations show that TensorShield delivers almost the same security protection as shielding the entire model inside the TEE, while being up to 25.35$\times$ (avg. 5.85$\times$) faster than the state-of-the-art work, without accuracy loss.
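The attention-transition scoring itself is the paper's contribution and is not reproduced here; the sketch below only illustrates the downstream step of greedily picking the highest-scoring tensors that fit within a secure-memory budget, with the importance scores supplied externally. The stand-in score used in the usage lines is an assumption, not the paper's XAI metric.

```python
import torch.nn as nn

def rank_tensors_by_importance(model: nn.Module, scores: dict, budget_bytes: int):
    """Greedily select the most critical tensors that fit in secure memory.

    `scores` stands in for an importance value per parameter name (the paper
    derives it from attention transitions); higher means more critical.
    """
    ranked = sorted(model.state_dict().items(),
                    key=lambda kv: scores.get(kv[0], 0.0), reverse=True)
    shielded, used = [], 0
    for name, tensor in ranked:
        size = tensor.numel() * tensor.element_size()
        if used + size <= budget_bytes:
            shielded.append(name)
            used += size
    return shielded  # tensors to load inside the TEE; the rest stay outside

# Toy usage with a placeholder importance score (mean absolute weight).
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
scores = {name: float(p.abs().mean()) for name, p in model.named_parameters()}
print(rank_tensors_by_importance(model, scores, budget_bytes=2048))
```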
Internet of Vehicles (IoV) systems, while offering significant advancements in transportation efficiency and safety, introduce substantial security vulnerabilities due to their highly interconnected nature. These dynamic systems produce massive amounts of data between vehicles, infrastructure, and cloud services and present a highly distributed framework with a wide attack surface. In considering network-centered attacks on IoV systems, attacks such as Denial-of-Service (DoS) can prohibit the communication of essential physical traffic safety information between system elements, illustrating that the security concerns for these systems go beyond the traditional confidentiality, integrity, and availability concerns of enterprise systems. Given the complexity and volume of data generated by IoV systems, traditional security mechanisms are often inadequate for accurately detecting sophisticated and evolving cyberattacks. Here, we present an unsupervised autoencoder method trained entirely on benign network data for the purpose of unseen attack detection in IoV networks. We leverage a weighted combination of reconstruction and triplet margin loss to guide the autoencoder training and develop a diverse representation of the benign training set. We conduct extensive experiments on recent network intrusion datasets from two different application domains, industrial IoT and home IoT, that represent the modern IoV task. We show that our method performs robustly for all unseen attack types, with roughly 99% accuracy on benign data and between 97% and 100% performance on anomaly data. We extend these results to show that our model is adaptable through the use of transfer learning, achieving similarly high results while leveraging domain features from one domain to another.
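A minimal PyTorch sketch of the training objective follows, assuming a small fully connected autoencoder, default loss weights, and a triplet-mining strategy that pairs similar benign flows; none of these specifics are stated in the abstract, so treat them as placeholders rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class FlowAutoencoder(nn.Module):
    def __init__(self, n_features: int, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

mse = nn.MSELoss()
triplet = nn.TripletMarginLoss(margin=1.0)

def combined_loss(model, anchor, positive, negative, w_rec=1.0, w_trip=0.5):
    """Weighted reconstruction + triplet margin loss on benign flows.

    anchor/positive are benign samples assumed similar; negative is a benign
    sample from a dissimilar cluster (the mining strategy is an assumption).
    """
    x_hat, z_a = model(anchor)
    _, z_p = model(positive)
    _, z_n = model(negative)
    return w_rec * mse(x_hat, anchor) + w_trip * triplet(z_a, z_p, z_n)

model = FlowAutoencoder(n_features=32)
a, p, n = (torch.randn(16, 32) for _ in range(3))
combined_loss(model, a, p, n).backward()
```

At inference time, the reconstruction error on a new flow serves as the anomaly score: benign traffic reconstructs well, unseen attacks do not.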
This paper focuses on Zero-Trust Foundation Models (ZTFMs), a novel paradigm that embeds zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. By integrating core tenets, such as continuous verification, least privilege access (LPA), data confidentiality, and behavioral analytics into the design, training, and deployment of FMs, ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments. We present the first structured synthesis of ZTFMs, identifying their potential to transform conventional trust-based IoT architectures into resilient, self-defending ecosystems. Moreover, we propose a comprehensive technical framework, incorporating federated learning (FL), blockchain-based identity management, micro-segmentation, and trusted execution environments (TEEs) to support decentralized, verifiable intelligence at the network edge. In addition, we investigate emerging security threats unique to ZTFM-enabled systems and evaluate countermeasures, such as anomaly detection, adversarial training, and secure aggregation. Through this analysis, we highlight key open research challenges in terms of scalability, secure orchestration, interpretable threat attribution, and dynamic trust calibration. This survey lays a foundational roadmap for secure, intelligent, and trustworthy IoT infrastructures powered by FMs.
IoT is a dynamic network of interconnected things that communicate and exchange data, where security is a significant issue. Previous studies have mainly focused on attack classifications and open issues rather than presenting a comprehensive overview of existing threats and vulnerabilities. This knowledge helps in analyzing the network in the early stages, even before any attack takes place. In this paper, the researchers propose different security aspects and a novel Bayesian Security Aspects Dependency Graph for IoT (BSAGIoT) to illustrate their relations. The proposed BSAGIoT is a generic model applicable to any IoT network and contains aspects from five categories, named data, access control, standard, network, and loss. This proposed Bayesian Security Aspect Graph (BSAG) presents an overview of the security aspects in any given IoT network. The purpose of BSAGIoT is to assist security experts in analyzing how a successful compromise and/or a failed breach could impact the overall security and privacy of the respective IoT network. In addition, root cause identification of security challenges, how they affect one another, their impact on IoT networks via topological sorting, and risk assessment can be achieved. Hence, to demonstrate the feasibility of the proposed method, experimental results with various scenarios have been presented, in which the security aspects are quantified based on the network configurations. The results indicate the impact of the aspects on each other and how they could be utilized to mitigate and/or eliminate the security and privacy deficiencies in IoT networks.
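As an illustration of the topological-sorting step, the sketch below orders a handful of hypothetical security aspects with Kahn's algorithm so that each aspect is processed before the aspects it impacts; the aspect names and dependency edges are invented for the example and are not taken from BSAGIoT.

```python
from collections import defaultdict, deque

# Hypothetical aspect dependencies: edge (a, b) means compromising a affects b.
EDGES = [("missing_standard_compliance", "weak_authentication"),
         ("weak_authentication", "unauthorized_access"),
         ("unauthorized_access", "data_leakage"),
         ("data_leakage", "privacy_loss")]

def topological_order(edges):
    """Kahn's algorithm: visit each aspect before the aspects it impacts."""
    out, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for a, b in edges:
        out[a].append(b)
        indeg[b] += 1
        nodes.update((a, b))
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in out[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

print(topological_order(EDGES))
```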
The advancements in communication technology (5G and beyond) and the globally connected Internet of Things (IoT) also come with new security problems that will need to be addressed in the next few years. The threats and vulnerabilities introduced by AI/ML-based 5G and beyond IoT systems need to be investigated to avoid the amplification of attack vectors on AI/ML. AI/ML techniques are playing a vital role in numerous applications of cybersecurity. Despite the ongoing success, there are significant challenges in ensuring the trustworthiness of AI/ML systems. Further research is needed to define what is considered an AI/ML threat and how it differs from threats to traditional systems, as there is currently no common understanding of what constitutes an attack on AI/ML-based systems, nor how such an attack might be created, hosted, and propagated [ETSI, 2020]. Therefore, there is a need to study the AI/ML approach to ensure safe and secure development, deployment, and operation of AI/ML-based 5G and beyond IoT systems. For 5G and beyond, it is essential to continuously monitor and analyze the changing environment in real time to identify and reduce intentional and unintentional risks. In this study, we review the role of AI/ML techniques for 5G and beyond security. Furthermore, we provide our perspective on predicting and mitigating 5G and beyond security threats using AI/ML techniques.
The Internet of Things (IoT) and Large Language Models (LLMs) have been two major emerging players in the information technology era. Although there has been significant coverage of their individual capabilities, our literature survey sheds some light on the integration and interaction of LLMs and IoT devices - a mutualistic relationship in which both parties leverage the capabilities of the other. LLMs like OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini/BERT, and many more, all demonstrate powerful capabilities in natural language understanding and generation, enabling more intuitive and context-aware interactions across diverse IoT applications such as smart cities, healthcare systems, industrial automation, and smart home environments. Despite these opportunities, integrating these resource-intensive LLMs into IoT devices that lack state-of-the-art computational power is a challenging task. The security of these edge devices is another major concern, as they can easily act as a backdoor to private networks if the LLM integration is sloppy and unsecured. This literature survey systematically explores the current state of the art in applying LLMs within IoT, emphasizing their applications in various domains/sectors of society, the significant role they play in enhancing IoT security through anomaly detection and threat mitigation, and strategies for effective deployment using edge computing frameworks. Finally, this survey highlights existing challenges, identifies future research directions, and underscores the need for cross-disciplinary collaboration to fully realize the transformative potential of integrating LLMs and IoT.
Industrial Internet of Things (IIoT) systems have become integral to smart manufacturing, yet their growing connectivity has also exposed them to significant cybersecurity threats. Traditional intrusion detection systems (IDS) often rely on centralized architectures that raise concerns over data privacy, latency, and single points of failure. In this work, we propose a novel Federated Learning-Enhanced Blockchain Framework (FL-BCID) for privacy-preserving intrusion detection tailored for IIoT environments. Our architecture combines federated learning (FL) to ensure decentralized model training with blockchain technology to guarantee data integrity, trust, and tamper resistance across IIoT nodes. We design a lightweight intrusion detection model collaboratively trained using FL across edge devices without exposing sensitive data. A smart contract-enabled blockchain system records model updates and anomaly scores to establish accountability. Experimental evaluations using the ToN-IoT and N-BaIoT datasets demonstrate the superior performance of our framework, achieving 97.3% accuracy while reducing communication overhead by 41% compared to baseline centralized methods. Our approach ensures privacy, scalability, and robustness, which are critical for secure industrial operations. The proposed FL-BCID system provides a promising solution for enhancing trust and privacy in modern IIoT security architectures.
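A minimal sketch of one aggregation round, assuming plain FedAvg for the federated step and abstracting the smart-contract layer as an append-only list of model-hash records; the actual FL-BCID model, weighting scheme, and contract logic are not specified in the abstract.

```python
import hashlib
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Standard FedAvg: average client updates weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

class Ledger:
    """Stand-in for the smart-contract layer: an append-only list of digests."""
    def __init__(self):
        self.entries = []

    def record(self, round_id: int, weights: np.ndarray, anomaly_score: float) -> str:
        digest = hashlib.sha256(weights.tobytes()).hexdigest()
        self.entries.append({"round": round_id, "model_hash": digest,
                             "anomaly_score": anomaly_score})
        return digest

# One aggregation round with a toy 4-parameter model and three edge nodes.
clients = [np.random.rand(4) for _ in range(3)]
global_w = fed_avg(clients, client_sizes=[120, 80, 200])
print(Ledger().record(round_id=1, weights=global_w, anomaly_score=0.02))
```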
Neuromorphic computing, inspired by the human brain's neural architecture, is revolutionizing artificial intelligence and edge computing with its low-power, adaptive, and event-driven designs. However, these unique characteristics introduce novel cybersecurity risks. This paper proposes Neuromorphic Mimicry Attacks (NMAs), a groundbreaking class of threats that exploit the probabilistic and non-deterministic nature of neuromorphic chips to execute covert intrusions. By mimicking legitimate neural activity through techniques such as synaptic weight tampering and sensory input poisoning, NMAs evade traditional intrusion detection systems, posing risks to applications such as autonomous vehicles, smart medical implants, and IoT networks. This research develops a theoretical framework for NMAs, evaluates their impact using a simulated neuromorphic chip dataset, and proposes countermeasures, including neural-specific anomaly detection and secure synaptic learning protocols. The findings underscore the critical need for tailored cybersecurity measures to protect brain-inspired computing, offering a pioneering exploration of this emerging threat landscape.
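As one hypothetical instance of the proposed neural-specific anomaly detection, the sketch below flags synaptic weight tampering when the largest per-synapse deviation from a recorded baseline exceeds what ordinary plasticity noise would explain; the statistic, the threshold, and the simulated weights are illustrative assumptions, not the paper's evaluated dataset.

```python
import numpy as np

def weight_drift_score(baseline: np.ndarray, current: np.ndarray) -> float:
    """Largest per-synapse z-score of the change relative to baseline spread."""
    sigma = baseline.std() + 1e-9
    return float(np.abs((current - baseline) / sigma).max())

def flag_tampering(baseline, current, threshold: float = 3.0) -> bool:
    """Alert when drift exceeds what ordinary plasticity noise would explain."""
    return weight_drift_score(baseline, current) > threshold

rng = np.random.default_rng(0)
w0 = rng.normal(0.0, 0.1, size=1024)            # recorded healthy synaptic weights
w_benign = w0 + rng.normal(0.0, 0.005, 1024)    # normal plasticity drift
w_attack = w0.copy()
w_attack[:32] += 0.8                            # targeted synaptic weight tampering
print(flag_tampering(w0, w_benign), flag_tampering(w0, w_attack))  # False True
```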
Geospatial data statistics involve the aggregation and analysis of location data to derive the distribution of clients within a geospatial region. The need for privacy protection in geospatial data analysis has become paramount due to concerns over the misuse or unauthorized access of client location information. However, existing private geospatial data statistics mainly rely on privacy computing techniques such as cryptographic tools and differential privacy, which lead to significant overhead and inaccurate results. In practical applications, geospatial data is frequently generated by mobile devices such as smartphones and IoT sensors. The continuous mobility of clients and the need for real-time updates introduce additional complexity. To address these issues, we first design \textit{spatially distributed point functions (SDPF)}, which combine a quad-tree structure with distributed point functions, allowing clients to succinctly secret-share values on the nodes of an exponentially large quad-tree. Then, we use Gray code to partition the region and combine SDPF with it to propose $\mathtt{EPSpatial}$, a scheme for accurate, efficient, and private statistical analytics of geospatial data. Moreover, since clients' frequent movement requires continuous location updates, we leverage the region encoding property to present an efficient update algorithm. Security analysis shows that $\mathtt{EPSpatial}$ effectively protects client location privacy. Theoretical analysis and experimental results on real datasets demonstrate that $\mathtt{EPSpatial}$ reduces computational and communication overhead by at least $50\%$ compared to existing statistical schemes.
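The paper's exact Gray-code partition is not given in the abstract; the toy encoding below only illustrates the property being relied on: binary-reflected Gray codes assign adjacent cells codes that differ in a single bit, so a client moving to a neighbouring cell changes only a small part of its region code. The bit-interleaving of row and column indices is an assumption for the example.

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent integers differ in a single bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def cell_code(x: int, y: int, bits: int = 8) -> int:
    """Toy region encoding: interleave Gray-coded row/column indices so that
    neighbouring grid cells get codes that are close in Hamming distance."""
    gx, gy = to_gray(x), to_gray(y)
    code = 0
    for i in range(bits):
        code |= ((gx >> i) & 1) << (2 * i)
        code |= ((gy >> i) & 1) << (2 * i + 1)
    return code

assert from_gray(to_gray(1234)) == 1234
print(bin(cell_code(5, 9)), bin(cell_code(6, 9)))  # one-step move, small code change
```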
The Advanced Encryption Standard (AES) is a widely adopted cryptographic algorithm essential for securing embedded systems and IoT platforms. However, existing AES hardware accelerators often face limitations in performance, energy efficiency, and flexibility. This paper presents AES-RV, a hardware-efficient RISC-V accelerator featuring low-latency AES instruction extensions optimized for real-time processing across all AES modes and key sizes. AES-RV integrates three key innovations: high-bandwidth internal buffers for continuous data processing, a specialized AES unit with custom low-latency instructions, and a pipelined system supported by a ping-pong memory transfer mechanism. Implemented on the Xilinx ZCU102 SoC FPGA, AES-RV achieves up to 255.97 times speedup and up to 453.04 times higher energy efficiency compared to baseline and conventional CPU/GPU platforms. It also demonstrates superior throughput and area efficiency against state-of-the-art AES accelerators, making it a strong candidate for secure and high-performance embedded systems.
Due to the rapid growth in the number of Internet of Things (IoT) networks, cyber risk has increased exponentially; we therefore have to develop effective intrusion detection systems (IDSs) that can work well with highly imbalanced datasets. Otherwise, a high rate of missed threats can result, as traditional machine learning models tend to struggle to identify attacks when the normal data volume is much higher than the volume of attacks. For example, the dataset used in this study reveals a strong class imbalance, with 94,659 instances of the majority class and only 28 instances of the minority class, making it quite challenging to detect rare attacks accurately. The challenges presented in this research are addressed by hybrid sampling techniques designed to improve detection accuracy under data imbalance in IoT domains. After applying these techniques, we evaluate the performance of several machine learning models, such as Random Forest, Soft Voting, Support Vector Classifier (SVC), K-Nearest Neighbors (KNN), Multi-Layer Perceptron (MLP), and Logistic Regression, with respect to the classification of cyber-attacks. The obtained results indicate that the Random Forest model achieved the best performance, with a Kappa score of 0.9903, test accuracy of 0.9961, and AUC of 0.9994. Strong performance is also shown by the Soft Voting model, with an accuracy of 0.9952 and AUC of 0.9997, indicating the benefits of combining model predictions. Overall, this work demonstrates the value of hybrid sampling combined with robust model and feature selection for significantly improving IoT security against cyber-attacks, especially in highly imbalanced data environments.
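The abstract does not say which hybrid sampling combination was used; a common choice is SMOTE oversampling followed by random undersampling, sketched below with scikit-learn and imbalanced-learn on synthetic data of a similar imbalance ratio. The sampling ratios, forest size, and synthetic dataset are assumptions, not the study's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

# Synthetic stand-in for the IoT traffic features (~0.5% minority class).
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.995],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=42)

pipe = Pipeline(steps=[
    ("smote", SMOTE(sampling_strategy=0.1, k_neighbors=3, random_state=42)),
    ("under", RandomUnderSampler(sampling_strategy=0.5, random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
pipe.fit(X_tr, y_tr)  # resampling is applied only to the training fold

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print("Kappa:", cohen_kappa_score(y_te, pred))
print("AUC:  ", roc_auc_score(y_te, proba))
```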
In recent years, consumer Internet of Things (IoT) devices have become widely used in daily life. With the popularity of these devices, related security and privacy risks arise at the same time, as they collect user-related data and transmit it to various service providers. Although China accounts for a large share of the consumer IoT industry, current analyses of consumer IoT device traffic primarily focus on regions such as Europe, the United States, and Australia; research on China is currently rather rare. This study constructs the first large-scale dataset of consumer IoT device traffic in China. Specifically, we propose fine-grained traffic collection guidance covering the entire lifecycle of consumer IoT devices, gathering traffic from 70 devices spanning 36 brands and 8 device categories. Based on this dataset, we analyze traffic destinations and encryption practices across different device types during the entire lifecycle and compare the findings with results from other regions. Compared to other regions, our results show that consumer IoT devices in China rely more on domestic services and overall perform better in terms of encryption practices. However, there are still 20/35 devices that improperly conduct certificate validation, and 5/70 devices that use insecure encryption protocols. To facilitate future research, we open-source our traffic collection guidance and make our dataset publicly available.
This study proposes a mechanism for encrypting SD-JWT (Selective Disclosure JSON Web Token) Disclosures using Attribute-Based Encryption (ABE) to enable flexible access control on the basis of the Verifier's attributes. By integrating Ciphertext-Policy ABE (CP-ABE) into the existing SD-JWT framework, the Holder can assign decryption policies to Disclosures, ensuring information is selectively disclosed. The mechanism's feasibility was evaluated in a virtualized environment by measuring the processing times for SD-JWT generation, encryption, and decryption with varying Disclosure counts (5, 10, 20). Results showed that SD-JWT generation is lightweight, while encryption and decryption times increase linearly with the number of Disclosures. This approach is suitable for privacy-sensitive applications like healthcare, finance, and supply chain tracking but requires optimization for real-time use cases such as IoT. Future research should focus on improving ABE efficiency and addressing scalability challenges.
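A minimal sketch of where the encryption sits in the SD-JWT flow: each Disclosure is the usual base64url-encoded [salt, claim, value] triple, its SHA-256 digest goes into the signed token's `_sd` array, and the Disclosure itself is wrapped under a CP-ABE policy. The `cpabe_encrypt` call is a placeholder standing in for a real ABE library, and the policy string is an invented example.

```python
import base64, hashlib, json, os

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim: str, value) -> str:
    """SD-JWT Disclosure: base64url-encoded [salt, claim_name, claim_value]."""
    salt = b64url(os.urandom(16))
    return b64url(json.dumps([salt, claim, value]).encode())

def disclosure_digest(disclosure: str) -> str:
    """Digest that the Issuer places in the SD-JWT `_sd` array."""
    return b64url(hashlib.sha256(disclosure.encode()).digest())

def cpabe_encrypt(policy: str, plaintext: bytes) -> dict:
    """Placeholder for a real CP-ABE library call: only a Verifier whose
    attribute secret key satisfies `policy` should be able to decrypt."""
    return {"policy": policy, "ciphertext": plaintext}  # NOT real encryption

disclosure = make_disclosure("blood_type", "A+")
sd_entry = disclosure_digest(disclosure)               # embedded in the signed JWT
ciphertext = cpabe_encrypt("role:doctor AND org:hospitalA",
                           disclosure.encode())        # sent alongside the SD-JWT
print(sd_entry, ciphertext["policy"])
```

Because the digest is already bound into the signed token, encrypting only the Disclosure leaves the Issuer's signature untouched while restricting which Verifiers can read the underlying claim.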
In this thesis, a novel lightweight hybrid encryption algorithm named SEPAR is proposed, featuring a 16-bit block length and a 128-bit initialization vector. The algorithm is designed specifically for application in Internet of Things (IoT) technology devices. The design concept of this algorithm is based on the integration of a pseudo-random permutation function and a pseudo-random generator function. This intelligent combination not only enhances the algorithm's resistance against cryptographic attacks but also improves its processing speed. The security analyses conducted on the algorithm, along with the results of NIST statistical tests, confirm its robustness against most common and advanced cryptographic attacks, including linear and differential attacks. The proposed algorithm has been implemented on various software platform architectures. The software implementation was carried out on three platforms: 8-bit, 16-bit, and 32-bit architectures. A comparative analysis with the BORON algorithm on a 32-bit ARM processor indicates a performance improvement of 42.25%. Furthermore, implementation results on 8-bit and 16-bit microcontrollers demonstrate performance improvements of 87.91% and 98.01% respectively, compared to the PRESENT cipher.
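SEPAR's actual permutation and generator are not described in the abstract, so the sketch below is only a generic toy showing the overall structure such a hybrid can take: a keystream word derived from the key, the 128-bit IV, and a block counter is mixed into a 16-bit block, which is then passed through a small keyed permutation. Every concrete choice here (SHA-256 as the generator, a 3-round Feistel permutation, the constants) is an assumption and is not cryptographically vetted.

```python
import hashlib

def prg(key: bytes, iv: bytes, counter: int) -> int:
    """Derive a 16-bit keystream word from the key, 128-bit IV and block counter."""
    digest = hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:2], "big")

def prp(block: int, round_key: int) -> int:
    """Tiny keyed permutation on 16-bit blocks (3-round Feistel, toy only)."""
    left, right = block >> 8, block & 0xFF
    for r in range(3):
        f = (right * 167 + ((round_key >> (r * 8)) & 0xFF) + 13) & 0xFF
        left, right = right, left ^ f
    return (left << 8) | right

def encrypt_block(pt: int, key: bytes, iv: bytes, counter: int) -> int:
    """Generic hybrid structure: keystream mixing followed by a keyed permutation."""
    ks = prg(key, iv, counter)
    return prp(pt ^ ks, int.from_bytes(key[:3], "big"))

print(hex(encrypt_block(0x1234, key=b"k" * 16, iv=b"i" * 16, counter=0)))
```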