The emergence of the Internet of Things (IoT) has triggered a massive digital transformation across numerous sectors. This transformation requires efficient wireless communication and connectivity, which depend on the optimal utilization of the available spectrum resources. Given the limited availability of these resources, spectrum sharing has emerged as a favored solution for empowering IoT deployment and connectivity; adequate planning of spectrum utilization is thus essential to pave the way for the next generation of IoT applications, including 5G and beyond. This article presents a comprehensive study of the prevalent wireless technologies that operate in the shared spectrum, with a primary focus on spectrum-sharing solutions. It highlights the security and privacy concerns that arise when IoT devices access the shared spectrum. The survey examines the benefits and drawbacks of various spectrum-sharing technologies and their suitability for different IoT applications. Lastly, it identifies future IoT obstacles and suggests potential research directions to address them.
In autonomous driving systems, perception is pivotal, relying chiefly on sensors such as LiDAR and cameras for environmental awareness. LiDAR, valued for its detailed depth perception, is increasingly integrated into autonomous vehicles. In this article, we analyze the robustness of four detection models that consume LiDAR input against adversarial points under physical constraints. We first introduce an attack technique that, by adding only a limited number of physically constrained adversarial points above a vehicle, can make the vehicle undetectable by these models. Experiments reveal that adversarial points degrade the detection capabilities of both LiDAR-only and LiDAR–camera fusion models, with attack success rates tending to rise as more adversarial points are added. Notably, voxel-based models are more susceptible to deception by these adversarial points. We also investigate how the distance and angle of the added adversarial points affect the attack success rate: in general, the success rate is higher the farther away the victim object to be hidden is and the closer it is to the front of the LiDAR. Additionally, we experimentally demonstrate that our generated adversarial points possess good cross-model adversarial transferability, and we validate the effectiveness of our proposed optimization method through ablation studies. Furthermore, we propose a new plug-and-play, model-agnostic defense method based on the concept of point smoothness. The ROC curve of this defense method yields an AUC of approximately 0.909, demonstrating its effectiveness.
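To make the defense idea concrete, here is a minimal sketch of a smoothness-based filter, assuming the paper's notion of point smoothness can be approximated by each point's deviation from the centroid of its k nearest neighbors. The function names, the brute-force neighbor search, and the outlier threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smoothness_scores(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Score each 3D point by its distance to the centroid of its k nearest
    neighbors; large scores suggest locally non-smooth (potentially
    adversarial) points. Brute-force O(n^2) search, kept simple for clarity."""
    diffs = points[:, None, :] - points[None, :, :]   # (n, n, 3) pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)            # (n, n) pairwise distances
    np.fill_diagonal(dists, np.inf)                   # exclude each point itself
    knn_idx = np.argsort(dists, axis=1)[:, :k]        # indices of k nearest neighbors
    centroids = points[knn_idx].mean(axis=1)          # (n, 3) neighborhood centroids
    return np.linalg.norm(points - centroids, axis=1)

def filter_points(points: np.ndarray, k: int = 8, z: float = 3.0) -> np.ndarray:
    """Drop points whose smoothness score exceeds mean + z * std
    (an illustrative threshold, not the paper's calibrated one)."""
    s = smoothness_scores(points, k)
    return points[s < s.mean() + z * s.std()]

# Example: a dense cluster plus a few isolated "adversarial" points.
rng = np.random.default_rng(0)
cloud = rng.normal(scale=0.1, size=(200, 3))
spoofed = rng.uniform(2.0, 3.0, size=(5, 3))
cleaned = filter_points(np.vstack([cloud, spoofed]))
print(cleaned.shape)  # most spoofed points should be removed
```

Points that sit far from their local neighborhood score high and are dropped; a real deployment would tune k and the threshold against the detector's operating point.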
With the prevalence of various sensors and smart devices in people’s daily lives, numerous types of information are being sensed. While such information enables critical and convenient services, users gradually expose every piece of their behavior and activities. Researchers are aware of these privacy risks and have been working on preserving privacy while sensing human activities. This survey reviews existing studies on privacy-preserving human activity sensing. We first introduce the sensors and the captured private information related to human activities. We then propose a taxonomy that structures the methods for preserving private information along two aspects: individual and collaborative activity sensing. For each aspect, the methods are classified into three levels: signal, algorithm, and system. Finally, we discuss the open challenges and provide future directions.
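As one concrete instance of signal-level protection, raw sensor readings can be perturbed on-device before release, for example with Laplace noise in the style of local differential privacy. The sketch below is a generic illustration on a hypothetical accelerometer trace, not the mechanism of any single surveyed system.

```python
import numpy as np

def laplace_perturb(signal: np.ndarray, sensitivity: float, epsilon: float,
                    rng: np.random.Generator) -> np.ndarray:
    """Add zero-mean Laplace noise with scale sensitivity/epsilon to each
    sample, the standard differential-privacy mechanism. Smaller epsilon
    means stronger privacy but a noisier released signal."""
    scale = sensitivity / epsilon
    return signal + rng.laplace(loc=0.0, scale=scale, size=signal.shape)

# Illustrative use on a hypothetical accelerometer trace (units: m/s^2).
rng = np.random.default_rng(42)
t = np.linspace(0, 10, 500)
accel = np.sin(2 * np.pi * 1.5 * t)   # a 1.5 Hz walking-like rhythm
private = laplace_perturb(accel, sensitivity=2.0, epsilon=1.0, rng=rng)
```

The privacy/utility trade-off is governed entirely by epsilon here, which is what makes signal-level methods attractive: they require no change to downstream activity-recognition algorithms.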
Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language comprehension, human-like text generation capabilities, contextual awareness, and robust problem-solving skills, making them invaluable in various domains (e.g., search engines, customer support, translation). Meanwhile, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks. This paper explores the intersection of LLMs with security and privacy. Specifically, we investigate how LLMs positively impact security and privacy, the potential risks and threats associated with their use, and the inherent vulnerabilities within LLMs. Through a comprehensive literature review, we categorize the surveyed papers into “The Good” (beneficial LLM applications), “The Bad” (offensive applications), and “The Ugly” (vulnerabilities of LLMs and their defenses). Our review yields several interesting findings. For example, LLMs have been shown to enhance code security (code vulnerability detection) and data privacy (data confidentiality protection), outperforming traditional methods. However, they can also be harnessed for various attacks (particularly user-level attacks) owing to their human-like reasoning abilities. We further identify areas that require additional research effort. For example, research on model and parameter extraction attacks remains limited and often theoretical, hindered by the scale and confidentiality of LLM parameters. Safe instruction tuning, a recent development, requires more exploration. We hope that our work can shed light on LLMs’ potential both to bolster and to jeopardize cybersecurity.
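To make the “Good” category concrete, the sketch below shows one way an LLM might be prompted for code vulnerability detection. Here query_llm is a hypothetical stand-in for whatever chat-completion client is available, and the prompt wording is illustrative rather than taken from any surveyed paper.

```python
# Hypothetical client: replace query_llm with a real chat-completion call.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM client here")

VULN_PROMPT = (
    "You are a security auditor. Analyze the following {language} code for "
    "vulnerabilities (e.g., injection, buffer overflow, unsafe deserialization). "
    "Answer VULNERABLE or SAFE, then give a one-line justification.\n\n{code}"
)

def detect_vulnerability(code: str, language: str = "C") -> str:
    """Format the audit prompt and return the LLM's verdict as raw text."""
    return query_llm(VULN_PROMPT.format(language=language, code=code))

# Example: a classic unbounded strcpy, which a capable model should flag.
snippet = 'void f(char *s) { char buf[8]; strcpy(buf, s); }'
# print(detect_vulnerability(snippet))  # requires a real client
```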
Multi-agent reinforcement learning holds tremendous potential for revolutionizing intelligent systems across diverse domains. However, it also faces a set of formidable challenges, including the effective allocation of credit to each agent, real-time collaboration among heterogeneous agents, and the design of an appropriate reward function to guide agent behavior. To handle these issues, we propose an innovative solution named the Graph Attention Counterfactual Multiagent Actor–Critic algorithm (GACMAC). The algorithm comprises several key components: First, it employs a multi-agent actor–critic framework with counterfactual baselines to assess the individual actions of each agent. Second, it integrates a graph attention network to enhance real-time collaboration among agents, enabling heterogeneous agents to share information effectively while performing tasks. Third, it incorporates prior human knowledge through a potential-based reward shaping method, thereby improving the convergence speed and stability of the algorithm. We evaluate GACMAC on the StarCraft Multi-Agent Challenge (SMAC), a recognized benchmark for multi-agent algorithms, where it achieves a win rate of over 95%, comparable to current state-of-the-art multi-agent controllers.
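The third component follows the standard potential-based shaping form of Ng et al. (1999), F(s, s') = γΦ(s') − Φ(s), which is known to leave the optimal policy unchanged. The sketch below illustrates that form with a hypothetical potential function; it is not the exact Φ used in GACMAC.

```python
def shaped_reward(env_reward: float, phi_s: float, phi_s_next: float,
                  gamma: float = 0.99) -> float:
    """Potential-based shaping: add F(s, s') = gamma * phi(s') - phi(s)
    to the environment reward. Because F is a potential difference,
    the optimal policy is provably unchanged (Ng et al., 1999)."""
    return env_reward + gamma * phi_s_next - phi_s

# Hypothetical potential encoding prior knowledge, e.g., remaining enemy
# health in a SMAC-like scenario (lower is better, hence the negation).
def potential(state: dict) -> float:
    return -float(state["enemy_health"])

s, s_next = {"enemy_health": 100.0}, {"enemy_health": 90.0}
r = shaped_reward(env_reward=0.0, phi_s=potential(s), phi_s_next=potential(s_next))
print(r)  # 0.99 * (-90) - (-100) = 10.9 -> progress toward the goal is rewarded
```

Encoding prior knowledge in Φ rather than in the raw reward is what lets the shaping accelerate convergence without biasing the final policy.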
Compared to 2D imaging data, 4D light field (LF) data retains richer scene structure information, which can significantly improve a computer’s perception capability in tasks such as depth estimation, semantic segmentation, and LF rendering. However, there is an inherent trade-off between spatial and angular resolution during LF image acquisition. To overcome this problem, researchers have increasingly focused on light field super-resolution (LFSR). Traditional solutions achieve LFSR through various optimization frameworks, such as Bayesian and Gaussian models. Deep learning-based methods have become more popular than conventional ones because they offer better performance and more robust generalization. In this paper, we divide existing approaches into conventional methods and deep learning-based methods, and we discuss these two branches in light field spatial super-resolution (LFSSR), light field angular super-resolution (LFASR), and light field spatial and angular super-resolution (LFSASR), respectively. Subsequently, this paper introduces the primary public datasets and analyzes the performance of prevalent approaches on them. Finally, we discuss potential innovations in LFSR to advance progress in this research field.
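For orientation on the spatial branch (LFSSR), the simplest non-learned baseline upsamples each sub-aperture image independently. The sketch below assumes a 4D LF tensor laid out as (angular u, angular v, height, width) and uses cubic-spline interpolation from scipy; it is an illustrative baseline, not any surveyed method.

```python
import numpy as np
from scipy.ndimage import zoom

def lf_spatial_upsample(lf: np.ndarray, factor: int = 2) -> np.ndarray:
    """Cubic-spline (order-3) spatial upsampling of a 4D light field shaped
    (u, v, h, w): each sub-aperture image is enlarged independently,
    leaving the angular resolution untouched."""
    U, V, H, W = lf.shape
    out = np.empty((U, V, H * factor, W * factor), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            out[u, v] = zoom(lf[u, v], factor, order=3)
    return out

# Example: a random 5x5-view light field with 32x32-pixel views.
lf = np.random.rand(5, 5, 32, 32)
print(lf_spatial_upsample(lf, 2).shape)  # (5, 5, 64, 64)
```

Because this baseline ignores the correlations across views, learned LFSSR methods that exploit the angular dimension typically outperform it, which is precisely the gap the surveyed literature addresses.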