Wireless communication systems are inherently challenged by fading, path loss, and shadowing, which can introduce errors into data transmission. Traditional mitigation methods include power control, diversity, beamforming, and modulation techniques; however, the unpredictable nature of the wireless medium often limits their effectiveness. A newer approach to these challenges is the deployment of cascaded intelligent reflecting surfaces (IRSs), which consist of multiple passive elements that intelligently reflect electromagnetic waves to enhance signal quality. The Advanced Discrete Fourier Transform (ADFT) matrix scheme is explored for channel estimation, a novel method particularly suited to wireless networks using cascaded IRSs. The ADFT matrix scheme is significant for its efficiency in managing the common-link configuration of the cascaded channel coefficients, which effectively reduces pilot overhead. Compared with traditional channel estimation methods such as least squares (LS), maximum a posteriori (MAP), and linear minimum mean square error (LMMSE), the ADFT matrix scheme exhibits superior performance: it achieves a remarkable reduction in normalised mean squared error (NMSE) of 66% and 80% at signal-to-noise ratios (SNRs) of 20 dB and 15 dB, respectively. Furthermore, increasing the pilot length improves NMSE performance, with a noted 33% improvement as the base-station distance increases. Simulations demonstrate that as the number of IRS elements and the SNR increase, the ADFT matrix scheme consistently surpasses conventional methods. This advancement represents a significant step forward in wireless communication technology.
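The NMSE metric and the LS/LMMSE baselines named above can be illustrated with a minimal pilot-based estimation sketch. The unitary DFT pilot matrix, dimensions, and SNR below are illustrative assumptions; the ADFT scheme itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16            # number of cascaded channel coefficients (illustrative)
P = 16            # pilot length (illustrative)
snr_db = 20.0
snr = 10 ** (snr_db / 10)

# Unitary DFT pilot matrix: orthogonal columns keep pilot overhead low
F = np.exp(-2j * np.pi * np.outer(np.arange(P), np.arange(N)) / P) / np.sqrt(P)

h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2 * snr)
y = F @ h + noise

# Least-squares estimate: pseudo-inverse of the pilot matrix
h_ls = np.linalg.pinv(F) @ y
# LMMSE estimate: regularises the LS solution using the noise level
h_lmmse = np.linalg.solve(F.conj().T @ F + np.eye(N) / snr, F.conj().T @ y)

def nmse(est):
    return np.sum(np.abs(est - h) ** 2) / np.sum(np.abs(h) ** 2)

print(f"NMSE (LS):    {nmse(h_ls):.4f}")
print(f"NMSE (LMMSE): {nmse(h_lmmse):.4f}")
```

At 20 dB SNR both estimators land around an NMSE of 1/SNR; the point of the sketch is only to show what "NMSE at a given SNR" means for the comparison in the abstract.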
The Internet of Things (IoT) is a recent technology intended to facilitate everyday human life by providing the power to connect, control, and automate objects in the physical world. In this way, the IoT helps to improve how we produce and work in various areas (e.g. agriculture, industry, healthcare, and transportation). Basically, an IoT network comprises physical devices, equipped with sensors and transmitters, that are interconnected with each other and/or connected to the Internet. Its main objective is to gather and transmit data to a storage system such as a server or cloud to enable processing and analysis, ultimately facilitating rapid decision-making or enhancements to the user experience. In the realm of connected objects, an effective IoT data collection system plays a vital role, providing benefits such as real-time data monitoring, enhanced decision-making, and increased operational efficiency. However, because of the resource limitations of connected objects, such as low memory, limited battery capacity, or even single-use designs, IoT data collection presents several challenges, including scalability, security, interoperability, and flexibility, for both researchers and companies. The authors categorise current IoT data collection techniques and perform a comparative evaluation of these methods based on the topics analysed and elaborated. In addition, a comprehensive analysis of recent advances in IoT data collection is provided, highlighting different data types and sources, transmission protocols from connected sensors to a storage platform (server or cloud), the IoT data collection framework, and principles for streamlining the collection process. Finally, the most important research questions and future prospects for effective IoT data collection are summarised.
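The gather-and-transmit pipeline described above can be sketched minimally. The queue-based design, sensor names, and payload fields are illustrative assumptions, with an in-memory list standing in for the server or cloud storage.

```python
import json
import queue
import threading

readings = queue.Queue()   # transmission channel between sensors and collector
storage = []               # stand-in for a server/cloud database

def sensor(sensor_id, samples):
    """Simulated device: pushes JSON-encoded readings onto the channel."""
    for i in range(samples):
        readings.put(json.dumps({"sensor": sensor_id, "seq": i, "value": 20 + i}))

def collector(expected):
    """Drains the channel into storage for later processing and analysis."""
    for _ in range(expected):
        storage.append(json.loads(readings.get()))

threads = [threading.Thread(target=sensor, args=(f"s{k}", 3)) for k in range(2)]
threads.append(threading.Thread(target=collector, args=(6,)))
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(storage))  # 6 readings collected
```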
The detection and characterisation of electromagnetic signals within a specific frequency range, known as spectrum sensing, plays a crucial role in Cognitive Radio Networks (CRNs). CRNs adapt their communication parameters to the surrounding radio environment, thereby improving the efficiency and utilisation of the available radio spectrum. Spectrum sensing is particularly important in device-to-device (D2D) communication when operating independently of the cellular network infrastructure. The Medium Access Control (MAC) protocol coordinates device communication and ensures interference-free operation of the CRN coexisting with the primary cellular network. A spectrum sensing strategy at the MAC layer for cognitive D2D communication is proposed. The strategy reduces the overall sensing period allocated at the MAC layer by having each Cognitive D2D User (cD2DU) sense a smaller subset of the available channels while maintaining the same sensing time for cellular user detection at the physical layer. To achieve this, the concept of concurrent groups of D2D devices in proximity is introduced; the groups are formed using the unique IDs of cD2DUs during the device discovery stage. Each concurrent group senses a specific portion of the cellular user band in a shorter time, resulting in a reduced overall sensing period. In addition to mitigating traffic congestion by diverting data from the cellular network, the proposed strategy enables cD2DUs to sense multiple channels concurrently within the underutilised cellular user band. This leads to extended data transmission periods, increased network throughput, and effective offloading of the cellular network. The effectiveness of the proposed work is evaluated with respect to network throughput and transmission time. Simulation results confirm the effectiveness of the approach in improving spectrum utilisation and communication efficiency in multi-channel Cognitive D2D Networks (cD2DNs).
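The sensing-period reduction obtained by partitioning the channel set across concurrent groups can be sketched as follows. The channel count, group names, and per-channel sensing time are illustrative assumptions, not the paper's parameters.

```python
def assign_channels(channels, group_ids):
    """Round-robin split of the channel set across concurrent groups."""
    groups = {g: [] for g in group_ids}
    for i, ch in enumerate(channels):
        groups[group_ids[i % len(group_ids)]].append(ch)
    return groups

channels = list(range(12))            # 12 cellular channels (illustrative)
groups = assign_channels(channels, ["G1", "G2", "G3"])
per_channel_time = 2.0                # ms sensing time per channel (illustrative)

# One device sensing every channel vs. groups sensing their subsets in parallel
single_device_period = len(channels) * per_channel_time
concurrent_period = max(len(c) for c in groups.values()) * per_channel_time
print(single_device_period, concurrent_period)  # 24.0 ms drops to 8.0 ms
```

With three concurrent groups each device senses only a third of the band, so the overall sensing period shrinks by the same factor, leaving more of the frame for data transmission.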
With the advance of climate change and the local effects of human activity, it has become of utmost importance to sense spatially extended natural and artificial physical phenomena in order to predict, monitor, and mitigate hazardous events. Wireless sensor networks are suitable for observing such phenomena, for example wildfires, floods, or landslides, without human supervision, thanks to affordable devices, independent power sources, wireless communication, and a broad range of sensors. During normal operation only a few devices fail, whereas during an event a multitude of devices can fail, leaving further devices disconnected and degrading the network's sensing capabilities. The communication requirements of such applications are difficult to fulfil with general routing protocols: the monitored event is rare compared to the network's lifetime, yet its occurrence causes multiple, gradual node failures while the network is still expected to perform reliably. Since available routing protocols fail to address every aspect of such applications, the authors propose the Reliable Resilient Multipath Routing Protocol, designed to construct multiple disjoint paths from each device to a distinguished node, called the sink. The protocol employs proactive and reactive network management techniques to increase connection redundancy and maintain connectivity during failures. To verify the proposed protocol end-to-end, the authors evaluated the supported parameters, performed comparative simulations against routing algorithms known from the literature, and provided estimates for a realistic deployment.
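The idea of multiple disjoint paths to the sink can be sketched with a simple greedy edge-disjoint search. This is an illustrative stand-in, not the proposed protocol, which additionally uses proactive and reactive network management; the topology below is a made-up six-node mesh.

```python
from collections import deque

def bfs_path(adj, src, dst, banned):
    """Shortest path from src to dst avoiding edges in `banned` ((u, v) pairs)."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and (u, v) not in banned:
                prev[v] = u
                q.append(v)
    return None

def disjoint_paths(adj, src, sink, k=2):
    """Greedy edge-disjoint routes: ban each found path's edges, then repeat."""
    banned, paths = set(), []
    for _ in range(k):
        p = bfs_path(adj, src, sink, banned)
        if p is None:
            break
        paths.append(p)
        banned |= {(u, v) for u, v in zip(p, p[1:])}
        banned |= {(v, u) for u, v in zip(p, p[1:])}
    return paths

# Small mesh: node 0 is a sensor device, node 5 the sink
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
print(disjoint_paths(adj, 0, 5))  # two disjoint routes, e.g. [[0,1,3,5],[0,2,4,5]]
```

If either route loses a node during an event, traffic can fall back to the other, which is the redundancy the protocol exploits (greedy search is not guaranteed optimal; Suurballe-style algorithms give guarantees).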
In this paper, we consider a scenario in which two wireless body area networks (WBANs) interfere with each other, from a game-theoretic perspective. In particular, we envision the two WBANs playing a potential game to enhance their performance by decreasing their mutual interference, which extends the sensors' battery lifetime and reduces the number of re-transmissions. We derive the conditions required for the game to be a potential game and characterise its associated Nash equilibrium (NE). Specifically, we formulate a game in which each WBAN has three strategies; depending on the payoff of each strategy, the game can be designed to achieve a desired NE. Furthermore, we employ a learning algorithm to reach that NE: the fictitious play (FP) algorithm, a distributed method that the WBANs can use to approach the NE. The simulation results show that the NE is mainly a function of the power cost parameter and a reliability factor that we set depending on each WBAN's setting (patient). However, the power cost factor is more dominant than the reliability factor under the linear cost function formulation used throughout this work.
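The FP dynamics can be sketched for two players with three strategies each. The payoff form, power levels, cost, and reliability values below are illustrative assumptions, not the paper's formulation; they are chosen so that the power cost term dominates, mirroring the qualitative finding above.

```python
import numpy as np

# Strategies: 0 = high power, 1 = medium, 2 = low (illustrative)
power = np.array([3.0, 2.0, 1.0])
cost = 0.8          # power cost parameter (illustrative)
rel = 1.5           # reliability factor (illustrative)

def payoff(my_s, other_s):
    # Reliability degrades with the other WBAN's interference power,
    # and transmitting costs energy linearly in the chosen power level.
    return rel * power[my_s] / (1.0 + power[other_s]) - cost * power[my_s]

# Fictitious play: each WBAN best-responds to the opponent's empirical mix
counts = [np.ones(3), np.ones(3)]           # initial (uniform) beliefs
for _ in range(200):
    for p in (0, 1):
        belief = counts[1 - p] / counts[1 - p].sum()
        utils = [sum(belief[s2] * payoff(s1, s2) for s2 in range(3))
                 for s1 in range(3)]
        counts[p][int(np.argmax(utils))] += 1

print([int(np.argmax(c)) for c in counts])  # → [2, 2]: both settle on low power
```

With this cost parameter the expected utility is decreasing in transmit power regardless of the opponent's mix, so FP drives both WBANs to the low-power strategy, illustrating how the cost term can dominate the reliability term.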
Nowadays, modern radar systems increase their target detection capabilities by processing pulses coherently. On the other hand, modern jammers based on digital radio frequency memory can also operate coherently and can deceive radars even with very low effective radiated power. These jammers, which can store the radar's pulses, can replay previously stored pulses during electronic deception without waiting for the radar's latest pulse, in other words, before the new pulse is received. If the radar does not change its parameters from pulse to pulse, such smart jamming techniques can be very effective. In this article, the authors propose a smart binary phase-coding method for pulse compression radar as an electronic protection technique against repeater jamming. This approach further improves the target detection capability of modern radar systems that use coherent integration in the receiver. The proposed method can provide high protection against digital radio frequency memory-based repetitive range deception techniques without compromising the radar's target detection capability. In the simulations, the traditional approach, in which the same code is used without changing from pulse to pulse, is compared with the approach using code sets obtained by the smart binary phase-coding method in intra-pulse modulation. The results show that the proposed method can significantly improve the isolation against deception jamming and the target detection capability simultaneously.
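The benefit of changing the binary phase code from pulse to pulse can be sketched with a matched-filter comparison: a replayed previous pulse no longer matches the current pulse's reference code, so it does not compress. The Barker-13 and random codes below are illustrative, not the paper's smart code sets.

```python
import numpy as np

# Code used on the previous pulse (Barker-13) and a different binary code
# used on the current pulse (illustrative random choice)
code_prev = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
rng = np.random.default_rng(1)
code_curr = rng.choice([-1.0, 1.0], size=13)

def compressed_peak(rx_code, ref_code):
    """Peak of the matched-filter (cross-correlation) output."""
    return np.abs(np.correlate(rx_code, ref_code, mode="full")).max()

true_echo = compressed_peak(code_curr, code_curr)  # genuine current-pulse echo
jammer = compressed_peak(code_prev, code_curr)     # replayed previous pulse
print(true_echo, jammer)  # echo compresses to the full 13; jammer stays lower
```

The genuine echo attains the full compression gain of 13, while the replayed pulse's correlation peak is bounded by the cross-correlation of the two codes, which is the isolation against deception that pulse-to-pulse code agility provides.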