The software-defined optical network (SDON) is a revolutionary approach in the field of optical networks. The separation of the control plane and the data plane in software-defined networking (SDN) provides enhanced security and simplified network administration. Nevertheless, performance and control-plane scalability remain significant issues in SDN. SDN performance can be evaluated using parameters such as burst loss, delay, channel occupancy, packet loss, throughput, and average response time, while the number of messages exchanged between the data plane and the control plane serves as a metric for controller scalability. As the network load increases, the controller experiences a higher flow of messages, which causes delay and burst loss during burst transmission. Occasionally, bursts exceed the capacity of the fixed-size burstifier and are discarded because identifying a suitable route for the burst takes too long. Hence, it is essential to minimize the volume of messages exchanged between the control plane and the data plane to improve performance and controller scalability. In this paper, we propose a scalable SDN optical network architecture that minimizes the number of messages exchanged between the data plane and the control plane. We introduce mechanisms such as channel reservation, transmission cycles, and guard time between cycles to enhance both the speed and the quality of burst transmission. Prior to transmission, resources or channels are allocated to bursts to minimize the possibility of burst collision and loss. The data plane comprises an optical burst switching (OBS) network, and the flow table entries are updated periodically to minimize inter-plane communication. We perform simulations to evaluate and compare the performance of the proposed architecture with the existing state-of-the-art architecture reported in the literature. The proposed architecture outperforms the existing state-of-the-art in terms of burst loss, delay, channel occupancy, packet loss, throughput, average response time, and the number of messages exchanged between the data plane and the control plane. Experimental results indicate a 41% reduction in mean burst loss probability and a 40.5% reduction in mean burst sending delay compared to existing architectures. Additionally, 42.1% fewer messages are exchanged between the control plane and the data plane than in existing architectures.
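To make the cycle-and-reservation idea concrete, the following minimal Python sketch schedules bursts onto pre-reserved channels within fixed-length transmission cycles separated by a guard time. The cycle length, guard time, channel count, and scheduling policy are illustrative assumptions for exposition, not the paper's actual implementation.

# Minimal sketch of cycle-based channel reservation with guard time.
# All constants (CYCLE_LEN, GUARD_TIME, NUM_CHANNELS) are assumed values.

CYCLE_LEN = 10.0   # duration of one transmission cycle (ms, assumed)
GUARD_TIME = 0.5   # idle gap between consecutive cycles (ms, assumed)
NUM_CHANNELS = 4

# next_free[c] = earliest time channel c can carry a new burst
next_free = [0.0] * NUM_CHANNELS

def reserve(arrival: float, duration: float):
    """Reserve a channel for a burst before transmission.

    Returns (channel, start_time), or None if the burst is longer than
    a cycle, in which case it would be dropped.
    """
    best = None
    for c in range(NUM_CHANNELS):
        start = max(arrival, next_free[c])
        # Align the start so the whole burst fits inside one cycle;
        # otherwise push it to the beginning of the next cycle.
        offset = start % (CYCLE_LEN + GUARD_TIME)
        if offset + duration > CYCLE_LEN:
            start += (CYCLE_LEN + GUARD_TIME) - offset
        if best is None or start < best[1]:
            best = (c, start)
    if best is None or duration > CYCLE_LEN:
        return None  # burst longer than a cycle: cannot be scheduled
    c, start = best
    next_free[c] = start + duration
    return c, start

# Example: three bursts arriving close together get non-colliding slots.
for arrival, dur in [(0.0, 3.0), (0.5, 9.0), (1.0, 4.0)]:
    print(reserve(arrival, dur))

Because every burst obtains a channel and a start time before it is sent, contention is resolved in advance, which is the mechanism the architecture uses to cut burst collisions and the associated control-plane signaling.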
Three main kinds of underwater wireless communication, which employ acoustic waves, radio frequency, and optical waves, have attracted intensive research interest in recent years. Among them, underwater optical wireless communication (UOWC) is characterized by high propagation speed and large transmission bandwidth. However, optical waves in the underwater environment are significantly affected by absorption and scattering, which limit their transmission range. To enhance the performance of UOWC, designing a transmission- and energy-efficient routing algorithm has become an unavoidable issue. In this paper, a transmission-distance-adaptive dual-hop (TDAD) routing algorithm is proposed for underwater optical wireless networks (UOWNs) to improve their packet-delivery and energy-consumption efficiency. Unlike existing routing algorithms designed for UOWNs, which preset the transmission range of network nodes, the proposed TDAD algorithm adaptively selects the transmission range for each node according to the diversity of heterogeneous service requests and employs location and energy information in its dual-hop routing procedure. Simulation results indicate that the proposed TDAD algorithm remarkably improves the packet delivery rate with more balanced energy consumption when compared to the deviation-angle-based single-hop (DAS) algorithm and the distributed sector-based (DS) routing algorithm.
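A TDAD-style relay choice can be sketched in a few lines of Python: the transmission range is picked per node from the service request, and the next hop is scored on both geographic progress toward the sink and residual energy. The field names, the range-to-priority mapping, and the weighted score are assumptions made for clarity, not the published algorithm.

import math

def choose_next_hop(node, sink, neighbors, request_priority):
    """Pick a relay within an adaptively chosen transmission range.

    node, sink: (x, y, z) positions; neighbors: list of dicts with
    'pos' and 'energy' (residual energy, units assumed).
    """
    # Adaptive range: delay-sensitive requests use a longer optical
    # range at higher energy cost (assumed mapping).
    tx_range = 20.0 if request_priority == "urgent" else 10.0

    best, best_score = None, -1.0
    for nb in neighbors:
        if math.dist(node, nb["pos"]) > tx_range:
            continue  # outside the selected transmission range
        progress = math.dist(node, sink) - math.dist(nb["pos"], sink)
        if progress <= 0:
            continue  # relay must move the packet toward the sink
        # Weighted score: geographic progress balanced against energy,
        # so forwarding load spreads across well-charged nodes.
        score = 0.6 * progress + 0.4 * nb["energy"]
        if score > best_score:
            best, best_score = nb, score
    return best

sink = (0.0, 0.0, 0.0)
node = (30.0, 0.0, 0.0)
neighbors = [
    {"pos": (18.0, 0.0, 0.0), "energy": 2.0},
    {"pos": (22.0, 2.0, 0.0), "energy": 9.0},
]
print(choose_next_hop(node, sink, neighbors, "urgent"))

In this toy example the nearer, less-advanced neighbor wins because of its much higher residual energy, which is exactly the balancing effect the abstract credits for TDAD's more even energy consumption.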
Future 6G communication systems are envisioned to expand their carrier frequency to the THz region, where a broad unexplored region of spectrum is available. With this expansion, THz wireless communication has the potential to achieve ultra-high data transmission rates of up to 100 Gbit/s. However, as large amounts of data are transmitted in an open wireless environment, there are significant concerns regarding communication security due to susceptibility to eavesdropping, interception, and jamming. In this work, we propose a secure approach for THz wireless communication based on spatial wave mixing and flexible beam steering. To achieve this, two frequency-modulated THz waves, which are generated by photonic THz sources and carry encrypted information with true randomness, are mixed at a THz envelope detector with an exclusive-OR (XOR) logic operation. We analyzed the possible spatial locations of the THz detector to ensure a secure microcell network deployment. Our results demonstrate that the size of the decryptable region depends directly on the directivity and width of the emitted THz beam. To address this, we developed an array antenna with integrated uni-traveling-carrier photodiodes (UTC-PDs), which is capable of generating THz waves while also improving the flexibility of beam pointing, allowing for greater control over the location and size of the decryptable region. By controlling fiber-optic delay lines, we successfully demonstrated that the directional gain of a 200 GHz wave is increased by 8 dB through a 1 × 3 UTC-PD-integrated planar bowtie antenna (PBA) array, together with continuous beam steering from -20° to 10°. Additionally, using a 1 × 4 UTC-PD-integrated PBA array to emulate two encryption transmitters and a Fermi-level managed barrier diode to detect the spatially mixed THz waves, we successfully demonstrated real-time 200 Mbit/s location-based decryption in the 200 GHz band. These results indicate that the proposed scheme is feasible for secure THz communication and would be a powerful candidate to mitigate security risks in 6G microcell networks.
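The logical core of the scheme, stripped of its photonics, is one-time-pad XOR mixing: one beam carries a truly random key stream, the other carries the key XOR'd with the data, and only a receiver positioned where both beams overlap can recover the plaintext. The short Python sketch below illustrates this bit-level idea; it is a digital stand-in for the analog envelope-detector mixing described in the abstract, and the payload string is invented for the example.

import secrets

data = b"6G THz payload"
key = secrets.token_bytes(len(data))              # true-random key stream

beam_a = key                                       # transmitter 1
beam_b = bytes(k ^ d for k, d in zip(key, data))   # transmitter 2

# Inside the decryptable region: the detector sees both beams and the
# envelope detection effects an XOR, recovering the data.
recovered = bytes(a ^ b for a, b in zip(beam_a, beam_b))
assert recovered == data

# Outside the region: a single intercepted beam is statistically random
# and reveals nothing about the payload without the key beam.
print(beam_b.hex())

Because either beam alone is uniformly random, an eavesdropper outside the spatial overlap of the two steered beams gains no information, which is why the size and position of the decryptable region are the security-critical parameters the beam-steering array is designed to control.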
The evolution of data-intensive services and applications continues to drive the need for higher data rates in wireless communication systems, consequently depleting the radio frequency (RF) spectrum. Due to the unlicensed and enormous bandwidth available in the visible light (VL) spectrum, the emergence of visible light communication (VLC) has been considered a potential solution to alleviate the constraints associated with RF spectrum scarcity. However, the line-of-sight requirement and the inability of VL to penetrate opaque obstacles remain daunting challenges in realizing a wider coverage area. The incorporation of cooperative communication into VLC systems serves as one of the primary solutions to address this challenge. Though various investigations are currently being conducted in this domain, a holistic report of the advances, solution approaches, and design challenges has not been captured in the open literature. Therefore, in this paper, our main goal is to present a review of the state-of-the-art research on cooperative VLC systems. Firstly, we provide a background discussion to establish the relationship between the various components of cooperative VLC systems from a theoretical and analytical perspective. Secondly, we categorize the contributions in this direction under media access control (MAC), hybrid VLC-RF, power line communication-VLC (PLC-VLC), and VLC with energy harvesting. Based on the established categories, we identify the system design and evaluation methods, optimization problems, solution approaches adopted to tackle the problems, and their limitations. Thirdly, we identify insights obtained from the reviewed papers that could serve as guidelines for practical system design. Finally, design challenges and open areas for future research are identified.
The rapid growth of Data Center Network (DCN) traffic has brought new challenges, such as limited bandwidth, high latency, and packet loss, to existing DCNs based on electrical switches. Because of its theoretically unlimited bandwidth and faster data transmission speeds, optical switching can overcome the problems of electrically switched DCNs, and numerous research works have been devoted to wired optical DCNs. However, the static, fixed topologies of DCNs based on optical interconnects significantly limit their flexibility, scalability, and reconfigurability in providing adaptive bandwidth for traffic with heterogeneous characteristics. In this study, we propose and evaluate the performance of a reconfigurable optical wireless DCN architecture based on distributed Software-Defined Networking (SDN), Deep Reinforcement Learning (DRL), Semiconductor Optical Amplifiers (SOAs), and Arrayed Waveguide Grating Routers (AWGRs). Our architecture is called ODRAD (Optical Wireless DCN Dynamic-bandwidth Reconfiguration with AWGR and Deep Reinforcement Learning). A Mininet simulation model is established to further verify the reconfigurability of the ODRAD network for various server scales. Based on experimental verification, ODRAD achieves low average end-to-end server latency under a load of 99%. Comparison results demonstrate a 17.36% improvement in packet latency performance compared to RotorNet and a 15.21% improvement compared to OPSquare at a load of 99% as the ODRAD network scales from 2,560 to 40,960 servers. Furthermore, ODRAD exhibits effective throughput across different routing protocols, DCN scales, and loads.
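The following deliberately tiny Python sketch stands in for the learning-driven reconfiguration loop: an epsilon-greedy agent picks one of a few candidate wavelength configurations for a rack pair and learns from observed latency. The candidate configurations, the toy latency model, and all constants are assumptions for illustration; the real ODRAD system uses a full DRL agent over SDN-collected network state.

import random

CONFIGS = [1, 2, 4]            # candidate wavelengths for a rack pair
q = {c: 0.0 for c in CONFIGS}  # running value estimate per config
counts = {c: 0 for c in CONFIGS}
EPS = 0.1                      # exploration rate (assumed)

def observed_latency(wavelengths, load):
    # Toy environment: latency falls as allocated capacity rises.
    return load / wavelengths + random.gauss(0, 0.05)

random.seed(7)
for step in range(500):
    load = 0.99                                   # heavy-load scenario
    c = (random.choice(CONFIGS) if random.random() < EPS
         else max(CONFIGS, key=lambda k: q[k]))   # epsilon-greedy choice
    reward = -observed_latency(c, load)           # lower latency = better
    counts[c] += 1
    q[c] += (reward - q[c]) / counts[c]           # incremental mean update

print(max(CONFIGS, key=lambda k: q[k]))           # learned best config

The point of the sketch is the control loop, not the learner: the agent repeatedly observes load, reconfigures the optical fabric, and is rewarded for reduced latency, which is the same feedback structure ODRAD applies at data-center scale.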
Space-division multiplexed elastic optical networks (SDM-EONs) utilizing multi-core fiber (MCF) have been considered to address the growing traffic demand in transport networks. The quality of transmission (QoT) of MCF-based SDM-EONs is affected by inter-core and intra-core physical layer impairments (PLIs). This paper proposes an inter-core crosstalk-aware and intra-core impairment-aware algorithm for modulation, core, and spectrum assignment (CIA-MCSA) in MCF-based SDM-EONs. The CIA-MCSA considers PLI estimation in a dynamic traffic scenario and allocates new lightpaths using strategies that avoid blocking caused by insufficient QoT of both the new lightpath and the already active lightpaths. Using numerical simulation, the performance of the CIA-MCSA is compared with five algorithms proposed by other authors, considering two distinct network topologies, heterogeneous traffic demands, and different levels of inter-core crosstalk. The results show that, compared with the most competitive of the other algorithms, CIA-MCSA reduces the request blocking probability by at least 33.87% on average, reduces the bandwidth blocking probability by at least 20.74% on average, and increases network spectrum utilization by at least 3.04%.
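A minimal sketch of crosstalk-aware core/spectrum assignment in the spirit of CIA-MCSA is shown below in Python: a first-fit search over cores that rejects any placement whose slots are already lit on an adjacent core, a crude binary proxy for inter-core crosstalk. The 7-core hexagonal adjacency, slot count, and all-or-nothing rejection rule are illustrative assumptions, not the paper's QoT estimator.

NUM_SLOTS = 16
# 7-core MCF, hexagonal layout: core 0 is central, cores 1-6 form a ring.
ADJACENT = {0: [1, 2, 3, 4, 5, 6],
            1: [0, 2, 6], 2: [0, 1, 3], 3: [0, 2, 4],
            4: [0, 3, 5], 5: [0, 4, 6], 6: [0, 5, 1]}
occupied = {c: [False] * NUM_SLOTS for c in ADJACENT}

def assign(num_slots_needed):
    """Return (core, first_slot) or None if the request is blocked."""
    for core in ADJACENT:
        for start in range(NUM_SLOTS - num_slots_needed + 1):
            slots = range(start, start + num_slots_needed)
            if any(occupied[core][s] for s in slots):
                continue  # spectrum already in use on this core
            # Crosstalk check: reject if an adjacent core is active
            # on any of the same slots.
            if any(occupied[adj][s] for adj in ADJACENT[core]
                   for s in slots):
                continue
            for s in slots:
                occupied[core][s] = True
            return core, start
    return None  # blocked: no crosstalk-safe placement exists

print(assign(10))  # (0, 0): central core, slots 0-9
print(assign(10))  # None: every candidate overlaps core 0 -> blocked

The second request is blocked even though cores 1-6 have free spectrum, showing how a crosstalk-aware allocator trades raw spectrum availability for guaranteed QoT of both the new and the already active lightpaths.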
The development of 5G/F5G technology is leading to massive numbers of applications accessing backbone networks, which therefore need to be upgraded. The semi-filterless elastic optical network (semi-FEON) is a suitable technology for upgrading backbone networks cheaply and gradually. In semi-FEON, the routing, modulation, and spectrum assignment (RMSA) problem is one of the key issues. In this paper, we study the dynamic RMSA problem in semi-FEON and propose an RMSA algorithm with three innovations: a K-shortest-subnet-paths (KSSP) algorithm is designed to search candidate paths in semi-FEON, a load-balancing-least-resources (LBLR) policy is introduced to re-sort the candidate paths, and a maximum-occupied-neighbors (MON) rule is proposed to assign spectrum resources to connection requests. Simulation results show that the proposed KSSP-LBLR-MON algorithm outperforms existing works in terms of bandwidth blocking probability. Concretely, the improvement ratio is greater than 59.98% in German-Net and 66.64% in Henan-Net.
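Of the three innovations, the MON rule is the easiest to illustrate in isolation: among all free spectrum blocks large enough for a request, prefer the placement whose bordering slots are already occupied, so spectrum is packed tightly and large contiguous blocks stay free for future requests. The Python sketch below shows this rule on a single link; the slot layout is invented for the example, link ends are treated as occupied by assumption, and the KSSP path search and LBLR re-sorting are omitted.

def mon_assign(occupied, need):
    """occupied: list of bools per slot; return best start index or None."""
    best_start, best_score = None, -1
    for start in range(len(occupied) - need + 1):
        if any(occupied[start:start + need]):
            continue  # block not entirely free
        # Count occupied neighbors at both edges of the candidate block
        # (link ends count as occupied, favoring edge placements too).
        left = occupied[start - 1] if start > 0 else True
        right = (occupied[start + need]
                 if start + need < len(occupied) else True)
        score = int(left) + int(right)
        if score > best_score:
            best_start, best_score = start, score
    return best_start

#         0      1      2     3      4      5      6      7
slots = [True, False, False, True, False, False, False, False]
# For a 2-slot request, start 1 touches occupied slots 0 and 3 (score 2),
# beating the placements inside the larger free block on the right.
print(mon_assign(slots, 2))   # -> 1

By filling the small gap between existing allocations first, MON reduces spectrum fragmentation, which is the mechanism behind the bandwidth-blocking improvements reported above.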