Advancements in 3D Printing: Directed Energy Deposition Techniques, Defect Analysis, and Quality Monitoring
Pub Date: 2024-06-07 | DOI: 10.3390/technologies12060086
Muhammad Mu’az Imran, Azam Che Idris, L. C. De Silva, Y. Kim, Pg Emeroylariffion Abas
This paper provides a comprehensive analysis of recent advancements in additive manufacturing, a transformative approach to industrial production that allows for the layer-by-layer construction of complex parts directly from digital models. Focusing specifically on Directed Energy Deposition, it begins by clarifying the fundamental principles of metal additive manufacturing as defined by International Organization for Standardization and American Society for Testing and Materials standards, with an emphasis on laser- and powder-based methods that are pivotal to Directed Energy Deposition. It explores the critical process mechanisms that can lead to defect formation in the manufactured parts, offering in-depth insights into the factors that influence these outcomes. Additionally, the unique mechanisms of defect formation inherent to Directed Energy Deposition are examined in detail. The review also covers the current landscape of process evaluation and non-destructive testing methods essential for quality assurance, including both traditional and contemporary in situ monitoring techniques, with a particular focus on advanced machine-vision-based methods for geometric analysis. Furthermore, the integration of process monitoring, multiphysics simulation models, and data analytics is discussed, charting a forward-looking roadmap for the development of Digital Twins in Laser–Powder-based Directed Energy Deposition. Finally, this review highlights critical research gaps and proposes directions for future research to enhance the accuracy and efficiency of Directed Energy Deposition systems.
{"title":"Advancements in 3D Printing: Directed Energy Deposition Techniques, Defect Analysis, and Quality Monitoring","authors":"Muhammad Mu’az Imran, Azam Che Idris, L. C. De Silva, Y. Kim, Pg Emeroylariffion Abas","doi":"10.3390/technologies12060086","DOIUrl":"https://doi.org/10.3390/technologies12060086","url":null,"abstract":"This paper provides a comprehensive analysis of recent advancements in additive manufacturing, a transformative approach to industrial production that allows for the layer-by-layer construction of complex parts directly from digital models. Focusing specifically on Directed Energy Deposition, it begins by clarifying the fundamental principles of metal additive manufacturing as defined by International Organization of Standardization and American Society for Testing and Materials standards, with an emphasis on laser- and powder-based methods that are pivotal to Directed Energy Deposition. It explores the critical process mechanisms that can lead to defect formation in the manufactured parts, offering in-depth insights into the factors that influence these outcomes. Additionally, the unique mechanisms of defect formation inherent to Directed Energy Deposition are examined in detail. The review also covers the current landscape of process evaluation and non-destructive testing methods essential for quality assurance, including both traditional and contemporary in situ monitoring techniques, with a particular focus given to advanced machine-vision-based methods for geometric analysis. Furthermore, the integration of process monitoring, multiphysics simulation models, and data analytics is discussed, charting a forward-looking roadmap for the development of Digital Twins in Laser–Powder-based Directed Energy Deposition. Finally, this review highlights critical research gaps and proposes directions for future research to enhance the accuracy and efficiency of Directed Energy Deposition systems.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":" 33","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141372429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behind the Door: Practical Parameterization of Propagation Parameters for IEEE 802.11ad Use Cases
Pub Date: 2024-06-07 | DOI: 10.3390/technologies12060085
Luciano Ahumada, E. Carreño, A. Anglès, Diego Dujovne, Palacios Játiva
The integration of the 60 GHz band into the IEEE 802.11 standard has revolutionized indoor wireless services. However, this band presents unique challenges to indoor wireless communication infrastructure, originally designed to handle data traffic in residential and office environments. Estimating 60 GHz signal propagation in indoor settings is particularly complicated due to dynamic contextual factors, making it essential to ensure adequate coverage for all connected devices. Consequently, empirical channel modeling plays a pivotal role in understanding real-world behavior, which is characterized by a complex interplay of stationary and mobile elements. Given the highly directional nature of 60 GHz propagation, this study addresses a seemingly simple but important question: what is the impact of employing highly directive antennas when deviating from the line of sight? To address this question, we conducted an empirical measurement campaign of wireless channels within an office environment. Our assessment focused on power losses and distribution within an angular range while an indoor base station served indoor users, simulating the operation of an IEEE 802.11ad high-speed WLAN at 60 GHz. Additionally, we explored scenarios with and without pedestrian movement in the vicinity of wireless terminals. Our observations reveal the presence of significant antenna lobes even in obstructed links, indicating potential opportunities to use angular combiners or beamformers to enhance link availability and the data rate. This empirical study provides valuable information and channel parameters to simulate 60 GHz millimeter wave (mm-wave) links in indoor environments, paving the way for more efficient and robust wireless communication systems.
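As a point of reference for the angular power-loss observations described above, the free-space path loss at 60 GHz can be computed from the Friis relation. The sketch below is illustrative only; the 10 m link distance is an assumed value, not a parameter from the study.

```python
import math

def fspl_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * frequency_hz / c)

# Baseline for an assumed 10 m indoor link at 60 GHz; measured NLOS losses from
# angular scans would come on top of this value.
print(f"FSPL at 60 GHz over 10 m: {fspl_db(10.0, 60e9):.1f} dB")  # ~88 dB
```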
{"title":"Behind the Door: Practical Parameterization of Propagation Parameters for IEEE 802.11ad Use Cases","authors":"Luciano Ahumada, E. Carreño, A. Anglès, Diego Dujovne, Palacios Játiva Palacios Játiva","doi":"10.3390/technologies12060085","DOIUrl":"https://doi.org/10.3390/technologies12060085","url":null,"abstract":"The integration of the 60 GHz band into the IEEE 802.11 standard has revolutionized indoor wireless services. However, this band presents unique challenges to indoor wireless communication infrastructure, originally designed to handle data traffic in residential and office environments. Estimating 60 GHz signal propagation in indoor settings is particularly complicated due to dynamic contextual factors, making it essential to ensure adequate coverage for all connected devices. Consequently, empirical channel modeling plays a pivotal role in understanding real-world behavior, which is characterized by a complex interplay of stationary and mobile elements. Given the highly directional nature of 60 GHz propagation, this study addresses a seemingly simple but important question: what is the impact of employing highly directive antennas when deviating from the line of sight? To address this question, we conducted an empirical measurement campaign of wireless channels within an office environment. Our assessment focused on power losses and distribution within an angular range while an indoor base station served indoor users, simulating the operation of an IEEE 802.11ad high-speed WLAN at 60 GHz. Additionally, we explored scenarios with and without pedestrian movement in the vicinity of wireless terminals. Our observations reveal the presence of significant antenna lobes even in obstructed links, indicating potential opportunities to use angular combiners or beamformers to enhance link availability and the data rate. This empirical study provides valuable information and channel parameters to simulate 60 GHz millimeter wave (mm-wave) links in indoor environments, paving the way for more efficient and robust wireless communication systems.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":" 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141375601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electron Energy-Loss Spectroscopy Method for Thin-Film Thickness Calculations with a Low Incident Energy Electron Beam
Pub Date: 2024-06-07 | DOI: 10.3390/technologies12060087
A. M. Jaber, Ammar Alsoud, Saleh R Al-Bashaish, Hmoud Al Dmour, Marwan S. Mousa, T. Trčka, V. Holcman, D. Sobola
In this study, the thickness of a thin film (tc) at a low primary electron energy of less than or equal to 10 keV was calculated using electron energy-loss spectroscopy. This method uses the ratio of the intensity of the transmitted background spectrum to the intensity of the transmitted electrons with zero-loss energy (elastic), given an accurate inelastic mean free path (λ). A Monte Carlo model was used to simulate the interaction between the electron beam and the tested thin films. The total background of the transmitted electrons is taken to be the electrons transmitted through the film with energies above 50 eV, so as to eliminate the contribution of secondary electrons. The method was used at low primary electron energy to measure the thickness (t) of C, Si, Cr, Cu, Ag, and Au films below 12 nm. For the C and Si films, the accuracy of the thickness calculation increased as the primary electron energy and the film thickness increased. For the heavy elements, by contrast, the accuracy of the film thickness calculations increased as the primary electron energy increased and the film thickness decreased. High accuracy (with 2% uncertainty) in the measurement of the C and Si thin films was observed at large thicknesses and 10 keV, whereas for the heavy-element films the highest accuracy (with an uncertainty below 8%) was found at small thicknesses and 10 keV. The present results show that an accurate film thickness measurement can be obtained at a primary electron energy equal to or less than 10 keV. This method demonstrates the potential of low-loss electron energy-loss spectroscopy in transmission electron microscopy as a fast and straightforward way of determining the thin-film thickness of the material under investigation at low primary electron energies.
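The intensity-ratio approach described here follows the standard low-loss log-ratio relation t = λ ln(I_total / I_zero-loss). A minimal sketch, with purely illustrative intensities and an assumed mean free path rather than values from this study:

```python
import math

def thickness_log_ratio(i_total: float, i_zero_loss: float, lambda_nm: float) -> float:
    """Log-ratio thickness estimate: t = lambda * ln(I_total / I_zero_loss)."""
    return lambda_nm * math.log(i_total / i_zero_loss)

# Illustrative values only: total transmitted intensity, zero-loss (elastic)
# intensity, and an assumed inelastic mean free path of 8 nm.
t = thickness_log_ratio(i_total=1.0e6, i_zero_loss=6.5e5, lambda_nm=8.0)
print(f"Estimated film thickness: {t:.2f} nm")  # ~3.45 nm
```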
{"title":"Electron Energy-Loss Spectroscopy Method for Thin-Film Thickness Calculations with a Low Incident Energy Electron Beam","authors":"A. M. Jaber, Ammar Alsoud, Saleh R Al-Bashaish, Hmoud Al Dmour, Marwan S. Mousa, T. Trčka, V. Holcman, D. Sobola","doi":"10.3390/technologies12060087","DOIUrl":"https://doi.org/10.3390/technologies12060087","url":null,"abstract":"In this study, the thickness of a thin film (tc) at a low primary electron energy of less than or equal to 10 keV was calculated using electron energy-loss spectroscopy. This method uses the ratio of the intensity of the transmitted background spectrum to the intensity of the transmission electrons with zero-loss energy (elastic) in the presence of an accurate average inelastic free path length (λ). The Monte Carlo model was used to simulate the interaction between the electron beam and the tested thin films. The total background of the transmitted electrons is considered to be the electron transmitting the film with an energy above 50 eV to eliminate the effect of the secondary electrons. The method was used at low primary electron energy to measure the thickness (t) of C, Si, Cr, Cu, Ag, and Au films below 12 nm. For the C and Si films, the accuracy of the thickness calculation increased as the energy of the primary electrons and thickness of the film increased. However, for heavy elements, the accuracy of the film thickness calculations increased as the primary electron energy increased and the film thickness decreased. High accuracy (with 2% uncertainty) in the measurement of C and Si thin films was observed at large thicknesses and 10 keV, where . However, in the case of heavy-element films, the highest accuracy (with an uncertainty below 8%) was found for thin thicknesses and 10 keV, where . The present results show that an accurate film thickness measurement can be obtained at primary electron energy equal to or less than 10 keV and a ratio of . This method demonstrates the potential of low-loss electron energy-loss spectroscopy in transmission electron microscopy as a fast and straightforward method for determining the thin-film thickness of the material under investigation at low primary electron energies.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":" 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141375593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual-Band Antenna at 28 and 38 GHz Using Internal Stubs and Slot Perturbations
Pub Date: 2024-06-06 | DOI: 10.3390/technologies12060084
Parveez Shariff Bhadravathi Ghouse, Pradeep Kumar, Pallavi R. Mane, Sameena Pathan, Tanweer Ali, Alexandros–Apostolos A. Boulogeorgos, Jaume Anguera
A double-stub matching technique is used to design a dual-band monopole antenna at 28 and 38 GHz. Transmission line stubs serve as the matching elements. The first matching network comprises series capacitive and inductive stubs, providing impedance matching at the 28 GHz band with a wide bandwidth, while the second matching network has two shunt inductive stubs, generating resonance at 38 GHz. A Smith chart is utilized to predict the stub lengths, and some of the stub lengths are fine-tuned when their dimensions are realized physically. The proposed antenna is compact, with a profile of 0.75λ1×0.66λ1 (where λ1 is the free-space wavelength at 28 GHz). The measured bandwidths are 27–28.75 GHz and 36.20–42.43 GHz. Although the series capacitance of the first matching network is physically realized as a slot in the ground plane, the antenna achieves a good gain of 7 dBi in both bands. The compact design, good bandwidth, and gain make the proposed antenna a candidate for 5G wireless applications.
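For orientation, the quoted electrical footprint can be converted to physical dimensions at the lower band; the arithmetic below uses only the 0.75λ1 × 0.66λ1 ratios stated in the abstract.

```python
c = 299_792_458.0          # speed of light, m/s
f1 = 28e9                  # lower band, Hz
lambda1_mm = c / f1 * 1e3  # free-space wavelength at 28 GHz in mm (~10.7 mm)

width_mm = 0.75 * lambda1_mm   # ~8.0 mm
height_mm = 0.66 * lambda1_mm  # ~7.1 mm
print(f"lambda1 = {lambda1_mm:.2f} mm -> footprint ~ {width_mm:.1f} mm x {height_mm:.1f} mm")
```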
{"title":"Dual-Band Antenna at 28 and 38 GHz Using Internal Stubs and Slot Perturbations","authors":"Parveez Shariff Bhadravathi Ghouse, Pradeep Kumar, Pallavi R. Mane, Sameena Pathan, Tanweer Ali, Alexandros–Apostolos A. Boulogeorgos, Jaume Anguera","doi":"10.3390/technologies12060084","DOIUrl":"https://doi.org/10.3390/technologies12060084","url":null,"abstract":"A double-stub matching technique is used to design a dual-band monopole antenna at 28 and 38 GHz. The transmission line stubs represent the matching elements. The first matching network comprises series capacitive and inductive stubs, causing impedance matching at the 28 GHz band with a wide bandwidth. On the other hand, the second matching network has two shunt inductive stubs, generating resonance at 38 GHz. A Smith chart is utilized to predict the stub lengths. While incorporating their dimensions physically, some of the stub lengths are fine-tuned. The proposed antenna is compact with a profile of 0.75λ1×0.66λ1 (where λ1 is the free-space wavelength at 28 GHz). The measured bandwidths are 27–28.75 GHz and 36.20–42.43 GHz. Although the physical series capacitance of the first matching network is a slot in the ground plane, the antenna is able to achieve a good gain of 7 dBi in both bands. The proposed antenna has a compact design, good bandwidth and gain, making it a candidate for 5G wireless applications.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":"69 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141381338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Readout Techniques on FPGA for the ATLAS RPC-BIS78 Detectors
Pub Date: 2024-06-04 | DOI: 10.3390/technologies12060083
A. Vgenopoulos, Kostas Kordas, Federico Lasagni, S. Perrella, A. Polini, R. Vari
The firmware developed for the readout and trigger processing of the information emerging from the BIS78-RPC Muon Spectrometer chambers in the ATLAS experiment at CERN is presented here, together with data processing techniques, data acquisition software, and tests of the readout chain system, which represent efforts to make these chambers operational in the ATLAS experiment. This work is performed in the context of the BIS78-RPC project, which deals with the pilot deployment of a new generation of sMDT+RPCs in the experiment. Such chambers are planned to be fully deployed in the whole barrel inner layer of the Muon Spectrometer during the Phase II upgrade of the ATLAS experiment. On-chamber front-ends include an amplifier, a discriminator ASIC, and an LVDS transmitter. The signal is digitized by CERN HPTDC chips and then processed by an FPGA, which is the heart of the readout and trigger processing, using various techniques.
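The abstract does not detail the FPGA processing itself; purely as an illustration of one generic step in TDC-based readout, the sketch below pairs leading and trailing edges from a hit stream into time-over-threshold pulses. The tuple format and field names are hypothetical and do not reflect the HPTDC or BIS78 data format.

```python
from collections import defaultdict

def pair_edges(hits):
    """Pair leading/trailing edges per channel into (channel, t_lead, time_over_threshold).

    `hits` is an iterable of (channel, time_ns, edge) tuples with edge in
    {"lead", "trail"}, assumed time-ordered per channel (hypothetical format).
    """
    pending = defaultdict(list)   # open leading edges per channel
    pulses = []
    for channel, time_ns, edge in hits:
        if edge == "lead":
            pending[channel].append(time_ns)
        elif edge == "trail" and pending[channel]:
            t_lead = pending[channel].pop(0)
            pulses.append((channel, t_lead, time_ns - t_lead))
    return pulses

hits = [(3, 10.0, "lead"), (3, 22.5, "trail"), (7, 11.2, "lead"), (7, 18.7, "trail")]
print(pair_edges(hits))  # [(3, 10.0, 12.5), (7, 11.2, 7.5)]
```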
{"title":"Data Readout Techniques on FPGA for the ATLAS RPC-BIS78 Detectors","authors":"A. Vgenopoulos, Kostas Kordas, Federico Lasagni, S. Perrella, A. Polini, R. Vari","doi":"10.3390/technologies12060083","DOIUrl":"https://doi.org/10.3390/technologies12060083","url":null,"abstract":"The firmware developed for the readout and trigger processing of the information emerging from the BIS78-RPC Muon Spectrometer chambers in the ATLAS experiment at CERN is presented here, together with data processing techniques, data acquisition software, and tests of the readout chain system, which represent efforts to make these chambers operational in the ATLAS experiment. This work is performed in the context of the BIS78-RPC project, which deals with the pilot deployment of a new generation of sMDT+RPCs in the experiment. Such chambers are planned to be fully deployed in the whole barrel inner layer of the Muon Spectrometer during the Phase II upgrade of the ATLAS experiment. On-chamber front-ends include an amplifier, a discriminator ASIC, and an LVDS transmitter. The signal is digitized by CERN HPTDC chips and then processed by an FPGA, which is the heart of the readout and trigger processing, using various techniques.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":"10 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141265853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms
Pub Date: 2024-06-03 | DOI: 10.3390/technologies12060082
Jorge Galarza-Falfan, E. E. García-Guerrero, O. A. Aguirre-Castro, O. López-Bonilla, Ulises Jesús Tamayo-Pérez, José Ricardo Cárdenas-Valdez, C. Hernández-Mejía, Susana Borrego-Dominguez, Everardo Inzunza-González
Machine learning technologies are being integrated into robotic systems at an increasing pace to enhance their efficacy and adaptability in dynamic environments. The primary goal of this research was to propose a method to develop an Autonomous Mobile Robot (AMR) that integrates Simultaneous Localization and Mapping (SLAM), odometry, and artificial vision based on deep learning (DL). All are executed on a high-performance Jetson Nano embedded system, with particular emphasis on SLAM-based obstacle avoidance and path planning using the Adaptive Monte Carlo Localization (AMCL) algorithm. Two Convolutional Neural Networks (CNNs) were selected due to their proven effectiveness in image and pattern recognition tasks. The ResNet18 and YOLOv3 algorithms facilitate scene perception, enabling the robot to interpret its environment effectively. Both algorithms were implemented for real-time object detection, identifying and classifying objects within the robot’s environment, and were compared on performance metrics that are critical for real-time applications. A comparative analysis of the proposed DL models focused on enhancing vision systems for autonomous mobile robots. Several simulations and real-world trials were conducted to evaluate the performance and adaptability of these models in navigating complex environments. The proposed vision system with CNN ResNet18 achieved an average accuracy of 98.5%, a precision of 96.91%, a recall of 97%, and an F1-score of 98.5%, whereas the YOLOv3 model achieved an average accuracy of 96%, a precision of 96.2%, a recall of 96%, and an F1-score of 95.99%. These results underscore the effectiveness of the proposed intelligent algorithms, robust embedded hardware, and sensors in robotic applications. This study demonstrates that advanced DL algorithms work well in robots and could be used in many fields, such as transportation and assembly. As a consequence of the findings, intelligent systems could be implemented more widely in the operation and development of AMRs.
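A minimal sketch of how a ResNet18 classifier can be run on a single camera frame with torchvision, using ImageNet weights as a stand-in; the authors' fine-tuned weights, obstacle classes, and Jetson-specific deployment are not reproduced here.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Minimal classification pipeline; the fine-tuned weights and obstacle classes
# from the study are not available here, so ImageNet weights act as a placeholder.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.jpg")  # placeholder file, e.g. a frame grabbed from the robot camera
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))
    class_id = logits.argmax(dim=1).item()
print(f"Predicted class id: {class_id}")
```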
{"title":"Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms","authors":"Jorge Galarza-Falfan, E. E. García-Guerrero, O. A. Aguirre-Castro, O. López-Bonilla, Ulises Jesús Tamayo-Pérez, José Ricardo Cárdenas-Valdez, C. Hernández-Mejía, Susana Borrego-Dominguez, Everardo Inzunza-González","doi":"10.3390/technologies12060082","DOIUrl":"https://doi.org/10.3390/technologies12060082","url":null,"abstract":"Machine learning technologies are being integrated into robotic systems faster to enhance their efficacy and adaptability in dynamic environments. The primary goal of this research was to propose a method to develop an Autonomous Mobile Robot (AMR) that integrates Simultaneous Localization and Mapping (SLAM), odometry, and artificial vision based on deep learning (DL). All are executed on a high-performance Jetson Nano embedded system, specifically emphasizing SLAM-based obstacle avoidance and path planning using the Adaptive Monte Carlo Localization (AMCL) algorithm. Two Convolutional Neural Networks (CNNs) were selected due to their proven effectiveness in image and pattern recognition tasks. The ResNet18 and YOLOv3 algorithms facilitate scene perception, enabling the robot to interpret its environment effectively. Both algorithms were implemented for real-time object detection, identifying and classifying objects within the robot’s environment. These algorithms were selected to evaluate their performance metrics, which are critical for real-time applications. A comparative analysis of the proposed DL models focused on enhancing vision systems for autonomous mobile robots. Several simulations and real-world trials were conducted to evaluate the performance and adaptability of these models in navigating complex environments. The proposed vision system with CNN ResNet18 achieved an average accuracy of 98.5%, a precision of 96.91%, a recall of 97%, and an F1-score of 98.5%. However, the YOLOv3 model achieved an average accuracy of 96%, a precision of 96.2%, a recall of 96%, and an F1-score of 95.99%. These results underscore the effectiveness of the proposed intelligent algorithms, robust embedded hardware, and sensors in robotic applications. This study proves that advanced DL algorithms work well in robots and could be used in many fields, such as transportation and assembly. As a consequence of the findings, intelligent systems could be implemented more widely in the operation and development of AMRs.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":"33 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141270456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applications of Brain Wave Classification for Controlling an Intelligent Wheelchair
Pub Date: 2024-06-03 | DOI: 10.3390/technologies12060080
Maria Carolina Avelar, Patricia Almeida, Brígida Mónica Faria, Luís Paulo Reis
The independence and autonomy of both elderly and disabled people have been a growing concern in today’s society. Wheelchairs have therefore proven to be fundamental for the mobility of people with physical disabilities of the lower limbs, paralysis, or other restrictive conditions. Various adapted sensors can be employed to facilitate the wheelchair’s driving experience. This work develops the proof of concept of a brain–computer interface (BCI) whose ultimate goal will be to control an intelligent wheelchair. An event-related (de)synchronization neuro-mechanism is used, since it corresponds to a synchronization, or desynchronization, of the mu and beta brain rhythms during the execution, preparation, or imagination of motor actions. Two datasets were used for algorithm development: one (A) from BCI Competition IV, acquired through twenty-two Ag/AgCl electrodes and encompassing motor imagery of the right and left hands and feet; and the other (B) obtained in the laboratory using an Emotiv EPOC headset, involving the same motor imagery tasks. Regarding feature extraction, several approaches were tested: two versions of the signal’s power spectral density, followed by a filter bank version; the use of the respective frequency coefficients; and, finally, two versions of the well-known filter bank common spatial pattern (FBCSP) method. With the second version of FBCSP, dataset A presented an F1-score of 0.797 and a rather low false positive rate of 0.150. Moreover, the corresponding average kappa score reached 0.693, which is in the same order of magnitude as the 0.57 obtained in the competition. For dataset B, the average F1-score was 0.651, with a kappa score of 0.447 and a false positive rate of 0.471. However, it should be noted that some subjects from this dataset presented F1-scores of 0.747 and 0.911, suggesting that the motor imagery (MI) aptitude of different users may influence their performance. In conclusion, it is possible to obtain promising results using an architecture suited to a real-time application.
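One of the simpler feature pipelines mentioned above, band power in the mu and beta rhythms computed from a Welch power spectral density, can be sketched as follows, together with the kappa score used for evaluation. The sampling rate, band edges, and data are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.metrics import cohen_kappa_score

FS = 128  # Hz, assumed sampling rate for this sketch

def band_power(epoch, band, fs=FS):
    """Mean PSD of each channel inside `band` (Hz); epoch shape: (channels, samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

epoch = np.random.randn(22, 4 * FS)                       # one 4 s epoch, 22 channels (placeholder data)
features = np.concatenate([band_power(epoch, (8, 12)),    # mu rhythm
                           band_power(epoch, (13, 30))])  # beta rhythm

# Agreement between true and predicted MI classes, i.e. the kappa metric quoted above.
print(cohen_kappa_score([0, 1, 2, 1], [0, 1, 1, 1]))
```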
{"title":"Applications of Brain Wave Classification for Controlling an Intelligent Wheelchair","authors":"Maria Carolina Avelar, Patricia Almeida, Brígida Mónica Faria, Luís Paulo Reis","doi":"10.3390/technologies12060080","DOIUrl":"https://doi.org/10.3390/technologies12060080","url":null,"abstract":"The independence and autonomy of both elderly and disabled people have been a growing concern in today’s society. Therefore, wheelchairs have proven to be fundamental for the movement of these people with physical disabilities in the lower limbs, paralysis, or other type of restrictive diseases. Various adapted sensors can be employed in order to facilitate the wheelchair’s driving experience. This work develops the proof concept of a brain–computer interface (BCI), whose ultimate final goal will be to control an intelligent wheelchair. An event-related (de)synchronization neuro-mechanism will be used, since it corresponds to a synchronization, or desynchronization, in the mu and beta brain rhythms, during the execution, preparation, or imagination of motor actions. Two datasets were used for algorithm development: one from the IV competition of BCIs (A), acquired through twenty-two Ag/AgCl electrodes and encompassing motor imagery of the right and left hands, and feet; and the other (B) was obtained in the laboratory using an Emotiv EPOC headset, also with the same motor imaginary. Regarding feature extraction, several approaches were tested: namely, two versions of the signal’s power spectral density, followed by a filter bank version; the use of respective frequency coefficients; and, finally, two versions of the known method filter bank common spatial pattern (FBCSP). Concerning the results from the second version of FBCSP, dataset A presented an F1-score of 0.797 and a rather low false positive rate of 0.150. Moreover, the correspondent average kappa score reached the value of 0.693, which is in the same order of magnitude as 0.57, obtained by the competition. Regarding dataset B, the average value of the F1-score was 0.651, followed by a kappa score of 0.447, and a false positive rate of 0.471. However, it should be noted that some subjects from this dataset presented F1-scores of 0.747 and 0.911, suggesting that the movement imagery (MI) aptness of different users may influence their performance. In conclusion, it is possible to obtain promising results, using an architecture for a real-time application.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":"32 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141268962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey of Machine Learning in Edge Computing: Techniques, Frameworks, Applications, Issues, and Research Directions
Pub Date: 2024-06-03 | DOI: 10.3390/technologies12060081
Oumayma Jouini, K. Sethom, Abdallah Namoun, Nasser Aljohani, Meshari Huwaytim Alanazi, Mohammad N. Alanazi
Internet of Things (IoT) devices often operate with limited resources while interacting with users and their environment, generating a wealth of data. Machine learning models interpret such sensor data, enabling accurate predictions and informed decisions. However, the sheer volume of data from billions of devices can overwhelm networks, making traditional cloud data processing inefficient for IoT applications. This paper presents a comprehensive survey of recent advances in models, architectures, hardware, and design requirements for deploying machine learning on low-resource devices at the edge and in cloud networks. Prominent IoT devices tailored to integrate edge intelligence include Raspberry Pi, NVIDIA’s Jetson, Arduino Nano 33 BLE Sense, STM32 Microcontrollers, SparkFun Edge, Google Coral Dev Board, and Beaglebone AI. These devices are boosted with custom AI frameworks, such as TensorFlow Lite, OpenEI, Core ML, Caffe2, and MXNet, to empower ML and DL tasks (e.g., object detection and gesture recognition). Both traditional machine learning (e.g., random forest, logistic regression) and deep learning methods (e.g., ResNet-50, YOLOv4, LSTM) are deployed on devices, distributed edge, and distributed cloud computing. Moreover, we analyzed 1000 recent publications on “ML in IoT” from IEEE Xplore using support vector machine, random forest, and decision tree classifiers to identify emerging topics and application domains. Hot topics included big data, cloud, edge, multimedia, security, privacy, QoS, and activity recognition, while critical domains included industry, healthcare, agriculture, transportation, smart homes and cities, and assisted living. The major challenges hindering the implementation of edge machine learning include encrypting sensitive user data for security and privacy on edge devices, efficiently managing resources of edge nodes through distributed learning architectures, and balancing the energy limitations of edge devices and the energy demands of machine learning.
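As an illustration of edge deployment with one of the frameworks listed above, the sketch below runs inference with TensorFlow Lite; the model file name and input are placeholders.

```python
import numpy as np
import tensorflow as tf

# Load a converted .tflite model on the edge device (file name is a placeholder).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input tensor matching the model's expected shape and dtype.
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction.shape)
```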
{"title":"A Survey of Machine Learning in Edge Computing: Techniques, Frameworks, Applications, Issues, and Research Directions","authors":"Oumayma Jouini, K. Sethom, Abdallah Namoun, Nasser Aljohani, Meshari Huwaytim Alanazi, Mohammad N. Alanazi","doi":"10.3390/technologies12060081","DOIUrl":"https://doi.org/10.3390/technologies12060081","url":null,"abstract":"Internet of Things (IoT) devices often operate with limited resources while interacting with users and their environment, generating a wealth of data. Machine learning models interpret such sensor data, enabling accurate predictions and informed decisions. However, the sheer volume of data from billions of devices can overwhelm networks, making traditional cloud data processing inefficient for IoT applications. This paper presents a comprehensive survey of recent advances in models, architectures, hardware, and design requirements for deploying machine learning on low-resource devices at the edge and in cloud networks. Prominent IoT devices tailored to integrate edge intelligence include Raspberry Pi, NVIDIA’s Jetson, Arduino Nano 33 BLE Sense, STM32 Microcontrollers, SparkFun Edge, Google Coral Dev Board, and Beaglebone AI. These devices are boosted with custom AI frameworks, such as TensorFlow Lite, OpenEI, Core ML, Caffe2, and MXNet, to empower ML and DL tasks (e.g., object detection and gesture recognition). Both traditional machine learning (e.g., random forest, logistic regression) and deep learning methods (e.g., ResNet-50, YOLOv4, LSTM) are deployed on devices, distributed edge, and distributed cloud computing. Moreover, we analyzed 1000 recent publications on “ML in IoT” from IEEE Xplore using support vector machine, random forest, and decision tree classifiers to identify emerging topics and application domains. Hot topics included big data, cloud, edge, multimedia, security, privacy, QoS, and activity recognition, while critical domains included industry, healthcare, agriculture, transportation, smart homes and cities, and assisted living. The major challenges hindering the implementation of edge machine learning include encrypting sensitive user data for security and privacy on edge devices, efficiently managing resources of edge nodes through distributed learning architectures, and balancing the energy limitations of edge devices and the energy demands of machine learning.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":"49 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141269830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of a Custom-Made Inexpensive Air Permeability Tester with a Standardized Measurement Instrument
Pub Date: 2024-06-02 | DOI: 10.3390/technologies12060079
Dietrich Spädt, Niclas Richter, Cornelia Golle, Andrea Ehrmann, Lilia Sabantina
The air permeability of a textile fabric is one of the parameters that characterize its potential applications in garments, filters, airbags, etc. Calculating the air permeability is complicated because it depends on many other fabric parameters, such as porosity, thickness, and weaving parameters, which is why the air permeability is usually measured. Standardized measurement instruments according to EN ISO 9237, however, are expensive and complex, putting them out of reach of small companies and many universities. This is why a simpler and inexpensive test instrument was suggested in a previous paper. Here, we show correlations between the results of the standardized and the custom-made instrument and verify this correlation using fluid dynamics calculations.
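For context, both instruments estimate the same quantity: volumetric air flow per unit test area at a fixed pressure drop. A minimal calculation following the EN ISO 9237 convention (air permeability R in mm/s from flow in L/min over area in cm², with the ~167 unit-conversion factor); the flow, area, and pressure values below are illustrative, not measurements from this study.

```python
def air_permeability_mm_s(flow_l_per_min: float, area_cm2: float) -> float:
    """Air permeability R in mm/s from flow rate (L/min) and test area (cm^2).

    Follows the EN ISO 9237 convention R = qv / A * 167, where 167 converts
    L/(min*cm^2) to mm/s; inputs here are illustrative only.
    """
    return flow_l_per_min / area_cm2 * 167.0

# Example: 8 L/min through a 20 cm^2 test head at a 100 Pa pressure drop.
print(f"R = {air_permeability_mm_s(8.0, 20.0):.0f} mm/s")  # ~67 mm/s
```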
{"title":"Comparison of a Custom-Made Inexpensive Air Permeability Tester with a Standardized Measurement Instrument","authors":"Dietrich Spädt, Niclas Richter, Cornelia Golle, Andrea Ehrmann, Lilia Sabantina","doi":"10.3390/technologies12060079","DOIUrl":"https://doi.org/10.3390/technologies12060079","url":null,"abstract":"The air permeability of a textile fabric belongs to the parameters which characterize its potential applications as garments, filters, airbags, etc. Calculating the air permeability is complicated due to its dependence on many other fabric parameters, such as porosity, thickness, weaving parameters and others, which is why the air permeability is usually measured. Standardized measurement instruments according to EN ISO 9237, however, are expensive and complex, prohibiting small companies or many universities from using them. This is why a simpler and inexpensive test instrument was suggested in a previous paper. Here, we show correlations between the results of the standardized and the custom-made instrument and verify this correlation using fluid dynamics calculations.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":"53 16","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141273841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning Approaches for Water Stress Forecasting in Arboriculture Using Time Series of Remote Sensing Images: Comparative Study between ConvLSTM and CNN-LSTM Models
Pub Date: 2024-06-01 | DOI: 10.3390/technologies12060077
Ismail Bounoua, Youssef Saidi, Reda Yaagoubi, Mourad Bouziani
Irrigation is crucial for crop cultivation and productivity. However, traditional methods often waste water and energy because they neglect soil and crop variations, leading to inefficient water distribution and potential crop water stress. The crop water stress index (CWSI) has become a widely accepted index for assessing plant water status; however, the plant water stress must be forecast in order to estimate the quantity of water to irrigate. Deep learning (DL) models for water stress forecasting have gained prominence in irrigation management to address these needs. In this paper, we present a comparative study between two deep learning models, ConvLSTM and CNN-LSTM, for water stress forecasting using remote sensing data. While these DL architectures have been previously proposed and studied in various applications, our novelty lies in studying their effectiveness for water stress forecasting using time series of remote sensing images. The proposed methodology involves meticulous preparation of time series data, in which we calculate the crop water stress index (CWSI) from Landsat 8 satellite imagery through Google Earth Engine. Subsequently, we implemented and fine-tuned the hyperparameters of the ConvLSTM and CNN-LSTM models. The same processes of model compilation, hyperparameter optimization, and model training were applied to both architectures. A citrus farm in Morocco was chosen as a case study. The analysis of the results reveals that the CNN-LSTM model outperforms the ConvLSTM model for long sequences (nine images), with RMSEs of 0.119 and 0.123, respectively, while ConvLSTM provides better results than CNN-LSTM for short sequences (three images), with RMSEs of 0.153 and 0.187, respectively.
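A minimal Keras sketch of the ConvLSTM architecture family compared in this work, assuming placeholder sequence length, CWSI map resolution, and layer sizes rather than the authors' tuned hyperparameters:

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, H, W, CHANNELS = 9, 64, 64, 1  # placeholder sequence length and CWSI map size

# Minimal ConvLSTM regressor: a sequence of CWSI maps in, next-step CWSI map out.
inputs = tf.keras.Input(shape=(SEQ_LEN, H, W, CHANNELS))
x = layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False)(inputs)
outputs = layers.Conv2D(1, kernel_size=1, activation="linear")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.summary()
```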
{"title":"Deep Learning Approaches for Water Stress Forecasting in Arboriculture Using Time Series of Remote Sensing Images: Comparative Study between ConvLSTM and CNN-LSTM Models","authors":"Ismail Bounoua, Youssef Saidi, Reda Yaagoubi, Mourad Bouziani","doi":"10.3390/technologies12060077","DOIUrl":"https://doi.org/10.3390/technologies12060077","url":null,"abstract":"Irrigation is crucial for crop cultivation and productivity. However, traditional methods often waste water and energy due to neglecting soil and crop variations, leading to inefficient water distribution and potential crop water stress. The crop water stress index (CWSI) has become a widely accepted index for assessing plant water status. However, it is necessary to forecast the plant water stress to estimate the quantity of water to irrigate. Deep learning (DL) models for water stress forecasting have gained prominence in irrigation management to address these needs. In this paper, we present a comparative study between two deep learning models, ConvLSTM and CNN-LSTM, for water stress forecasting using remote sensing data. While these DL architectures have been previously proposed and studied in various applications, our novelty lies in studying their effectiveness in the field of water stress forecasting using time series of remote sensing images. The proposed methodology involves meticulous preparation of time series data, where we calculate the crop water stress index (CWSI) using Landsat 8 satellite imagery through Google Earth Engine. Subsequently, we implemented and fine-tuned the hyperparameters of the ConvLSTM and CNN-LSTM models. The same processes of model compilation, optimization of hyperparameters, and model training were applied for the two architectures. A citrus farm in Morocco was chosen as a case study. The analysis of the results reveals that the CNN-LSTM model excels over the ConvLSTM model for long sequences (nine images) with an RMSE of 0.119 and 0.123, respectively, while ConvLSTM provides better results for short sequences (three images) than CNN-LSTM with an RMSE of 0.153 and 0.187, respectively.","PeriodicalId":504839,"journal":{"name":"Technologies","volume":"58 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141279731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}