For wireless communication networks, researchers have proposed many schemes to reduce the signaling cost of location registration and paging caused by the mobility of user equipment (UE). Among them, a zone-based method that designates one zone (1Z, a group of cells) as a registration area (RA) and performs registration whenever the UE leaves the RA is commonly adopted because it is easy to implement. However, 1Z is known to perform very poorly when the UE frequently crosses the RA boundary, triggering repeated location updates. Two-zone and three-zone schemes (2Z and 3Z) have since been recommended to overcome this limitation. In our previous work, we analyzed the performance of the 1Z, 2Z, and 3Z systems assuming a square-shaped zone. However, there is no reason to limit the zone shape to a square. This paper analyzes the performance of 3Z assuming a hexagonal rather than a square zone. Using semi-Markov process theory, registration and paging costs are evaluated after defining the states of 3Z operation and calculating the transition probabilities between states. Based on various realistic parameters, the numerical results show that 3Z outperforms 1Z and 2Z for most call-to-mobility ratio (CMR) values. The advantage of 3Z grows as its registration cost falls, which occurs when the probability of returning to a previously registered zone increases or the dwell time in a zone decreases. The 3Z system is easy to implement with simple software modifications, and it can be applied dynamically as an efficient mobility management method for the many devices that will emerge in 5G/6G environments.
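As a hedged illustration of the semi-Markov costing step, the sketch below computes the long-run registration cost rate for a toy three-state chain; the transition matrix, per-entry costs, and sojourn times are placeholders, not values from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's model): a three-state embedded chain
# whose states could represent the zones of a 3Z scheme. P, the per-visit
# registration costs U, and the mean sojourn times tau are hypothetical.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.3, 0.3],
              [0.5, 0.3, 0.2]])   # transition probabilities between states
U = np.array([0.0, 1.0, 1.0])     # registration cost incurred on entering each state
tau = np.array([1.0, 0.8, 0.6])   # mean sojourn time in each state

# Stationary distribution of the embedded Markov chain: solve pi P = pi.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.isclose(eigvals, 1.0)][:, 0])
pi /= pi.sum()

# Semi-Markov long-run cost per unit time:
# (expected cost per transition) / (expected time per transition).
cost_rate = (pi @ U) / (pi @ tau)
print(f"stationary distribution: {pi}, registration cost rate: {cost_rate:.4f}")
```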
{"title":"Modeling and Performance Analysis of Three Zone-Based Registration Scheme in Wireless Communication Networks","authors":"Hee-Seon Jang, Jang-Hyun Baek","doi":"10.3390/app131810064","DOIUrl":"https://doi.org/10.3390/app131810064","url":null,"abstract":"For wireless communication networks, researchers have proposed many schemes to reduce the cost of location registration and paging signals caused by the mobility of user equipment (UE). Among them, a zone-based method that designates one zone (1Z, group of cells) as a registration area (RA) and then performs registration whenever the UE leaves the RA is commonly adopted due to its convenient implementation. However, the performance of 1Z is known to be very poor when the UE frequently crosses the RA’s boundary requesting location updates. Two or three zone-based schemes (2Z or 3Z) have since been recommended to overcome these limitations. In our previous work, we analyzed the performances of 1Z, 2Z, and 3Z systems while assuming a square-shaped zone. However, there is no reason why the shape of the zone is limited to a square. This paper analyzes the performance of 3Z while assuming a hexagonal-shaped rather than a square-shaped zone. Using a semi-Markov process theory, registration and paging costs are evaluated after defining states in 3Z operations and calculating the transition probability between states. Based on various realistic parameters, the numerical results showed that the 3Z outperformed 1Z and 2Z for most call-to-mobility ratio (CMR) values. The performance of 3Z was improved more when the registration cost decreased if the probability of returning to the previously registered zone increased or the time staying in the zone decreased. The 3Z system is easy to implement with simple software modifications. It can be dynamically applied as an efficient mobility management method in the future for various devices that will emerge in the 5G/6G environment.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43419843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wenjie Zhu, R. Zhao, Hao Zhang, Jianfeng Lu, Zhishu Zhang, Bingyu Wei, Yuhang Fan
With the increasing application of UWB indoor positioning technologies in industrial settings, the UWB/IMU combination method (UICM) has been considered one of the most effective ways to further enhance positioning precision by reducing non-line-of-sight (NLOS) errors. However, most conventional UICMs suffer from a high probability of positioning failure due to uncontrollable, cumulative errors from the inertial measurement unit (IMU). To address this issue, we improved the extended Kalman filter (EKF) algorithm of an indoor positioning model based on a tight UWB/IMU combination with double-loop error self-correction. Compared with conventional UICMs, the improved model adds modules, applied sequentially, for correcting time desynchronization, optimizing the UWB ranging threshold, fusing data under NLOS conditions, and performing double-loop error estimation. Further, a systematic error controllability analysis showed that the proposed model satisfies the controllability requirements of UWB indoor positioning systems. To validate the improved UICM, unavoidable obstacles and atmospheric interference were modeled as Gaussian white noise to verify its environmental adaptability. The experimental results showed that the proposed model outperformed state-of-the-art UWB-based positioning models, with a maximum deviation of 0.232 m (83.93% lower than a pure UWB model and 43.14% lower than the conventional UWB/IMU model) and a standard deviation of 0.09981 m (88.35% lower than a pure UWB model and 22.21% lower than the conventional UWB/IMU model).
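The tight coupling described above centers on an EKF that propagates the state with IMU data and corrects it with UWB ranges. The following minimal sketch shows that structure for a 2-D state and a single known anchor; the anchor position, noise levels, and time step are assumptions, not the authors' configuration.

```python
import numpy as np

# Minimal EKF sketch of the UWB/IMU idea (not the authors' full model):
# IMU acceleration drives the prediction; a UWB range to a known anchor
# corrects it. State: [x, y, vx, vy].
dt = 0.01                          # assumed IMU sample period, s
anchor = np.array([5.0, 3.0])      # hypothetical anchor position, m
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
Q = np.eye(4) * 1e-4               # process noise covering IMU drift (assumed)
R = np.array([[0.05**2]])          # UWB ranging noise, 5 cm std (assumed)

def ekf_step(x, P, accel, uwb_range):
    # Predict with IMU acceleration.
    x = F @ x + B @ accel
    P = F @ P @ F.T + Q
    # Update with the UWB range: h(x) = ||pos - anchor||.
    diff = x[:2] - anchor
    r_pred = np.linalg.norm(diff)
    H = np.zeros((1, 4)); H[0, :2] = diff / r_pred   # Jacobian of h at x
    y = np.array([uwb_range - r_pred])               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```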
{"title":"Improved Indoor Positioning Model Based on UWB/IMU Tight Combination with Double-Loop Cumulative Error Estimation","authors":"Wenjie Zhu, R. Zhao, Hao Zhang, Jianfeng Lu, Zhishu Zhang, Bingyu Wei, Yuhang Fan","doi":"10.3390/app131810046","DOIUrl":"https://doi.org/10.3390/app131810046","url":null,"abstract":"With the increasing applications of UWB indoor positioning technologies in industrial areas, to further enhance the positioning precision, the UWB/IMU combination method (UICM) has been considered as one of the most effective solutions to reduce non-line-of-sight (NLOS) errors. However, most conversional UICMs suffer from a high probability of positioning failure due to uncontrollable and cumulative errors from inertial measuring units (IMU). Hence, to address this issue, we improved the extended Kalman filter (EKF) algorithm of an indoor positioning model based on UWB/IMU tight combination with a double-loop error self-correction. Compared with conventional UICMs, this improved model consists of new modules for fixing time desynchronization, optimizing the threshold setting for UWB ranging, data fusion in NLOS, and double-loop error estimation, sequentially. Further, systematic error controllability analysis proved that the proposed model could satisfy the controllability of UWB indoor positioning systems. To validate this improved UICM, inevitable obstacles and atmospheric interferences were regarded as Gaussian white noises to verify its environmental adaptability. Finally, the experimental results showed that this proposed model outperformed the state-of-the-art UWB-based positioning models with a maximum deviation of 0.232 m (reduced by 83.93% compared to a pure UWB model and 43.14% compared to the conventional UWB/IMU model) and standard deviation of 0.09981 m (reduced by 88.35% compared to a pure UWB model and 22.21% compared to the conventional UWB-IMU model).","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42232495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ngoc An Dang Nguyen, Hoang Nhut Huynh, Trung Nghia Tran
The development of optical sensors, especially the improved resolution of cameras, has made optical techniques more applicable in medicine and live animal research. Research efforts focus on image signal acquisition, scattering de-blurring of acquired images, and the development of image reconstruction algorithms. Rapidly evolving artificial intelligence has enabled techniques for de-blurring and for estimating the depth of light-absorbing structures in biological tissues. Although previous studies have demonstrated the feasibility of applying deep learning to these problems, limitations remain in de-blurring complex structures in heterogeneous turbid media and in accurately estimating the depth of absorbing structures (previously limited to depths shallower than 15.0 mm). These problems are related to the complexity of the absorbing structure, the heterogeneity of the biological tissue, the training data, and the neural network model itself. This study thoroughly explores how to generate training and testing datasets for different deep learning models in order to find the model with the best performance. The de-blurring results show that the Attention Res-UNet model has the best de-blurring ability, with a correlation of more than 89% between the de-blurred image and the original structure image; this result comes from adding an attention gate and a residual block to the common U-Net structure. The depth estimation results show that the DenseNet169 model can estimate depth with high accuracy up to 20.0 mm, beyond the previous limit. These results again confirm the feasibility of applying deep learning to transillumination image processing to reconstruct clear images and obtain information on the absorbing structures inside biological tissue, enabling subsequent transillumination imaging studies in biological tissues with greater heterogeneity and structural complexity.
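For readers unfamiliar with the attention gate mentioned above, here is a minimal PyTorch sketch of the standard additive attention gate used in Attention U-Net variants; channel sizes are illustrative, and the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

# Sketch of an additive attention gate (generic Attention U-Net style,
# not the paper's exact design). Assumes the gating signal g has already
# been resampled to the same spatial size as the skip features x.
class AttentionGate(nn.Module):
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, kernel_size=1)  # projects gating signal
        self.wx = nn.Conv2d(x_ch, inter_ch, kernel_size=1)  # projects skip features
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1),
                                 nn.Sigmoid())

    def forward(self, g, x):
        # Additive attention: alpha = sigmoid(psi(relu(Wg*g + Wx*x))).
        alpha = self.psi(torch.relu(self.wg(g) + self.wx(x)))
        return x * alpha  # suppress irrelevant skip-connection activations

# Example: gate a 64-channel skip tensor with a 128-channel decoder signal.
gate = AttentionGate(g_ch=128, x_ch=64, inter_ch=32)
out = gate(torch.randn(1, 128, 32, 32), torch.randn(1, 64, 32, 32))
```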
{"title":"Improvement of the Performance of Scattering Suppression and Absorbing Structure Depth Estimation on Transillumination Image by Deep Learning","authors":"Ngoc An Dang Nguyen, Hoang Nhut Huynh, Trung Nghia Tran","doi":"10.3390/app131810047","DOIUrl":"https://doi.org/10.3390/app131810047","url":null,"abstract":"The development of optical sensors, especially with regard to the improved resolution of cameras, has made optical techniques more applicable in medicine and live animal research. Research efforts focus on image signal acquisition, scattering de-blur for acquired images, and the development of image reconstruction algorithms. Rapidly evolving artificial intelligence has enabled the development of techniques for de-blurring and estimating the depth of light-absorbing structures in biological tissues. Although the feasibility of applying deep learning to overcome these problems has been demonstrated in previous studies, limitations still exist in terms of de-blurring capabilities on complex structures and the heterogeneity of turbid medium, as well as the limit of accurate estimation of the depth of absorptive structures in biological tissues (shallower than 15.0 mm). These problems are related to the absorption structure’s complexity, the biological tissue’s heterogeneity, the training data, and the neural network model itself. This study thoroughly explores how to generate training and testing datasets on different deep learning models to find the model with the best performance. The results of the de-blurred image show that the Attention Res-UNet model has the best de-blurring ability, with a correlation of more than 89% between the de-blurred image and the original structure image. This result comes from adding the Attention gate and the Residual block to the common U-net model structure. The results of the depth estimation show that the DenseNet169 model shows the ability to estimate depth with high accuracy beyond the limit of 20.0 mm. The results of this study once again confirm the feasibility of applying deep learning in transmission image processing to reconstruct clear images and obtain information on the absorbing structure inside biological tissue. This allows the development of subsequent transillumination imaging studies in biological tissues with greater heterogeneity and structural complexity.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44888720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hard turning is an emerging machining technology that evolved as a substitute for grinding in the production of precision parts from hardened steel, offering reduced cycle times, lower costs, and environmental benefits. Hard turning is considered difficult because the high hardness of the workpiece material causes greater tool wear, cutting temperature, surface roughness, and cutting force. In this work, the performance of a ZnO nano-cutting fluid delivered through a dual-nozzle minimum quantity lubrication (MQL) system is assessed in the hard turning of AISI 52100 bearing steel. The objective is to evaluate the nano-cutting fluid's effects on flank wear, surface roughness, cutting temperature, cutting power consumption, and cutting noise. Tool flank wear remained very low (0.027 mm to 0.095 mm) for a hard turning operation. The acquired data are statistically analyzed using main effects plots, interaction plots, and analysis of variance (ANOVA). Moreover, the Weighted Aggregated Sum Product Assessment (WASPAS) optimization method is applied to select the optimal combination of input parameters, yielding depth of cut = 0.3 mm, feed = 0.05 mm/rev, cutting speed = 210 m/min, and flow rate = 50 mL/h.
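To make the WASPAS step concrete, the sketch below ranks hypothetical machining trials: responses are normalized as cost-type criteria, scored by the weighted sum and weighted product models, and blended into the joint WASPAS score. The data, weights, and blending parameter are invented for illustration.

```python
import numpy as np

# Sketch of the WASPAS ranking step with made-up data: rows are machining
# trials, columns are responses (e.g., flank wear, roughness, temperature),
# all treated here as "smaller is better". Weights w are illustrative.
X = np.array([[0.030, 0.8, 250.0],
              [0.095, 1.1, 310.0],
              [0.050, 0.6, 280.0]])
w = np.array([0.4, 0.3, 0.3])      # criteria weights, sum to 1 (assumed)
lam = 0.5                          # WSM/WPM blending parameter (assumed)

# Normalize cost-type criteria: best (minimum) value / observed value.
N = X.min(axis=0) / X

wsm = (N * w).sum(axis=1)          # weighted sum model score
wpm = np.prod(N ** w, axis=1)      # weighted product model score
Q = lam * wsm + (1 - lam) * wpm    # joint WASPAS score
print("ranking (best first):", np.argsort(-Q))
```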
{"title":"WASPAS Based Multi Response Optimization in Hard Turning of AISI 52100 Steel under ZnO Nanofluid Assisted Dual Nozzle Pulse-MQL Environment","authors":"Saswat Khatai, Ramanuj Kumar, A. Panda, A. Sahoo","doi":"10.3390/app131810062","DOIUrl":"https://doi.org/10.3390/app131810062","url":null,"abstract":"Hard turning is an emerging machining technology that evolved as a substitute for grinding in the production of precision parts from hardened steel. It offers advantages such as reduced cycle times, lower costs, and environmental benefits over grinding. Hard turning is stated to be difficult because of the high hardness of the workpiece material, which causes higher tool wear, cutting temperature, surface roughness, and cutting force. In this work, a dual-nozzle minimum quantity lubrication (MQL) system’s performance assessment of ZnO nano-cutting fluid in the hard turning of AISI 52100 bearing steel is examined. The objective is to evaluate the ZnO nano-cutting fluid’s impacts on flank wear, surface roughness, cutting temperature, cutting power consumption, and cutting noise. The tool flank wear was traced to be very low (0.027 mm to 0.095 mm) as per the hard turning concern. Additionally, the data acquired are statistically analyzed using main effects plots, interaction plots, and analysis of variance (ANOVA). Moreover, a novel Weighted Aggregated Sum Product Assessment (WASPAS) optimization tool was implemented to select the optimal combination of input parameters. The following optimal input variables were found: depth of cut = 0.3 mm, feed = 0.05 mm/rev, cutting speed = 210 m/min, and flow rate = 50 mL/hr.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45173392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xinxin Guo, Mengyan Lyu, Bin Xia, Kunpeng Zhang, Liye Zhang
The feature point method is the mainstream approach to inter-frame estimation in visual Simultaneous Localization and Mapping (SLAM), among which methods based on the Oriented FAST and Rotated BRIEF (ORB) feature offer a good balance of accuracy and efficiency. However, the ORB algorithm is prone to clustering, and the uneven distribution of its extracted feature points hampers subsequent camera tracking. To solve these problems, this paper proposes an adaptive feature extraction algorithm that first constructs multi-scale images using an adaptive Gaussian pyramid, calculates adaptive thresholds, and applies adaptive meshing for regional feature point detection so as to adapt to different scenes. The method uses the Adaptive and Generic Accelerated Segment Test (AGAST) to speed up feature detection and non-maximum suppression to filter feature points. The feature points are then distributed evenly using a quadtree technique, and their orientation is determined by an intensity centroid approach. Experiments on publicly available datasets demonstrate that the algorithm adapts well to different scenes and avoids the large corner clusters that can result from manually set detection thresholds. Compared with ORB-SLAM3, the RMSE of the absolute trajectory error on four sequences of the TUM RGB-D dataset decreases by 13.88%. This demonstrates that the algorithm provides high-quality feature points for subsequent image alignment and that applying it to SLAM improves reliability and accuracy.
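The even-distribution idea can be illustrated with a simple grid-based detector that relaxes the AGAST threshold in low-texture cells and keeps only the strongest responses per cell. This Python/OpenCV sketch uses a uniform grid in place of the paper's quadtree, and all thresholds and cell counts are assumptions.

```python
import cv2
import numpy as np

# Grid-based AGAST detection with a per-cell fallback threshold (a rough
# stand-in for the paper's adaptive meshing + quadtree redistribution).
def detect_evenly(gray, rows=8, cols=8, thr=20, thr_min=7, per_cell=5):
    h, w = gray.shape
    kps = []
    for i in range(rows):
        for j in range(cols):
            cell = gray[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols]
            det = cv2.AgastFeatureDetector_create(threshold=thr)
            pts = det.detect(cell, None)
            if not pts:  # relax the threshold in low-texture cells
                det = cv2.AgastFeatureDetector_create(threshold=thr_min)
                pts = det.detect(cell, None)
            pts = sorted(pts, key=lambda p: p.response, reverse=True)[:per_cell]
            for p in pts:  # shift cell coordinates back into the image frame
                p.pt = (p.pt[0] + j*w//cols, p.pt[1] + i*h//rows)
            kps.extend(pts)
    return kps

# Synthetic frame for demonstration; replace with a real grayscale image.
gray = (np.random.rand(480, 640) * 255).astype(np.uint8)
kps = detect_evenly(gray)
```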
{"title":"An Improved Visual SLAM Method with Adaptive Feature Extraction","authors":"Xinxin Guo, Mengyan Lyu, Bin Xia, Kunpeng Zhang, Liye Zhang","doi":"10.3390/app131810038","DOIUrl":"https://doi.org/10.3390/app131810038","url":null,"abstract":"The feature point method is the mainstream method to accomplish inter-frame estimation in visual Simultaneous Localization and Mapping (SLAM) methods, among which the Oriented FAST and Rotated BRIEF (ORB) feature-based method provides an equilibrium of accuracy as well as efficiency. However, the ORB algorithm is prone to clustering phenomena, and its unequal distribution of extracted feature points is not conducive to the subsequent camera tracking. To solve the above problems, this paper suggests an adaptive feature extraction algorithm that first constructs multiple-scale images using an adaptive Gaussian pyramid algorithm, calculates adaptive thresholds, and uses an adaptive meshing method for regional feature point detection to adapt to different scenes. The method uses Adaptive and Generic Accelerated Segment Test (AGAST) to speed up feature detection and the non-maximum suppression method to filter feature points. The feature points are then divided equally by a quadtree technique, and the orientation of those points is determined by an intensity centroid approach. Experiments were conducted on publicly available datasets, and the outcomes demonstrate the algorithm has good adaptivity and solves the problem of a large number of corner point clusters that may result from using manually set detection thresholds. The RMSE of the absolute trajectory error of SLAM applying this method on four sequences of TUM RGB-D datasets is decreased by 13.88% when compared with ORB-SLAM3. It is demonstrated that the algorithm provides high-quality feature points for subsequent image alignment, and the application to SLAM improves the reliability and accuracy of SLAM.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48137481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we address the limitations of current deep learning models in road extraction from remote sensing imagery. We introduce MixerNet-SAGA, a novel deep learning model that builds on the strengths of U-Net, integrates a ConvMixer block for enhanced feature extraction, and includes a Scaled Attention Gate (SAG) for augmented spatial attention. Experimental validation on the Massachusetts and DeepGlobe road datasets demonstrates that MixerNet-SAGA achieves a 10% improvement in precision, 8% in recall, and 12% in IoU over leading models such as U-Net, ResNet, and SDUNet. Furthermore, our model excels in computational efficiency, running 20% faster with a smaller model size. Notably, MixerNet-SAGA is exceptionally robust to the same-spectrum–different-object and different-spectrum–same-object phenomena. Ablation studies further reveal the critical roles of the ConvMixer block and the SAG. Despite these strengths, the model's scalability to extremely large datasets remains an area for future investigation. Collectively, MixerNet-SAGA offers an efficient and accurate solution for road extraction in remote sensing imagery with significant potential for broader applications.
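For reference, the sketch below implements a standard ConvMixer block (a residual depthwise convolution for spatial mixing followed by a pointwise convolution for channel mixing) in PyTorch; dimensions are illustrative, and the paper's SAG module is not reproduced here.

```python
import torch
import torch.nn as nn

# Standard ConvMixer block sketch (generic design, not the paper's exact
# configuration): depthwise conv mixes spatial locations per channel,
# pointwise conv mixes channels, with a residual around the depthwise step.
class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer_block(dim, kernel_size=9):
    return nn.Sequential(
        Residual(nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(),
            nn.BatchNorm2d(dim))),
        nn.Conv2d(dim, dim, kernel_size=1),   # pointwise channel mixing
        nn.GELU(),
        nn.BatchNorm2d(dim))

# Example: apply one block to a 64-channel feature map.
out = conv_mixer_block(64)(torch.randn(1, 64, 128, 128))
```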
{"title":"MixerNet-SAGA A Novel Deep Learning Architecture for Superior Road Extraction in High-Resolution Remote Sensing Imagery","authors":"Wei Wu, Chao Ren, Anchao Yin, Xudong Zhang","doi":"10.3390/app131810067","DOIUrl":"https://doi.org/10.3390/app131810067","url":null,"abstract":"In this study, we address the limitations of current deep learning models in road extraction tasks from remote sensing imagery. We introduce MixerNet-SAGA, a novel deep learning model that incorporates the strengths of U-Net, integrates a ConvMixer block for enhanced feature extraction, and includes a Scaled Attention Gate (SAG) for augmented spatial attention. Experimental validation on the Massachusetts road dataset and the DeepGlobe road dataset demonstrates that MixerNet-SAGA achieves a 10% improvement in precision, 8% in recall, and 12% in IoU compared to leading models such as U-Net, ResNet, and SDUNet. Furthermore, our model excels in computational efficiency, being 20% faster, and has a smaller model size. Notably, MixerNet-SAGA shows exceptional robustness against challenges such as same-spectrum–different-object and different-spectrum–same-object phenomena. Ablation studies further reveal the critical roles of the ConvMixer block and SAG. Despite its strengths, the model’s scalability to extremely large datasets remains an area for future investigation. Collectively, MixerNet-SAGA offers an efficient and accurate solution for road extraction in remote sensing imagery and presents significant potential for broader applications.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43892530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot vacuum cleaners have gained widespread popularity as household appliances. One significant challenge in enhancing their functionality is identifying and classifying small indoor objects that can be safely suctioned and recycled during cleaning. However, current research faces several difficulties: the lack of a comprehensive dataset, object size variation, limited visual features, occlusion and clutter, varying lighting conditions, and the need for real-time processing on edge hardware. In this paper, I address these challenges by investigating a lightweight AI model specifically tailored to robot vacuum cleaners. First, I assembled a diverse dataset of 23,042 ground-view images captured by robot vacuum cleaners. Then, I examined state-of-the-art AI models from the literature and selected three high-performance candidates (Xception, DenseNet121, and MobileNet). Subsequently, I simplified these models to reduce their computational complexity and overall size, and further compressed them with post-training weight quantization. The resulting lightweight AI model strikes a balance between object classification accuracy and computational complexity, enabling real-time processing on resource-constrained robot vacuum cleaner platforms. Evaluation on the diverse dataset demonstrates its feasibility and practical applicability: within a small memory budget of 0.7 MB, the best model is L-w Xception 1 with a width factor of 0.25, achieving an object classification accuracy of 84.37%. Compared with the most accurate state-of-the-art model in the literature, the proposed model achieves a remarkable 350-fold reduction in memory size at the cost of only a slight decrease in classification accuracy of approximately 4.54%.
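Post-training weight quantization of the kind described can be realized with the TensorFlow Lite converter, as in the hedged sketch below; the MobileNet instance with width factor 0.25 is a stand-in, not the paper's trained model.

```python
import tensorflow as tf

# Post-training dynamic-range weight quantization via the TensorFlow Lite
# converter. The untrained MobileNet with width factor (alpha) 0.25 is a
# placeholder; in practice "model" would be a trained Keras model.
model = tf.keras.applications.MobileNet(weights=None, alpha=0.25)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights to 8-bit
tflite_bytes = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"quantized model size: {len(tflite_bytes) / 1e6:.2f} MB")
```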
{"title":"Towards Indoor Suctionable Object Classification and Recycling: Developing a Lightweight AI Model for Robot Vacuum Cleaners","authors":"Qian Huang","doi":"10.3390/app131810031","DOIUrl":"https://doi.org/10.3390/app131810031","url":null,"abstract":"Robot vacuum cleaners have gained widespread popularity as household appliances. One significant challenge in enhancing their functionality is to identify and classify small indoor objects suitable for safe suctioning and recycling during cleaning operations. However, the current state of research faces several difficulties, including the lack of a comprehensive dataset, size variation, limited visual features, occlusion and clutter, varying lighting conditions, the need for real-time processing, and edge computing. In this paper, I address these challenges by investigating a lightweight AI model specifically tailored for robot vacuum cleaners. First, I assembled a diverse dataset containing 23,042 ground-view perspective images captured by robot vacuum cleaners. Then, I examined state-of-the-art AI models from the existing literature and carefully selected three high-performance models (Xception, DenseNet121, and MobileNet) as potential model candidates. Subsequently, I simplified these three selected models to reduce their computational complexity and overall size. To further compress the model size, I employed post-training weight quantization on these simplified models. In this way, our proposed lightweight AI model strikes a balance between object classification accuracy and computational complexity, enabling real-time processing on resource-constrained robot vacuum cleaner platforms. I thoroughly evaluated the performance of the proposed AI model on a diverse dataset, demonstrating its feasibility and practical applicability. The experimental results show that, with a small memory size budget of 0.7 MB, the best AI model is L-w Xception 1, with a width factor of 0.25, whose resultant object classification accuracy is 84.37%. When compared with the most accurate state-of-the-art model in the literature, this proposed model accomplished a remarkable memory size reduction of 350 times, while incurring only a slight decrease in classification accuracy, i.e., approximately 4.54%.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47361801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jacobo Porto-Álvarez, Antonio Mosqueira Martínez, Javier Martínez Fernández, Marta Sanmartín López, M. Blanco Ulla, F. Vázquez Herrero, J. Pumar, M. Rodríguez-Yáñez, Anxo Manuel Minguillón Pereiro, Alberto Bolón Villaverde, Ramón Iglesias Rey, M. Souto-Bayarri
Acute ischemic stroke (AIS) is the loss of neurological function due to a sudden reduction in cerebral blood flow and is a leading cause of disability and death worldwide. The field of radiological imaging has experienced growth in recent years, which could be boosted by the advent of artificial intelligence. One of the latest innovations in artificial intelligence is radiomics, which is based on the fact that a large amount of quantitative data can be extracted from radiological images, from which patterns can be identified and associated with specific pathologies. Since its inception, radiomics has been particularly associated with the field of oncology and has shown promising results in a wide range of clinical situations. The performance of radiomics in non-tumour pathologies has been increasingly explored in recent years, and the results continue to be promising. The aim of this review is to explore the potential applications of radiomics in AIS patients and to theorize how radiomics may change the paradigm for these patients in the coming years.
{"title":"How Can Radiomics Help the Clinical Management of Patients with Acute Ischemic Stroke?","authors":"Jacobo Porto-Álvarez, Antonio Mosqueira Martínez, Javier Martínez Fernández, Marta Sanmartín López, M. Blanco Ulla, F. Vázquez Herrero, J. Pumar, M. Rodríguez-Yáñez, Anxo Manuel Minguillón Pereiro, Alberto Bolón Villaverde, Ramón Iglesias Rey, M. Souto-Bayarri","doi":"10.3390/app131810061","DOIUrl":"https://doi.org/10.3390/app131810061","url":null,"abstract":"Acute ischemic stroke (AIS) is the loss of neurological function due to a sudden reduction in cerebral blood flow and is a leading cause of disability and death worldwide. The field of radiological imaging has experienced growth in recent years, which could be boosted by the advent of artificial intelligence. One of the latest innovations in artificial intelligence is radiomics, which is based on the fact that a large amount of quantitative data can be extracted from radiological images, from which patterns can be identified and associated with specific pathologies. Since its inception, radiomics has been particularly associated with the field of oncology and has shown promising results in a wide range of clinical situations. The performance of radiomics in non-tumour pathologies has been increasingly explored in recent years, and the results continue to be promising. The aim of this review is to explore the potential applications of radiomics in AIS patients and to theorize how radiomics may change the paradigm for these patients in the coming years.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42524907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Youquan Dou, Qingsong Wang, Sensheng Wang, Xi Shu, Minghui Ni, Yan Li
Laser-induced breakdown spectroscopy (LIBS) requires only small samples and simple sample preparation, measures multiple elements simultaneously, and is safe to operate, giving it great potential for the rapid analysis of coal quality. In this paper, 59 types of coal commonly used in Chinese power plants were tested with a lab-designed, field-portable laser-induced breakdown spectrometer. Dataset division methods and quantitative analysis algorithms for the ash content, volatile matter, and calorific value of the coal samples were investigated. The prediction accuracy of three dataset partitioning methods, random selection (RS), Kennard–Stone (KS), and sample partitioning based on joint X-Y distances (SPXY), coupled with three quantitative algorithms, partial least squares regression (PLS), support vector machine regression (SVR), and random forest (RF), was compared and analyzed. The results show that the model combining SPXY with RF has the best prediction performance: for ash content, R2 = 0.9843, RMSEP = 1.3303, and mean relative error (MRE) = 7.47%; for volatile matter, R2 = 0.9801, RMSEP = 0.7843, and MRE = 2.19%; for calorific value, R2 = 0.9844, RMSEP = 0.7324, and MRE = 2.27%. This study demonstrates that a field-portable LIBS device combined with appropriate chemometric algorithms has broad application prospects in the rapid analysis of coal quality.
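As a hedged sketch of the best-performing pipeline, the code below implements the SPXY partitioning rule (iteratively selecting the sample with the largest minimum joint X-Y distance to the already-selected set) and fits a random forest; the spectra and property values are random placeholders, and the split size and forest size are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# SPXY partitioning followed by random-forest calibration. In practice X
# would hold LIBS spectra and y a coal property (ash, volatiles, or
# calorific value); here both are random placeholders.
def spxy_split(X, y, n_train):
    dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    dy = np.abs(y[:, None] - y[None, :])
    d = dx / dx.max() + dy / dy.max()                  # joint X-Y distance
    sel = list(np.unravel_index(d.argmax(), d.shape))  # two farthest samples
    while len(sel) < n_train:
        rest = [i for i in range(len(X)) if i not in sel]
        # Add the sample farthest from its nearest selected neighbour.
        sel.append(rest[int(np.argmax(d[np.ix_(rest, sel)].min(axis=1)))])
    return sel, [i for i in range(len(X)) if i not in sel]

X, y = np.random.rand(59, 200), np.random.rand(59)     # 59 "coal samples"
train, test = spxy_split(X, y, n_train=44)
model = RandomForestRegressor(n_estimators=500).fit(X[train], y[train])
pred = model.predict(X[test])
rmsep = np.sqrt(np.mean((pred - y[test]) ** 2))
mre = np.mean(np.abs(pred - y[test]) / y[test])
print(f"RMSEP={rmsep:.4f}, MRE={mre:.2%}")
```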
{"title":"Quantitative Analysis of Coal Quality by a Portable Laser Induced Breakdown Spectroscopy and Three Chemometrics Methods","authors":"Youquan Dou, Qingsong Wang, Sensheng Wang, Xi Shu, Minghui Ni, Yan Li","doi":"10.3390/app131810049","DOIUrl":"https://doi.org/10.3390/app131810049","url":null,"abstract":"Laser-induced breakdown spectroscopy (LIBS) technology has the characteristics of small sample demand, simple sample preparation, simultaneous measurement of multiple elements and safety, which has great potential application in the rapid detection of coal quality. In this paper, 59 kinds of coal commonly used in Chinese power plants were tested by a lab-designed field-portable laser-induced breakdown spectrometer. The data set division methods and the quantitative analysis algorithm of ash content, volatile matter and calorific value of coal samples were carried out. The accuracy and prediction accuracy of three kinds of dataset partitioning methods, random selection (RS), Kennard–Stone (KS) and sample partitioning based on joint X-Y distances (SPXY), coupled with three quantitative algorithms, partial least squares regression (PLS), support vector machine regression (SVR) and random forest (RF), were compared and analyzed in this paper. The results show that the model featuring SPXY combined with RF has the best prediction performance. The R2 of ash content by the RF and SPXY method is 0.9843, the RMSEP of ash content is 1.3303 and the mean relative error (MRE) is 7.47%. The R2 of volatile matter is 0.9801, RMSEP is 0.7843 and MRE is 2.19%. The R2 of calorific value is 0.9844, RMSEP is 0.7324 and MRE is 2.27%. This study demonstrates that the field-portable LIBS device combining appropriate chemometrics algorithms has a wide application prospect in the rapid analysis of coal quality.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44513860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abeeb Opeyemi Alabi, Byoung-Gyu Song, Jong-Jin Bae, Namcheol Kang
Existing biodynamic models use apparent mass and seat-to-head transmissibility to predict the response of seated humans to whole-body vibration, limiting their ability to capture the actual response of distinct body segments under different excitation conditions. This study systematically develops a 7-DOF seated human model, a vibration experiment, and a novel hybrid optimization method (HOM) to estimate unknown mechanical parameters and predict the response of different body segments to vertical vibrations. Experimental results showed that the upper trunk and head were most susceptible to transmitted vibrations. Combining the 7-DOF model with the HOM accelerated the optimization, improved numerical stability, and reduced the objective function value significantly compared with conventional algorithms. Notably, the estimated parameters, particularly stiffness, remained consistent regardless of increasing excitation magnitude or the body segment data used. Additionally, the model captured the non-linearity of human biodynamics through stiffness softening. These findings are applicable to optimizing seating systems for comfort and safety.
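The parameter-estimation idea can be illustrated on a 1-DOF stand-in: choose stiffness and damping so that the modeled base-excitation transmissibility matches a measured curve. The sketch below uses synthetic data and a generic Nelder–Mead optimizer rather than the authors' hybrid method; the mass, frequency range, and true parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Fit stiffness k and damping c of a 1-DOF mass-spring-damper so that its
# seat-to-mass transmissibility matches a "measured" curve (synthetic here).
m = 60.0                                   # segment mass, kg (assumed)
freqs = np.linspace(0.5, 20, 100)          # excitation frequencies, Hz
w = 2 * np.pi * freqs

def transmissibility(k, c):
    # |X/Y| for base excitation of a mass-spring-damper.
    return np.sqrt((k**2 + (c * w)**2) / ((k - m * w**2)**2 + (c * w)**2))

measured = transmissibility(45000.0, 1300.0)   # synthetic "experiment"

def objective(p):
    k, c = p
    return np.sum((transmissibility(k, c) - measured) ** 2)

res = minimize(objective, x0=[30000.0, 800.0], method="Nelder-Mead")
print("estimated k, c:", res.x)
```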
{"title":"Development of a 7-DOF Biodynamic Model for a Seated Human and a Hybrid Optimization Method for Estimating Human-Seat Interaction Parameters","authors":"Abeeb Opeyemi Alabi, Byoung-Gyu Song, Jong-Jin Bae, Namcheol Kang","doi":"10.3390/app131810065","DOIUrl":"https://doi.org/10.3390/app131810065","url":null,"abstract":"Existing biodynamic models adopt apparent mass and seat-to-head transmissibility to predict the response of seated humans to whole-body vibration, limiting their ability to capture the actual response of distinct body segments in different excitation conditions. This study systematically develops a 7-DOF seated human model, a vibration experiment, and a novel hybrid optimization to estimate unknown mechanical parameters and predict the response of different human body segments to vertical vibrations. Experimental results showed that the upper trunk and head were most susceptible to transmitted vibrations. Combining the 7-DOF model and HOM resulted in accelerated optimization, improved numerical stability, and significant minimization of the objective function value compared to conventional algorithms. Notably, the estimated parameters, particularly stiffness, remained consistent regardless of increasing excitation magnitude or change in the body segment data used. Additionally, the model captured the non-linearity in human biodynamics through stiffness softening. These findings are applicable in seating systems optimization for comfort and safety.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49192514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}