
Latest Publications in Optical Memory and Neural Networks

Optimization of Metamaterial Unit Cell Using Radial Basis Function Neural Network
IF 0.9 Q3 Computer Science Pub Date : 2023-09-25 DOI: 10.3103/S1060992X23030098
Shilpa Srivastava, Sanjay Kumar Singh, Usha Tiwari

Microstrip Patch Antennas (MPAs) are increasingly used in modern communication systems because of advantages such as light weight, ease of construction, and low cost. However, the operational bandwidth and power-handling capability of an MPA are restricted. In this research, a novel unit-cell MPA is designed and optimized using a Radial Basis Function Neural Network (RBFNN). Flame-retardant (FR4) metamaterial is used in the fabrication of the envisioned antenna and device. The High-Frequency Structure Simulator (HFSS) version 15 software is used for the design and simulation of the model. The design is simulated over a frequency range of 2 to 6 GHz. Finally, the antenna is implemented using the Complementary Split Ring Resonator (CSRR) technique. The proposed structure produces excellent reflection coefficients of –15.12 dB at 1.5 GHz, –55.41 dB at 2.5 GHz, and –25.63 dB at 3.5 GHz, with a Voltage Standing Wave Ratio (VSWR) of 2.0. Simulation results show an excellent outcome: return losses are 23.18, 38.67, and 44.12 dB at 0.6, 1.7, and 3.5 GHz respectively, and the gain is 8.5 dB at 6 GHz, values quite similar to the measured ones. The proposed unit-cell antenna outperforms previously designed microstrip antennas and is suitable for wireless communication systems.
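The RBFNN-based optimization described above can be illustrated with a minimal surrogate model: a Gaussian radial basis function network fitted to unit-cell geometries and simulated reflection coefficients, then queried cheaply in place of full-wave simulation. This is a generic sketch, not the authors' implementation; the training data (patch dimensions and S11 values) are hypothetical placeholders.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian RBF between the row vectors of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rbfnn(X, y, gamma=1.0, reg=1e-8):
    """Exact-interpolation RBF network: one center per training sample."""
    K = rbf_kernel(X, X, gamma)
    w = np.linalg.solve(K + reg * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ w

# Hypothetical training set: patch length/width (mm) -> simulated S11 (dB)
X = np.array([[28.0, 34.0], [30.0, 36.0], [32.0, 38.0], [34.0, 40.0]])
y = np.array([-12.5, -18.2, -25.1, -15.4])

predict = fit_rbfnn(X, y, gamma=0.05)
print(predict(X))                          # reproduces the training samples
print(predict(np.array([[31.0, 37.0]])))   # interpolates unseen geometries
```

In an optimization loop, such a surrogate would stand in for repeated HFSS runs, with only the most promising geometries re-verified in the simulator.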

Citations: 0
Machine Learning for Multiscale Video Coding
IF 0.9 Q3 Computer Science Pub Date : 2023-09-25 DOI: 10.3103/S1060992X23030037
M. V. Gashnikov

The research concerns the use of machine-learning algorithms for multiscale coding of digital video sequences. Based on machine learning, a digital image coder is generalized to the coding of video sequences. To this end, we offer an algorithm that accounts for inter-frame dependency by using linear regression. The generalized image coder uses a multiscale representation of video frames, neural-network three-dimensional interpolation of the multiscale representation levels, and generative-adversarial replacement of homogeneous portions of a video frame with synthetic video data. The methods of coding the entire video and of coding individual video frames are illustrated by block diagrams. A formalized description of how inter-frame correlation is taken into account is given. Real video sequences are used to carry out numerical experiments. The experimental data allow us to draw a conclusion about the promise of the algorithm for video coding and processing.
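The linear-regression step for inter-frame dependency can be sketched as follows: each frame is predicted from the previous one by a least-squares fit, and only the (much smaller) prediction residual would then be coded. This is a minimal illustration with synthetic frames, not the authors' coder.

```python
import numpy as np

def fit_frame_predictor(prev, curr):
    """Least-squares a, b so that curr ~= a * prev + b, pixel-wise."""
    a, b = np.polyfit(prev.ravel().astype(float),
                      curr.ravel().astype(float), 1)
    return a, b

rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(8, 8))
# Synthetic "next frame": brightness change plus small noise
frame1 = np.clip(0.9 * frame0 + 10 + rng.normal(0, 2, frame0.shape), 0, 255)

a, b = fit_frame_predictor(frame0, frame1)
residual = frame1 - (a * frame0 + b)  # the coder would store this residual
print(a, b, np.abs(residual).mean())  # residual energy << raw frame energy
```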

Citations: 0
English-Afaan Oromo Machine Translation Using Deep Attention Neural Network
IF 0.9 Q3 Computer Science Pub Date : 2023-09-25 DOI: 10.3103/S1060992X23030049
Ebisa A. Gemechu, G. R. Kanagachidambaresan

Attention-based neural machine translation (attentional NMT), which jointly aligns and translates, has gained much popularity in recent years. Moreover, a language model needs an accurate and large bilingual dataset, from the source to the target language, to boost translation performance. Many such datasets are publicly available for well-resourced languages for model training. However, no such dataset is currently available for the English-Afaan Oromo pair to build NMT language models. To alleviate this problem, we manually prepared a new 25K English-Afaan Oromo dataset for our model. Language experts evaluated the prepared corpus for translation accuracy. We also used the publicly available English-French and English-German datasets to compare translation performance across the three pairs. Further, we propose a deep attentional NMT model to train our models. Experimental results over the three language pairs demonstrate that the proposed system and our new dataset yield a significant gain. The English-Afaan Oromo model achieved a 1.19 BLEU-point improvement over previous English-Afaan Oromo Machine Translation (MT) models. The results also indicate that the model could perform as well as the other language pairs if supplied with a larger dataset. Our new 25K dataset also sets a baseline for future researchers interested in English-Afaan Oromo machine translation.
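The alignment mechanism at the core of attentional NMT can be sketched as scaled dot-product attention: each target-side query produces a soft alignment over all source-side states and a context vector. This is a generic illustration of the technique, not the authors' exact architecture; the dimensions are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Each target-side query attends over all source-side states."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])  # (tgt, src)
    weights = softmax(scores, axis=-1)                   # soft alignment
    return weights @ values, weights                     # context vectors

rng = np.random.default_rng(1)
src_states = rng.normal(size=(5, 16))   # 5 source tokens (e.g. English)
tgt_queries = rng.normal(size=(3, 16))  # 3 target tokens (e.g. Afaan Oromo)

context, align = attention(tgt_queries, src_states, src_states)
print(context.shape, align.shape)       # (3, 16) (3, 5)
print(align.sum(axis=-1))               # each alignment row sums to 1
```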

Citations: 0
Skin Cancer Detection and Classification System by Applying Image Processing and Machine Learning Techniques
IF 0.9 Q3 Computer Science Pub Date : 2023-09-25 DOI: 10.3103/S1060992X23030086
Dr. A. Rasmi, Dr. A. Jayanthiladevi

Skin cancer is among the most common types of cancer, altering the lives of millions of people each year; around three million people are diagnosed with it annually in the US alone. Skin cancer is related to the irregular growth of cells, and on account of its malignant character this type of skin cancer is termed melanoma. Melanoma appears on the skin due to exposure to ultraviolet radiation and hereditary factors. Melanoma lesions typically look brown or black but can occur anywhere on the patient's body. Most skin cancers are treatable at the earliest stages, so fast recognition of skin cancer can save a patient's life. However, identifying skin cancer in its early stages is difficult and, moreover, expensive. In this paper, we address these problems by building a decision scheme for early-stage skin-lesion identification that could be embedded in a smart robot for health monitoring in everyday surroundings to support early detection. The scheme classifies benign and malignant skin lesions through several procedures, comprising pre-processing (for instance, noise elimination), segmentation, feature extraction from lesion sections, feature collection, and labelling. After separating many raw images, colour and texture characteristics extracted from the lesion regions are employed to select the most significant feature subsets for healthy and cancerous cases. An SVM is then applied to carry out benign and malignant lesion detection.
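The colour and texture characteristics the scheme relies on can be sketched with simple per-region statistics. The abstract does not specify the exact descriptors, so the ones below (per-channel mean/std plus a gradient-energy texture measure) are hypothetical placeholders; an SVM would then classify the resulting feature vectors.

```python
import numpy as np

def lesion_features(region):
    """Colour + texture features for an RGB lesion region of shape (H, W, 3)."""
    region = region.astype(float)
    colour = np.concatenate([region.mean(axis=(0, 1)),   # mean R, G, B
                             region.std(axis=(0, 1))])   # std  R, G, B
    gray = region.mean(axis=-1)
    gy, gx = np.gradient(gray)
    texture = np.array([np.hypot(gx, gy).mean()])        # gradient energy
    return np.concatenate([colour, texture])             # 7-D feature vector

rng = np.random.default_rng(2)
# Two synthetic lesion patches: one smooth, one rough-textured
smooth = np.full((16, 16, 3), 120.0) + rng.normal(0, 1, (16, 16, 3))
rough = np.full((16, 16, 3), 60.0) + rng.normal(0, 30, (16, 16, 3))

f_smooth, f_rough = lesion_features(smooth), lesion_features(rough)
print(f_smooth.shape)              # (7,)
print(f_smooth[-1] < f_rough[-1])  # rougher region scores higher texture energy
```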

Citations: 0
Comparison of the 2011 and 2020 Stratospheric Ozone Events at Arctic and Northern Eurasian Latitudes Using TEMIS and Aura MLS Data
IF 0.9 Q3 Computer Science Pub Date : 2023-09-25 DOI: 10.3103/S1060992X23030025
O. E. Bazhenov

The winter-spring seasons of 2019–2020 and 2010–2011 were the periods of the severest ozone events in the Arctic of the entire satellite era. They stemmed from extremely cold and persistent polar stratospheric cloud (PSC) seasons, conducive to record-strong chemical ozone destruction. TEMIS observations indicate that the total ozone (TO) column diverged from the long-term norm by 45 to 55% in 2020 and by 37 to 44% in 2011 at Arctic sites, and by 27 to 32% in 2020 and by 27 to 36% in 2011 at midlatitudes. Aura MLS profiles showed that the minimum temperature over the Arctic was 8–13% lower than the norm in 2020 and 8–12% lower in 2011. The ozone mixing ratios were 4% of the long-term mean at a height of 20 km on March 27, 2020 and 25% at a height of 21 km on March 20, 2011 for Eureka; and 7% at 19 km on April 19, 2020 and 24% at 20 km on March 20, 2011 for Ny-Ålesund. The relations between water vapor and ozone mixing ratios, between water vapor mixing ratio and temperature, and between ozone mixing ratio and temperature show stronger correlations in 2020 than in 2011. The correlations weaken equatorward, becoming almost insignificant at extra-vortex latitudes.
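The profile correlations reported above amount to Pearson coefficients between co-located quantities on a common altitude grid. A minimal sketch with entirely synthetic profiles (the numbers below are illustrative, not MLS data):

```python
import numpy as np

rng = np.random.default_rng(3)
alt_km = np.linspace(15, 30, 40)  # common altitude grid
# Synthetic stand-ins: ozone loosely tracks temperature inside the vortex
temperature = 200 + 1.5 * (alt_km - 15) + rng.normal(0, 1.0, alt_km.size)
ozone_ppmv = 0.5 + 0.12 * (temperature - 200) + rng.normal(0, 0.2, alt_km.size)

# Pearson correlation between the two co-located profiles
r = np.corrcoef(temperature, ozone_ppmv)[0, 1]
print(round(r, 3))  # strong positive correlation for this synthetic case
```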

Citations: 0
Algorithm for Data Processing from Ozone Lidar Sensing in the Atmosphere
IF 0.9 Q3 Computer Science Pub Date : 2023-09-25 DOI: 10.3103/S1060992X23030050
A. A. Nevzorov, A. V. Nevzorov, A. I. Nadeev, N. G. Zaitsev, Ya. O. Romanovskii

We developed a software algorithm for processing the data from lidar sensing at wavelengths of 299/341 nm along a vertical atmospheric sensing path with a spatial resolution from 1.5 to 150 m. The main options of the software include recording the atmospheric lidar sensing data, conversion from DAT to TXT file format, and retrieval of ozone concentration profiles. The software package, developed on the basis of our algorithm for processing the lidar sensing data, makes it possible to obtain ozone concentration profiles from 4 to 20 km. The blocks for recording the atmospheric lidar sensing data and retrieving the ozone concentration profiles allow visual control of the recorded lidar returns and the retrieved profiles. We present an example of retrieving the ozone concentration profile from lidar data obtained in 2022.
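Retrieval of ozone concentration from two-wavelength (on/off) lidar returns conventionally follows the differential-absorption (DIAL) relation n(z) ≈ (1 / 2Δσ) · d/dz ln(P_off / P_on). The sketch below applies that standard relation to synthetic returns; the cross-section value and layer shape are assumptions, not the authors' processing chain.

```python
import numpy as np

def dial_ozone(z_m, p_on, p_off, dsigma_m2):
    """Standard DIAL estimate of number density (m^-3) from on/off returns."""
    ratio = np.log(p_off / p_on)                 # differential optical depth
    return np.gradient(ratio, z_m) / (2.0 * dsigma_m2)

# Synthetic returns consistent with a Gaussian ozone layer near 19 km
z = np.arange(4000.0, 20000.0, 150.0)            # 150 m range bins
dsigma = 4.0e-23                                  # assumed O3 cross-section difference, m^2
n_true = 5e18 * np.exp(-((z - 19000.0) / 4000.0) ** 2)  # m^-3
tau = 2.0 * dsigma * np.cumsum(n_true) * 150.0    # two-way differential depth
p_off = 1.0 / z**2                                # range-squared falloff only
p_on = p_off * np.exp(-tau)                       # on-line channel is absorbed

n_est = dial_ozone(z, p_on, p_off, dsigma)
print(np.abs(n_est - n_true).max() / n_true.max())  # small relative error
```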

Citations: 0
Application of Deep Neural Network Structures in Semantic Segmentation for Road Scene Understanding
IF 0.9 Q3 Computer Science Pub Date : 2023-06-23 DOI: 10.3103/S1060992X23020108
Qusay Sellat, Kanagachidambaresan Ramasubramanian

Semantic segmentation is crucial for autonomous driving, as the pixel-wise classification of the surrounding scene images is the main input to the scene-understanding stage. With the development of deep learning technology and impressive hardware capabilities, semantic segmentation has seen important improvements towards higher segmentation accuracy. However, an efficient semantic segmentation model is needed for real-time applications such as autonomous driving. In this paper, we explore the potential of employing the design principles of two deep learning models, namely PSPNet and EfficientNet, to produce a highly accurate and efficient convolutional autoencoder model for semantic segmentation. We also benefit from data augmentation for better model training. Our experiments on the CamVid dataset produce promising results, and the comparison with other mainstream semantic segmentation models justifies the approach used.
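The data-augmentation step for segmentation has one essential constraint: every geometric transform must be applied identically to the image and its label mask. A minimal numpy-only sketch (the crop size and class count are illustrative assumptions):

```python
import numpy as np

def augment(image, mask, rng):
    """Random horizontal flip + random crop, applied jointly to image and mask."""
    if rng.random() < 0.5:                  # flip both to keep pixel/label alignment
        image, mask = image[:, ::-1], mask[:, ::-1]
    h, w = mask.shape
    ch, cw = h - 8, w - 8                   # crop size (hypothetical choice)
    top, left = rng.integers(0, 9, size=2)  # same offset for image and mask
    return (image[top:top + ch, left:left + cw],
            mask[top:top + ch, left:left + cw])

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64, 3))
lbl = rng.integers(0, 12, size=(64, 64))    # e.g. 12 road-scene classes

aug_img, aug_lbl = augment(img, lbl, rng)
print(aug_img.shape, aug_lbl.shape)         # (56, 56, 3) (56, 56)
```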

Citations: 0
Improving the Performance of Human Part Segmentation Based on Swin Transformer
IF 0.9 Q3 Computer Science Pub Date : 2023-06-23 DOI: 10.3103/S1060992X23020030
Juan Du, Tao Yang

One of the current challenges in deep learning is semantic segmentation. Human part segmentation is a sub-task of image segmentation that differs from traditional segmentation in that it must understand the human body's intrinsic connections. Convolutional Neural Networks (CNNs) have long been the standard feature-extraction networks in human part segmentation. Recently, the Swin Transformer has surpassed CNNs in many image applications. However, few articles have compared the performance of the Swin Transformer with CNNs in human part segmentation. In this paper, we carry out a comparison experiment on this issue, and the experimental results show that even in human part segmentation, and without any additional tricks, the Swin Transformer achieves good results compared with CNNs. We also combine the Edge Perceiving Module (EPM), currently common in CNNs, with the Swin Transformer to show that the Swin Transformer can capture the intrinsic connections among segmented parts. This research demonstrates the feasibility of applying the Swin Transformer to image part segmentation, which is conducive to advancing image segmentation technology in the future.
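The Swin Transformer's defining operation — restricting self-attention to non-overlapping local windows — rests on a simple partition/reverse reshape of the feature map. A minimal numpy sketch of that bookkeeping (window size and tensor dimensions are illustrative; attention itself is omitted):

```python
import numpy as np

def window_partition(x, ws):
    """(H, W, C) feature map -> (num_windows, ws*ws, C) tokens per window."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def window_reverse(windows, ws, H, W):
    """Inverse of window_partition."""
    C = windows.shape[-1]
    x = windows.reshape(H // ws, W // ws, ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

rng = np.random.default_rng(5)
feat = rng.normal(size=(8, 8, 4))        # toy feature map
wins = window_partition(feat, ws=4)      # 4 windows of 16 tokens each
print(wins.shape)                        # (4, 16, 4)
# self-attention would run independently within each window here
restored = window_reverse(wins, 4, 8, 8)
print(np.allclose(restored, feat))       # partition is exactly invertible
```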

Citations: 0
Detection and Prediction of Breast Cancer Using Improved Faster Regional Convolutional Neural Network Based on Multilayer Perceptron’s Network
IF 0.9 Q3 Computer Science Pub Date : 2023-06-23 DOI: 10.3103/S1060992X23020054
Poonam Rana, Pradeep Kumar Gupta, Vineet Sharma

Breast cancer is one of the most frequent causes of death for women worldwide. In most cases it can be identified quickly once certain symptoms emerge, but many women with breast cancer show no symptoms. It is therefore critical to detect the disease at an early stage; moreover, diagnosis requires numerous radiologists, which is quite expensive for the majority of cancer hospitals. To address these concerns, the proposed methodology creates a Faster Regional Convolutional Neural Network (Faster R-CNN) for recognizing breast cancer. Ultrasound images are collected and pre-processed using resizing, adaptive median filtering, histogram global contrast enhancement, and high-boost filtering. Resizing changes the image size without cropping anything out; the adaptive median filter removes unwanted noise from the resized image; histogram global contrast enhancement raises the contrast level of the image; and high-boost filtering sharpens the edges in the image. The pre-processed images are then fed to the Faster R-CNN, which extracts features and segments the tumour region accurately. These segmented regions are classified using a Multilayer Perceptron to detect whether a patient is affected by breast cancer. According to the experimental study, the proposed approach achieves 97.1% accuracy, 0.03% error, 91% precision, and 93% specificity, attaining better performance than other existing approaches. This prediction model helps detect breast cancer at an early stage and improve patients' living standards.
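Of the pre-processing steps listed, the adaptive median filter is the least standard, so here is a compact sketch of the classic algorithm: grow the window until the local median is not itself an impulse, then replace only impulse-valued pixels. This is a textbook variant for illustration, not the paper's implementation.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Classic adaptive median filter for impulse (salt-and-pepper) noise."""
    out = img.astype(float).copy()
    pad = max_win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            for w in range(3, max_win + 1, 2):    # grow window: 3, 5, 7, ...
                r = w // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:                 # median is not an impulse
                    if not (lo < img[i, j] < hi):  # center pixel IS an impulse
                        out[i, j] = med
                    break                          # otherwise keep the pixel
    return out

rng = np.random.default_rng(6)
clean = np.full((32, 32), 128.0)                  # synthetic uniform image
noisy = clean.copy()
spots = rng.random(clean.shape) < 0.1             # 10% salt-and-pepper noise
noisy[spots] = rng.choice([0.0, 255.0], size=spots.sum())

restored = adaptive_median(noisy)
print(np.abs(restored - clean).mean() < np.abs(noisy - clean).mean())
```

Unlike a plain median filter, this variant leaves non-impulse pixels untouched, which preserves edges and fine detail before segmentation.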

Text-Text Neural Machine Translation: A Survey
IF 0.9 Q3 Computer Science Pub Date : 2023-06-23 DOI: 10.3103/S1060992X23020042
Ebisa Gemechu, G. R. Kanagachidambaresan

We present a review of Neural Machine Translation (NMT), which has gained great popularity in recent decades. Machine translation has eased the way we perform large-scale language translation in the digital era; without it, translation would have to be done manually by human experts, which is costly, time-consuming, and markedly inefficient. Three main Machine Translation (MT) techniques have been developed over the past few decades: rule-based, statistical, and neural machine translation. We present the merits and demerits of each of these methods and give a detailed review of articles in each category. In the present survey, we conduct an in-depth review of existing approaches, basic architectures, and models for MT systems. Our effort is to shed light on existing MT systems, assist potential researchers by revealing related work in the literature, and identify critical research gaps along the way. This review should help researchers interested in the study of MT.
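Of the three MT paradigms the survey contrasts, the rule-based approach is the simplest to illustrate. The toy sketch below performs word-for-word dictionary substitution against a hypothetical English-to-French lexicon (the lexicon and function are illustrative assumptions, not from the survey); its inability to handle word order, morphology, or unknown words is exactly the weakness that statistical and neural methods address.

```python
# Toy rule-based MT: word-for-word substitution against a small lexicon.
# The lexicon below is a hypothetical example for illustration only.
lexicon = {"hello": "bonjour", "world": "monde", "good": "bon"}

def rule_based_translate(sentence, lexicon):
    """Translate by per-word lookup, passing unknown words through unchanged."""
    return " ".join(lexicon.get(word, word)
                    for word in sentence.lower().split())
```

A call such as `rule_based_translate("Hello world", lexicon)` yields `"bonjour monde"`, while any word missing from the lexicon is passed through untranslated, showing why pure rule-based systems degrade on open-domain text.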
