
Latest Publications in Int. J. Image Graph.

A Chaotic Encryption System Based on DNA Coding Using a Deep Neural Network
Pub Date : 2022-01-27 DOI: 10.1142/s0219467823500201
K. Sudha, V. C. Castro, G. Muthulakshmii, T. I. Parithi, S. Raja
Deep learning, which is critical to computer vision applications, demands a massive volume of training data for good performance. However, encrypting the sensitive information in a photograph remains difficult despite rapid technological advancement. The Advanced Encryption Standard (AES) is the bedrock of classical encryption technologies, while the Data Encryption Standard (DES) has low key sensitivity and weak anti-hacking capabilities. In a chaotic encryption system, a chaotic logistic map is employed to generate a key double logistic sequence, and deoxyribonucleic acid (DNA) matrices are created by DNA coding. An XOR operation is carried out between the DNA sequence matrix and the key matrix, and the resulting DNA matrix is decoded to obtain the encrypted image. Given that encrypted images are susceptible to attacks, a fast and efficient Convolutional Neural Network (CNN) denoiser is used to enhance the robustness of the algorithm by maximizing the resolution of the reconstructed images. The use of a key mixing percentage factor gives the proposed system a vast key space and high key sensitivity. The implementation is examined using statistical techniques such as histogram analysis, information entropy, key space analysis and key sensitivity. Experiments show that the suggested system is secure and robust against statistical and noise attacks.
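The chaotic keystream and DNA-level XOR described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the logistic-map parameters (x0 = 0.3141, r = 3.99), the single DNA coding rule (00→A, 01→C, 10→G, 11→T) and the byte-stream framing are all assumptions, since the paper uses a double logistic sequence and one of several DNA coding rules.

```python
import numpy as np

# DNA coding rule assumed here: 00->A, 01->C, 10->G, 11->T (one of the eight
# standard rules; the paper does not fix a specific rule in the abstract).
B2D = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
D2B = {v: k for k, v in B2D.items()}

def logistic_keystream(x0, r, n):
    """Iterate the chaotic logistic map x <- r*x*(1-x) and quantize to bytes."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

def dna_encode(data):
    """Encode each byte as four DNA bases (2 bits per base, MSB first)."""
    out = []
    for byte in data:
        out.extend(B2D[(byte >> s) & 0b11] for s in (6, 4, 2, 0))
    return out

def dna_decode(bases):
    """Inverse of dna_encode: pack four bases back into one byte."""
    data = []
    for i in range(0, len(bases), 4):
        byte = 0
        for b in bases[i:i + 4]:
            byte = (byte << 2) | D2B[b]
        data.append(byte)
    return bytes(data)

def dna_xor(a, b):
    """Base-wise DNA XOR, induced by XOR of the underlying 2-bit codes."""
    return [B2D[D2B[x] ^ D2B[y]] for x, y in zip(a, b)]

def encrypt(pixels, x0=0.3141, r=3.99):
    key = dna_encode(logistic_keystream(x0, r, len(pixels)))
    return dna_decode(dna_xor(dna_encode(pixels), key))

# XOR is an involution, so decrypting is the same operation with the same key.
plain = bytes(range(16))
cipher = encrypt(plain)
recovered = encrypt(cipher)
```

Because the DNA XOR is induced by bitwise XOR, applying the cipher twice with the same chaotic key recovers the plaintext, which is why a single `encrypt` function serves both directions in this sketch.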
Citations: 1
T2FRF Filter: An Effective Algorithm for the Restoration of Fingerprint Images
Pub Date : 2021-12-31 DOI: 10.1142/s0219467823500043
Joycy K. Antony, K. Kanagalakshmi
Images captured in dim light are rarely satisfactory, and raising the camera's ISO sensitivity to compensate for a short exposure makes them noisy. Image restoration methods have a wide range of applications in medical imaging, computer vision, remote sensing, and graphic design. Although using a flash improves the lighting, it changes the image tone and introduces unwanted highlights and shadows. These drawbacks are overcome by image restoration methods that recover a high-quality image from the degraded observation; the main challenge is recovering a degraded image contaminated with noise. In this research, an effective algorithm, named the T2FRF filter, is developed for image restoration. Noisy pixels are identified in the input fingerprint image using a Deep Convolutional Neural Network (Deep CNN) trained on neighboring pixels. The Rider Optimization Algorithm (ROA) is used to remove the noisy pixels, and pixel enhancement is performed using a type II fuzzy system. The developed T2FRF filter is evaluated using metrics such as the correlation coefficient and Peak Signal-to-Noise Ratio (PSNR). Compared with existing image restoration methods, the developed method obtained a maximum correlation coefficient of 0.7504 and a maximum PSNR of 28.2467 dB.
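The two evaluation metrics quoted above, correlation coefficient and PSNR, can be computed as below; the toy 32x32 image and the noise level are illustrative assumptions, not data from the paper.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def correlation_coefficient(ref, test):
    """Pearson correlation between the two images, flattened to vectors."""
    return float(np.corrcoef(ref.ravel().astype(np.float64),
                             test.ravel().astype(np.float64))[0, 1])

# Synthetic example: a clean image and a mildly noise-degraded copy.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
noisy = clean + rng.normal(0.0, 5.0, size=clean.shape)
```

Identical images give infinite PSNR and correlation 1; a restored image is judged better the closer both metrics get to those limits.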
Citations: 1
Multimodal Biometric Person Authentication Using Face, Ear and Periocular Region Based on Convolution Neural Networks
Pub Date : 2021-12-31 DOI: 10.1142/s0219467823500195
M. Lohith, Yoga Suhas Kuruba Manjunath, M. N. Eshwarappa
Biometrics is an active area of research because of the increasing need for accurate person identification in applications ranging from entertainment to security. Unimodal and multimodal approaches are the two well-known biometric methods. Unimodal biometrics uses a single biometric modality for person identification; its performance is degraded by limitations such as intra-class variation and non-universality. Multimodal biometrics identifies a person using more than one biometric modality and has gained interest due to its resistance to spoofing attacks and its higher recognition rate. Conventional feature extraction methods struggle to engineer features that are robust to variations such as illumination, pose and age. Feature extraction using a convolution neural network (CNN) can overcome these difficulties, because a large dataset with such variations can be used for training, allowing the CNN to learn them. In this paper, we propose multimodal biometrics with feature-level horizontal fusion of face, ear and periocular region modalities, apply a deep learning CNN for feature representation, and also propose a face, ear and periocular region dataset that is robust to intra-class variations. The system is evaluated on the proposed database. Accuracy, Precision, Recall and F1 score are calculated to evaluate performance, showing remarkable improvement over existing biometric systems.
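Feature-level "horizontal" fusion as described above amounts to concatenating the per-modality embeddings into one joint vector; a minimal sketch follows. The embedding dimensions (128 for face, 64 each for ear and periocular) are hypothetical, since the abstract does not report the CNN output sizes.

```python
import numpy as np

def fuse_features(face_feat, ear_feat, periocular_feat):
    """Feature-level horizontal fusion: concatenate the per-modality
    embeddings along the feature axis into one joint vector."""
    return np.concatenate([face_feat, ear_feat, periocular_feat], axis=-1)

# Stand-ins for CNN embeddings of one subject (dimensions are assumptions).
rng = np.random.default_rng(1)
face = rng.standard_normal(128)
ear = rng.standard_normal(64)
peri = rng.standard_normal(64)
fused = fuse_features(face, ear, peri)
```

The fused vector then feeds a single classifier, so a spoofed or low-quality modality is diluted by the other two rather than deciding the match on its own.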
Citations: 0
HHO-Based Vector Quantization Technique for Biomedical Image Compression in Cloud Computing
Pub Date : 2021-12-31 DOI: 10.1142/s0219467822400083
T. S. Kumar, S. Jothilakshmi, B. C. James, M. Prakash, N. Arulkumar, C. Rekha
In the present digital era, the widespread use of medical technologies and the massive volume of medical data generated by different imaging modalities make adequate storage, management, and transmission of biomedical images dependent on image compression techniques. Vector quantization (VQ) is an effective image compression approach, and the most widely employed VQ technique is Linde–Buzo–Gray (LBG), which generates locally optimal codebooks for image compression. Codebook construction is treated as an optimization problem solved using metaheuristic optimization techniques. In this view, this paper designs an effective biomedical image compression technique for the cloud computing (CC) environment using Harris Hawks Optimization (HHO)-based LBG. The HHO-LBG algorithm achieves a smooth transition between exploration and exploitation. To investigate its performance, an extensive set of simulations was carried out on benchmark biomedical images. The proposed HHO-LBG technique achieves promising results in terms of compression performance and reconstructed image quality.
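The LBG codebook construction that HHO refines can be sketched with the classical split-and-refine iteration below. The HHO metaheuristic itself is omitted, so this shows plain LBG only, and the 4-dimensional vectors (standing in for 2x2 image blocks) and codebook size of 8 are illustrative assumptions.

```python
import numpy as np

def lbg_codebook(vectors, codebook_size, iters=20):
    """Classical LBG: start from the global mean, repeatedly split every
    codeword into a perturbed pair, then refine with k-means-style updates.
    (The paper drives this with Harris Hawks Optimization; omitted here.)"""
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < codebook_size:
        codebook = np.vstack([codebook * (1 + 1e-3), codebook * (1 - 1e-3)])
        for _ in range(iters):
            # Assign each vector to its nearest codeword, then recenter.
            d = np.linalg.norm(vectors[:, None, :] - codebook[None], axis=2)
            assign = d.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = vectors[assign == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Map each input block to the index of its nearest codeword."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None], axis=2)
    return d.argmin(axis=1)

# Stand-ins for 2x2 image blocks flattened to 4-vectors.
rng = np.random.default_rng(2)
blocks = rng.random((200, 4))
cb = lbg_codebook(blocks, 8)
idx = quantize(blocks, cb)
```

Compression then stores only the codeword indices plus the codebook; the quality/size trade-off is governed by the codebook size, which is exactly what the metaheuristic tunes the codewords for.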
Citations: 2
A Novel Ensemble Stacking Classification of Genetic Variations Using Machine Learning Algorithms
Pub Date : 2021-12-31 DOI: 10.1142/s0219467823500158
Y. Jahnavi, Poongothai Elango, S. Raja, P. Kumar
Genetics is the clinical study of congenital mutation; the principal advantage of analyzing human genetic mutations is the exploration, analysis, interpretation and description of the inherited component of diseases such as cancer, diabetes and heart disease. Cancer is among the most troublesome of these afflictions, and the proportion of cancer sufferers is growing massively. Identifying and discriminating the mutations that contribute to tumor growth from neutral mutations is difficult, because most cancerous tumors harbor many genetic mutations. Genetic mutations are systematized and categorized to classify the cancer on the basis of medical observations and clinical studies. At present, genetic mutations are annotated either manually or with existing rudimentary algorithms, and the evaluation and classification of each individual mutation is predicated on evidence documented in the medical literature. Consequently, classifying genetic mutations on the basis of clinical evidence remains a challenging task. Various techniques are applied: a one-hot encoding technique derives features from genes and their variations, while TF-IDF extracts features from the clinical text data. To increase classification accuracy, machine learning algorithms such as support vector machines, logistic regression and Naive Bayes are evaluated, and a stacking model classifier has been developed. The proposed stacking model classifier obtained log losses of 0.8436 and 0.8572 on the cross-validation and test data sets, respectively. Experiments show that the proposed stacking model classifier outperforms the existing algorithms in terms of log loss; a lower log loss indicates a better model, and here it has been reduced to less than 1. The performance of these algorithms can be gauged using measures such as multi-class log loss.
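Multi-class log loss, the evaluation measure quoted above, can be computed as follows; the nine-class example is illustrative, since the abstract does not state the number of mutation classes.

```python
import numpy as np

def multiclass_log_loss(y_true, probs, eps=1e-15):
    """Multi-class logarithmic loss: mean negative log of the probability
    the model assigned to each sample's true class."""
    probs = np.clip(probs, eps, 1 - eps)
    probs = probs / probs.sum(axis=1, keepdims=True)  # renormalize after clipping
    n = len(y_true)
    return float(-np.log(probs[np.arange(n), y_true]).mean())

# Uniform guessing over C classes scores ln(C); confident correct
# predictions score near 0, which is why lower log loss means a better model.
uniform = np.full((4, 9), 1 / 9)
loss_uniform = multiclass_log_loss(np.array([0, 3, 5, 8]), uniform)
```

For nine classes, uniform guessing gives ln(9) ≈ 2.197, so the reported 0.84 sits well below the chance baseline.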
Citations: 1
DCT Coefficients Weighting (DCTCW)-Based Gray Wolf Optimization (GWO) for Brightness Preserving Image Contrast Enhancement
Pub Date : 2021-12-31 DOI: 10.1142/s0219467823500183
Saorabh Kumar Mondal, Arpitam Chatterjee, B. Tudu
Image contrast enhancement (CE) is a frequent requirement in diverse applications. Histogram equalization (HE), in its conventional and further improved forms, is a popular technique to enhance image contrast. The conventional and many later versions of HE algorithms often lose original image characteristics, particularly the brightness distribution of the original image, which results in an artificial appearance and feature loss in the enhanced image. Discrete Cosine Transform (DCT) coefficient mapping is one of the recent methods that minimizes such problems while enhancing image contrast. Tuning the DCT parameters plays a crucial role in avoiding saturation of pixel values; optimization is a possible way to address this problem and generate a contrast-enhanced image that preserves the desired characteristics of the original. Biologically inspired optimization techniques have shown remarkable improvement over conventional optimization techniques on various complex engineering problems, and gray wolf optimization (GWO) is a comparatively new algorithm in this domain with promising potential. The objective function is formulated using several parameters chosen to retain original image characteristics. Objective evaluation against CEF, PCQI, FSIM, BRISQUE and NIQE with test images from three standard databases, namely SIPI, TID and CSIQ, shows that the presented method reaches values of up to 1.4, 1.4, 0.94, 19 and 4.18, respectively, for the stated metrics, which is competitive with reported conventional and improved techniques. This paper can be considered a first application of GWO to DCT-based image CE.
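A minimal GWO loop, to make the optimization step concrete: wolves move toward the three best solutions (alpha, beta, delta) under a coefficient that decays from 2 to 0. The paper's actual fitness function, built from DCT coefficient weights and the quality metrics above, is replaced here by a toy sphere objective, and all hyperparameters (20 wolves, 200 iterations, bounds) are assumptions.

```python
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=20, iters=200, seed=0):
    """Minimal Gray Wolf Optimization: positions are pulled toward the
    alpha, beta and delta wolves; 'a' decays 2 -> 0 to shift the search
    from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = fitness.argsort()
        alpha, beta, delta = X[order[:3]]  # fancy indexing -> copies
        a = 2.0 * (1 - t / iters)
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                new += leader - A * D
            X[i] = np.clip(new / 3.0, lo, hi)
    fitness = np.apply_along_axis(f, 1, X)
    best = X[fitness.argmin()]
    return best, float(f(best))

# Toy objective standing in for the DCT-weighting fitness function.
sphere = lambda x: float(np.sum(x ** 2))
best, best_val = gwo_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

In the paper's setting, each wolf would encode DCT weighting parameters and the fitness would score the contrast-enhanced image, but the update loop is the same.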
Citations: 0
Multi-Modal Medical Image Fusion Using 3-Stage Multiscale Decomposition and PCNN with Adaptive Arguments
Pub Date : 2021-12-31 DOI: 10.1142/s0219467822400101
Mummadi Gowthami Reddy, P. V. Reddy, P. Reddy
In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion is a powerful tool for combining multi-modal images using image processing techniques, but conventional approaches fail to provide effective image quality and robustness in the fused image. To overcome these drawbacks, this work proposes a three-stage multiscale decomposition (TSMSD) approach using pulse-coupled neural networks with adaptive arguments (PCNN-AA) for multi-modal medical image fusion. First, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. The low-frequency bands of the two source images are fused using nonlinear anisotropic filtering with the discrete Karhunen–Loeve transform (NLAF-DKLT), and the high-frequency bands obtained from NSST are fused using the PCNN-AA approach. The fused low- and high-frequency bands are then reconstructed via NSST reconstruction, and finally a band-fusion-rule algorithm with pyramid reconstruction yields the fused medical image. Extensive simulation results show the superiority of the proposed TSMSD with PCNN-AA over state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC) and computational complexity.
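The overall decompose/fuse-per-band/reconstruct pattern can be illustrated with a one-level Haar decomposition standing in for the far more elaborate NSST. The averaging rule for the low band and the max-abs rule for the high bands are common generic choices assumed here; they are not the NLAF-DKLT and PCNN-AA rules of the paper.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar split into a low band (LL) and three high bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d) / 4, ((a + b - c - d) / 4,
                                 (a - b + c - d) / 4,
                                 (a - b - c + d) / 4)

def ihaar2d(ll, highs):
    """Exact inverse of haar2d."""
    lh, hl, hh = highs
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2):
    """Average the low-frequency bands; keep the larger-magnitude
    coefficient in each high-frequency band (a common max-abs rule)."""
    ll1, hi1 = haar2d(img1)
    ll2, hi2 = haar2d(img2)
    ll = (ll1 + ll2) / 2
    hi = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(hi1, hi2))
    return ihaar2d(ll, hi)

img = np.arange(64, dtype=float).reshape(8, 8)
img2 = img[::-1].copy()
fused = fuse(img, img2)
```

The max-abs rule keeps the sharper detail from whichever modality carries it, which is the intuition behind fusing high-frequency bands separately from the low band.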
Citations: 2
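The multiscale pipeline summarized above — decompose each source image into low- and high-frequency bands, fuse each band with its own rule, then reconstruct — can be illustrated with a much-reduced, numpy-only sketch. Here a Gaussian base layer stands in for the NSST low-frequency band and the residual stands in for the high-frequency band; averaging the base layers and keeping the larger-magnitude detail coefficient are generic fusion rules chosen for illustration, not the paper's NLAF-DKLT and PCNN-AA rules.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur via direct 1-D convolutions (numpy only)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Convolve rows, then columns, with edge padding to keep the shape.
    pad = np.pad(img, ((0, 0), (radius, radius)), mode="edge")
    rows = np.stack([np.convolve(r, k, mode="valid") for r in pad])
    pad = np.pad(rows, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(c, k, mode="valid") for c in pad.T]).T

def two_scale_fuse(a, b, sigma=2.0):
    """Fuse two registered images: average the low-frequency (base) layers,
    keep the larger-magnitude coefficient in the high-frequency (detail) layers."""
    base_a, base_b = gaussian_blur(a, sigma), gaussian_blur(b, sigma)
    det_a, det_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail
```

A useful sanity check on the decomposition: fusing an image with itself returns the image unchanged, because base plus detail reconstructs each source exactly.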
An Integrated Double Hybrid Fusion Approach for Image Smoothing
Pub Date : 2021-12-30 DOI: 10.1142/s0219467823500031
Anchal Kumawat, S. Panda
Often in practice, during image acquisition, the acquired image is degraded by factors such as noise, motion blur, camera mis-focus, and atmospheric turbulence, rendering it unsuitable for further analysis or processing. To improve the quality of these degraded images, a double hybrid restoration filter is proposed: it is applied to two identical sets of input images, and the output images are fused into a unified filter using the concept of image fusion. The first image set is processed by applying deconvolution with the Wiener filter (DWF) twice and decomposing the output image using the discrete wavelet transform (DWT). The second image set is processed in parallel by applying deconvolution with the Lucy–Richardson filter (DLR) twice, followed by the same procedure. The proposed filter outperforms the DWF and DLR filters on both blurry and noisy images. It is compared with standard deconvolution algorithms and several state-of-the-art restoration filters using seven image quality assessment parameters. Simulation results confirm the effectiveness of the proposed algorithm, and the visual and quantitative results are very impressive.
Citations: 1
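The DWF stage named above is built on frequency-domain Wiener deconvolution. As a single-stage sketch (numpy only; not the double application, the Lucy–Richardson branch, or the DWT fusion step of the proposed filter), the classic restoration formula G = H* / (|H|² + NSR) can be written as:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution: G = conj(H) / (|H|^2 + NSR).
    The point-spread function `psf` is zero-padded to the image size;
    `nsr` is an assumed constant noise-to-signal power ratio (a tuning knob
    here, not a value from the paper)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

In the noise-free limit (small `nsr`), the filter approaches plain inverse filtering and recovers the sharp image almost exactly; larger `nsr` trades sharpness for noise suppression.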
Performance Evaluation of Convolutional Neural Network Using Synthetic Medical Data Augmentation Generated by GAN
Pub Date : 2021-12-28 DOI: 10.1142/s021946782350002x
Ramesh Adhikari, Suresh Pokharel
Data augmentation is widely used in image processing and pattern recognition problems to increase the richness and diversity of available data, and it is commonly used to improve image classification accuracy when the available datasets are limited. Deep learning approaches have achieved immense breakthroughs in medical diagnostics over the last decade, but effective training of deep neural networks requires large datasets. Appropriate use of data augmentation techniques prevents the model from over-fitting and thus increases the generalization capability of the network when it is later tested on unseen data. However, obtaining such large datasets for rare diseases remains a huge challenge in the medical field. This study presents a synthetic data augmentation technique using Generative Adversarial Networks to evaluate the generalization capability of neural networks using existing data more effectively. A convolutional neural network (CNN) model is used to classify X-ray images of the human chest in both normal and pneumonia conditions; synthetic X-ray images are then generated from the available dataset using a deep convolutional generative adversarial network (DCGAN) model. Finally, the CNN model is retrained on the original dataset together with the DCGAN-generated data. The classification performance of the CNN model improved by 3.2% when the augmented data were used along with the originally available dataset.
Citations: 0
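As a point of reference for the GAN-based augmentation evaluated above, the classical label-preserving transforms that such studies typically take as a baseline can be sketched in a few lines of numpy; the particular transform set and noise level here are illustrative choices, not taken from the paper.

```python
import numpy as np

def augment(image, rng):
    """Return simple label-preserving variants of a 2-D image in [0, 1]:
    horizontal/vertical flips, 90-degree rotations, and additive Gaussian
    noise. These classical transforms are the usual baseline that
    GAN-synthesized samples are compared against."""
    return [
        np.fliplr(image),                                            # mirror left-right
        np.flipud(image),                                            # mirror top-bottom
        np.rot90(image, k=1),                                        # rotate 90 degrees
        np.rot90(image, k=3),                                        # rotate 270 degrees
        np.clip(image + rng.normal(0.0, 0.05, image.shape), 0, 1),   # mild noise
    ]
```

Each call turns one training image into five extra samples; unlike GAN synthesis, the variants stay within simple geometric and photometric perturbations of the original.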
A Descriptive Survey on Face Emotion Recognition Techniques
Pub Date : 2021-12-24 DOI: 10.1142/s0219467823500080
B. Devi, M. Preetha
Recognition of natural emotion from human faces has applications in human–computer interaction, image and video retrieval, automated tutoring systems, smart environments, and driver warning systems. It is also a significant channel of nonverbal communication among individuals. The task of Face Emotion Recognition (FER) is complex for two main reasons: the lack of a large database of training images, and the difficulty of classifying emotions from a single static input image. In addition, robust, unbiased FER in real time remains the foremost challenge for various supervised learning-based techniques. This survey analyzes diverse techniques for FER systems, reviewing a substantial body of research papers. It first describes the techniques contributed by the different papers, then offers a comprehensive chronological review of the performance achievements of each contribution, including the measures on which maximum performance was achieved. Finally, the survey discusses open research issues and gaps that can help researchers improve future work on FER models.
Citations: 1