
2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART): latest publications

Image Based Virtual Reality Haptic Simulation for Multimodal Skin Tumor Surgery Training
Pub Date : 2021-12-08 DOI: 10.1109/BioSMART54244.2021.9677802
Jin Woo Kim, Hyunjae Jeong, Kwangtaek Kim, Dustin P. DeMeo, B. Carroll
We present the Virtual Reality Haptic Surgery Platform (VRHSP), a multimodal haptic virtual reality training simulator for elliptical skin excisions (i.e., skin tumor surgeries). Using a haptic device and a head-mounted display, participants interact with actual skin images mapped to a 3D simulated surgical suite. In this study, the primary aim is to build the VRHSP with an initial narrow focus of simulating the outlining and incision steps of skin tumor surgeries with realistic tactile and visual feedback collocated in a 3D clinical scene. The secondary aim is to investigate the effectiveness of the VRHSP's haptic feedback capability, which we hypothesized would play an important role because skin tumor surgery is a tactile skill. We report the results of user studies on non-medical and medical participants from Kent State University and University Hospitals Cleveland Medical Center, respectively. The qualitative results suggest that the VRHSP has potential for high adoption, especially with haptic feedback. The quantitative results demonstrate the VRHSP's ability to discern experts from non-experts. Finally, the improved performance of participants with feedback suggests that haptic feedback can be used as a teaching tool as well as a realism tool.
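The abstract does not describe the force model behind the tactile feedback; a common baseline for rendering contact with a virtual surface is penalty-based (spring-damper) rendering. The sketch below is a minimal illustration of that idea with hypothetical stiffness and damping values, not the VRHSP implementation.

```python
import numpy as np

def haptic_feedback_force(tip_pos, surface_point, surface_normal,
                          stiffness=600.0, damping=2.0, tip_vel=None):
    """Penalty-based force for a stylus tip penetrating a virtual surface.

    The force is proportional to penetration depth along the surface normal
    (Hooke's law), with optional damping. Stiffness and damping values are
    illustrative placeholders, not calibrated parameters from the paper.
    """
    normal = surface_normal / np.linalg.norm(surface_normal)
    # Signed distance of the tip from the surface plane (negative = inside).
    depth = np.dot(tip_pos - surface_point, normal)
    if depth >= 0.0:
        return np.zeros(3)          # no contact, no force
    force = -stiffness * depth * normal
    if tip_vel is not None:
        force -= damping * np.dot(tip_vel, normal) * normal
    return force

# Example: tip 1 mm below a horizontal skin patch centered at the origin.
f = haptic_feedback_force(np.array([0.0, 0.0, -0.001]),
                          np.array([0.0, 0.0, 0.0]),
                          np.array([0.0, 0.0, 1.0]))
print(f)   # ~[0, 0, 0.6] N with the default stiffness
```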
Citations: 2
Interpretable AI Model-Based Predictions of ECG changes in COVID-recovered patients
Pub Date : 2021-12-08 DOI: 10.1109/BioSMART54244.2021.9677747
Anubha Gupta, Jayant Jain, Shubhankar Poundrik, M. Shetty, M. Girish, M. Gupta
COVID-19 has caused immense social and economic losses throughout the world. Subjects who have recovered from COVID are known to develop complications. Some studies have shown a change in heart rate variability (HRV) in COVID-recovered subjects compared to healthy ones. This change indicates an increased risk of heart problems among survivors of moderate-to-severe COVID. Hence, this study is aimed at finding HRV features that are altered in COVID-recovered subjects compared to healthy subjects. Data of COVID-recovered and healthy subjects were collected from two hospitals in Delhi, India. Seven ML models were built to classify healthy versus COVID-recovered subjects. The best-performing model was further analyzed to explore the ranking of altered heart features in COVID-recovered subjects via AI interpretability. Ranking these features can indicate cardiovascular health status to doctors, who can then support COVID-recovered subjects with timely safeguards against heart disorders. To the best of our knowledge, this is the first study with an in-depth analysis of the heart status of COVID-recovered subjects via ECG analysis.
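The abstract does not name the seven classifiers or the interpretability technique used for ranking. One common pattern for this kind of analysis is to train a tree ensemble on the HRV features and rank them by permutation importance; the sketch below illustrates that pattern on synthetic placeholder data, with hypothetical feature names rather than the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical HRV feature matrix: rows = subjects, columns = HRV features.
rng = np.random.default_rng(0)
feature_names = ["SDNN", "RMSSD", "pNN50", "LF_power", "HF_power", "LF_HF_ratio"]
X = rng.normal(size=(200, len(feature_names)))
y = rng.integers(0, 2, size=200)        # 0 = healthy, 1 = COVID-recovered

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Rank features by permutation importance on the held-out split.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
ranking = sorted(zip(feature_names, imp.importances_mean),
                 key=lambda kv: kv[1], reverse=True)
for name, score in ranking:
    print(f"{name:12s} {score:+.3f}")
```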
Citations: 3
Automated Cell Segmentation for Phase-Contrast Images of Adhesion Cell Culture
Pub Date : 2021-12-08 DOI: 10.1109/BioSMART54244.2021.9677717
Guochang Ye, Mehmet Kaya
Cell segmentation is a critical step for performing image-based experimental analysis. This study proposes an efficient and accurate cell segmentation method: an image processing pipeline involving simple morphological operations that automatically achieves cell segmentation for phase-contrast images. Manual/visual cell segmentation serves as the control group to evaluate the proposed methodology's performance. Against the manually labeled data (156 images as ground truth), the proposed method achieves an average Dice coefficient of 90.07%, an average intersection over union of 82.16%, and an average relative error of 6.52% in measuring cell growth area. Additionally, similar degrees of segmentation accuracy are observed when a modified U-Net model is trained (16,848 images) individually with the ground truth and with the data generated by the proposed method. These results demonstrate the good accuracy and high practicality of the proposed cell segmentation method, which is capable of quantitating cell growth area and generating labeled data for deep learning cell segmentation techniques.
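For reference, the Dice coefficient and intersection over union reported above are both set-overlap measures on binary masks. A minimal computation, shown here on toy masks rather than the study's phase-contrast data, looks like this:

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return dice, iou

# Toy 4x4 masks: predicted segmentation vs. ground truth.
pred  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
print(dice_and_iou(pred, truth))   # (~0.909, ~0.833)
```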
Citations: 0
Intracranial Pressure Prediction with a Recurrent Neural Network Model
Pub Date : 2021-12-08 DOI: 10.1109/BioSMART54244.2021.9677652
Guochang Ye, Vignesh Balasubramanian, J. Li, M. Kaya
Abnormal elevation of intracranial pressure (ICP) can cause dangerous or even fatal outcomes. The early detection of high intracranial pressure events can be crucial in saving patients' lives in an intensive care unit (ICU). This study proposes an efficient artificial recurrent neural network to predict intracranial pressure events for thirteen patients. A learning model is generated individually for each patient to predict the occurrence of an ICP event (classified as high ICP or low ICP) over the upcoming 10 minutes from the preceding 20 minutes of signal. The results showed that the minimal accuracy of predicting intracranial pressure events was 90% for 11 patients, and a minimum of 95% accuracy was obtained among five patients. This study introduces an efficient artificial recurrent neural network model for the early prediction of intracranial pressure events, supported by the high adaptive performance of the LSTM model.
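The abstract does not specify the network dimensions or the signal sampling rate. As an illustration of the windowed formulation (previous 20-minute signal in, high/low-ICP class out), a minimal LSTM classifier might look like the sketch below; the sampling rate and layer sizes are chosen arbitrarily and are not the authors' configuration.

```python
import torch
import torch.nn as nn

class ICPWindowClassifier(nn.Module):
    """LSTM that maps a 20-minute ICP window to a high/low-ICP prediction
    for the next 10 minutes (binary classification)."""
    def __init__(self, n_features=1, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)   # logits: low ICP vs. high ICP

    def forward(self, x):            # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])    # (batch, 2)

# Example: a batch of 8 windows, 20 minutes sampled once per minute.
model = ICPWindowClassifier()
windows = torch.randn(8, 20, 1)      # placeholder signals, not patient data
logits = model(windows)
print(logits.shape)                  # torch.Size([8, 2])
```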
Citations: 2
PAANet: Progressive Alternating Attention for Automatic Medical Image Segmentation
Pub Date : 2021-11-20 DOI: 10.1109/BioSMART54244.2021.9677844
Abhishek Srivastava, S. Chanda, Debesh Jha, M. Riegler, P. Halvorsen, Dag Johansen, U. Pal
Medical image segmentation can provide detailed information for clinical analysis which can be useful for scenarios where the detailed location of a finding is important. Knowing the location of a disease can play a vital role in treatment and decision-making. Convolutional neural network (CNN) based encoder-decoder techniques have advanced the performance of automated medical image segmentation systems. Several such CNN-based methodologies utilize techniques such as spatial- and channel-wise attention to enhance performance. Another technique that has drawn attention in recent years is residual dense blocks (RDBs). The successive convolutional layers in densely connected blocks are capable of extracting diverse features with varied receptive fields and thus, enhancing performance. However, consecutive stacked convolutional operators may not necessarily generate features that facilitate the identification of the target structures. In this paper, we propose a progressive alternating attention network (PAANet). We develop progressive alternating attention dense (PAAD) blocks, which construct a guiding attention map (GAM) after every convolutional layer in the dense blocks using features from all scales. The GAM allows the following layers in the dense blocks to focus on the spatial locations relevant to the target region. Every alternate PAAD block inverts the GAM to generate a reverse attention map which guides ensuing layers to extract boundary and edge-related information, refining the segmentation process. Our experiments on three different biomedical image segmentation datasets exhibit that our PAANet achieves favorable performance when compared to other state-of-the-art methods.
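The abstract sketches the core mechanism: a spatial guiding attention map built after each convolutional layer, inverted in every alternate block so that later layers attend to boundary regions. The snippet below is a much-simplified, single-scale illustration of that idea; it is not the authors' PAAD block, which fuses features from all scales.

```python
import torch
import torch.nn as nn

class AttentionGuidedConv(nn.Module):
    """One convolutional step followed by a spatial guiding attention map.

    With reverse=True the attention map is inverted (1 - A), so later layers
    focus on boundary/edge regions instead of the object interior; this is a
    simplified stand-in for the alternating behaviour described in the paper.
    """
    def __init__(self, in_ch, out_ch, reverse=False):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # A 1x1 conv + sigmoid produces a single-channel spatial attention map.
        self.attn = nn.Sequential(nn.Conv2d(out_ch, 1, kernel_size=1),
                                  nn.Sigmoid())
        self.reverse = reverse

    def forward(self, x):
        feats = self.conv(x)
        gam = self.attn(feats)                 # (B, 1, H, W) in [0, 1]
        if self.reverse:
            gam = 1.0 - gam                    # reverse attention
        return feats * gam                     # re-weight features spatially

# Example: a forward-attention step followed by a reverse-attention step.
x = torch.randn(2, 32, 64, 64)
y = AttentionGuidedConv(32, 64)(x)
z = AttentionGuidedConv(64, 64, reverse=True)(y)
print(y.shape, z.shape)   # torch.Size([2, 64, 64, 64]) for both
```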
Citations: 4
Skeleton-Split Framework using Spatial Temporal Graph Convolutional Networks for Action Recognition
Pub Date : 2021-11-04 DOI: 10.1109/BioSMART54244.2021.9677634
Motasem S. Alsawadi, Miguel Rio
There has been a dramatic increase in the volume of videos and their related content uploaded to the internet. Accordingly, the need for efficient algorithms to analyse this vast amount of data has attracted significant research interest. This work aims to recognize activities of daily living using the ST-GCN model, providing a comparison between four different partitioning strategies: spatial configuration partitioning, full distance split, connection split, and index split. To achieve this aim, we present the first implementation of the ST-GCN framework on the HMDB-51 dataset. Additionally, we show that our proposals achieve higher accuracy on the UCF-101 dataset with the ST-GCN framework than the state-of-the-art approach.
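All of the partition strategies compared in the paper start from the same idea: the skeleton graph's adjacency matrix is split into subsets, and the graph convolution learns a separate weight matrix per subset. The sketch below shows the simplest such split (the distance partitioning of the original ST-GCN) on a toy five-joint skeleton; it illustrates the mechanism only and is not the authors' connection or index split.

```python
import numpy as np

# Toy 5-joint skeleton: 0 = torso, 1 = neck, 2 = head, 3 = left hand, 4 = right hand.
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
num_joints = 5

adj = np.zeros((num_joints, num_joints))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

# Distance partitioning: subset 0 is each joint itself (identity matrix),
# subset 1 is its 1-hop neighbours. A graph convolution layer then applies
# a separate learned weight matrix per subset.
subsets = np.stack([np.eye(num_joints), adj])

# Row-normalise each subset so non-empty rows sum to 1.
deg = subsets.sum(axis=2, keepdims=True)
norm_subsets = np.divide(subsets, deg, out=np.zeros_like(subsets), where=deg > 0)
print(norm_subsets.shape)   # (2, 5, 5): one adjacency matrix per partition subset
```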
Citations: 3