
Workshop on Speech, Music and Mind (SMM 2018): Latest Publications

Time-frequency spectral error for analysis of high arousal speech
Pub Date: 2018-09-01 DOI: 10.21437/SMM.2018-4
P. Gangamohan, S. Gangashetty, B. Yegnanarayana
High arousal speech is produced by speakers when they raise their loudness levels. There are deviations from neutral speech, especially in the excitation component of the speech production mechanism, in the high arousal mode. In this study, a parameter called the time-frequency spectral error (TFe) is derived using the single frequency filtering (SFF) spectrogram. It is used to characterize the high arousal regions in speech signals. The proposed parameter captures the fine temporal and spectral variations due to changes in the excitation source.
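The SFF analysis underlying the TFe parameter can be pictured as follows: the signal is frequency-shifted so that the band of interest lands at pi rad/sample, then passed through a single-pole filter whose pole sits near the unit circle; the output magnitude is the envelope at that frequency. Below is a minimal sketch of an SFF spectrogram in this spirit; the pole radius `r`, the frequency grid, and the `sff_spectrogram` helper name are illustrative assumptions, and the TFe derivation itself is not reproduced here.

```python
import numpy as np
from scipy.signal import lfilter

def sff_spectrogram(x, fs, freqs, r=0.995):
    """Single frequency filtering (SFF) envelopes of signal x.

    For each analysis frequency f, the signal is multiplied by a complex
    exponential that shifts f to pi rad/sample, then filtered by
    H(z) = 1 / (1 + r z^-1), whose pole at z = -r lies close to the unit
    circle. The output magnitude is the SFF envelope at f, giving fine
    time resolution at each frequency. r = 0.995 is an assumed value.
    """
    n = np.arange(len(x))
    env = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        w_shift = np.pi - 2.0 * np.pi * f / fs
        shifted = x * np.exp(1j * w_shift * n)   # move f to pi
        y = lfilter([1.0], [1.0, r], shifted)    # single-pole filtering
        env[i] = np.abs(y)                       # SFF envelope
    return env

# Example: envelopes on a 100 Hz grid for a 16 kHz signal
fs = 16000
x = np.random.randn(fs)                          # stand-in for speech
freqs = np.arange(100, 8000, 100)
env = sff_spectrogram(x, fs, freqs)
print(env.shape)                                 # (79, 16000)
```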
Citations: 1
A component-based approach to study the effect of Indian music on emotions
Pub Date: 2018-09-01 DOI: 10.21437/SMM.2018-7
V. Viraraghavan, A. Pal, H. Murthy, R. Aravind
The emotional impact of Indian music on human listeners has been studied mainly with respect to ragas. Although this approach aligns with the traditional and musicological views, some studies show that raga-specific effects may not be consistent. In this paper, we propose an alternative method of study based on the components of Indian Classical Music, which may be viewed as consisting of constant-pitch notes (CPNs), which provide the context, and transients, which provide the detail. One hundred concert pieces in four ragas each from Carnatic music (CM) and Hindustani music (HM) are analyzed to show that the transients are, on average, longer than CPNs. Further, the defined scale of the raga is not always mirrored in the CPNs for CM. We also draw upon the result that CPNs and transients scale non-uniformly when the tempo of CM pieces is changed. Based on these observations and previous results on the emotional impact of the major and minor scales in Western music, we propose that the effects of CPNs and transients should be analyzed separately. We present a preliminary experiment that brings out the related challenges.
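To make the component view concrete, a pitch contour can be split into CPNs and transients by flagging runs of frames whose pitch stays within a small band in cents. This is a hypothetical segmentation rule, not the authors' algorithm; the tolerance, minimum duration, and `segment_contour` name are assumptions for illustration.

```python
import numpy as np

def segment_contour(f0_hz, hop_s=0.01, tol_cents=35.0, min_dur_s=0.08):
    """Split a pitch contour into CPNs and transients (illustrative).

    A frame extends the current segment while its pitch stays within
    tol_cents of that segment's running median; segments shorter than
    min_dur_s are labelled transients rather than CPNs. Adjacent
    one-frame transients (e.g. inside a glide) could be merged further.
    """
    cents = 1200.0 * np.log2(np.asarray(f0_hz) / 55.0)  # cents re 55 Hz
    segments, start = [], 0
    for t in range(1, len(cents) + 1):
        closing = (t == len(cents)) or \
            abs(cents[t] - np.median(cents[start:t])) > tol_cents
        if closing:
            dur = (t - start) * hop_s
            kind = "CPN" if dur >= min_dur_s else "transient"
            segments.append((kind, round(start * hop_s, 3), round(dur, 3)))
            start = t
    return segments

# Toy contour: a held note, a glide, then another held note
f0 = [220.0] * 30 + list(np.linspace(220, 330, 12)) + [330.0] * 30
for kind, onset, dur in segment_contour(f0):
    print(kind, onset, dur)
```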
Citations: 3
Analysis of Speech Emotions in Realistic Environments
Pub Date: 2018-09-01 DOI: 10.21437/smm.2018-3
B. Sarma, Rohan Kumar Das, Abhishek Dey, Risto Haukioja
The classification of emotional speech is a challenging task, and it depends critically on the correctness of the labeled data. Most of the databases used for research purposes are either acted or simulated. Annotation of such acted databases is easier because the actors exaggerate the emotions. On the other hand, emotion labeling on real-world data is very difficult due to confusion among the emotion classes. Another problem in such scenarios is class imbalance, because most of the data in realistic environments turns out to be neutral. In this study, we perform emotion labeling on realistic data in a customized manner using emotion priority and confidence level. The annotated speech corpus is then used for analysis and study. The percentage distribution of the different emotion classes in the real-world data and the confusions between emotions during labeling are presented.
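One way to picture labeling with emotion priority and confidence level: each clip carries candidate (emotion, confidence) pairs, confidence ties are broken by a priority order over emotions, and low-confidence clips fall back to neutral. The priority list, threshold, and `resolve_label` helper below are assumptions for illustration; the paper's exact scheme may differ.

```python
# Hypothetical resolution of a clip's label from annotator votes.
# PRIORITY is assumed to cover the full label set, highest priority first.
PRIORITY = ["anger", "sadness", "happiness", "neutral"]

def resolve_label(votes, min_conf=0.5):
    """votes: list of (emotion, confidence in [0, 1]) pairs."""
    confident = [(emo, c) for emo, c in votes if c >= min_conf]
    if not confident:
        return "neutral"                 # fall back when no vote is confident
    best_conf = max(c for _, c in confident)
    tied = {emo for emo, c in confident if c == best_conf}
    # Break confidence ties with the assumed emotion priority order.
    return next(emo for emo in PRIORITY if emo in tied)

print(resolve_label([("anger", 0.8), ("neutral", 0.8)]))   # anger
print(resolve_label([("happiness", 0.3)]))                 # neutral
```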
Citations: 1
Emotional Speech Classifier Systems: For Sensitive Assistance to support Disabled Individuals
Pub Date: 2018-09-01 DOI: 10.21437/SMM.2018-2
V. V. Raju, P. Jain, K. Gurugubelli, A. Vuppala
This paper addresses the classification of emotionally annotated speech of mentally impaired people. The main problem encountered in the classification task is class imbalance: far more speech samples are available for neutral speech than for the other emotion classes. Different sampling methodologies are explored at the back-end to handle this class-imbalance problem. Mel-frequency cepstral coefficient (MFCC) features are used at the front-end, and deep neural networks (DNNs) and gradient boosted decision trees (GBDTs) are investigated at the back-end as classifiers. Experimental results on the EmotAsS dataset show higher classification accuracy and unweighted average recall (UAR) than the baseline system.
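A back-end of this kind can be sketched with random oversampling of the minority emotion classes followed by a gradient boosted classifier scored by UAR, i.e. macro-averaged per-class recall. Feature shapes, class sizes, and hyperparameters below are assumptions, and scikit-learn's GradientBoostingClassifier stands in for whichever GBDT implementation the authors used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Stand-in for utterance-level MFCC statistics: 500 neutral vs 60 angry.
X = rng.normal(size=(560, 26))
y = np.array(["neutral"] * 500 + ["anger"] * 60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling: grow every minority class to the majority size.
classes, counts = np.unique(y_tr, return_counts=True)
n_max = counts.max()
parts = [
    resample(X_tr[y_tr == c], y_tr[y_tr == c],
             replace=True, n_samples=n_max, random_state=0)
    for c in classes
]
X_bal = np.vstack([p[0] for p in parts])
y_bal = np.concatenate([p[1] for p in parts])

clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
# UAR = unweighted (macro) average of per-class recalls.
uar = recall_score(y_te, clf.predict(X_te), average="macro")
print(f"UAR: {uar:.3f}")
```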
Citations: 1
Task-Independent EEG based Subject Identification using Auditory Stimulus
Pub Date: 2018-09-01 DOI: 10.21437/SMM.2018-6
D. Vinothkumar, Mari Ganesh Kumar, Abhishek Kumar, H. Gupta, S. SaranyaM, M. Sur, H. Murthy
Recent studies have shown that task-specific electroencephalography (EEG) can be used as a reliable biometric. This paper extends this line of work to task-independent EEG with auditory stimuli. Data collected from 40 subjects in response to various types of audio stimuli, using a 128-channel EEG system, are presented to different classifiers, namely k-nearest neighbor (k-NN), artificial neural network (ANN) and universal background model - Gaussian mixture model (UBM-GMM). It is observed that k-NN and ANN perform well when testing is performed intra-session, while the UBM-GMM framework is more robust when testing is performed inter-session. This can be attributed to the fact that the correspondence of the sensor locations across sessions is only approximate. It is also observed that EEG from the parietal and temporal regions contains more subject information, although the performance using all 128 channels is marginally better.
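The UBM-GMM framework can be sketched with scikit-learn: fit a background GMM on pooled features from all subjects, warm-start one GMM per subject from the UBM parameters (a crude stand-in for MAP adaptation, which scikit-learn does not provide), and identify a test segment by the highest average log-likelihood. Feature dimensions, component counts, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_subj, dim = 5, 16

# Stand-in for per-subject EEG feature frames (rows = frames).
train = {s: rng.normal(loc=s, size=(400, dim)) for s in range(n_subj)}

# 1) Universal background model on pooled data from all subjects.
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(np.vstack(list(train.values())))

# 2) Per-subject models warm-started from the UBM parameters
#    (a stand-in for MAP adaptation, not a faithful implementation).
models = {}
for s, feats in train.items():
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          weights_init=ubm.weights_,
                          means_init=ubm.means_,
                          precisions_init=ubm.precisions_,
                          max_iter=5, random_state=0)
    models[s] = gmm.fit(feats)

# 3) Identify a test segment by the highest average log-likelihood.
test = rng.normal(loc=3, size=(100, dim))          # frames from subject 3
scores = {s: m.score(test) for s, m in models.items()}
print(max(scores, key=scores.get))                 # -> 3
```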
Citations: 9
Discriminating between High-Arousal and Low-Arousal Emotional States of Mind using Acoustic Analysis
Pub Date: 2018-09-01 DOI: 10.21437/SMM.2018-1
Esther Ramdinmawii, V. K. Mittal
Identification of emotions from human speech can be attempted by focusing on three aspects of emotional speech: valence, arousal and dominance. In this paper, changes in the production characteristics of emotional speech are examined to discriminate between high-arousal and low-arousal emotions, and amongst emotions within each of these categories. The basic emotions anger, happiness and fear are examined as high-arousal emotional speech, and neutral speech and sadness as low-arousal. Discriminating changes are examined first in the excitation source characteristics, i.e., the instantaneous fundamental frequency (F0) derived using the zero-frequency filtering (ZFF) method. Differences observed in the spectrograms are then validated by examining changes in the combined characteristics of the source and the vocal tract filter, i.e., the strength of excitation (SoE) derived using the ZFF method, and signal energy features. Emotions within each category are distinguished by examining changes in two scarcely explored discriminating features, namely the zero-crossing rate and the ratios amongst spectral sub-band energies computed using the short-time Fourier transform. The effectiveness of these features in discriminating emotions is validated using two emotion databases, Berlin EMO-DB (German) and IIT-KGP-SESC (Telugu). The proposed features exhibit highly encouraging results in discriminating these emotions. This study can be helpful towards the automatic classification of emotions from speech.
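The two features named above can be computed directly: short-time zero-crossing rate from the sign changes in each frame, and sub-band energy ratios from the STFT magnitude. A minimal sketch follows; the frame size, hop, and band edges are assumed settings, not the paper's.

```python
import numpy as np
from scipy.signal import stft

def frame_zcr(x, frame=400, hop=160):
    """Zero-crossing rate per frame (fraction of sign changes)."""
    n_frames = 1 + (len(x) - frame) // hop
    return np.array([
        np.mean(np.abs(np.diff(np.sign(x[i * hop:i * hop + frame]))) > 0)
        for i in range(n_frames)
    ])

def subband_ratios(x, fs, bands=((0, 1000), (1000, 3000), (3000, 8000))):
    """Energy in each band divided by total energy, from the STFT.

    Band edges in Hz are illustrative assumptions.
    """
    f, _, Z = stft(x, fs=fs, nperseg=400, noverlap=240)
    power = np.abs(Z) ** 2
    total = power.sum()
    return [power[(f >= lo) & (f < hi)].sum() / total for lo, hi in bands]

fs = 16000
x = np.random.randn(fs)            # stand-in for an emotional utterance
print(frame_zcr(x).mean(), subband_ratios(x, fs))
```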
Citations: 2
CNN+LSTM Architecture for Speech Emotion Recognition with Data Augmentation
Pub Date: 2018-02-15 DOI: 10.21437/SMM.2018-5
Caroline Etienne, Guillaume Fidanza, Andrei Petrovskii, L. Devillers, B. Schmauch
In this work we design a neural network for recognizing emotions in speech, using the IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting high-level features from raw spectrograms, and recurrent ones, for aggregating long-term dependencies. We examine data augmentation with vocal tract length perturbation, layer-wise optimizer adjustment and batch normalization of recurrent layers, and obtain highly competitive results of 64.5% weighted accuracy and 61.7% unweighted accuracy on four emotions.
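Such an architecture can be sketched in Keras: convolutional blocks extract local time-frequency features from the spectrogram, the frequency axis is then folded into channels to form a sequence, and a recurrent layer aggregates it before a 4-way softmax. The input shape and every layer size below are assumptions, not the paper's configuration, and the batch normalization here sits in the convolutional blocks rather than inside the recurrent layer.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_mels, n_frames = 128, 300            # assumed spectrogram shape

model = tf.keras.Sequential([
    layers.Input(shape=(n_frames, n_mels, 1)),   # (time, freq, channel)
    # Convolutional front-end over local time-frequency patches.
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    # Fold the frequency axis into channels to obtain a sequence.
    layers.Reshape((n_frames // 4, (n_mels // 4) * 64)),
    # Recurrent back-end for long-term dependencies.
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(4, activation="softmax"),       # four emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```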
Citations: 78