
Latest publications in Integrated Computer-Aided Engineering

Gap imputation in related multivariate time series through recurrent neural network-based denoising autoencoder
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-21 | DOI: 10.3233/ica-230728
Serafín Alonso, Antonio Morán, Daniel Pérez, Miguel A. Prada, Juan J. Fuertes, Manuel Domínguez

Abstract

Technological advances in industry have made it possible to install many connected sensors, generating a large number of observations at a high rate. The advent of Industry 4.0 requires the capability to analyse heterogeneous data in the form of related multivariate time series. However, missing data can degrade processing and lead to bias, misunderstandings, or even wrong decision-making. In this paper, a recurrent neural network-based denoising autoencoder is proposed for gap imputation in related multivariate time series, i.e., series that exhibit spatio-temporal correlations. The denoising autoencoder (DAE) learns to reproduce missing input data by removing intentionally added gaps, while the recurrent neural network (RNN) captures temporal patterns and relationships among variables. To that end, different unidirectional (simple RNN, GRU, LSTM) and bidirectional (BiSRNN, BiGRU, BiLSTM) architectures are compared with each other and with state-of-the-art methods on three different datasets. The implementation with BiGRU layers outperforms the others, effectively filling gaps with a low reconstruction error. This approach is appropriate for complex scenarios where several variables contain long gaps; however, extreme scenarios with very short gaps in one variable, or with no available data, should be avoided.
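
As an illustration of the approach, the sketch below (not the authors' code; the layer sizes, window length, corruption rate, and masked-MSE loss are assumptions) shows a BiGRU denoising autoencoder trained to reconstruct values at intentionally added gaps:

```python
import torch
import torch.nn as nn

class BiGRUDenoisingAE(nn.Module):
    def __init__(self, n_vars: int, hidden: int = 64):
        super().__init__()
        # Bidirectional encoder reads the gapped series forwards and backwards.
        self.encoder = nn.GRU(n_vars, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_vars)

    def forward(self, x):
        h, _ = self.encoder(x)   # (batch, time, 2*hidden)
        h, _ = self.decoder(h)   # (batch, time, hidden)
        return self.out(h)       # reconstructed series

# Denoising setup: corrupt complete windows with artificial gaps and learn
# to reconstruct the original values at exactly those positions.
model = BiGRUDenoisingAE(n_vars=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(32, 128, 8)                 # toy batch: 32 windows, 128 steps
mask = (torch.rand(32, 128, 8) > 0.2).float()   # 1 = observed, 0 = artificial gap
recon = model(clean * mask)
loss = ((recon - clean) ** 2 * (1 - mask)).mean()  # error on the gaps only
opt.zero_grad()
loss.backward()
opt.step()
```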

Citations: 0
Highly compressed image representation for classification and content retrieval
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-21 | DOI: 10.3233/ica-230729
Stanisław Łażewski, Bogusław Cyganek

Abstract

In this paper, we propose a new method of representing images with highly compressed features for classification and image content retrieval, called PCA-ResFeats. The features are obtained by fusing high- and low-level features from the outputs of the ResNet-50 residual blocks and applying principal component analysis to them, which leads to a significant reduction in dimensionality. Further, by applying floating-point compression, we are able to reduce the memory required to store a single image by up to 1,200 times compared to JPEG images and 220 times compared to features obtained by simple output fusion of ResNet-50. As a result, the representation of a single image from the dataset can average as little as 35 bytes. Compared with classification on features from fusion of the last ResNet-50 residual block, we achieve comparable accuracy (within five percentage points) while preserving two orders of magnitude of data compression. We also tested our method on the content-based image retrieval task, achieving better results than other known methods using sparse features. Moreover, our method enables the creation of concise summaries of image content, which can find numerous applications in databases.
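
A minimal sketch of the PCA-ResFeats idea follows (an assumption-laden illustration, not the published implementation): pooled outputs of the four ResNet-50 residual stages are fused and then reduced with PCA. The number of principal components and the use of torchvision/scikit-learn are illustrative choices.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50
from sklearn.decomposition import PCA

net = resnet50(weights=None).eval()
feats = {}
for name in ("layer1", "layer2", "layer3", "layer4"):
    # Forward hooks capture each residual stage's output during inference.
    getattr(net, name).register_forward_hook(
        lambda m, i, o, n=name: feats.__setitem__(n, o))

def resfeats(images):
    """Concatenate globally pooled features from all four residual stages."""
    with torch.no_grad():
        net(images)
    pooled = [F.adaptive_avg_pool2d(feats[n], 1).flatten(1)
              for n in ("layer1", "layer2", "layer3", "layer4")]
    return torch.cat(pooled, dim=1)   # (batch, 256+512+1024+2048) = (batch, 3840)

x = torch.randn(16, 3, 224, 224)      # toy image batch
fused = resfeats(x).numpy()
pca = PCA(n_components=8).fit(fused)  # drastic dimensionality cut
compressed = pca.transform(fused)     # a handful of floats per image
```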

Citations: 0
Vehicle side-slip angle estimation under snowy conditions using machine learning
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-21 | DOI: 10.3233/ica-230727
Georg Novotny, Yuzhou Liu, Walter Morales-Alvarez, Wilfried Wöber, Cristina Olaverri-Monreal
Adverse weather conditions, such as snow-covered roads, represent a challenge for autonomous vehicle research. They are particularly challenging because they can cause misalignment between the longitudinal axis of the vehicle and the actual direction of travel. In this paper, we extend previous work on autonomous vehicles on snow-covered roads and present a novel approach to side-slip angle estimation that combines perception with a hybrid artificial neural network, pushing the prediction horizon beyond that of existing approaches. We exploited the feature extraction capabilities of convolutional neural networks and the dynamic time-series relationship learning capabilities of gated recurrent units, and combined them with a motion model to estimate the side-slip angle. Subsequently, we evaluated the model using the 3DCoAutoSim simulation platform, where we designed a suitable simulation environment with snowfall, friction, and car tracks in snow. The results revealed that our approach outperforms the baseline model for prediction horizons ⩾ 2 seconds. This extended prediction horizon has practical implications, providing drivers and autonomous systems with more time to make informed decisions, thereby enhancing road safety.
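
The following is a hedged sketch of the hybrid architecture described above: a per-frame CNN feature extractor followed by a GRU that regresses the side-slip angle at each time step. All layer sizes and input shapes are illustrative assumptions, and the fusion with the motion model is omitted.

```python
import torch
import torch.nn as nn

class SlipAngleNet(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame feature extractor
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gru = nn.GRU(32, hidden, batch_first=True)  # temporal dynamics
        self.head = nn.Linear(hidden, 1)           # side-slip angle per step

    def forward(self, frames):                     # (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.gru(f)
        return self.head(h).squeeze(-1)            # (batch, time) angles

# Toy forward pass: 2 sequences of 10 camera frames each.
pred = SlipAngleNet()(torch.randn(2, 10, 3, 64, 64))
```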
Citations: 0
Enhancing smart home appliance recognition with wavelet and scalogram analysis using data augmentation
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-15 | DOI: 10.3233/ica-230726
José L. Salazar-González, José María Luna-Romera, Manuel Carranza-García, Juan A. Álvarez-García, Luis M. Soria-Morillo

Abstract

The development of smart homes, equipped with devices connected to the Internet of Things (IoT), has opened up new possibilities for monitoring and controlling energy consumption. In this context, non-intrusive load monitoring (NILM) techniques have emerged as a promising solution for disaggregating total energy consumption into the consumption of individual appliances. The classification of electrical appliances in a smart home remains a challenging task for machine learning algorithms. In the present study, we compare and evaluate the performance of two different algorithms for NILM, namely Multi-Label K-Nearest Neighbors (MLkNN) and Convolutional Neural Networks (CNN), in two different scenarios: without and with data augmentation (DAUG). Our results show how the classification results can be better interpreted by generating a scalogram image from the power-consumption signal data and processing it with CNNs. The results indicate that the CNN model with the proposed data augmentation performed significantly better, obtaining a mean F1-score of 0.484 (an improvement of +0.234) over the other methods. Additionally, a Friedman statistical test indicates that it differs significantly from the other methods compared. Our proposed system can potentially reduce energy waste and promote more sustainable energy use in homes and buildings by providing personalized feedback and energy-saving tips.
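
To make the scalogram step concrete, here is a minimal sketch using PyWavelets (the paper does not name its CWT library, so this choice, the wavelet, and the scales are assumptions) that converts a power-consumption window into a normalised scalogram image for a CNN:

```python
import numpy as np
import pywt

fs = 50                                    # toy sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic appliance-like load: periodic draw plus measurement noise.
power = np.abs(np.sin(2 * np.pi * 0.5 * t)) * 100 + 5 * np.random.randn(t.size)

scales = np.arange(1, 64)                  # wavelet scales -> image rows
coeffs, freqs = pywt.cwt(power, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                 # (n_scales, n_samples) image

# Normalise to [0, 1] so windows from different appliances are comparable;
# scalogram[None, None] can then be fed to a CNN as a 1-channel image.
scalogram = (scalogram - scalogram.min()) / (np.ptp(scalogram) + 1e-9)
```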

Citations: 0
Deep deterministic policy gradient with constraints for gait optimisation of biped robots
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-15 | DOI: 10.3233/ica-230724
Xingyang Liu, Haina Rong, Ferrante Neri, Peng Yue, Gexiang Zhang
In this paper, we propose a novel Reinforcement Learning (RL) algorithm for robotic motion control: a constrained Deep Deterministic Policy Gradient (DDPG) deviation-learning strategy that helps biped robots walk safely and accurately. Previous research on this topic highlighted limitations in the controller's ability to accurately track foot placement on discrete terrains, as well as a lack of consideration for safety. In this study, we address these challenges by focusing on the overall system's safety. To begin with, we tackle the inverse kinematics problem by introducing constraints into the damped least-squares method. This enhancement not only addresses singularity issues but also guarantees safe ranges for joint angles, thus ensuring the stability and reliability of the system. Building on this, we propose the constrained DDPG method to correct controller deviations. In constrained DDPG, we incorporate a constraint layer into the Actor network and use joint deviations as state inputs. Trained offline within the range of safe angles, it serves as a deviation corrector. Lastly, we validate the effectiveness of the proposed approach through dynamic simulations with the CRANE biped robot. Through comprehensive assessments, including singularity analysis, constraint-effectiveness evaluation, and walking experiments on discrete terrains, we demonstrate the superiority and practicality of our approach in enhancing walking performance while ensuring safety. Overall, our research contributes to the advancement of biped robot locomotion by addressing gait optimisation from multiple perspectives, including singularity handling, safety constraints, and deviation learning.
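
As a concrete view of the inverse-kinematics constraint, the sketch below implements one damped least-squares step with joint-angle clamping on a toy planar two-link arm; the arm geometry, damping factor, and limits are illustrative assumptions, not the paper's robot model.

```python
import numpy as np

def dls_step(q, jacobian, err, lam=0.1, q_min=None, q_max=None):
    """One damped least-squares IK update with optional joint-limit clamping."""
    J = jacobian(q)
    # Damped least squares: dq = J^T (J J^T + lam^2 I)^{-1} err.
    # The damping term keeps the solve well-conditioned near singularities.
    dq = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(J.shape[0]), err)
    q_new = q + dq
    if q_min is not None:
        q_new = np.clip(q_new, q_min, q_max)   # enforce safe joint ranges
    return q_new

def jac_2link(q, l1=1.0, l2=0.8):
    """Jacobian of the end-effector position of a planar two-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q.sum()), np.cos(q.sum())
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.5])                       # current joint angles
err = np.array([0.05, -0.02])                  # task-space position error
q = dls_step(q, jac_2link, err,
             q_min=np.array([-2.0, -2.0]), q_max=np.array([2.0, 2.0]))
```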
Citations: 0
Efficient and choreographed quality-of-service management in dense 6G verticals with high-speed mobility requirements
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-15 | DOI: 10.3233/ica-230722
Borja Bordel, Ramón Alcarria, Joaquin Chung, Rajkumar Kettimuthu
Future 6G networks are envisioned to support very heterogeneous and extreme applications (known as verticals). Examples include further-enhanced mobile broadband communications, where bitrates could exceed one terabit per second, and extremely reliable low-latency communications, whose end-to-end delay must stay below one hundred microseconds. To achieve such ultra-high Quality-of-Service, 6G networks are commonly provided with redundant resources and intelligent management mechanisms that ensure all devices get the expected performance. But this approach is not feasible or scalable for all verticals. Specifically, in 6G scenarios, mobile devices are expected to travel at speeds above 500 kilometers per hour, and device density will exceed ten million devices per square kilometer. In those verticals, resources cannot be redundant: with such a huge number of devices, Quality-of-Service requirements push technologies to the limits of their effective performance at the physical level. On the other hand, high-speed mobility prevents intelligent mechanisms from being useful, as devices move and evolve faster than the usual convergence time of those intelligent solutions. New technologies are needed to fill this unexplored gap. Therefore, in this paper we propose a choreographed Quality-of-Service management solution in which 6G base stations predict the evolution of verticals in real time and run a lightweight distributed optimization algorithm in advance, so they can manage resource consumption and ensure all devices get the required Quality-of-Service. The prediction mechanism includes mobility models (Markov, Bayesian, etc.) and models for time-variant communication channels. Besides, a traffic-prediction solution is considered to explore the achieved Quality-of-Service in advance. The optimization algorithm calculates an efficient resource distribution according to the predicted future vertical situation, so devices achieve the expected Quality-of-Service under the proposed traffic models. An experimental validation based on simulation tools is also provided. Results show that the proposed approach reduces network resource consumption by up to 12% for a given Quality-of-Service.
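
A toy sketch of the Markov mobility-prediction step is shown below: a base station predicts the next-cell distribution of its devices from a transition matrix and pre-allocates resources against the expected load. The transition probabilities, cell count, and capacity are invented for illustration and are not the paper's models.

```python
import numpy as np

# Row i of P: probability distribution over the next cell, given current cell i.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

devices = np.array([0, 0, 1, 2, 2, 2])           # current cell of each device

# Expected number of devices per cell one step ahead (one unit of demand each).
expected_load = np.zeros(P.shape[0])
for cell in devices:
    expected_load += P[cell]

capacity = 4.0                                    # resource units per cell
allocation = np.minimum(expected_load, capacity)  # provisional pre-allocation
print(expected_load.round(2), allocation.round(2))
```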
Citations: 0
Look inside 3D point cloud deep neural network by patch-wise saliency map
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-15 | DOI: 10.3233/ica-230725
Linkun Fan, Fazhi He, Yupeng Song, Huangxinxin Xu, Bing Li
The 3D point cloud deep neural network (3D DNN) has achieved remarkable success, but its black-box nature hinders its application in many safety-critical domains. The saliency-map technique is a key method for looking inside the black box and determining where a 3D DNN focuses when recognizing a point cloud. Existing point-wise saliency methods illustrate the saliency of individual points for a given 3D DNN. However, such critical points are interchangeable and therefore unreliable. This finding is grounded in our experimental results, which show that a point becomes critical because it is responsible for representing one specific local structure; conversely, a local structure does not have to be represented by any specific points. As a result, discussing the saliency of the local structure represented by critical points (named patch-wise saliency) is more meaningful than discussing the saliency of specific points. Based on these motivations, this paper designs a black-box algorithm to generate patch-wise saliency maps for point clouds. Our basic idea is the Mask Building-Dropping process, which adaptively matches the size of important/unimportant patches by clustering points with similar saliency. Experimental results on several typical 3D DNNs show that our patch-wise saliency algorithm provides better visual guidance and detects where a 3D DNN is focusing more efficiently than a point-wise saliency map. Finally, we apply our patch-wise saliency map to adversarial attacks and backdoor defenses. The results show that the improvement is significant.
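
The sketch below illustrates the black-box, patch-wise spirit of the method under stated assumptions: it uses simple k-means patches and occlusion-style score drops as a stand-in for the paper's Mask Building-Dropping process, with a toy callable in place of a real 3D DNN.

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_saliency(points, score_fn, n_patches=8):
    """Score each patch by the class-score drop when that patch is removed."""
    labels = KMeans(n_clusters=n_patches, n_init=10).fit_predict(points)
    base = score_fn(points)
    saliency = np.zeros(n_patches)
    for k in range(n_patches):
        kept = points[labels != k]       # black-box query: cloud minus patch k
        saliency[k] = base - score_fn(kept)
    return labels, saliency

# Toy stand-in model: "confidence" grows with the mean height of the cloud.
score_fn = lambda pts: float(pts[:, 2].mean())
cloud = np.random.rand(1024, 3)          # toy point cloud, (N, xyz)
labels, sal = patch_saliency(cloud, score_fn)
print(sal.round(3))                      # higher = more important patch
```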
Citations: 0
A broadcast sub-GHz framework for unmanned aerial vehicles clock synchronization
CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-16 | DOI: 10.3233/ica-230723
Niccolò Cecchinato, Ivan Scagnetto, Andrea Toma, Carlo Drioli, Gian Luca Foresti
Nowadays, sets of cooperative drones are commonly used as aerial sensors to monitor areas and track objects of interest (think, e.g., of border and coastal security and surveillance, crime control, disaster management, emergency first response, forest and wildlife monitoring, or traffic monitoring). The drones generate a large, continuous multimodal (audio, video and telemetry) data stream towards a ground control station with enough computing power and resources to store and process it. Given the distributed nature of this setting, further complicated by the movement of and varying distance among drones, and by possible interference and obstacles compromising communications, a common clock between the nodes is of utmost importance to make a correct reconstruction of the multimodal data stream from single datagrams feasible, since datagrams may be received out of order or with different delays. A framework architecture using sub-GHz broadcast communications is proposed to ensure time synchronization for a set of drones, allowing recovery even in difficult situations where the usual time sources (e.g. GPS, NTP) are not available to all devices. The architecture is then implemented and tested using LoRa radios and Raspberry Pi computers. However, other sub-GHz technologies can be used in place of LoRa, and other kinds of single-board computers can be substituted for the Raspberry Pis, making the proposed solution easily customizable according to specific needs. Moreover, the proposal is low cost, since it does not require expensive hardware such as onboard rubidium-based atomic clocks. Our experiments indicate a worst-case skew of about 16 ms between drone clocks, using cheap components commonly available on the market. This is sufficient to deal with audio/video footage at 30 fps. Hence, it can be viewed as a useful and easy-to-implement architecture that helps maintain decent synchronization even when traditional solutions are not available.
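
As a rough illustration of the synchronization logic, the sketch below shows a drone correcting its clock from a broadcast frame; the fixed time-on-air estimate and the frame fields are assumptions for illustration, not details from the paper.

```python
import time

AIR_TIME_S = 0.06          # assumed LoRa time-on-air for the sync frame

class DroneClock:
    """Local clock disciplined by one-way broadcast sync frames."""

    def __init__(self):
        self.offset = 0.0   # correction added to the local monotonic clock

    def on_sync_frame(self, master_ts: float):
        local_rx = time.monotonic()
        # Master time at the moment of reception is, approximately, the
        # timestamp it sent plus the frame's propagation/air time.
        self.offset = (master_ts + AIR_TIME_S) - local_rx

    def now(self) -> float:
        return time.monotonic() + self.offset

clock = DroneClock()
clock.on_sync_frame(master_ts=1000.0)      # simulated broadcast frame
print(round(clock.now(), 3))               # datagrams can now be ordered
```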
Citations: 0
An exploratory design science research on troll factories
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-11 | DOI: 10.3233/ica-230720
Francisco S. Marcondes, José João Almeida, Paulo Novais
Private and military troll factories (facilities used to spread rumours on online social media) are currently proliferating around the world. By their very nature, they are obscure companies whose internal workings are largely unknown, apart from leaks to the press. They are even more concealed when it comes to their underlying technology. At least in a broad sense, a troll factory is believed to perform two main tasks: sowing and spreading. The first is to create and, more importantly, maintain a social network that can be used for the spreading task. It is thus a wicked long-term activity, subject to all sorts of problems. In an attempt to make this perspective a little clearer, this paper uses exploratory design science research to produce artefacts that could be applied to online rumour spreading on social media, under the hypothesis that it is possible to design a fully automated social media agent capable of sowing a social network on microblogging platforms. The expectation is that common opportunities and difficulties in the development of such tools can be identified, which in turn allows an evaluation of the technology and, above all, of the level of automation of these facilities. The research is based on a general-domain Twitter corpus with 4M+ tokens and on ChatGPT, and discusses both knowledge-based and deep-learning approaches for smooth tweet generation. These explorations suggest that, with the current, widespread and publicly available NLP technology, troll factories work like a call centre, i.e. humans assisted by more or less sophisticated computing tools (often called cyborgs).
Citations: 0
An explainable machine learning system for left bundle branch block detection and classification
IF 6.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-28 | DOI: 10.3233/ica-230719
Beatriz Macas, Javier Garrigós, J. Martínez, J. M. Ferrández, M. P. Bonomini
Left bundle branch block (LBBB) is a cardiac conduction disorder that occurs when the electrical impulses that control the heartbeat are blocked or delayed as they travel through the left bundle branch of the cardiac conduction system, producing a characteristic electrocardiogram (ECG) pattern. A reduced set of biologically inspired features extracted from ECG data is proposed and used to train a variety of machine learning models for the LBBB classification task. Different methods are then used to evaluate the importance of the features in each model's classification process and to further reduce the feature set while maintaining the models' classification performance. The performances obtained by the models under different metrics improve on those reported by other authors in the literature on the same dataset. Finally, XAI techniques are used to verify that the predictions made by the models are consistent with the known relationships in the data. This increases the reliability of the models and their usefulness in the diagnostic support process. These explanations can help clinicians better understand the reasoning behind diagnostic decisions.
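
A minimal sketch of this pipeline shape is given below (scikit-learn based, with synthetic features and labels; the paper's actual features, models, and XAI tooling are not reproduced here): a classifier trained on a small feature set, followed by permutation importance to rank the features driving the decision.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                    # toy "biologically inspired" features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # toy LBBB / non-LBBB labels

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)

# Permutation importance: how much held-out accuracy drops when one
# feature is shuffled, a model-agnostic ranking usable for feature pruning.
imp = permutation_importance(clf, Xte, yte, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature {i}: {imp.importances_mean[i]:.3f}")
```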
Citations: 0