
Journal of Computing and Information Science in Engineering: Latest Publications

Unsupervised Anomaly Detection via Nonlinear Manifold Learning
Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-10-04 | DOI: 10.1115/1.4063642
Amin Yousefpour, Mehdi Shishehbor, Zahra Zanjani Foumani, Ramin Bostanabad
Abstract: Anomalies are samples that significantly deviate from the rest of the data, and their detection plays a major role in building machine learning models that can be reliably used in applications such as data-driven design and novelty detection. The majority of existing anomaly detection methods are either developed exclusively for (semi-)supervised settings or perform poorly in unsupervised applications where there is no training data with labeled anomalous samples. To bridge this research gap, we introduce a robust, efficient, and interpretable methodology based on nonlinear manifold learning to detect anomalies in unsupervised settings. The essence of our approach is to learn a low-dimensional and interpretable latent representation (aka manifold) for all the data points such that normal samples are automatically clustered together and hence can be easily and robustly identified. We learn this low-dimensional manifold by designing a learning algorithm that leverages either a latent map Gaussian process (LMGP) or a deep autoencoder (AE). Our LMGP-based approach, in particular, provides a probabilistic perspective on the learning task and is ideal for high-dimensional applications with scarce data. We demonstrate the superior performance of our approach over existing technologies via multiple analytic examples and real-world datasets.
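A minimal sketch of the autoencoder (AE) variant of this idea, for orientation only: learn a low-dimensional latent representation, then flag samples with unusually large reconstruction error. The network sizes and the 3-sigma threshold rule are illustrative assumptions, not the paper's LMGP formulation.

```python
# Autoencoder-based unsupervised anomaly detection (illustrative sketch).
import numpy as np
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def detect_anomalies(x: np.ndarray, epochs: int = 200) -> np.ndarray:
    """Return a boolean mask marking samples with unusually high reconstruction error."""
    data = torch.tensor(x, dtype=torch.float32)
    model = AutoEncoder(x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(data), data)
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = ((model(data) - data) ** 2).mean(dim=1).numpy()
    # Flag points whose error exceeds a 3-sigma band of the error distribution
    # (an assumed decision rule; the paper's criterion may differ).
    return err > err.mean() + 3.0 * err.std()
```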
Citations: 0
Physically-based Rendering of Animated Point Clouds for eXtended Reality
Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-09-28 | DOI: 10.1115/1.4063559
Marco Rossoni, Matteo Pozzi, Giorgio Colombo, Marco Gribaudo, Pietro Piazzolla
Abstract: Point cloud 3D models are becoming increasingly popular thanks to the spread of scanning systems employed in many fields. When used for rendering purposes, point clouds are usually displayed with the original color acquired at scan time, without considering the lighting conditions of the scene where the model is placed. This leads to a lack of realism in many contexts, especially in the case of animated point clouds employed in eXtended Reality applications, where it would be desirable to have the model react to incoming light and integrate with the surrounding environment. This paper proposes the application of Physically Based Rendering (PBR), a rendering technique widely used in real-time computer graphics, to animated point cloud models to reproduce specular reflections and achieve a photo-realistic and physically accurate look under any lighting condition. First, we consider the extension of commonly used animated point cloud formats to include normal vectors and PBR parameters, as well as the encoding of the animated environment maps required by the technique. Then, an animated point cloud model is rendered with a shader implementing the proposed PBR method. Finally, the PBR pipeline is compared to traditional renderings of the same point cloud obtained with commonly used shaders, under different lighting conditions and environments. We show how the point cloud integrates better visually with its surroundings.
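To make the per-point shading idea concrete, here is a hedged NumPy sketch: each point carries a base color, a normal, and PBR parameters (roughness, metallic) and is lit by a single directional light using the standard Cook-Torrance GGX/Schlick terms. This is an illustration of the general PBR model, not the paper's shader code; the single-light setup and array shapes are assumptions.

```python
# Per-point physically based shading with one directional light (sketch).
import numpy as np

def shade_points(albedo, normals, roughness, metallic, light_dir, view_dir, light_color):
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    l = -light_dir / np.linalg.norm(light_dir)    # direction toward the light
    v = -view_dir / np.linalg.norm(view_dir)      # direction toward the camera
    h = (l + v) / np.linalg.norm(l + v)           # half vector
    n_dot_l = np.clip(n @ l, 0.0, 1.0)
    n_dot_v = np.clip(n @ v, 1e-4, 1.0)
    n_dot_h = np.clip(n @ h, 0.0, 1.0)

    # GGX normal distribution, Schlick-GGX geometry, Schlick Fresnel.
    a2 = (roughness ** 2) ** 2
    d = a2 / (np.pi * ((n_dot_h ** 2) * (a2 - 1.0) + 1.0) ** 2)
    k = (roughness + 1.0) ** 2 / 8.0
    g = (n_dot_v / (n_dot_v * (1 - k) + k)) * (n_dot_l / (n_dot_l * (1 - k) + k))
    f0 = 0.04 * (1 - metallic)[:, None] + albedo * metallic[:, None]
    fresnel = f0 + (1.0 - f0) * (1.0 - n_dot_v)[:, None] ** 5

    specular = (d * g)[:, None] * fresnel / (4.0 * n_dot_v * n_dot_l + 1e-4)[:, None]
    diffuse = (1.0 - metallic)[:, None] * albedo / np.pi
    return (diffuse + specular) * light_color * n_dot_l[:, None]
```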
Citations: 0
A tacholess order tracking method based on the STFTSC algorithm for rotor unbalance fault diagnosis under variable-speed conditions
Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-09-12 | DOI: 10.1115/1.4063401
Binyun Wu, Liang Hou, Shaojie Wang, Xiaozhen Lian
Abstract: Because rotors usually operate in a non-stationary mode with changing speeds, conventional rotor unbalance detection methods based on stationary signals produce a major "spectrum ambiguity" issue that affects detection accuracy. To this end, this study proposes a tacholess order tracking method based on the STFTSC algorithm, which is developed by combining the short-time Fourier transform (STFT) with the seam carving algorithm. First, the STFTSC algorithm is used to accurately extract the instantaneous frequency (IF) of the rotor and compute the instantaneous phase under variable-speed conditions. The original signal is then resampled in the angular domain to transform the non-stationary time-domain signal into a stable angle-domain signal, eliminating the effect of speed variations. Finally, the angle-domain signal is transformed into the order domain, where the discrete Fourier transform and a discrete spectrum correction method identify the amplitude and phase of the signal's fundamental frequency component. Simulation results show that the IF extracted by the STFTSC algorithm is more accurate than that obtained by traditional STFT spectral-peak detection and effectively eliminates the effect of speed fluctuations. A rotor dynamic-balancing experiment shows that unbalance correction based on the STFTSC algorithm is remarkable: after a single correction, the average unbalance decreased by 90.02% on the left side and 92.56% on the right side.
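The following is a hedged sketch of the tacholess order-tracking chain described above: extract an IF ridge from the STFT, integrate it to an instantaneous phase, resample the signal at uniform shaft-angle increments, and take the order spectrum. A simple per-frame peak pick stands in for the paper's seam-carving ridge extraction, and all parameter values are illustrative; the sketch assumes the extracted IF stays positive.

```python
# Tacholess order tracking: STFT ridge -> phase -> angle-domain resampling -> order spectrum.
import numpy as np
from scipy.signal import stft

def order_spectrum(x, fs, nperseg=1024, samples_per_rev=64):
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    if_track = f[np.argmax(np.abs(Z), axis=0)]       # crude IF ridge (Hz per frame)
    t_full = np.arange(len(x)) / fs
    inst_freq = np.interp(t_full, t, if_track)        # IF at every sample
    phase_rev = np.cumsum(inst_freq) / fs              # shaft revolutions vs. time

    # Resample the signal at equally spaced angular positions (angle domain).
    rev_grid = np.arange(0.0, phase_rev[-1], 1.0 / samples_per_rev)
    x_angle = np.interp(rev_grid, phase_rev, x)

    # Order spectrum: DFT of the angle-domain signal, axis in shaft orders.
    spectrum = np.abs(np.fft.rfft(x_angle)) / len(x_angle)
    orders = np.fft.rfftfreq(len(x_angle), d=1.0 / samples_per_rev)
    return orders, spectrum
```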
Citations: 0
A Global Correction Framework for Camera Registration in Video See-Through Augmented Reality Systems
IF 3.1 | Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-09-07 | DOI: 10.1115/1.4063350
Wenhao Yang, Yunbo Zhang
Augmented Reality (AR) enhances the user's perception of the real environment by superimposing virtual images generated by computers. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in various manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the invisible virtual environment to be precisely aligned with the physical environment so that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. During some robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace. This misalignment can potentially impact the accuracy of the robot's operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce misalignment in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the Head-Mounted Display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.
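For intuition, a minimal NumPy sketch of the transform chain mentioned above: a world-space point is carried through the HMD pose, then through the HMD-to-camera offset matrix found by offline calibration, and finally projected by the camera intrinsics. Matrix names, values, and shapes are illustrative assumptions, not the paper's calibration data.

```python
# World point -> HMD frame -> camera frame -> pixel coordinates.
import numpy as np

def project_point(p_world, T_world_to_hmd, T_hmd_to_camera, K):
    """Map a 3-D world point to pixel coordinates on the virtual screen."""
    p = np.append(p_world, 1.0)                    # homogeneous coordinates
    p_cam = T_hmd_to_camera @ T_world_to_hmd @ p   # world -> HMD -> camera frame
    uvw = K @ p_cam[:3]                            # pinhole projection
    return uvw[:2] / uvw[2]                        # perspective divide -> (u, v)

# Example usage with an identity HMD pose and a small calibrated lateral offset.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
T_world_to_hmd = np.eye(4)
T_hmd_to_camera = np.eye(4)
T_hmd_to_camera[0, 3] = 0.03   # assumed 3 cm offset between HMD and camera
print(project_point(np.array([0.1, 0.0, 1.0]), T_world_to_hmd, T_hmd_to_camera, K))
```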
Citations: 0
3D-Slice-Super-Resolution-Net: A Fast Few Shooting Learning Model for 3D Super-resolution Using Slice-up and Slice-reconstruction
IF 3.1 | Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-29 | DOI: 10.1115/1.4063275
Hongbin Lin, Qingfeng Xu, Handing Xu, Yanjie Xu, Yiming Zheng, Yubin Zhong, Zhenguo Nie
A 3D model is a storage format that can accurately describe the objective world. However, building a 3D model requires substantial acquisition resources to capture detail, and a precise 3D model often consumes a large amount of storage space. To eliminate these drawbacks, we propose a 3D data super-resolution model named the three-dimensional slice reconstruction model (3DSR), which uses low-resolution 3D data as input to produce a high-resolution result instantaneously and accurately, reducing the time and storage needed to build a precise 3D model. To boost the efficiency and accuracy of the deep learning model, the 3D data is split into multiple slices. 3DSR processes each slice into a high-resolution 2D image and reconstructs the images as high-resolution 3D data. A 3D data slice-up method and a slice-reconstruction method are designed to maintain the main features of the 3D data. Meanwhile, a pre-trained deep 2D convolutional neural network is used to expand the resolution of the 2D images, achieving superior performance. Our method saves both the time needed to train the deep learning model and the computation time needed to improve resolution. Furthermore, our model achieves better performance even when less data is used to train it.
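A hedged sketch of the slice-up / slice-reconstruction idea: a low-resolution voxel volume is split into 2-D slices along one axis, each slice is upscaled by a 2-D super-resolution step, and the upscaled slices are stacked back into a higher-resolution volume. The `upscale_2d` function is a placeholder for the pretrained 2-D network mentioned in the abstract (cubic interpolation stands in for it here), and only in-plane resolution is increased in this simplified version.

```python
# Slice-wise 3D super-resolution (illustrative pipeline).
import numpy as np
from scipy.ndimage import zoom

def upscale_2d(slice_2d: np.ndarray, scale: int) -> np.ndarray:
    # Placeholder for a pretrained 2-D super-resolution CNN.
    return zoom(slice_2d, scale, order=3)

def slice_super_resolution(volume: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscale a (D, H, W) volume slice by slice along the first axis."""
    up_slices = [upscale_2d(volume[i], scale) for i in range(volume.shape[0])]
    return np.stack(up_slices, axis=0)   # (D, H*scale, W*scale)

low_res = np.random.rand(16, 32, 32)
print(slice_super_resolution(low_res).shape)   # (16, 64, 64)
```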
Citations: 0
Self-Supervised Learning of Spatially Varying Process Parameter Models for Robotic Finishing Tasks
IF 3.1 | Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-29 | DOI: 10.1115/1.4063276
Yeo Jung Yoon, Santosh V. Narayan, S. Gupta
This paper presents a self-supervised learning approach for a robot to learn spatially varying process parameter models for contact-based finishing tasks. In many finishing tasks, a part has spatially varying stiffness. Some regions of the part enable the robot to execute the task efficiently. On the other hand, other regions of the part may require the robot to move cautiously in order to prevent damage and ensure safety. Compared to constant process parameter models, spatially varying process parameter models are more complex and challenging to learn. Our self-supervised learning approach consists of an initial parameter space exploration method, surrogate modeling, selection of a region sequencing policy, and development of a process parameter selection policy. We show that by carefully selecting and optimizing learning components, this approach enables a robot to efficiently learn spatially varying process parameter models for a given contact-based finishing task. We demonstrate the effectiveness of our approach through computational simulations and physical experiments with a robotic sanding case study. Our work shows that the learning approach optimized based on task characteristics significantly outperforms an unoptimized learning approach in terms of overall task completion time.
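A hedged sketch of the surrogate-modeling step only: explore a few process parameters on one region, fit a Gaussian-process surrogate of the task outcome, and pick the parameter the surrogate predicts to perform best. The `evaluate_on_region` function is a stand-in for a real robot trial, the force values are arbitrary, and region sequencing plus the full self-supervised loop are omitted.

```python
# Surrogate-based process parameter selection for one region (sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def evaluate_on_region(force: float) -> float:
    # Placeholder outcome model: material removed, penalized at high force.
    return -(force - 6.0) ** 2 + np.random.normal(scale=0.2)

# Initial parameter-space exploration on one region.
forces = np.linspace(2.0, 10.0, 6).reshape(-1, 1)
outcomes = np.array([evaluate_on_region(f[0]) for f in forces])

# Surrogate model of outcome vs. process parameter.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), normalize_y=True)
gp.fit(forces, outcomes)

# Process-parameter selection: choose the force the surrogate predicts best.
candidates = np.linspace(2.0, 10.0, 200).reshape(-1, 1)
best_force = candidates[np.argmax(gp.predict(candidates))][0]
print(f"selected contact force: {best_force:.2f} N")
```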
Citations: 0
STL-Free Adaptive Slicing Scheme for Additive Manufacturing of Cellular Materials
IF 3.1 | Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-23 | DOI: 10.1115/1.4063227
Sina Rastegarzadeh, Jida Huang
In recent years, advances in additive manufacturing (AM) techniques have called for a scalable fabrication framework for high-resolution designs. Despite a handful of process-specific design approaches, there is a gap to fill between computer-aided design (CAD) and the manufacturing of highly detailed multi-scale materials, especially for delicate cellular materials design. This gap ought to be filled by an avenue capable of efficiently slicing multi-scale, intricate designs. Most existing methods depend on a mesh representation, which is time-consuming and memory-intensive to generate. This paper proposes an adaptive direct slicing (mesh-free) pipeline that exploits the function representation (FRep) for hierarchical architected cellular materials design. To explore the capabilities of the presented approach, several sample structures with delicate architectures are fabricated using a stereolithography (SLA) printer. The computational efficiency of the proposed slicing algorithm is studied. Furthermore, the geometric frustration problem brought about by connecting distinct structures between functionally graded unit cells at the micro scale is also investigated.
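A minimal sketch of mesh-free (direct) slicing of a function representation: the part is an implicit field f(x, y, z) <= 0, and each layer's contours are extracted directly from the field on a 2-D grid via marching squares, without ever building an STL mesh. The gyroid-like lattice, grid resolution, and layer heights below are illustrative assumptions, not structures or settings from the paper.

```python
# Direct slicing of an implicit (FRep) cellular lattice.
import numpy as np
from skimage import measure

def gyroid(x, y, z, t=0.3):
    # Implicit cellular lattice: negative inside the solid shell.
    return np.abs(np.cos(x) * np.sin(y) + np.cos(y) * np.sin(z)
                  + np.cos(z) * np.sin(x)) - t

def slice_frep(frep, z_levels, n=256, extent=2 * np.pi):
    xs = np.linspace(0.0, extent, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    layers = []
    for z in z_levels:
        field = frep(X, Y, np.full_like(X, z))
        # Zero level set of the field = layer contour(s) in grid coordinates.
        layers.append(measure.find_contours(field, level=0.0))
    return layers

contours = slice_frep(gyroid, z_levels=np.linspace(0.0, 2 * np.pi, 50))
print(len(contours), "layers,", len(contours[0]), "contours in the first layer")
```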
Citations: 0
HG-CAD: Hierarchical Graph Learning for Material Prediction and Recommendation in CAD
IF 3.1 | Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-23 | DOI: 10.1115/1.4063226
Shijie Bian, Daniele Grandi, Tianyang Liu, P. Jayaraman, Karl Willis, Elliot T. Salder, Bodia Borijin, Thomas Lu, Richard Otis, Nhut Ho, Bingbing Li
To enable intelligent CAD design tools, we introduce a machine learning architecture, namely HG-CAD, that supports automated material prediction and recommendation for assembly bodies through joint learning of body- and assembly-level features using a hierarchical graph representation. Specifically, we formulate the material prediction and recommendation process as a node-level classification task over a novel hierarchical graph representation of CAD models, with a low-level graph capturing the body geometry, a high-level graph representing the assembly topology, and batch-level mask randomization enabling contextual awareness. This enables our network to aggregate geometric and topological features from both the body and assembly levels, leading to superior performance. Qualitative and quantitative evaluation of the proposed architecture on the Fusion 360 Gallery Assembly Dataset demonstrates the feasibility of our approach, outperforming both computer vision and human baselines while showing promise in application scenarios. The proposed HG-CAD architecture, which unifies the processing, encoding, and joint learning of multi-modal CAD features, can scale to large repositories and incorporate designers' knowledge into the learning process. These capabilities allow the architecture to serve as a recommendation system for design automation and a baseline for future work.
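A schematic PyTorch sketch of the two-level idea only: body-level features are pooled into an assembly-level context vector, which is concatenated back onto every body before material classification. The feature sizes, mean pooling, and the absence of true graph convolutions are simplifying assumptions; the paper uses a full hierarchical graph network, not this reduced form.

```python
# Body-level + assembly-level context for per-body material classification (sketch).
import torch
import torch.nn as nn

class BodyMaterialClassifier(nn.Module):
    def __init__(self, body_feat_dim=64, hidden=128, n_materials=10):
        super().__init__()
        self.body_encoder = nn.Sequential(nn.Linear(body_feat_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_materials))

    def forward(self, body_feats):             # (n_bodies, body_feat_dim)
        h = self.body_encoder(body_feats)      # per-body embeddings
        context = h.mean(dim=0, keepdim=True)  # assembly-level context (pooled)
        h = torch.cat([h, context.expand_as(h)], dim=1)
        return self.head(h)                    # per-body material logits

logits = BodyMaterialClassifier()(torch.randn(5, 64))
print(logits.shape)   # torch.Size([5, 10])
```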
Citations: 0
Methods for the Automated Determination of Sustained Maximum Amplitudes in Oscillating Signals
IF 3.1 | Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-08 | DOI: 10.1115/1.4063130
Nathaniel DeVol, Christopher Saldaña, Katherine Fu
Machine condition monitoring has been proven to reduce machine downtime and increase productivity. State-of-the-art research uses vibration monitoring for tasks such as maintenance and tool wear prediction. A less explored aspect is how vibration monitoring might be used to monitor equipment that is itself sensitive to vibration. In a manufacturing environment, one example of where this might be needed is in monitoring the vibration of optical linear encoders used in high-precision machine tools and coordinate measuring machines. Monitoring the vibration of sensitive equipment presents a unique case for vibration monitoring because an accurate calculation of the maximum sustained vibration is needed, as opposed to extracting trends from the data. To do this, techniques for determining sustained peaks in vibration signals are needed. This work fills this gap by formalizing and testing methods for determining sustained vibration amplitudes. The methods are tested on simulated signals based on experimental data. Results show that processing the signal directly with the novel Expire Timer method produces the smallest error on average under various test conditions. Additionally, this method can operate in real time on streaming vibration data.
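For context, an illustrative sliding-window estimate of the maximum sustained vibration amplitude: an amplitude level only counts if the signal envelope holds it for a minimum duration, so single transient spikes are ignored. This is a generic baseline for the task described above, not the paper's Expire Timer method, and the hold time and test signal are assumptions.

```python
# Generic sliding-window "sustained amplitude" baseline (not the Expire Timer method).
import numpy as np
from scipy.signal import hilbert

def max_sustained_amplitude(signal, fs, hold_time=0.5):
    """Largest envelope amplitude held continuously for at least `hold_time` seconds."""
    window = max(1, int(hold_time * fs))
    envelope = np.abs(hilbert(signal))          # instantaneous amplitude
    # A level is "sustained" over a window if the envelope never drops below it,
    # i.e. the window minimum; the answer is the largest such window minimum.
    sustained = [envelope[i:i + window].min()
                 for i in range(len(envelope) - window + 1)]
    return max(sustained)

fs = 1000
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)   # steady tone with unit amplitude
x[2000] += 9.0                    # one transient spike that should not count
print(max_sustained_amplitude(x, fs))   # close to 1.0, the sustained level
```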
Citations: 0
Human digital twin, the development and impact on design
IF 3.1 | Zone 3 (Engineering & Technology) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-08 | DOI: 10.1115/1.4063132
Yun-Hwa Song
In the past decade, human digital twins (HDTs) have attracted much attention in and beyond digital twin (DT) applications. In this paper, we discuss the concept and development of HDTs with a focus on their architecture, ethical concerns, key enabling technologies, and the opportunities for using HDTs in design. Based on the literature, we identify data, model, and interface as the three key modules of the proposed HDT architecture. Ethics is an important concern in the development and use of HDTs from the humanities perspective. For the key enabling technologies that support the functions of the HDT, we argue that IoT infrastructure, data security, wearables, human modeling, explainable artificial intelligence, minimum viable sensing, and data visualization are strongly associated with the development of HDTs. Based on current applications, we highlight the design opportunities of using HDTs in designing products, services, and systems, as well as a design tool to facilitate the design process.
Citations: 2