
Latest publications: 2022 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE)

DW vs OLTP Performance Optimization in the Cloud on PostgreSQL (A Case Study)
Dakota Joiner, Mathias Clement, Shek Tom Chan, Keegan Pereira, Albert Wong, Y. Khmelevsky, Joe Mahony, Michael Ferri
This case study presents the performance issues and solutions encountered in making a data warehouse (DW) perform well enough to help an industrial partner improve customer data retrieval performance. An online transaction processing (OLTP) relational database and a DW were deployed in PostgreSQL and tested against each other. Several test cases were carried out with the DW, including indexing and creating pre-aggregated tables, all guided by in-depth analysis of EXPLAIN plans. Queries and the DW design were continually improved throughout testing to ensure that the OLTP database and the DW were compared on equal terms. Seven queries (requested by the industrial client) were used to thoroughly test different performance aspects, reflecting client feedback and the complexity of requests across all areas the DW might cover. On average, the data warehouse showed a one-to-three order-of-magnitude increase in query execution performance, with the best result coming in at 2,493 times faster than the OLTP database. All test cases showed an increase in performance over the OLTP database. Additionally, the data contained in the DW took up 24% less storage space than in the OLTP database. These results indicate a promising direction for business analytics with data warehousing, as customers will experience significant cost savings and reduced time to receive desired results from their cloud data storage platforms. The work in this case study continues previous work in a much larger project on integrating database technologies with machine learning to improve natural language processing solutions as a cost-saving measure for utilities consumers.
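The gains reported above come largely from pre-aggregation: computing expensive GROUP BY results once so that analytical queries scan far fewer rows. A minimal sketch of that idea, using an in-memory SQLite database and an invented `sales` table (neither the schema nor the engine is from the study, which used PostgreSQL):

```python
import sqlite3

# Illustrative only: an OLTP-style fact table versus a pre-aggregated
# DW-style summary table. The schema here is a made-up example.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (customer_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(cid, float(cid % 7)) for cid in range(10_000)],
)

# DW-style pre-aggregation: compute per-customer totals once, so later
# analytical queries read one row per customer instead of every sale.
cur.execute(
    "CREATE TABLE sales_by_customer AS "
    "SELECT customer_id, SUM(amount) AS total FROM sales GROUP BY customer_id"
)
conn.commit()

# The analytical query against the raw OLTP table...
oltp_total = cur.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
# ...yields the same answer when read from the pre-aggregated table.
dw_total = cur.execute("SELECT SUM(total) FROM sales_by_customer").fetchone()[0]
assert abs(oltp_total - dw_total) < 1e-9
```

In PostgreSQL the same pattern would typically be a materialized view, with `EXPLAIN` used to confirm the planner reads the summary relation.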
DOI: 10.1109/RASSE54974.2022.9989603 (published 2022-11-07)
Citations: 1
Edge-Guided Video Super-Resolution Network
Haohsuan Tseng, Chih-Hung Kuo, Yiting Chen, Sinhong Lee
In this paper, we propose an edge-guided video super-resolution (EGVSR) network that utilizes the edge information of the image to effectively recover high-frequency details for high-resolution frames. The reconstruction process consists of two stages. In the first stage, the Coarse Frame Reconstruction Network (CFRN) generates coarse SR frames. In addition, we propose the Edge-Prediction Network (EPN) to capture the edge details that help supplement the missing high-frequency information. Unlike some prior SR works that tend to increase the depth of networks or use attention mechanisms to reconstruct large objects while ignoring small ones, we propose the Attention Fusion Residual Block (AFRB) to process objects of different sizes. The AFRB, an enhanced version of the conventional residual block, performs fusion through a multi-scale channel attention mechanism and serves as the basic operation unit in the CFRN and the EPN. Then, in the second stage, we propose the Frame Refinement Network (FRN), which contains multiple convolution layers. Through the FRN, we fuse and refine the coarse SR frames and the edge information learned in the first stage. Compared with state-of-the-art methods, our SR model improves PSNR by approximately 0.5% and SSIM by 1.8% on the benchmark VID4 dataset while reducing the number of parameters by 54%.
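The edge information the EPN is trained to capture is essentially a high-frequency map of the frame. As a rough, non-learned stand-in for such a map, a classical gradient-magnitude (Sobel) operator can be sketched as follows (illustrative only; the paper's EPN is a learned network, not a fixed filter):

```python
import numpy as np

def sobel_edge_map(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map of a grayscale image: a hand-crafted
    proxy for the high-frequency edge details an edge-prediction branch
    would supply to the reconstruction network."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")   # replicate borders
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)  # horizontal gradient
            gy[i, j] = np.sum(patch * ky)  # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge produces a strong response along the boundary
# columns and zero response in the flat regions.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edge_map(img)
```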
DOI: 10.1109/RASSE54974.2022.9989570 (published 2022-11-07)
Citations: 0
Autonomous Mover with Social Distance Respect
I-hsiang Lai, Wei-Liang Lin
Modern robots need to interact with humans and move around human environments, in places such as museums, restaurants, or supermarkets. Therefore, robots should have social navigation capability. This article uses object detection to detect pedestrians, fuses the detection result with lidar information to obtain the pedestrian's state, and then changes the navigation path according to the calculated state. When people are standing face-to-face and talking to each other, the autonomous mover bypasses them instead of passing between them. When a pedestrian in front of the autonomous mover is crossing its path from left to right, the autonomous mover turns left to pass on the other side instead of going straight and blocking the pedestrian. Therefore, the autonomous mover can navigate without disturbing pedestrians and respects social distance. Our approach uses a single RGB camera and a single-line lidar to detect pedestrians and accomplish these two specific goals in the real world. We fuse lidar information and the object detection result to obtain the position and face orientation of the pedestrian. We add a customized social layer to the cost map of an existing navigation system and thus change the original shortest-path algorithm. The face-to-face and crossing scenarios are verified in the hall of a university department building.
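The crossing behaviour described above reduces to estimating a pedestrian's lateral motion in the robot frame over a short track. A toy sketch of that decision, with an assumed lateral-shift threshold (the paper's actual pedestrian state fuses camera detections with lidar and includes face orientation, none of which is reproduced here):

```python
def crossing_direction(track, min_shift=0.3):
    """Classify a pedestrian's lateral motion from successive (x, y)
    positions in the robot frame (x: lateral, y: forward distance).
    Illustrative sketch: min_shift (metres) is an assumed parameter,
    not a value from the paper."""
    dx = track[-1][0] - track[0][0]
    if dx > min_shift:
        return "left_to_right"   # mover should yield, e.g. pass on the left
    if dx < -min_shift:
        return "right_to_left"
    return "not_crossing"

# A pedestrian drifting rightward across the mover's path:
track = [(-1.0, 3.0), (-0.4, 2.9), (0.2, 2.8), (0.8, 2.7)]
```

A real system would feed this classification into the social cost-map layer so the planner's shortest path bends around the predicted pedestrian corridor.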
DOI: 10.1109/RASSE54974.2022.9989723 (published 2022-11-07)
Citations: 0
Implementation of Gabor Filter Based Convolution for Deep Learning on FPGA
Yu-Wen Wang, Gwo Giun Chris Lee, Yu-Hsuan Chen, Shih-Yu Chen, Tai-Ping Wang
This paper implements an application-specific design for calculating two-dimensional convolution with given Gabor filters on a Field Programmable Gate Array (FPGA). The Convolutional Neural Network (CNN) is a widely used algorithm in the field of computer vision, but the amount of computation it requires is immense, so special algorithms and hardware are necessary to accelerate the process. We introduce the Eigen-transformation approach, which transforms the 16 Gabor filters into another 16 filters with increased symmetry. This reduces the number of operations and allows us to pre-add the input pixels corresponding to the positions of repeated coefficients. Previous work from our lab analyzed the symmetry properties of 7×7 Gabor filters, built the dataflow model of Gabor-filter-based convolution, and implemented it in software. In this paper, we analyze the four models of processing units for the transformed filter bank proposed in that work and use the Xilinx XUPV5-LX110T Evaluation Platform for prototyping. Each of the four proposed models has unique advantages that make it suitable for different applications. In the experiment, we use a 224×224 image as input, and the data bit-width is 32. Finally, we use the Xilinx ChipScope as an integrated logic analyzer for verification.
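The pre-adding of input pixels at repeated-coefficient positions is easiest to see in one dimension: with an even-symmetric filter, each multiplication can cover two taps. A small sketch of that folding idea (the paper's Eigen-transformation operates on 2-D 7×7 Gabor filters; this 1-D toy only illustrates the arithmetic saving):

```python
import numpy as np

def conv_symmetric(x, h):
    """Dot product with an even-symmetric filter, computed by pre-adding
    the input samples that share a coefficient. For a length-n filter
    this needs about n/2 multiplications instead of n."""
    n = len(h)
    assert np.allclose(h, h[::-1]), "filter must be even-symmetric"
    half = n // 2
    # Pre-add mirrored input pairs: one multiply now covers two taps.
    pre = [x[i] + x[n - 1 - i] for i in range(half)]
    acc = sum(p * c for p, c in zip(pre, h[:half]))
    if n % 2:                       # odd length: centre tap stands alone
        acc += x[half] * h[half]
    return acc

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
h = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])
# Matches the full dot product while using only four multiplications.
assert np.isclose(conv_symmetric(x, h), float(np.dot(x, h)))
```

In hardware, the pre-adders sit before a shared multiplier array, which is what shrinks the DSP count for a fixed filter bank.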
DOI: 10.1109/RASSE54974.2022.9989881 (published 2022-11-07)
Citations: 0
A Programmable CNN Accelerator with RISC-V Core in Real-Time Wearable Application
Sing-Yu Pan, Shuenn-Yuh Lee, Yi-Wen Hung, Chou-Ching K. Lin, G. Shieh
This paper proposes an epilepsy detection algorithm to identify seizure attacks. The algorithm includes a simplified signal preprocessing stage and an eight-layer Convolutional Neural Network (CNN). The paper also proposes an architecture, comprising a CNN accelerator and a two-stage reduced instruction set computer-V (RISC-V) CPU, to run the detection algorithm in real time. The accelerator is implemented in SystemVerilog and validated on the Xilinx PYNQ-Z2. The implementation consumes 3411 LUTs, 2262 flip-flops, 84 KB of block random access memory (BRAM), and only 6 DSPs. The total power consumption is 0.118 W at a 10-MHz operating frequency. The detection algorithm achieves 99.16% accuracy with fixed-point operations and a detection latency of 0.137 ms per class. Moreover, the CNN accelerator is programmable, so it can execute different CNN models to fit various wearable applications for different biomedical acquisition systems.
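The 99.16% figure is reported under fixed-point operation. A sketch of the quantize-multiply-accumulate pattern such integer hardware uses, with an assumed format of 8 fractional bits (the design's actual word lengths are not stated here, so treat the Q-format as an illustration):

```python
def to_fixed(x, frac_bits=8):
    """Quantize a real value to a signed fixed-point integer with
    frac_bits fractional bits (assumed format, not the paper's)."""
    return round(x * (1 << frac_bits))

def from_fixed(q, frac_bits=8):
    """Convert a fixed-point integer back to a real value."""
    return q / (1 << frac_bits)

def fixed_mac(xs, ws, frac_bits=8):
    """Multiply-accumulate entirely in integers, rescaling each product
    by an arithmetic right shift, as a CNN convolution kernel would on
    DSP slices."""
    acc = 0
    for x, w in zip(xs, ws):
        acc += (to_fixed(x, frac_bits) * to_fixed(w, frac_bits)) >> frac_bits
    return from_fixed(acc, frac_bits)

# The fixed-point result tracks the floating-point MAC to within one LSB.
xs, ws = [0.5, -0.25, 0.75], [0.125, 0.5, -0.375]
float_mac = sum(x * w for x, w in zip(xs, ws))
```

Keeping the rescale inside the accumulation loop mirrors hardware with a narrow accumulator; wider designs defer the shift to the end to reduce rounding error.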
DOI: 10.1109/RASSE54974.2022.9989732 (published 2022-11-07)
Citations: 0
Algorithmic Trading and Short-term Forecast for Financial Time Series with Machine Learning Models; State of the Art and Perspectives
Dakota Joiner, Amy Vezeau, Albert Wong, Gaétan Hains, Y. Khmelevsky
Stock price prediction with machine learning is an oft-studied area where numerous unsolved problems still abound, owing to the high complexity and volatility that technical-factor and sentiment-analysis models try to capture. Nearly all areas of machine learning (ML) have been tested as routes to a truly accurate predictive model. The accuracy of most models hovers around 50%, highlighting the need for further improvements in precision, data handling, forecasting, and ultimately prediction. This literature review aggregates and summarizes the current state of the art (from 2018 onward) under specifically selected criteria to guide further research into algorithmic trading. The review targets academic papers on ML or deep learning (DL) for algorithmic trading, or on data sets used for algorithmic trading, at minute-to-daily time scales. Systems that integrate and test sentiment and technical analysis are considered the best candidates for an eventual generalized trading algorithm that can be applied to any stock, future, or traded commodity. However, much work remains to be done in applying natural language processing and choosing text sources to find the most effective mixture of sentiment and technical analysis. Since even the best models are useless in isolation, we also search for publications on data warehousing systems that aggregate the financial factors impacting stock prices. A brief review of this area is included.
DOI: 10.1109/RASSE54974.2022.9989592 (published 2022-11-07)
Citations: 2
A Diversity, Equity, and Inclusion Model for Engineering Curriculums
Holly A. H. Handley, A. Marnewick
The Accreditation Board for Engineering and Technology (ABET) has included the principles of diversity, equity, and inclusion (DEI) in the General Criteria for Accrediting Engineering Programs. The intent is for these professional competencies to be taught in tandem with the technical skills provided by the engineering curriculum; however, there are few guidelines on how to do this. In this paper, the authors propose a model to help instructors incorporate DEI principles into existing engineering coursework. The approach is based on a competency-building model previously developed to integrate international competencies into systems engineering courses by identifying opportunities in the student learning cycle. This paper adapts that model by identifying both the appropriate competencies to develop and the classroom context factors that support DEI. The competencies, cognitive style awareness and teamwork, are fostered using constructs from a learning model, while the classroom context factors include the technical content, interactions with other students, and the teaching environment. An example illustrates the use of the model in a systems engineering curriculum to improve an existing course module to better adhere to DEI principles. The DEI competencies identified by this model augment those advocated by the systems engineering community, i.e., the characteristics necessary to succeed as a professional in the systems engineering field. The integrated DEI model can be used to improve existing curricula to meet these needs.
DOI: 10.1109/RASSE54974.2022.9989693 (published 2022-11-07)
Citations: 0
YOLO-Based Deep-Learning Gaze Estimation Technology by Combining Geometric Feature and Appearance Based Technologies for Smart Advertising Displays
Chin-Chieh Chang, Wei-Liang Ou, Hua-Luen Chen, Chih-Peng Fan
In this study, a YOLO-based deep-learning gaze estimation technology is developed for non-contact smart advertising displays. By integrating appearance-based and geometric-feature technologies, the facial-feature coordinates inferred by YOLOv3-tiny-based models provide the training data for gaze estimation without a calibration process. In the experiments, the input size of the YOLOv3-tiny-based models is 608×608 pixels, and the models localize the facial directions and two facial features well. With the YOLOv3-tiny-based model in a cross-person test, the proposed method achieves averaged gaze estimation accuracies of 66.38%, 80.87%, and 88.34% for the nine-, six-, and four-block modes, respectively, with no calibration process.
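The nine-, six-, and four-block modes partition the display into coarse gaze targets. A geometric sketch of mapping an estimated gaze point to a block index (the block layouts and coordinate convention here are assumptions for illustration; the paper's feature-to-gaze mapping is not reproduced):

```python
def gaze_to_block(gx, gy, width, height, rows, cols):
    """Map a gaze point (gx, gy) in display pixel coordinates, origin at
    the top-left, to a block index in a rows x cols grid. Blocks are
    numbered row-major from 0. Assumed geometry, not the paper's."""
    col = min(int(gx / width * cols), cols - 1)   # clamp the right edge
    row = min(int(gy / height * rows), rows - 1)  # clamp the bottom edge
    return row * cols + col

# Nine-block (3x3) mode on a 1920x1080 display: the screen centre lands
# in block 4 (middle row, middle column).
centre_block = gaze_to_block(960, 540, 1920, 1080, 3, 3)
```

Coarser grids (2x3 or 2x2) make each block larger, which is consistent with the higher accuracies reported for the six- and four-block modes.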
DOI: 10.1109/RASSE54974.2022.9989741 (published 2022-11-07)
Citations: 1
Convolutional Layers Acceleration By Exploring Optimal Filter Structures
Hsi-Ling Chen, J. Yang, Song-An Mao
CNN models are becoming increasingly mature, and many adopt deeper structures to better accomplish their task objectives; the resulting computational and storage burdens make them unfavorable for deployment on edge devices. In this paper, we propose an approach that optimizes the filter structure by starting from the convolutional filters and finding their minimum structure. By reducing filters to this minimum structure in both the spatial and channel dimensions, the number of model parameters and the computational complexity are effectively reduced. Current channel pruning methods prune the same channels in every convolutional layer, which easily forces a trade-off between pruning rate and accuracy loss. Instead, we propose a new channel pruning approach that finds the most suitable channels for each filter, providing a finer-grained pruning method. Experiments on classification CNN models such as VGG16 and ResNet56 show that the proposed method can successfully reduce the models' computation without losing much accuracy. The proposed method performs well in compressing the model and reducing the number of parameters required for real applications.
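The key idea in the abstract is that each filter keeps its own subset of input channels, rather than all filters in a layer sharing one pruned channel set. A minimal NumPy sketch of that idea, under assumptions of our own (L1-norm channel ranking and a fixed keep ratio, neither of which is stated in the abstract):

```python
import numpy as np

# Illustrative sketch (not the authors' method): per-filter channel
# selection for a conv layer with weights of shape
# (out_channels, in_channels, k, k). Instead of pruning the same input
# channels for every filter, each filter keeps its own top-k channels,
# here ranked by the L1 norm of each input-channel slice.
def per_filter_channel_masks(weights, keep_ratio=0.5):
    out_ch, in_ch, _, _ = weights.shape
    keep = max(1, int(in_ch * keep_ratio))
    masks = np.zeros((out_ch, in_ch), dtype=bool)
    for f in range(out_ch):
        # L1 norm of each input-channel slice of filter f
        scores = np.abs(weights[f]).sum(axis=(1, 2))
        masks[f, np.argsort(scores)[-keep:]] = True
    return masks

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8, 3, 3))  # toy layer: 4 filters, 8 input channels
masks = per_filter_channel_masks(w, keep_ratio=0.25)
print(masks.sum(axis=1))  # each filter keeps 2 of its 8 channels
```

Because the kept channels can differ per filter, the pruning decision is finer-grained than layer-wide channel pruning, which is the distinction the abstract draws.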
DOI: 10.1109/RASSE54974.2022.9989667 · Published 2022-11-07
Citations: 0
RASSE 2022 Cover Page
DOI: 10.1109/rasse54974.2022.9989913 · Published 2022-11-07
Citations: 0
Journal
2022 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE)