
Latest publications from the 2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)

Stock Market Prediction using Recurrent Neural Network’s LSTM Architecture
Koushik Sutradhar, Sourav Sutradhar, Iqbal Ahmed Jhimel, S. Gupta, Mohammad Monirujjaman Khan
Stock market price prediction is a difficult undertaking that generally requires extensive human-computer interaction. The stock market is fraught with risk and influenced by a wide variety of factors; of all market sectors, it is among the most volatile and active. Greater caution is required when buying and selling stocks of various corporations and businesses. As a result, stock market forecasting is an important endeavor in business and finance. This study analyzes an explicit forecasting tactic based on Machine Learning architectures and predictive algorithms and gives an independent model-based strategy for predicting stock prices. The predictor model is based on the Recurrent Neural Network's LSTM (Long Short-Term Memory) architecture, which specializes in time-series data classification and prediction. The model performs rigorous mathematical analysis and estimates the Root Mean Square Error (RMSE) to improve forecast accuracy. All calculations and performance checks are done in Python 3, and a number of machine learning libraries are used for prediction and visualization. This study demonstrates that stock performance, sentiment, and social data are all closely related to recent historical data, and it establishes a framework and predicts trading-pattern linkages suited for High Frequency Stock Trading based on preset parameters using Machine Learning.
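The RMSE criterion the abstract mentions is straightforward to compute; a minimal sketch in plain Python (the price values are illustrative, not data from the study):

```python
import math

def rmse(predicted, actual):
    """Root Mean Square Error between two equal-length sequences."""
    assert len(predicted) == len(actual) and actual, "sequences must match and be non-empty"
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Illustrative closing prices only -- not results from the paper.
predicted = [101.2, 102.8, 104.1]
actual    = [100.0, 103.0, 105.0]
print(round(rmse(predicted, actual), 3))  # -> 0.874
```

A lower RMSE on held-out data indicates that the LSTM's forecasts track the actual price series more closely.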
DOI: 10.1109/uemcon53757.2021.9666562 (published 2021-12-01)
Citations: 3
A Multi-Memory Field-Programmable Custom Computing Machine for Accelerating Compute-Intensive Applications
Shrikant S. Jadhav, C. Gloster, Jannatun Naher, C. Doss, Youngsoo Kim
In this paper, we present an FPGA-based multi-memory controller for accelerating computationally intensive applications. Our architecture accepts multiple inputs and produces multiple outputs on each clock cycle, and it includes processor cores with pipelined functional units tailored to each application. Additionally, we present an approach that achieves a one to two orders-of-magnitude speedup over a traditional software implementation executing on a conventional multi-core processor. Even though the clock frequency of the Field-Programmable Custom Computing Machine (FCCM) is an order of magnitude lower than that of a conventional multi-core processor, the FCCM is significantly faster. We used the Power function as an application to demonstrate the merits of our FCCM. In our experiments, we executed the Power function in software and compared the software execution times with the execution time of the FCCM; we also compared FCCM execution time with an OpenMP implementation of the function. Our experiments show that our multi-memory architecture executes the Power function 57X faster than the software implementation and 17X faster than the OpenMP implementation.
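The speedup figures quoted above are ratios of measured execution times; a minimal sketch of how such a comparison is timed and reported in Python (the naive power kernel, workload sizes, and accelerator timings are illustrative, not the paper's implementation):

```python
import time

def power(base, exponent):
    """Naive iterative power, standing in for the compute-intensive kernel."""
    result = 1.0
    for _ in range(exponent):
        result *= base
    return result

def speedup(baseline_seconds, accelerated_seconds):
    """Speedup factor of an accelerated run over a software baseline."""
    return baseline_seconds / accelerated_seconds

# Time the software baseline (illustrative workload).
start = time.perf_counter()
for _ in range(1000):
    power(1.000001, 1000)
baseline = time.perf_counter() - start

# With hypothetical FCCM and OpenMP timings, the paper's 57X and 17X
# figures would be reported like this:
print(f"FCCM speedup: {speedup(baseline, baseline / 57):.0f}X")
print(f"OpenMP speedup: {speedup(baseline, baseline / 17):.0f}X")
```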
DOI: 10.1109/uemcon53757.2021.9666601 (published 2021-12-01)
Citations: 0
Using EEG and fNIRS Measurements for Analysis on the Effects of Heat Stress on Short-term Memory Performance
J. D. L. Cruz, Douglas Shimizu, K. George
Stress in various amounts has the potential to reduce one's efficiency at performing various tasks. Heat stress in particular is a natural element often experienced by firefighters on duty, due both to the environments they are exposed to and to the heavy protective gear they wear. This study analyzed subjects' stress levels using fNIRS and EEG while they played a PC game that tested their short-term memory. Trials were conducted and compared with subjects both wearing and not wearing turnout firefighter gear. Heart rate, blood oxygen level, and body temperature were also measured. EEG and fNIRS data were analyzed and processed in MATLAB. The data indicate that although stress was experienced during the memory game, short-term memory performance was not substantially impaired by it. Comparison of the gear and no-gear trials indicated that wearing gear slightly amplified the stress felt during the short-term memory test, although it did not have a significantly detrimental impact on memory either.
DOI: 10.1109/uemcon53757.2021.9666525 (published 2021-12-01)
Citations: 2
The Requirements of Fog/Edge Computing-Based IoT Architecture
Lama AlAwlaqi, Amaal AlDawod, Ray AlFowzan, Lamia Al-Braheem
Fog/Edge computing architectures have become a hot research topic with the recent development of the Internet of Things (IoT) field. Although several studies have been published in this field, more focus is needed on exploring the analytical techniques used with these architectures. The problem to be addressed is that ignoring IoT requirements when selecting analytical techniques may affect the performance of Fog/Edge computing. Therefore, this paper first briefly discusses the IoT requirements for Fog/Edge computing. Then, studies related to Fog/Edge computing are presented, and a comparative analysis is conducted to determine whether each proposed architecture considers the IoT requirements. This can be considered a step toward designing an efficient IoT Fog/Edge computing architecture. In addition, highlighting the IoT requirements that are not considered may encourage researchers to contribute more to this field.
DOI: 10.1109/UEMCON53757.2021.9666547 (published 2021-12-01)
Citations: 2
Machine Learning application lifecycle augmented with explanation and security
Saikat Das, Ph.D., S. Shiva
We have developed a Distributed Denial of Service (DDoS) intrusion detection framework that employs ML ensembles of both supervised and unsupervised classifiers, which are complementary in reaching a corroborated classification decision. Our work so far has been limited to DDoS attack detection techniques. Based on our review of current ML system development life cycles, we propose to extend our framework to general ML system development. We also propose to augment the general life-cycle model with security features, enabling security to be built in as development progresses and bolted on as flaws are discovered after deployment. Most ML systems today operate in a black-box mode, providing users with predictions but no associated reasoning as to how the predictions are arrived at. There is now heavy emphasis on building mechanisms that help users develop higher confidence in accepting the predictions of ML systems. Such an explainability feature of ML model predictions is a must for critical systems, so we also propose to augment our life-cycle model with explainability features. Thus, our ultimate goal is to develop a generic ML lifecycle process augmented with security and explainability features. Such an ML lifecycle process will be of immense use in ML systems development across all domains.
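The corroborated ensemble decision described above can be sketched as a simple vote-combination step; a minimal illustration in plain Python (the vote values and agreement threshold are hypothetical, not the framework's actual fusion rule):

```python
from collections import Counter

def corroborated_decision(votes, threshold=0.5):
    """Combine binary classifier votes (1 = attack, 0 = benign).

    Returns the majority label and whether the ensemble corroborates it,
    i.e. more than `threshold` of the classifiers agree.
    """
    majority, count = Counter(votes).most_common(1)[0]
    agreement = count / len(votes)
    return majority, agreement > threshold

# Hypothetical outputs from supervised and unsupervised detectors.
votes = [1, 1, 0, 1, 1]
label, corroborated = corroborated_decision(votes)
print(label, corroborated)  # -> 1 True
```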
DOI: 10.1109/uemcon53757.2021.9666619 (published 2021-12-01)
Citations: 3
Phishing Attacks Detection: A Machine Learning-Based Approach
Fatima Salahdine, Zakaria El Mrabet, N. Kaabouch
Phishing attacks are among the most common social engineering attacks, targeting users' emails to fraudulently steal confidential and sensitive information. They can be used as part of more massive attacks launched to gain a foothold in corporate or government networks. Over the last decade, a number of anti-phishing techniques have been proposed to detect and mitigate these attacks; however, they are still inefficient and inaccurate. Thus, there is a great need for efficient and accurate detection techniques to cope with these attacks. In this paper, we propose a phishing attack detection technique based on machine learning. We collected and analyzed more than 4000 phishing emails targeting the email service of the University of North Dakota, and we modeled these attacks by selecting 10 relevant features and building a large dataset. This dataset was used to train, validate, and test the machine learning algorithms. Four metrics were used for performance evaluation: probability of detection, probability of miss-detection, probability of false alarm, and accuracy. The experimental results show that better detection can be achieved using an artificial neural network.
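The four evaluation metrics named above all follow from confusion-matrix counts; a minimal sketch in plain Python, treating phishing as the positive class (the counts are illustrative, not results from the paper):

```python
def detection_metrics(tp, fn, fp, tn):
    """Compute the four metrics from confusion-matrix counts:
    tp/fn/fp/tn = true positive, false negative, false positive, true negative."""
    p_d  = tp / (tp + fn)                    # probability of detection
    p_md = fn / (tp + fn)                    # probability of miss-detection
    p_fa = fp / (fp + tn)                    # probability of false alarm
    acc  = (tp + tn) / (tp + fn + fp + tn)   # accuracy
    return p_d, p_md, p_fa, acc

# Illustrative counts only -- not the paper's results.
p_d, p_md, p_fa, acc = detection_metrics(tp=380, fn=20, fp=10, tn=390)
print(p_d, p_md, p_fa, acc)  # -> 0.95 0.05 0.025 0.9625
```

Note that probability of detection and probability of miss-detection are complements, so p_d + p_md always equals 1.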
DOI: 10.1109/UEMCON53757.2021.9666627 (published 2021-12-01)
Citations: 8
Atmospheric Turbulence Identification in a multi-user FSOC using Supervised Machine Learning
Federica Aveta, Siu Man Chan, Nabil Asfari, H. Refai
Atmospheric turbulence can heavily affect free space optical communication (FSOC) link reliability. It introduces random fluctuations in the received signal intensity, degrading system communication performance. While extensive research has been conducted on estimating atmospheric turbulence in single-user FSOC, the effects of a turbulent channel on multi-point FSOC have only recently gained attention. In fact, the latest results showed the feasibility of multi-user FSOC in which users, sharing time and bandwidth resources, communicate with a single optical access node. This paper presents a machine learning (ML)-based methodology to identify how many users are concurrently transmitting to, and overlapping at, a single receiver while interfering with each other, and which user is propagating through a turbulent channel. The proposed methodology takes two approaches: 1) traditional classification ML algorithms and 2) a Convolutional Neural Network (CNN). Both employ the amplitude distribution of the received mixed signals as input features. 100% validation accuracy was achieved by the CNN on an experimental data set of 900 images.
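The amplitude-distribution features the abstract refers to are, in essence, normalized histograms of received signal intensities; a minimal sketch in plain Python (the sample values and bin count are hypothetical, not taken from the paper's setup):

```python
def amplitude_histogram(samples, bins=8):
    """Normalized amplitude distribution of a received signal,
    usable as an input feature vector for a classifier."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0   # avoid zero width for constant signals
    counts = [0] * bins
    for s in samples:
        idx = min(int((s - lo) / width), bins - 1)  # clamp the max sample into the last bin
        counts[idx] += 1
    total = len(samples)
    return [c / total for c in counts]

# Hypothetical amplitudes of a mixed signal from two superimposed users.
signal = [0.1, 0.9, 1.1, 0.2, 1.0, 0.15, 0.95, 1.05]
features = amplitude_histogram(signal, bins=4)
print(features)  # -> [0.375, 0.0, 0.0, 0.625]
```

A classifier (or a CNN, on an image rendering of the distribution) is then trained on such feature vectors to separate user counts and turbulence conditions.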
DOI: 10.1109/UEMCON53757.2021.9666498 (published 2021-12-01)
Citations: 0
Research and Development of Multipurpose Unmanned Aerial Vehicle (Flying Drone)
Imran Al Muneem, Sakif Md. Fahim, Fazle Rabby Khan, Tanjir Ahmed Emon, Md. Sabiul Islam, Mohammad Monirujjaman Khan
Unmanned aerial vehicle (UAV) technology, properly implemented, can solve many emergency problems in the civilian and military sectors [1]. However, it is not yet commercially used at large scale in many countries because of numerous security factors. Proper deployment of drones can address the problem of delivering emergency medical goods over inaccessible roads, provide quick surveillance for military and government law enforcement agencies, and much more. Traditional transportation infrastructure might be affected in the same way by delivery drones. This paper presents a new drone model. With the suggested design, multipurpose work including emergency delivery and a surveillance network will be more time-efficient and far more economical, potentially saving lives. Moreover, in the current COVID-19 pandemic, it will be very helpful for supplying medicine and goods to locked-down areas.
DOI: 10.1109/uemcon53757.2021.9666736 (published 2021-12-01)
Citations: 1
An Innovative Method for Automatic American Sign Language Interpretation using Machine Learning and Leap Motion Controller
Jon Jenkins, S. Rashad
Millions of people globally use some form of sign language in their everyday lives. There is a need for a method of gesture recognition that is as easy to use and as ubiquitous as voice recognition is today. In this paper we explore an innovative way to translate from sign language to speech, utilizing the Leap Motion Controller and machine learning algorithms to capture and analyze hand movements in real time, then converting the interpreted signs into spoken words. We seek to build a system that is easy to use, intuitive to understand, adaptable to the individual, and usable in everyday life. The system works adaptively, learning new signs to expand its dictionary and allowing higher accuracy at the individual level. It has a wide range of applications in healthcare, education, gamification, communication, and more. The Leap Motion Controller, an optical hand-tracking device, is used to capture hand movements and build supervised machine learning models that can be trained to accurately recognize American Sign Language (ASL) symbols being signed in real time. Experimental results show that the proposed method is promising and provides a high level of accuracy in recognizing ASL.
{"title":"An Innovative Method for Automatic American Sign Language Interpretation using Machine Learning and Leap Motion Controller","authors":"Jon Jenkins, S. Rashad","doi":"10.1109/UEMCON53757.2021.9666640","DOIUrl":"https://doi.org/10.1109/UEMCON53757.2021.9666640","url":null,"abstract":"Millions of people globally use some form of sign language in their everyday lives. There is a need for a method of gesture recognition that is as easy to use and ubiquitous as voice recognition is today. In this paper we explore a way to translate from sign language to speech using an innovative method, utilizing the Leap Motion Controller and machine learning algorithms to capture and analyze hand movements in real time, then converting the interpreted signs into spoken word. We seek to build a system that is easy to use, intuitive to understand, adaptable to the individual, and usable in everyday life. This system will be able to work in an adaptive way to learn new signs to expand the dictionary of the system and allow higher accuracy on an individual level. It will have a wide range of applications for healthcare, education, gamification, communication, and more. An optical hand tracking piece of hardware, the Leap Motion Controller will be used to capture hand movements and information to create supervised machine learning models that can be trained to accurately guess American Sign Language (ASL) symbols being signed in real time. Experimental results show that the proposed method is promising and provides a high level of accuracy in recognizing ASL.","PeriodicalId":127072,"journal":{"name":"2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123507926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
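The pipeline the abstract describes, mapping hand-movement features captured by the Leap Motion Controller to sign labels with a supervised model that can adaptively absorb new signs, can be sketched as follows. This is not the authors' code: the feature dimensionality, the nearest-centroid classifier, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for Leap Motion hand-feature vectors (hypothetical):
# 15 dims, e.g. 5 fingertip positions x 3 coordinates per frame.
sign_a = rng.normal(0.0, 0.1, size=(50, 15))  # samples of sign "A"
sign_b = rng.normal(1.0, 0.1, size=(50, 15))  # samples of sign "B"

# One template (centroid) per known sign. New signs can be added
# adaptively by appending a centroid, echoing the "expandable
# dictionary" idea in the abstract.
centroids = {"A": sign_a.mean(axis=0), "B": sign_b.mean(axis=0)}

def classify(frame):
    """Label a feature vector with the nearest sign template."""
    return min(centroids, key=lambda s: np.linalg.norm(frame - centroids[s]))

# A new frame drawn near cluster B is labeled accordingly.
pred = classify(rng.normal(1.0, 0.1, size=15))
```

A production system would replace the centroids with a trained classifier over real Leap Motion features, but the adapt-by-adding-a-template structure is the same.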
Classifying Plastic Waste on River Surfaces utilising CNN and Tensorflow
J. McShane, Kevin Meehan, Eoghan Furey, M. McAfee
Waste in rivers is an ever-increasing problem. This paper will look at Deep Learning and Computer Vision technologies to determine if they can be applied to the problem domain. Usage of Deep Learning and Computer Vision technologies has grown massively in the last few years thanks to increased computational power, the availability of training data such as ImageNet, and the availability of more complex and efficient algorithms. This research investigates two models to determine which one is more suited for the problem domain by evaluating their results from training and testing on a waste dataset developed for this research. The dataset is developed four times, each variant applying more pre-processing techniques than the last. As a result, the same dataset is tested four times on both models at varying levels of pre-processing. The first variant of the dataset had no pre-processing, the second had aspect-ratio adjusting, the third was augmented by the image data generator, and the fourth by an independent augmentation pipeline. The developed waste dataset has images of 100x100 pixels regardless of variant. Variant one contained 1,000 images and expanded to 19,973 images after pipeline augmentation in variant four. Both VGG-16 and DenseNet-201 will have all four variants run on them to investigate which CNN best suits this research domain, and also to investigate how applying different pre-processing techniques affects the results yielded by the two CNN models.
{"title":"Classifying Plastic Waste on River Surfaces utilising CNN and Tensorflow","authors":"J. McShane, Kevin Meehan, Eoghan Furey, M. McAfee","doi":"10.1109/UEMCON53757.2021.9666556","DOIUrl":"https://doi.org/10.1109/UEMCON53757.2021.9666556","url":null,"abstract":"Waste in rivers is an ever-increasing problem. This paper will look at Deep Learning and Computer Vision technologies to determine if they can be applied to the problem domain. Usage of Deep Learning and Computer Vision technologies has grown massively in the last few years thanks to increased computational power, the availability of training data such as ImageNet, and the availability more complex and efficient algorithms. This research investigates two models to determine which one is more suited for the problem domain by evaluating their results based on performing training and testing on a developed waste dataset for the purposes of this research. The dataset is developed four times, each variant incurring more implementation of pre-processing techniques than the other. This resulted in the same dataset being tested four times on both models with varying levels of pre-processing. The first variant of the dataset had no pre-processing, the second with aspect ratio adjusting, the third dataset being augmented by the image data generator, and the fourth by way of an independent augmentation pipeline. The developed waste dataset has images of size 100x100 dimensions regardless of variant. Variant one of the waste datasets contained 1000 images and expanded all the way up to 19,973 images after pipeline augmentation in variant 4. Both VGG-16 and DenseNet-201 will have all four variants implemented on them to investigate which CNN best suits this research domain but also investigate the differences of applying different pre-processing techniques and how this affects results yielded by the two CNN models.","PeriodicalId":127072,"journal":{"name":"2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121305096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
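The augmentation step described in the abstract, expanding roughly 1,000 source images to nearly 20,000 label-preserving variants before CNN training, can be illustrated with a minimal sketch. The specific transforms (flips, 90-degree rotations), the array shapes, and the random data below are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for 100x100 RGB river-surface images, values in [0, 1].
images = rng.random((4, 100, 100, 3))

def augment(img):
    """Yield simple label-preserving variants of one image:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    yield img
    yield np.fliplr(img)           # mirror left-right
    yield np.flipud(img)           # mirror top-bottom
    for k in (1, 2, 3):
        yield np.rot90(img, k)     # rotate by k * 90 degrees

# Each source image yields 6 variants, so the dataset grows 6x.
augmented = [v for img in images for v in augment(img)]
```

In practice such transforms would be applied on the fly by a framework utility (e.g. a Keras image data generator, as the abstract mentions) or by a standalone pipeline writing the expanded set to disk, which is how 1,000 images can grow to 19,973.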
Journal
2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)