
2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC): Latest Publications

Design of Automated Solar Floor Cleaner using IOT
K. Maniraj, Kiran Dasari, B. Ravi, Pallavi Madamanchi, Meghana Lanka, B. Kumar
Technology makes cleaning more intelligent and accessible. Conventional energy sources are becoming saturated and will eventually run out, so solar power is considered instead of nonrenewable energy; today, practically every field uses solar energy. Cleaning is one household task that never becomes obsolete and readily welcomes new technology. Floors are cleaned with broomsticks, vacuum cleaners, and advanced robot cleaners such as the Roomba, yet mid-range vacuum cleaners and even advanced robotic cleaners are too expensive for low- and middle-income consumers. Traditional vacuum cleaners reduce the human effort needed to clean floors, but the user must stay behind the machine to direct the suction pipe to dusty areas, and these cleaners are plug-in devices that can only be used while connected to mains power. In the proposed design, solar energy charges a battery that powers the driving circuit. The cleaner uses an Arduino Uno and an L293D motor driver. It drives autonomously, using sensor feedback to recognize obstructions and avoid them, while a Bluetooth module lets the user steer the cleaner to any desired area by accepting commands and driving the model. This household and outdoor cleaner provides easy and rapid cleaning and avoids the 'plug in and use' method of regular vacuum cleaners by moving and cleaning concurrently. Thus, the automated solar floor cleaner offers efficient cleaning benefits and uses.
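The abstract describes the control scheme only in prose. As a rough illustration of that logic (autonomous obstacle avoidance from a distance sensor, with Bluetooth commands as a manual override), the following Python sketch simulates the decision loop; the sensor, Bluetooth, and motor functions are hypothetical stand-ins for the ultrasonic sensor, Bluetooth module, and L293D outputs on the real Arduino hardware, not the authors' firmware.

```python
# Simulated control loop for the cleaner described above (not the authors'
# firmware). The sensor and motor functions below are stand-ins for the
# ultrasonic sensor, Bluetooth module, and L293D motor-driver pins that the
# real Arduino hardware would use.
import random
import time

OBSTACLE_THRESHOLD_CM = 20            # assumed minimum clearance before turning

def read_distance_cm():
    return random.uniform(5, 100)     # placeholder for an ultrasonic reading

def read_bluetooth_command():
    return None                       # placeholder: no pending user command

def drive(action):
    print(f"motor driver -> {action}")  # placeholder for L293D pin control

def autonomous_step():
    # Sensor-driven obstacle avoidance: turn away when too close, else continue.
    if read_distance_cm() < OBSTACLE_THRESHOLD_CM:
        drive("stop")
        drive("turn_right")
    else:
        drive("forward")

def manual_step(cmd):
    # Map a Bluetooth command character to a drive action; unknown commands stop.
    drive({"F": "forward", "B": "backward",
           "L": "turn_left", "R": "turn_right"}.get(cmd, "stop"))

if __name__ == "__main__":
    for _ in range(10):               # a short simulated run
        cmd = read_bluetooth_command()
        if cmd:
            manual_step(cmd)          # a pending user command takes priority
        else:
            autonomous_step()
        time.sleep(0.05)
```

On the actual cleaner this loop would correspond to the Arduino sketch's main loop, with the same priority rule: a pending user command overrides autonomous motion.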
{"title":"Design of Automated Solar Floor Cleaner using IOT","authors":"K. Maniraj, Kiran Dasari, B. Ravi, Pallavi Madamanchi, Meghana Lanka, B. Kumar","doi":"10.1109/ASSIC55218.2022.10088311","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088311","url":null,"abstract":"Technology makes cleaning more intelligent and accessible. Future energy sources will become saturated and run out. Instead of using nonrenewable energies, consider solar power. Today, practically every field uses solar energy. Cleaning is one household task that never becomes obsolete and welcomes new technology. Floors are cleaned with broomsticks, vacuum cleaners, and advanced robot cleaners like Roomba. Middle-age vacuum cleaners and even advanced robotic cleaners are too expensive for low and middle-class consumers. Traditional vacuum cleaners reduce the amount of human energy needed to clean floors, but the user must remain behind the machine to direct the suction pipe to dusty areas. These vacuum cleaners are likewise plugins, meaning they can only be used while plugged in. Solar energy is used to charge the battery, which powers the driving circuit. This cleaner uses Arduino-Uno and Motor driver L293D. This designed solar floor cleaner is driven autonomously with sensor communication by recognizing obstructions and avoiding them. Another Bluetooth module lets the user steer the cleaner to any desired area. This module accepts commands and drives the model. This household and outdoor cleaner provide easy and rapid cleaning. It avoids regular vacuum cleaners ‘plugin and use’ method by self-moving and cleaning concurrently. Thus, automated solar floor cleaners have efficient cleaning benefits and uses.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130261735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Challenges In Applying Artificial Intelligence In Banking Sector: A Scientometric Review
Esha Jain
Artificial Intelligence is receiving growing attention in business and society. In banking, the principal applications of AI have been operational; nonetheless, AI is essentially applied in the back end of speculation and banking administration without client interaction. Introducing AI in commercial banking may change business cycles and interactions with clients, which may open up research directions and opportunities for conducting finance. The current study focuses on the challenges of applying artificial intelligence in the banking sector through a scientometric assessment and shows that these innovations drastically change the nature of work. It was also found that web application weaknesses are security openings that attackers may attempt to exploit, potentially causing genuine harm to a business, such as stealing sensitive information and compromising business assets. The study concludes that, since web applications are now broadly used, critical business settings such as web banking, the exchange of sensitive information, and internet shopping require strong defensive measures against a wide range of weaknesses.
{"title":"Challenges In Applying Artificial Intelligence In Banking Sector: A Scientometric Review","authors":"Esha Jain","doi":"10.1109/ASSIC55218.2022.10088355","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088355","url":null,"abstract":"Artificial Intelligence is getting expanding consideration in the corporates and humanity. In banking, the principal practices of AI were operative; nonetheless, AI is essentially applied in speculation backend and banking administrations without client interaction. Presenting AI in business banking might modify commercial cycles and collaborations with clients, which might set out research and open doors for conducting finance. The current study focuses on challenges in applying artificial intelligence in the banking sector by following a scientometric assessment and showed that innovations drastically change the idea of work. It was also found that web application weaknesses are security openings, which aggressors might endeavor to take advantage of, henceforth possibly making genuine harm to business, like taking touchy information and compromising business assets. It was concluded from the study that since web applications are currently broadly utilized, basic business conditions, for example, web banking, correspondence of touchy information, and internet shopping require powerful defensive measures against a wide scope of weaknesses.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131542402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Healthcare Industry: Embracing Potential of Big Data across Value Chain
Ravi Shankar Jha, P. R. Sahoo, Shaktimaya Mohapatra
In the fast-moving era of the Industrial Revolution (Industry 4.0), digitally fueled devices and technologies are paramount for driving innovation and creating value across a myriad of industries. A case in point is the healthcare industry. Healthcare insurance companies, hospitals, and other providers around the world are aggressively leveraging digital tools and technologies such as Big Data analytics, data lakes, Machine Learning, Artificial Intelligence, Natural Language Processing, smart sensors, and the Internet of Things (IoT) to improve the overall quality of care and overall process efficiency and effectiveness. The healthcare industry has been a center of discussion for embracing Big Data practice across the value chain for the past couple of decades due to the prodigious potential concealed in it. With such abundant information, there have been numerous challenges at each stage of handling big data, which can only be met by leveraging high-end computer science results for big data analytics, as mentioned above. A well-organized healthcare ecosystem, together with the analysis and magnification of big data, can influence the course of the game by opening new paths for offering unique yet innovative products and services for the modern, technology-propelled healthcare value chain. This paper emphasizes the impetus of Big Data across the healthcare value chain, which involves the amalgamation of technology, data, and business, yielding better decisions and improving the experience across all touch points.
{"title":"Healthcare Industry: Embracing Potential of Big Data across Value Chain","authors":"Ravi Shankar Jha, P. R. Sahoo, Shaktimaya Mohapatra","doi":"10.1109/ASSIC55218.2022.10088406","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088406","url":null,"abstract":"In the fast-moving era of the Industrial Revolution (Industry 4.0), digitally fueled devices and technologies are paramount for driving innovation and creating values across a myriad of industries. A case in point is - Healthcare Industry. Healthcare insurance companies, hospitals, and other providers around the world are belligerently leveraging digital tools and technologies such as Big Data analytics, Lake, Machine Learning, Artificial Intelligence, Internet of Things (IoT), Natural Language Processing, smart sensors, and the Internet of Things (IoT), for improving the overall quality of care and overall process efficiency and effectiveness. The Healthcare industry has been a center of discussion for embracing Big Data practice across the value chain for the past couple of decades due to the prodigious potential that is concealed in it. With so much abundant information, there have been numerous provocations related to the apiece stage of maneuvering big data that can only be amplified by leveraging high-end computer science results for big data analytics, as mentioned above. Well-organized healthcare ecosystem, analysis, and magnification of big data can influence the course of the game by opening new paths in terms of offering unique yet innovative products and services for the modern age technology-propelled healthcare value chain. This paper emphasizes the impetus of Big Data across the healthcare value chain, which involves the amalgamation of technology, data, and business, yielding better decisions and improving the experience across all touch points.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133960298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Technological Empowerment: Applications of Machine Learning in Oral Healthcare
Rupsa Rani Sahu, A. Raut, S. Samantaray
In the era of Artificial Intelligence, the old paradigm of oral healthcare has been augmented with automation. Combining the thinking abilities of the human mind with the cutting-edge technology of machine learning can help clinicians meet growing needs and ensure a cordial patient-doctor partnership. Advanced software and computing tools are being used to identify problem areas with shorter reporting times, alongside appropriate clinical decision support systems to track clinical outcomes. The perceptive ability of machine learning is directly proportional to the information obtained from patients, images, material applications, and treatments performed. Specialized algorithms are able to predict unexpected complications likely to be encountered and the under-diagnosis of rare pathologies that might otherwise be missed due to the limits of a clinician's expertise in that area. Today it is essential to embrace machine learning programmes to evolve age-old working practices toward greater performance and better outcomes by bridging the existing gap between diagnosis and treatment planning. The paper discusses and acknowledges the performance and futuristic applications of machine learning in various subareas of oral health and research.
{"title":"Technological Empowerment: Applications of Machine Learning in Oral Healthcare","authors":"Rupsa Rani Sahu, A. Raut, S. Samantaray","doi":"10.1109/ASSIC55218.2022.10088392","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088392","url":null,"abstract":"In the era of Artificial Intelligence the old paradigm of oral healthcare has got augmented with automation. Combining thinking abilities of human mind with the cutting edge technology of machine learning can aid the clinicians meet the growing needs and ensure cordial patient-doctor partnership. Advanced software and computing tools are being used to identify problem areas with lesser reporting time and appropriate clinical decision support system to track clinical outcomes. The perceptive abilities of machine learning is directly proportional to information obtained from patients, images, material applications and treatments done. The specialized algorithms are able to predict unexpected complications likely to be encountered and under-diagnosis of rare pathologies that otherwise might be missed due to limitations of clinicians expertise in that area. Today it is essential to embrace machine learning programmes to evolve age old working practices for greater performance and better outcomes by bridging the existing gap between diagnosis and treatment planning. The paper discusses and acknowledges the performance and futuristic applications of machine learning in various subareas of oral health and research.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"149 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113999005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoding of Imagined Speech Neural EEG Signals Using Deep Reinforcement Learning Technique
Nrushingh Charan Mahapatra, Prachet Bhuyan
The basic objective of the study is to establish a reinforcement learning technique for decoding imagined speech neural signals. The purpose of imagined speech neural computational studies is to give people who are unable to communicate, due to physical or neurological limitations of speech generation, alternative natural communication pathways. An advanced human-computer interface based on imagined speech decoding from measurable neural activity could enable natural interactions and significantly improve quality of life, especially for people with few communication alternatives. Recent advances in signal processing and reinforcement learning based on deep learning algorithms have enabled high-quality imagined speech decoding from noninvasively recorded neural activity. Most prior research focused on supervised classification of the collected signals, with no naturalistic feedback-based training of imagined speech models for brain-computer interfaces. In this study we employ deep reinforcement learning to create an imagined speech decoder artificial agent based on the deep Q-network (DQN), so that the agent can learn effective policies directly from multidimensional neural electroencephalography (EEG) signal inputs through end-to-end reinforcement learning. We show that the artificial agent, supplied only with neural signals and rewards as inputs, was able to decode the imagined speech neural signals efficiently with 81.6947% overall accuracy.
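The paper does not include code. As a minimal sketch of the kind of DQN setup described above (an agent that maps a multidimensional EEG feature vector to a discrete imagined-word action and learns only from a reward signal), the following PyTorch fragment shows a small Q-network with an epsilon-greedy policy and a one-step update; the feature size, number of actions, network width, and reward scheme are illustrative assumptions, not the paper's configuration.

```python
# Minimal DQN-style sketch: map an EEG feature vector to one of several
# imagined-word "actions" and learn from a scalar reward. Sizes, reward
# scheme, and the one-step target are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS = 64, 5          # assumed EEG feature size and vocabulary

q_net = nn.Sequential(nn.Linear(N_FEATURES, 128), nn.ReLU(),
                      nn.Linear(128, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    # Epsilon-greedy policy: explore randomly, otherwise take the best Q-value.
    if torch.rand(1).item() < epsilon:
        return torch.randint(N_ACTIONS, (1,)).item()
    with torch.no_grad():
        return q_net(state).argmax().item()

def dqn_update(state, action, reward):
    # One-step target (reward only), treating each EEG trial as a single decision.
    q_pred = q_net(state)[action]
    target = torch.tensor(float(reward))
    loss = nn.functional.mse_loss(q_pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One synthetic "trial": reward 1.0 if the decoded word matches the cue.
state = torch.randn(N_FEATURES)
action = select_action(state)
dqn_update(state, action, reward=1.0 if action == 2 else 0.0)
```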
{"title":"Decoding of Imagined Speech Neural EEG Signals Using Deep Reinforcement Learning Technique","authors":"Nrushingh Charan Mahapatra, Prachet Bhuyan","doi":"10.1109/ASSIC55218.2022.10088387","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088387","url":null,"abstract":"The basic objective of the study is to establish the reinforcement learning technique in the decoding of imagined speech neural signals. The purpose of imagined speech neural computational studies is to give people who are unable to communicate due to physical or neurological limitations of speech generation alternative natural communication pathways. The advanced human-computer interface based on imagined speech decoding based on measurable neural activity could enable natural interactions and significantly improve quality of life, especially for people with few communication alternatives. Recent advances in signal processing and reinforcement learning based on deep learning algorithms have enabled high-quality imagined speech decoding from noninvasively recorded neural activity. Most of the prior research focused on the supervised classification of collected signals, with no naturalistic feedback-based training of imagined speech models for brain-computer interfaces. We employ deep reinforcement learning in this study to create an imagined speech decoder artificial agent based on the deep Q-network (DQN), so that the artificial agent could indeed learn effective policies directly from multidimensional neural electroencephalography (EEG) signal inputs adopting end-to-end reinforcement learning. We show that the artificial agent, supplied only with neural signals and rewards as inputs, was able to decode the imagined speech neural signals efficiently with 81.6947% overall accuracy.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121824391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance Evaluation of LSTM Optimizers for Long-Term Electricity Consumption Prediction
Kwabena Appiah Ampofo, E. Owusu, J. K. Appati
Electricity consumption is an important economic index, and it plays a significant role in drawing up an energy development policy for every country. Thus, having reliable information regarding the prediction of electricity consumption in a country is imperative for policy- and decision-makers to plan and schedule the operation of power systems. Studies have shown that the Long Short-Term Memory (LSTM) neural network model is capable of learning long-term temporal dependencies and the nonlinear characteristics of a time series, and it can be a better alternative to traditional neural networks and statistical methods for predicting electricity consumption. The LSTM neural network model has many hyperparameters, and one of the important ones is the optimization method. The optimization method plays a significant role in the performance of an LSTM neural network model, but selecting it is not a trivial task for end-users, as there is no standard approach for choosing an appropriate method for a particular task. In this study, the LSTM neural network model was used to predict long-term electricity consumption using socioeconomic data as predictors, in order to analyze six popular optimization methods implemented in the Keras machine learning library. The motivation is to determine which optimization method is most suitable for electricity consumption prediction with an LSTM neural network model. The results of the study show that the Stochastic Gradient Descent (SGD) optimizer is the best-performing optimization method.
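Since the optimizers are taken from Keras, the comparison can be sketched as below. The specific six optimizers, layer sizes, window length, and synthetic data are assumptions for illustration; the abstract only confirms that six popular Keras optimizers were compared and that SGD performed best.

```python
# Sketch of comparing Keras optimizers on a small LSTM regressor for
# electricity consumption (synthetic data; layer sizes, window length, and the
# chosen optimizer set are illustrative assumptions, not the paper's setup).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_model(optimizer):
    model = keras.Sequential([
        layers.Input(shape=(10, 4)),   # 10 past steps x 4 socioeconomic predictors
        layers.LSTM(32),
        layers.Dense(1),               # next-step consumption
    ])
    model.compile(optimizer=optimizer, loss="mse")
    return model

# Synthetic sliding-window dataset standing in for the real socioeconomic series.
X = np.random.rand(200, 10, 4).astype("float32")
y = np.random.rand(200, 1).astype("float32")

results = {}
for name in ["sgd", "rmsprop", "adagrad", "adadelta", "adam", "nadam"]:
    model = make_model(name)
    hist = model.fit(X, y, epochs=5, batch_size=16,
                     validation_split=0.2, verbose=0)
    results[name] = hist.history["val_loss"][-1]

print(sorted(results.items(), key=lambda kv: kv[1]))  # lowest validation MSE first
```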
{"title":"Performance Evaluation of LSTM Optimizers for Long-Term Electricity Consumption Prediction","authors":"Kwabena Appiah Ampofo, E. Owusu, J. K. Appati","doi":"10.1109/ASSIC55218.2022.10088353","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088353","url":null,"abstract":"Electricity consumption is an important economic index, and it plays a significant role in drawing up an energy development policy for every country. Thus, having reliable information regarding the prediction of electricity consumption in a country is imperative to policy and decision-makers to plan and schedule the operation of power systems. Studies have shown that the Long Short-Term Memory (LSTM) neural network model is capable of learning long term temporary dependencies and nonlinear characteristic of a time series phenomenon and it can be a better alternative to the traditional neural networks and statistical methods for predicting electricity consumption. The LSTM neural network model has many hyperparameters, and one of the important hyperparameters is the optimization method. The optimization method plays a significant role in the performance of an LSTM neural network model, but selecting it is not a trivial task to end-users as there is no particular approach for selecting an appropriate method for a particular task. In this study, the LSTM neural network model was used to predict long term electricity consumption using socioeconomic data as predictors to analyze six popular optimization methods that have been implemented in the Keras machine learning library. The motivation is to determine which optimization method will be most suitable for electricity consumption prediction using LSTM neural network model. The results of the study show that the Stochastic Gradient Descent (SGD) optimizer is the most outstanding optimization method.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121921700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Microstrip Patch Antenna Array With Gain Enhancement for WLAN Applications
Kanuri Naveen, Kiran Dasari, G. Swapnasri, R. Swetha, S. Nishitha, B. Anusha
A high-speed 5G network requires higher gain, and a microstrip patch antenna array is a good solution for high-speed data network systems. The proposed antenna array has a 2x4 structure with dimensions of 28.3 mm x 30 mm and was simulated at 5 GHz; the results show a gain of 17.6 dB, an S11 of -24.7 dB, a radiation efficiency of 67%, and a directivity of 17.4801. The proposed design has applications in vehicle-to-vehicle communication and other vehicular communication, the Internet of Things, and modern communication systems. The proposed 2x4 antenna array design improves on the literature mentioned above, achieving an enhanced gain of 17.6 dB.
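The abstract reports simulated array results only. For context, the starting dimensions of a single rectangular patch element are commonly obtained from the standard transmission-line design equations, sketched below in Python; the substrate values (FR-4 with relative permittivity 4.4 and height 1.6 mm) are assumptions for illustration and are not stated in the abstract.

```python
# Standard rectangular-patch starting dimensions from the transmission-line
# model (textbook equations); the substrate values below are illustrative
# assumptions, not taken from the paper.
import math

c = 3e8            # speed of light, m/s
f0 = 5e9           # design frequency from the abstract, Hz
eps_r = 4.4        # assumed FR-4 relative permittivity
h = 1.6e-3         # assumed substrate height, m

# Patch width for efficient radiation.
W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))

# Effective permittivity accounting for fringing fields.
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5

# Length extension due to fringing, then the physical patch length.
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL

print(f"W = {W*1e3:.2f} mm, L = {L*1e3:.2f} mm (starting values before EM tuning)")
```

These are only initial values; the final 2x4 array dimensions reported in the abstract come from full-wave simulation and tuning.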
{"title":"Microstrip Patch Antenna Array With Gain Enhancement for WLAN Applications","authors":"Kanuri Naveen, Kiran Dasari, G. Swapnasri, R. Swetha, S. Nishitha, B. Anusha","doi":"10.1109/ASSIC55218.2022.10088312","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088312","url":null,"abstract":"The high speed 5G network requires the more gain, the micro strip patch antenna array is the better solution for the high speed data network system. The novel proposed antenna array has the 2x4 structure with the dimension of (28.3 mm x 30 mm) at 5 GHz simulated and the results observed as the gain of 17.6dB S11 reported that as - 24.7,radiation efficiency of 67%.directivity of 17.4801. This novel proposed design has the application of vehicle to vehicle communication and vehicle to other communication and internet of things and modern communication systems. This novel proposed 2x4 antenna array design overcome the above mentioned literature and the gain enhancement is achieved as 17.6dB","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127371519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparative Analysis of Medical Images using Transfer Learning Based Deep Learning Models
Debasis Prasad Sahoo, M. Rout, P. Mallick, Sasmita Rani Samanta
Deep learning is becoming more popular in practically every industry, but especially in medical imaging, where it enables better diagnosis of various deadly diseases. As part of machine learning and artificial intelligence, deep learning is used to tackle problems based on medical image processing. The most commonly used algorithm, the Convolutional Neural Network (CNN), holds a strong position in image recognition tasks. In this paper, we compare the performance of a basic CNN and three state-of-the-art transfer-learning models, namely VGG-16, ResNet50, and GoogleNet (Inception-v3), by extracting features from pre-trained CNN architectures. Small datasets of three fatal diseases, namely brain tumor, breast cancer, and skin cancer, are used. The aim of this study is to discover the finest accuracy trade-off among these models.
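A typical way to use the named backbones as frozen feature extractors, as the comparison above describes, is sketched below with the Keras applications API; the input size, classification head, and training settings are assumptions, not the paper's exact configuration.

```python
# Sketch of transfer-learning feature extraction with pre-trained backbones
# (illustrative; input size, head, and training settings are assumptions).
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

def build_classifier(backbone_cls, n_classes=2, input_shape=(224, 224, 3)):
    backbone = backbone_cls(weights="imagenet", include_top=False,
                            input_shape=input_shape, pooling="avg")
    backbone.trainable = False                # freeze: use it as a feature extractor
    inputs = keras.Input(shape=input_shape)
    x = backbone(inputs, training=False)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

models = {name: build_classifier(cls)
          for name, cls in [("VGG-16", VGG16), ("ResNet50", ResNet50),
                            ("Inception-v3", InceptionV3)]}
# Each model would then be fit on the small brain-tumor, breast-cancer, or
# skin-cancer dataset, e.g. model.fit(train_ds, validation_data=val_ds, epochs=10).
```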
{"title":"Comparative Analysis of Medical Images using Transfer Learning Based Deep Learning Models","authors":"Debasis Prasad Sahoo, M. Rout, P. Mallick, Sasmita Rani Samanta","doi":"10.1109/ASSIC55218.2022.10088373","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088373","url":null,"abstract":"Deep learning is becoming more popular in practically every industry, but especially in medical imaging for better diagnostics of various deadly diseases. Deep learning is used to explain difficulties based on medical image processing as part of machine learning artificial intelligence. Most commonly used machine learning algorithm named Convolutional Neural Network (CNN) grasps a resilient position for image recognition tasks. In this paper, we compared the performance of basic CNN and three state of the art transfer-learning models namely, VGG-16, ResNet50 and GoogleNet (Inception-v3) by extracting features from pre-trained CNN architecture. Small datasets of three fatal diseases, which are brain tumor, breast cancer and skin cancer are used. The determination of this study is to discover the finest trade-off between accuracy.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127442487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Blockchain assisted Supply Chain Management System for Secure Data Management
M. Kandpal, Chandramouli Das, C. Misra, Abhaya Kumar Sahoo, Jagannath Singh, R. K. Barik
Blockchain offers decentralized and immutable data storage. In recent years, logistics and supply chain management have slowly begun to realize blockchain's impact, and leading-edge companies are trying to fight supply chain network complexity with blockchain. Blockchain helps enable steady and cost-efficient delivery of products, improves the traceability of products, and supports coordination between consumers, partners, and financial aid. With this in mind, the main objective of the proposed work is to merge the decentralized behavior of blockchain with supply chain management to make it more protected, secure, and transparent. The proposed framework is implemented using Ganache, Metamask, MySQL, PHP, NodeJS, Solidity, and JavaScript. Adding blockchain also helps minimize the interference of man-in-the-middle attacks in the processes, and the technology helps discard forged products flowing in the marketplace. Hence, it maintains integrity and authentication across all stages between producer and consumer.
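The paper's implementation uses Solidity contracts on Ganache with a PHP/NodeJS front end; purely as a conceptual illustration of why hash-linked records give the tamper evidence and traceability described above, the following Python sketch chains supply-chain events by hash (it is not the authors' implementation).

```python
# Conceptual sketch of tamper-evident, hash-linked supply-chain records
# (illustrative only; the paper's actual implementation uses Solidity contracts
# deployed on Ganache, not this Python ledger).
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    # Deterministic hash over the record contents, including the previous hash.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class SupplyChainLedger:
    def __init__(self):
        self.chain = []

    def add_event(self, product_id: str, handler: str, status: str) -> dict:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"product_id": product_id, "handler": handler, "status": status,
                  "timestamp": time.time(), "prev_hash": prev}
        record["hash"] = record_hash(record)
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        # Any edit to an earlier record breaks every later prev_hash link.
        for i, rec in enumerate(self.chain):
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["hash"] != record_hash(body):
                return False
            if i and rec["prev_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = SupplyChainLedger()
ledger.add_event("SKU-42", "producer", "manufactured")
ledger.add_event("SKU-42", "logistics", "shipped")
print(ledger.verify())   # True; tampering with an earlier record would print False
```

On a real blockchain the same linkage and verification are enforced by the network's consensus rather than by a single trusted program, which is what removes the need to trust any one party in the chain.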
{"title":"Blockchain assisted Supply Chain Management System for Secure Data Management","authors":"M. Kandpal, Chandramouli Das, C. Misra, Abhaya Kumar Sahoo, Jagannath Singh, R. K. Barik","doi":"10.1109/ASSIC55218.2022.10088404","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088404","url":null,"abstract":"Blockchain offers decentralized and immutable data storage. In recent years, logistics and supply chain management are slowly realizing Blockchain's impact. Leading-edge companies are trying to fight supply chain network complexity with block chain. Blockchain helps in enabling steady and cost-efficient delivery of products and improving traceability of products, coordination between the consumer's, partners, and financial aid. By considering this, the main objective of the proposed work is to merge decentralized behavior of blockchain with supply chain management to make it more protective, secure and transparent. For the implementation of the proposed framework, it uses Ganache, Metamask, MySQL, PHP, NodeJS, Solidity and JavaScript. Adding blockchain also helps in minimizing the interference of middle man attack in the processes. This technology helps in discarding forged products flowing in the marketplace. Hence, it overall maintains the integrity and authentication among all, the stages in between producer and consumer.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129322758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human Pose Estimation Using GNN
Tridiv Swain, Suravi Sinha, Awantika Singh, Khushali Verma, S. Das
Human Pose Estimation is a method of capturing a collection of coordinates for each joint (arm, head, torso, etc.) that may be used to characterize a person's pose. The initial goal is to create a skeleton-like depiction of the human body, which is then processed for task-specific applications. The ability to identify and estimate the position of a human body is valuable in a wide range of applications and conditions, such as action recognition, animation, and gaming, and it is a crucial first step toward understanding people through images and media. In this study, graph neural networks were utilised to predict human poses by modelling the human skeleton as an unordered list of joints, greatly enhancing 3D human pose estimation. This paper describes the approach as an efficient way to determine the 3D posture of multiple persons in a picture. Our model gives a validation accuracy of 92%.
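The paper does not present its architecture in code. As a minimal sketch of applying a graph neural network over body joints, the following PyTorch fragment runs two graph-convolution layers over a toy skeleton adjacency to lift 2D keypoints toward 3D coordinates; the joint count, adjacency, and layer sizes are illustrative assumptions, not the authors' model.

```python
# Minimal graph-convolution sketch over a skeleton graph of body joints
# (joint count, adjacency, and layer sizes are illustrative assumptions,
# not the paper's architecture).
import torch
import torch.nn as nn

N_JOINTS = 17                              # e.g. a COCO-style keypoint skeleton

class SkeletonGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        # Row-normalized adjacency with self-loops: A_hat = D^-1 (A + I).
        a = adjacency + torch.eye(adjacency.size(0))
        self.register_buffer("a_hat", a / a.sum(dim=1, keepdim=True))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                  # x: (batch, joints, features)
        # Each joint aggregates its neighbours' features, then is transformed.
        return torch.relu(self.linear(self.a_hat @ x))

# Toy chain adjacency standing in for the real skeleton edges.
adj = torch.zeros(N_JOINTS, N_JOINTS)
for i in range(N_JOINTS - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

# Two graph layers followed by a plain linear head that regresses 3D coordinates.
model = nn.Sequential(SkeletonGCNLayer(2, 64, adj),
                      SkeletonGCNLayer(64, 64, adj),
                      nn.Linear(64, 3))

pose_2d = torch.randn(8, N_JOINTS, 2)      # a batch of detected 2D keypoints
pose_3d = model(pose_2d)                   # (8, 17, 3) predicted 3D joint positions
print(pose_3d.shape)
```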
{"title":"Human Pose Estimation Using GNN","authors":"Tridiv Swain, Suravi Sinha, Awantika Singh, Khushali Verma, S. Das","doi":"10.1109/ASSIC55218.2022.10088410","DOIUrl":"https://doi.org/10.1109/ASSIC55218.2022.10088410","url":null,"abstract":"Human Pose Estimation is a method of capturing a collection of coordinates for each joint (arm, head, torso, etc.) that may be used to characterize a person's pose. The initial goal is to create a skeleton-like depiction of a human body, which will then be processed for task-specific applications. The ability to identify and estimate the position of a human body is valuable in a wide range of applications and conditions like action recognition, animation, gaming, and so on. It is a crucial first step toward understanding people through images and media. In this study, graph neural networks were utilised to predict human poses by modelling the human skeleton as an unordered list, greatly enhancing 3D human pose estimation. This paper describes the approach as an efficient way to determine the 3D posture of many persons in a picture. Our model gives a validation accuracy of 92%.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125369295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0