
Latest publications from the 2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON)

Secure Chatroom Application using Advanced Encryption Standard Algorithm
Rohitha Pasumarty, R. K N
This paper presents a group chat application for a variable number of users who are connected to one another by virtue of being members of the same network. A central server accepts each message from a user and forwards it to all other users currently connected to the server and participating in the group chat. While the server is running, new users can join at any time by supplying the server's local IP address and port, provided the number of active members does not exceed the server's declared capacity. If a large number of users is expected, the server can be configured to listen for more incoming connection requests; this figure should be chosen carefully, since changing the capacity requires a server restart before the new value takes effect. The distinguishing feature of this group chat server is that it is secure: messages are encrypted with the Advanced Encryption Standard (AES). When the server starts, a random cipher key is generated for encrypting and decrypting messages; this secret key remains confidential within the boundaries of the system. A message originating from a user is encrypted before being sent to the server, which receives the ciphertext and forwards it to the other users currently connected.
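The relay-and-capacity behaviour described above can be sketched as a minimal in-memory model. All names here (`ChatServer`, `join`, `send`, `read`) are illustrative, and the cipher is a byte-wise XOR placeholder standing in for AES purely to show the encrypt-before-send flow; it is not secure and is not the paper's implementation.

```python
import secrets

class ChatServer:
    """Toy model of the relay server described above (names hypothetical).

    The cipher is a byte-wise XOR with a random key: a placeholder for
    AES, used only to show that ciphertext, not plaintext, is relayed.
    """

    def __init__(self, capacity=3):
        self.capacity = capacity            # fixed until the server restarts
        self.key = secrets.token_bytes(16)  # random key generated at start-up
        self.clients = {}                   # name -> inbox of ciphertexts

    def _xor(self, data: bytes) -> bytes:
        # Placeholder for AES encrypt/decrypt (XOR is its own inverse).
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))

    def join(self, name: str) -> bool:
        # New members are accepted only while capacity is not exceeded.
        if len(self.clients) >= self.capacity:
            return False
        self.clients[name] = []
        return True

    def send(self, sender: str, plaintext: str) -> None:
        # The message is encrypted before it reaches the server; the
        # server forwards the ciphertext to every other connected member.
        ciphertext = self._xor(plaintext.encode())
        for name, inbox in self.clients.items():
            if name != sender:
                inbox.append(ciphertext)

    def read(self, name: str) -> list[str]:
        # Each recipient decrypts with the shared secret key.
        return [self._xor(c).decode() for c in self.clients[name]]
```

A capacity-2 server, for instance, rejects a third `join` until it is restarted with a larger value, mirroring the restart requirement noted above.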
DOI: 10.1109/CENTCON52345.2021.9688060 (published 2021-11-19)
Citations: 13
Intelligent Automated Negotiation System in Business to Consumer E-Commerce
Dhanishtha Patil, Shubham Gaud
E-commerce is one of the world's fastest-paced industries, yet it largely lacks customer-retailer interaction. Because bargaining is a deeply ingrained buying habit, lower-priced products remain popular, and the absence of any bargaining mechanism in this sector can cost sales for some products. With advances in machine learning, automated, intelligent-agent negotiation systems have become a prominent tool in e-commerce. This paper presents a negotiation technique for reaching a mutually acceptable agreement between customers and a negotiation system representing the supplier. The system is built on a Minimum Profit Algorithm designed around seller requirements and trained on the UCI Machine Learning Repository's online-retail dataset, using an XGBoost regressor for intelligence. The system outperforms the traditional way of negotiation, and the model achieved an accuracy of 91.53 percent.
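The core idea of a seller-side negotiation that never crosses a minimum-profit floor can be sketched as a simple concession loop. This is an illustrative reconstruction, not the paper's Minimum Profit Algorithm: the function name, the fixed `concession` rate, and the offer sequence are all assumptions, and the ML-predicted component is omitted.

```python
def negotiate(cost, min_profit, list_price, buyer_offers, concession=0.05):
    """Illustrative seller-side concession loop (names hypothetical).

    The seller counters downward from the list price but never below
    cost + min_profit, the floor a minimum-profit rule protects.
    Returns the agreed price, or None if the negotiation fails.
    """
    floor = cost + min_profit
    ask = list_price
    for offer in buyer_offers:
        if offer >= ask:                          # buyer meets the current ask
            return ask
        ask = max(floor, ask * (1 - concession))  # concede, but respect floor
        if offer >= ask:                          # buyer's offer now acceptable
            return offer
    return None
```

For example, with cost 80 and minimum profit 10, no sequence of offers below 90 can ever close the deal, however long the haggling runs.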
DOI: 10.1109/CENTCON52345.2021.9688037 (published 2021-11-19)
Citations: 0
Efficient usage of spectrum by using joint optimization channel allocation method
M. Mounisha, Chirunomula Sai Kowshik, M. Reethika, A. Dhanush
In traditional cognitive radio, a secondary user may access the spectrum only when the primary user is absent, and must vacate the idle spectrum as soon as the primary user's presence is detected; the traditional scheme therefore reduces usable bandwidth. To overcome this, non-orthogonal multiple access (NOMA) is used to increase spectrum efficiency in 5G communications: it allows the secondary user to access the spectrum whether or not the primary user is occupying the channel. Primary-user-first and secondary-user-first decoding techniques are introduced to separate the non-orthogonal signals. Through these decoding techniques the secondary user's throughput can be achieved, while the channel transmit power must be kept within a limit to raise the primary user's throughput; interference from the primary user, however, may reduce the secondary user's efficiency. Oriented toward primary-user-first and secondary-user-first decoding, we formulate two optimization problems to improve the efficiency of both the primary and the secondary user by jointly optimizing spectrum resources. The formulation accounts for the sub-channel transmit power used and for the number of sub-channels employed. A joint optimization algorithm is introduced to solve these problems: it receives the signals, computes the time needed for spectrum sensing, and determines whether the primary user is occupying the channel, so that data sent in the primary user's absence can be decoded. Simulation results demonstrate the superior transmission efficiency of the NOMA-based cognitive radio.
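The two-user power-domain NOMA idea behind this abstract can be illustrated with a textbook Shannon-rate calculation under successive interference cancellation (SIC). This is a generic sketch, not the paper's formulation: the function name, the unit-noise normalisation, and the channel-gain values are assumptions.

```python
import math

def noma_rates(p_total, alpha, g_primary, g_secondary, noise=1.0):
    """Two-user downlink NOMA rates with SIC, in bits/s/Hz (illustrative).

    alpha is the fraction of total power allocated to the primary
    (weaker) user. The secondary (stronger) user cancels the primary's
    signal via SIC, so it sees only noise; the primary decodes while
    treating the secondary's signal as interference.
    """
    p1, p2 = alpha * p_total, (1 - alpha) * p_total
    r_primary = math.log2(1 + p1 * g_primary / (p2 * g_primary + noise))
    r_secondary = math.log2(1 + p2 * g_secondary / noise)
    return r_primary, r_secondary
```

Sweeping `alpha` between 0 and 1 shows the trade-off the joint optimization must balance: power pushed toward the primary user raises its rate at the direct expense of the secondary user's.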
DOI: 10.1109/CENTCON52345.2021.9688089 (published 2021-11-19)
Citations: 0
Online Text-Based Humor Detection
T. Trueman, Gopi K, Ashok Kumar J
Artificial intelligence is replacing humans and their jobs across many fields of today's technological environment. Researchers are trying to create virtual assistants and robots that mimic human traits as closely as possible, and among the most impressive of those traits, a sense of humor is a charming one: a virtual assistant or robot with a good sense of humor would be a better substitute for an actual human. Natural language processing plays a vital role in capturing humor from online text. In this paper, we detect humorous text from online media with the help of a generalized autoregressive model. Specifically, we fine-tuned the XLNet base model to outperform other models on the same humor-detection task using a 200k formal-texts dataset. The proposed model applies context-dependent features to capture the sense of humor. Our result analysis shows that the proposed approach achieved an accuracy of 98.6%, higher than existing models.
DOI: 10.1109/CENTCON52345.2021.9687930 (published 2021-11-19)
Citations: 0
Design And Analysis Of Gate-All-Around (GAA) Triple Material Gate Charge Plasma Nanowire FET
Leo Raj Solay, S. Anand, S. Amin, Pradeep Kumar
In this paper, a Gate-All-Around (GAA) Charge Plasma (CP) Nanowire Field Effect Transistor (NW FET) is designed and analysed using the Triple Material Gate (TMG) technique, yielding a Gate-All-Around Triple Material Gate Charge Plasma Nanowire FET (GAA TMG CP NW FET). The proposed GAA TMG CP NW FET is compared with GAA Single Material Gate (GAA SMG CP NW FET) and GAA Double Material Gate (GAA DMG CP NW FET) structures. Across the three structures (SMG, DMG and TMG), the proposed GAA TMG CP NW FET delivers promising results in terms of ON-state current (ION), OFF-state current (IOFF) and their ratio (ION/IOFF). Analog and RF analyses of the proposed structure, compared against SMG and DMG, show improved results for drain current versus gate voltage (ID-VGS), drain current versus drain voltage (ID-VDS), transconductance (gm), output conductance (gd), total gate capacitance (CGG), and related metrics. The proposed device is also compared on structural results such as band energy, potential, and electric field. A fair comparison across the SMG, DMG and TMG structures demonstrates its suitability for nanoscale device structures.
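The ION/IOFF figure of merit compared across the three gate variants is a plain ratio, often quoted in decades (orders of magnitude). The sketch below uses made-up currents purely to show the shape of the comparison; the paper's measured values are not reproduced here.

```python
import math

def ion_ioff_ratio(i_on, i_off):
    """ON/OFF current ratio and its order of magnitude in decades,
    the switching figure of merit discussed above."""
    ratio = i_on / i_off
    return ratio, math.log10(ratio)

# Hypothetical currents (amperes), for illustration only.
devices = {"SMG": (1e-5, 1e-11), "DMG": (2e-5, 5e-12), "TMG": (3e-5, 1e-12)}
for name, (i_on, i_off) in devices.items():
    ratio, decades = ion_ioff_ratio(i_on, i_off)
    print(f"{name}: ION/IOFF = {ratio:.1e} ({decades:.1f} decades)")
```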
DOI: 10.1109/CENTCON52345.2021.9688087 (published 2021-11-19)
Citations: 1
A Comparative Analysis of Progressive Loss Functions with Multi Layered Resnet Model
Sravanthi Kantamaneni, Charles, T. Babu
In this paper, the proposed ResNet model is used for denoising RB noise. ResNet is one of the advanced deep learning methods for analysing and improving various 1D and 2D signals; in plain networks, accuracy decreases due to vanishing gradients. The Mozilla Common Voice speech dataset is used: 48 kHz recordings of subjects speaking short sentences, all fixed to the same length and sampling frequency. Training uses the Adam optimizer, with the learning rate scheduled with a drop factor of 0.9 and a period of one epoch. About 50 noise samples, acquired under various environmental conditions, are available in the dataset, and a separate dataset is prepared for training and testing (T&T) of the signal. When the T&T dataset is small, overfitting arises from merely trying to fit every data point in the dataset, so the proposed model is used to manage the dataset more efficiently. The overfitting issue can be seen in the RMSE and validation-accuracy values: the learning curve begins to deteriorate in loss even as identification accuracy still rises. Conversely, choosing too simple a model for denoising leads to the opposite problem, underfitting, where the model does not learn enough about the dataset. In each case, various types of noise are added to the voice signal. Improvements in denoising, RMSE and validation precision achieved with this model are given in the following sections.
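The piecewise learning-rate schedule described above (multiply by a drop factor of 0.9 every epoch) can be sketched in a few lines. The function name and the initial rate are assumptions for illustration.

```python
def stepped_lr(initial_lr, epoch, drop_factor=0.9, drop_period=1):
    """Piecewise (step-decay) schedule: multiply the learning rate by
    drop_factor every drop_period epochs, matching the 0.9-drop,
    period-one schedule described above."""
    return initial_lr * drop_factor ** (epoch // drop_period)
```

With a starting rate of 0.01, the rate after three epochs is 0.01 * 0.9**3, and a longer `drop_period` simply stretches the same staircase.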
DOI: 10.1109/CENTCON52345.2021.9687884 (published 2021-11-19)
Citations: 0
Image Processing Based Vehicle Detection and Tracking: A Comparative Study
A. V, S. Kulkarni
With increasing economic development and urban populations, the number of vehicles on the road is rising, and with it the traffic; hence the need to reduce road congestion caused by vehicular traffic. Among the many vehicle detection and tracking techniques, this paper examines image-processing-based methods simulated in MATLAB Simulink, including background subtraction with Gaussian and Kalman filters, blob analysis, Horn-Schunck, particle filters, and the Monte Carlo method. Background subtraction, followed by morphological operations, is the most familiar approach today. Performance depends on parameters such as accuracy, processing time, segmentation, and complexity. Traffic parameters such as vehicle speed, count, and tracking are calculated using threshold values in some of the detection methods. The proposed work runs in real time on challenging examples, and the reported results highlight the robustness of the proposed study.
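The core of background subtraction is a per-pixel thresholded difference against a background model. The sketch below operates on plain 2-D lists of grey levels as a minimal stand-in for the MATLAB Simulink blocks discussed above; the function name and threshold value are illustrative.

```python
def background_subtract(frame, background, threshold=20):
    """Per-pixel foreground mask by background subtraction: a pixel is
    marked foreground (1) when it differs from the background model by
    more than the threshold, otherwise background (0)."""
    return [
        [1 if abs(p - b) > threshold else 0 for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]
```

In a full pipeline, the resulting binary mask would then feed morphological operations and blob analysis to produce vehicle counts and tracks.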
DOI: 10.1109/CENTCON52345.2021.9687988 (published 2021-11-19)
Citations: 1
A Machine Learning Model to Identify Insulin Resistance in Humans
Madam Chakradar, Alok Aggarwal
Type 2 diabetes mellitus (T2DM) is a major challenge: it is predicted to affect 693 million people by 2045, and there is currently no simple, non-invasive method to measure and quantify insulin resistance. With the release of non-invasive devices that track glucose levels, it may become possible to identify insulin resistance without invasive medical tests. In this work, insulin resistance is recognized using non-invasive techniques. Eighteen parameters (age, gender, waist size, height, and so on, together with aggregates of those parameters) are used to identify a person with a high likelihood of insulin resistance. Each output of a feature-selection technique is modeled with a range of algorithms, including logistic regression, CARTs, SVM, LDA, and KNN, on the CALERIE study dataset, and the findings are verified with stratified cross-validation. The accuracy of the different variants for identifying insulin resistance compares favourably with the 66% of Bernardini et al. and Stawiski et al., 61% of Zheng et al., and 83% of Farran et al. A further advantage of the proposed approach is that an individual can predict insulin resistance daily, which in turn allows physicians to monitor diabetes risk more accurately, something rarely feasible with medical procedures.
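The stratified cross-validation named above assigns each class's samples evenly across folds so every fold preserves the dataset's class proportions. A stdlib-only sketch of the fold assignment (the function name is illustrative; real pipelines would shuffle first and typically use a library implementation):

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Stratified k-fold assignment: deal each class's samples
    round-robin across k folds, so every fold keeps roughly the
    full dataset's class balance. Returns a fold index per sample."""
    folds = [0] * len(labels)
    counters = defaultdict(int)
    for i, y in enumerate(labels):
        folds[i] = counters[y] % k   # next fold for this class
        counters[y] += 1
    return folds
```

For a medical dataset with a minority positive class, this prevents any validation fold from ending up with no insulin-resistant cases at all.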
DOI: 10.1109/CENTCON52345.2021.9688098 (published 2021-11-19)
Citations: 0
Task Allocation in Distributed Agile Software Development using Machine Learning Approach
P. William, Pardeep Kumar, Gurpreet Singh Chhabra, K. Vengatesan
In the 21st century, agile software development (ASD) has emerged as one of the prominent software development techniques, and every major global company has moved to ASD as a means of reducing costs. In pursuit of large markets and cheap labour, the industry has shifted to a Distributed Agile Software Development (DASD) environment. As a consequence of improper job allocation, clients may refuse to accept the project, team members may be demoralized, and the project may collapse. Numerous scholars have spent the past decade researching techniques for work allocation in distributed agile settings, with promising results; ontologies and Bayesian networks are among the techniques employed. These are largely brute-force techniques that may be useful only in certain situations, and they have not been applied to job allocation in distributed agile software development. The purpose of this article is to design and implement a machine-learning-based method for job allocation in distributed agile software development. The findings indicate that the suggested model is more accurate in terms of task assignment.
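As a hedged illustration of what learning-based task allocation can look like (the paper does not disclose its exact model; the skill axes, member names, and nearest-neighbour matching below are all hypothetical), a task can be matched to the team member whose skill profile lies closest to the task's requirements:

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length skill vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def allocate(task, members):
    """Return the member whose skill vector best matches the task's
    requirement vector (nearest-neighbour assignment)."""
    return min(members, key=lambda name: distance(task, members[name]))

# Hypothetical skill axes: (coding, testing, devops) on a 0-1 scale
members = {
    "alice": (0.9, 0.3, 0.2),
    "bob":   (0.2, 0.9, 0.1),
    "carol": (0.3, 0.2, 0.9),
}
print(allocate((0.8, 0.4, 0.1), members))   # alice (coding-heavy task)
print(allocate((0.1, 0.2, 0.95), members))  # carol (devops-heavy task)
```

A trained classifier would replace the hand-set vectors with profiles learned from historical assignment outcomes; the matching step itself stays the same.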
P. William, Pardeep Kumar, Gurpreet Singh Chhabra, K. Vengatesan, "Task Allocation in Distributed Agile Software Development using Machine Learning Approach," 2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON), 19 Nov. 2021. doi: 10.1109/CENTCON52345.2021.9688114
Citations: 18
A Novel Approach to Onboarding Secure Cloud-Native Acquisitions into Enterprise Solutions
S. Vadlamudi, Jenifer Sam
Large enterprises pursue countless strategies for creating value with acquisitions as they enter a transformational merger. Cloud-native applications, being loosely coupled and designed to deliver user requirements at the pace a business needs, are the natural choice for enterprise acquisitions. Culture change management and process compliance are key areas where acquisitions find it difficult to adapt to enterprise standards. To make this journey smooth, it is important to have a guided-journey methodology with simplified engagement between both parties, and one of the key dimensions where such a collaboration-centric approach is required is security. In this paper, we examine the current challenges in onboarding cloud-native acquisitions to bring their security compliance posture to par with current enterprise standards. We explore areas such as secure software development lifecycle management and the tools and processes followed, and provide recommendations for improving the overall security stance of the acquired product.
S. Vadlamudi, Jenifer Sam, "A Novel Approach to Onboarding Secure Cloud-Native Acquisitions into Enterprise Solutions," 2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON), 19 Nov. 2021. doi: 10.1109/CENTCON52345.2021.9688193
Citations: 0
Journal
2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON)