
2022 OITS International Conference on Information Technology (OCIT): Latest Publications

Effort Estimation of Software products by using UML Sequence models with Regression Analysis
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00028
P. Sahoo, D. K. Behera, J. Mohanty, C. S. K. Dash
Software product development is an indispensable part of the society we live in. To produce quality products economically, efficiently, and within the targeted completion date, the estimate of the development effort needs to be fairly precise. This work presents a viable estimation of the development effort for present-day web applications. The approach collects facts contained in the Unified Modeling Language Sequence models generated for object-based systems. These facts, combined with customized regression analysis programs written specifically for this work, were used for the required estimation. Specifically, Decision Tree, Support Vector, Extreme Gradient Boosting, and Bayesian Ridge Regression methods were used to estimate the effort. The outcomes obtained by these methods establish their precision. From the experiments conducted, it is evident that Bayesian Ridge Regression provides the best accuracy compared with the other machine learning models.
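A minimal sketch of how such a comparison might look, assuming the UML sequence-model facts (e.g., counts of messages, lifelines, and branching fragments per use case) have already been extracted into a numeric feature matrix. The synthetic data and the use of scikit-learn's GradientBoostingRegressor as a stand-in for Extreme Gradient Boosting are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative sketch only: compares the four regressor families named in the
# abstract on synthetic "UML sequence model" features (real features assumed
# to be message/lifeline/fragment counts extracted per use case).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for XGBoost
from sklearn.linear_model import BayesianRidge

# Hypothetical data: rows = use cases, columns = sequence-model metrics.
X, y = make_regression(n_samples=200, n_features=6, noise=10.0, random_state=0)

models = {
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Support Vector": SVR(kernel="rbf", C=10.0),
    "Gradient Boosting (XGBoost stand-in)": GradientBoostingRegressor(random_state=0),
    "Bayesian Ridge": BayesianRidge(),
}

for name, model in models.items():
    # 5-fold cross-validated mean absolute error (lower is better).
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:38s} MAE = {mae:.2f}")
```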
Citations: 0
Natural Question Generation using Transformers and Reinforcement Learning
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00061
Dipshikha Biswas, Suneel Nadipalli, B. Sneha, Deepa Gupta, J. Amudha
Natural Question Generation (NQG) is among the most popular open research problems in Natural Language Processing (NLP), alongside Neural Machine Translation, open-domain chatbots, etc. Among the many approaches to this problem, neural networks are regarded as the benchmark in this research area. This paper adopts a generator-evaluator framework in a neural network architecture to place additional focus on the context of the content used for framing a question. The generator uses an NLP architecture such as a transformer (T5) to generate a question given a context, while the evaluator uses Reinforcement Learning (RL) to check the correctness of the generated question. The involvement of RL improves the results (as shown in Table 2), and computational efficiency increases because the training is coupled with the RL policy. This turns the problem into a reinforcement learning task and allows a wide range of questions to be generated for the same context-answer pair. The algorithm is tested on the benchmark SQuAD dataset with BLEU score as the evaluation metric.
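A hedged sketch of the generator-evaluator split described above: a T5 checkpoint generates a question from an answer/context pair, and a BLEU score against a reference question serves as the reward signal an evaluator could use. The checkpoint name, prompt format, and reward choice are assumptions for illustration; the paper's fine-tuned model and RL policy update are not reproduced here.

```python
# Illustrative sketch: T5 as the question generator, BLEU as a simple reward
# an RL evaluator could use. Assumes the `transformers` and `nltk` packages.
from transformers import T5Tokenizer, T5ForConditionalGeneration
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Base checkpoint; fine-tuning on question generation is assumed.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

context = "The Eiffel Tower was completed in 1889 in Paris."
answer = "1889"
prompt = f"generate question: answer: {answer} context: {context}"  # assumed prompt format

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
question = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Evaluator side: reward = BLEU between the generated and a reference question.
reference = "When was the Eiffel Tower completed?"
reward = sentence_bleu([reference.split()], question.split(),
                       smoothing_function=SmoothingFunction().method1)
print(question, reward)
```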
Citations: 0
A Transfer Learning Approach for Diagnosis of COVID-19 Cases from Chest Radiography Images
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00049
A. Dash, Puspanjali Mohapatra, N. Ray
The novel coronavirus disease 2019 (COVID-19) pandemic completely changed individuals' daily lives and created economic disruption across the world. Many countries used movement restrictions and physical distancing as measures to slow transmission. Effective screening of COVID-19 cases is needed to stop the spread of the disease. In the first phases of clinical assessment, it was observed that patients with abnormalities in their chest X-ray images show signs of COVID-19 infection. Inspired by this, this study designs a novel framework to detect COVID-19 cases from chest radiography images. A pre-trained deep convolutional neural network, VGG-16, is used to extract discriminating features from the radiography images. These extracted features are given as input to a logistic regression classifier for automatic detection of COVID-19 cases. The suggested framework obtained a remarkable accuracy of 99.1% with a 100% sensitivity rate in comparison with other state-of-the-art classifiers.
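A minimal sketch of the described pipeline, assuming chest radiographs are already loaded as 224x224 RGB arrays with binary labels; the placeholder data, preprocessing choices, and hyperparameters are illustrative, not the authors' configuration.

```python
# Illustrative sketch: frozen VGG-16 as a feature extractor, logistic
# regression as the classifier. Assumes TensorFlow/Keras and scikit-learn.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

# Placeholder data: (n_images, 224, 224, 3) X-ray array and 0/1 labels.
X_images = np.random.rand(100, 224, 224, 3) * 255.0
y = np.random.randint(0, 2, size=100)

# Pre-trained VGG-16 without the classification head; global average pooling
# turns each image into a 512-dimensional feature vector.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(preprocess_input(X_images), verbose=0)

X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "sensitivity:", recall_score(y_te, pred))
```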
Citations: 0
Hybrid Regression Tree
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00055
M. Jena, Asit Patra, B. Sahoo, Satchidananda Dehuri
A regression tree is one of the most popular machine learning-based decision models. Unlike a classification tree, it predicts a continuous value. Many regression models have emerged to handle regression problems, but most of them face difficulties in capturing non-linear patterns, and some, such as regression trees, are sensitive to outliers. In this paper, a hybrid regression model is proposed that combines the features of the regression tree and ridge regression to improve performance on regression problems. In the proposed model, the leaf nodes of the regression tree are modified: rather than storing the mean of the corresponding target values, the hybrid model stores the suitable tuples in its leaf nodes. When predictor values are presented, control transfers to the corresponding leaf node, and ridge regression is applied at that leaf to predict the required value. In this method, the threshold value plays a vital role in deciding the number of tuples in the leaf nodes, which in turn affects the time complexity and the mean squared error. An extensive comparative analysis compares the performance of the proposed model with other regression models on four real-world datasets. The experimental results show that the proposed method outperforms the regression tree and ridge regression applied individually.
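A compact sketch of the hybrid idea as described: a regression tree partitions the data, each leaf keeps the training tuples routed to it, and a ridge model fitted on those tuples produces the leaf's prediction. The depth/threshold settings and the synthetic data are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch: ridge regression fitted inside each leaf of a
# regression tree, replacing the usual leaf mean.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=5, noise=5.0, random_state=0)

# Shallow tree: min_samples_leaf plays the role of the "threshold" on tuples per leaf.
tree = DecisionTreeRegressor(min_samples_leaf=50, random_state=0).fit(X, y)

# Fit one ridge model per leaf on the tuples that land in that leaf.
leaf_ids = tree.apply(X)
leaf_models = {
    leaf: Ridge(alpha=1.0).fit(X[leaf_ids == leaf], y[leaf_ids == leaf])
    for leaf in np.unique(leaf_ids)
}

def hybrid_predict(X_new):
    # Route each sample to its leaf, then apply that leaf's ridge model.
    leaves = tree.apply(X_new)
    return np.array([leaf_models[leaf].predict(x.reshape(1, -1))[0]
                     for leaf, x in zip(leaves, X_new)])

print("tree-only MSE :", mean_squared_error(y, tree.predict(X)))
print("hybrid MSE    :", mean_squared_error(y, hybrid_predict(X)))
```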
Citations: 0
Adaptive Resource Provisioning for Smart Home Using Fog Computing
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00102
A. Chandak, N. Ray
IoT devices in the smart home ease human life and can be controlled from remote locations. Proper utilization of end devices and other resources is a basic requirement in a smart home. Fog nodes are used in the smart home for faster processing of data; they are deployed to minimize processing delay and are most suitable when an appropriate number of resources is available. Resource provisioning refers to the optimal allocation of resources to improve resource utilization and response time, and it avoids situations in which some fog nodes are overloaded while others are underloaded. Fog nodes are dynamic and can leave or join the network at any time; at the same time, a malicious fog node can also join the fog network and tamper with data and other resources. In this article, an efficient resource provisioning mechanism for the smart home is proposed. The proposed scheme uses an authentication mechanism in which fog nodes authenticate themselves before providing services. There are mainly two types of requests from IoT devices, namely data and computational. To improve response time, it is necessary to categorize requests and allocate fog nodes in proportion to the request types. The proposed scheme assesses the performance of adaptive resource provisioning against static and random provisioning in terms of makespan, average execution time, and response time.
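A small sketch of the proportional-allocation idea described above: authenticated fog nodes are split between data and computational request pools in proportion to the current request mix. The node representation, the authentication flag, and the rounding rule are assumptions for illustration.

```python
# Illustrative sketch: allocate authenticated fog nodes in proportion to the
# current mix of data vs. computational requests.
def provision(fog_nodes, data_requests, compute_requests):
    # Only nodes that passed authentication take part in provisioning.
    trusted = [n for n in fog_nodes if n["authenticated"]]
    total = data_requests + compute_requests
    if total == 0 or not trusted:
        return [], []
    n_data = round(len(trusted) * data_requests / total)
    data_pool = trusted[:n_data]
    compute_pool = trusted[n_data:]
    return data_pool, compute_pool

fog_nodes = [{"id": i, "authenticated": i != 3} for i in range(6)]  # node 3 failed auth
data_pool, compute_pool = provision(fog_nodes, data_requests=30, compute_requests=10)
print([n["id"] for n in data_pool], [n["id"] for n in compute_pool])
```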
Citations: 0
Detecting Long Non-Coding RNAs Responsible for Cancer Development
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00040
Mitra Datta Ganapaneni, Kundhana Harshitha Paruchuru, J. Ambati, Mahesh Valavala, C.C Sobin
Long noncoding RNAs (lncRNAs) have a vital role in tumor development. Variations in the expression of lncRNAs affect several target genes related to tumor initiation and development, and recent studies in carcinogenesis have indicated the importance of lncRNAs in cancer progression, diagnosis, and treatment. The purpose of our research is to identify the key cancer-related lncRNAs. Identifying key lncRNAs in cancer from existing tumor-patient data is considered a complex task due to the high dimensionality of expression profiles. LncRNA expression profiles of 12309 lncRNAs and 2221 patients are gathered from TCGA. A computational framework is proposed covering 5 cancer types (bladder, colon, cervical, liver, and head and neck) and comprising four machine learning classification models: K-Nearest Neighbor, Naive Bayes, Random Forest, and Support Vector Machine. An essential component of the framework is the use of these models together with state-of-the-art variance-threshold, L1-based, and tree-based feature selection algorithms for differential analysis. The study identified 234 key lncRNAs capable of differentiating the 5 cancer types. The capability of the identified key lncRNAs is confirmed by the performance of the classification models, with SVM reaching the highest accuracy of 98.2%. Furthermore, a correlation analysis of the 234 lncRNAs experimentally validated the results.
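A hedged sketch of the feature-selection-then-classification pipeline as described: variance-threshold, L1-based, and tree-based selection narrow the expression matrix before an SVM classifies cancer types. Synthetic data stands in for the TCGA profiles, and all hyperparameters are illustrative.

```python
# Illustrative sketch: three feature-selection stages followed by an SVM,
# on a synthetic stand-in for the lncRNA expression matrix.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import VarianceThreshold, SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder: rows = patients, columns = lncRNA expression values, 5 classes.
X, y = make_classification(n_samples=300, n_features=1000, n_informative=40,
                           n_classes=5, random_state=0)

pipeline = make_pipeline(
    VarianceThreshold(threshold=0.1),                                               # drop near-constant lncRNAs
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),   # L1-based selection
    SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0)),        # tree-based selection
    SVC(kernel="rbf", C=10.0),
)
print("CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```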
Citations: 0
AI-based Block Identification and Classification in the Blockchain Integrated IoT
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00084
Joydeb Dutta, Deepak Puthal, E. Damiani
Artificial Intelligence (AI) is gaining popularity in the development of Internet of Things (IoT) based application solutions, while blockchain has become unavoidable in IoT for maintaining end-to-end processes in a decentralized manner. Combining these two current-age technologies, this paper presents a brief comparative study with implementations and further analyzes the adaptability of AI-based solutions in a blockchain-integrated IoT architecture. This work focuses on identifying the sensitivity of block data in the block validation stage using AI-based approaches. Several supervised, unsupervised, and semi-supervised learning algorithms are analyzed to determine a block's data sensitivity. It is found that machine learning techniques can identify a block's data sensitivity with very high accuracy. Utilizing this, the block's sensitivity can be identified, which helps the system reduce the energy consumption of the block validation stage by dynamically choosing an appropriate consensus mechanism.
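A minimal sketch of the idea as described: a supervised classifier labels a block's payload sensitivity, and the result steers the choice of consensus mechanism. The features, labels, and the two consensus options named below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: classify block-data sensitivity, then pick a consensus
# mechanism accordingly (heavier consensus only for sensitive blocks).
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Placeholder features per block (e.g., payload size, device type, entropy).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)  # y: 1 = sensitive
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def choose_consensus(block_features):
    # Assumed policy: sensitive blocks use a stronger (costlier) consensus.
    sensitive = clf.predict(block_features.reshape(1, -1))[0] == 1
    return "proof-of-work" if sensitive else "lightweight proof-of-authentication"

print(choose_consensus(X[0]))
```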
Citations: 2
Prediction of Unemployment using Machine Learning Approach
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00072
Moupali Sen, Shreya V. Basu, A. Chatterjee, Anwesha Banerjee, Saheli Pal, Pritam Kumar Mukhopadhyay, Stobak Dutta, Arunabha Tarafdar
Unemployment is a circumstance that arises when people above a specific age are not engaged in any activity that contributes to the economic welfare of the individual and the country. Unemployment is a rising concern that makes people's daily lives difficult and causes poverty and depression among citizens. Nowadays there are different opportunities in different sectors, but people are not aware of those opportunities. Some states lack skilled labour, whereas many states have skilled labour but fewer opportunities. Another reason for unemployment since 2020 is the COVID-19 pandemic. We selected this topic to spread awareness among citizens. This work attempts to detect the states of India that are in serious need of increased employment opportunities. We applied supervised machine learning algorithms to detect the states with the lowest employment rate. Data visualization gives a better picture of the trends in the unemployment rate over the years. Several popular algorithms were used, including Logistic Regression, Support Vector Machine, the K-nearest neighbors (kNN) algorithm, and Decision Tree. Finally, we tried to find the algorithm that gives the best accuracy so that the necessary steps can be taken for the employment of eligible and deserving people.
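A brief sketch of the comparison described, assuming state-level indicators have been assembled into a feature matrix with a binary "low employment rate" label; the synthetic data and hyperparameters are placeholders, not the authors' dataset.

```python
# Illustrative sketch: compare the four classifiers named in the abstract on
# a synthetic stand-in for state-level unemployment indicators.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Support Vector Machine": SVC(),
    "k-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    print(f"{name:24s} accuracy = {cross_val_score(model, X, y, cv=5).mean():.3f}")
```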
Citations: 0
Integrity Constraint Verification of Structured Query Language by Abstract Interpretation
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00091
Anwesha Kashyap, Angshuman Jana
Over the decades, and especially today, data-driven applications have played a pivotal role in every aspect of our daily lives by providing an easy interface to store, access, and process crucial data with the help of a Database Management System (DBMS). However, it is always necessary to ensure data integrity for every operation on a database. In this paper, we propose a novel framework for Structured Query Language (SQL) aimed at automatically and formally verifying integrity constraints, expressed as enterprise policy specifications, on data in the underlying database. To this end, we extend the abstract interpretation theory to the case of structured query languages.
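A toy sketch of the general idea (not the paper's framework): column values are abstracted to intervals, an UPDATE's effect is evaluated over the abstract domain, and the resulting interval is checked against an integrity constraint. The table, constraint, and interval arithmetic here are illustrative assumptions.

```python
# Illustrative sketch: interval abstraction of a column, used to check whether
# "UPDATE accounts SET balance = balance - 1000" can violate the constraint
# "balance >= 0" without touching concrete rows.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def sub(self, k):
        # Abstract effect of subtracting a constant from every value.
        return Interval(self.lo - k, self.hi - k)
    def satisfies_ge(self, bound):
        # Definitely satisfied only if the whole interval lies above the bound.
        return self.lo >= bound

# Abstract state of the column, derived from current data or its declared range.
balance = Interval(0, 5000)

after_update = balance.sub(1000)          # abstract semantics of the UPDATE
if after_update.satisfies_ge(0):
    print("constraint balance >= 0 is proved to hold after the update")
else:
    print("constraint may be violated; verification fails with interval",
          (after_update.lo, after_update.hi))
```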
Citations: 0
Performance evaluation of DP-QPSK modulation for underwater optical wireless communication using a green light propagation
Pub Date : 2022-12-01 DOI: 10.1109/OCIT56763.2022.00086
Narayan Nayak, B. Keswani, Dipak Ranjan Nayak, Pramod Sharma, A. G. Mohapatra, Ashish Khanna
In comparison to radio frequency and acoustic communication, underwater optical wireless communication (UOWC) has recently garnered greater attention due to its higher data rate and lower latency. In this paper, we propose underwater optical communication using a free-space optical (FSO) communication system to propagate green light waves through water. Three different modulation schemes, namely quadrature phase shift keying (QPSK), dual-polarization quadrature phase shift keying (DP-QPSK), and 4-quadrature amplitude modulation (4-QAM), are applied not only to analyze problems such as attenuation, absorption, scattering, and turbulence but also to investigate spectral efficiency. The performance and physical aspects of the above modulation techniques with UOWC are studied and compared using multiple criteria, including the maximum quality factor, the minimum bit error rate (BER), and the eye diagram.
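A back-of-envelope sketch, not the paper's simulation setup (which would typically be built in an optical system simulator): theoretical QPSK bit error rate over an AWGN channel and the corresponding Q factor, swept over received SNR after an assumed water-attenuation loss. The attenuation figure and link length are illustrative assumptions.

```python
# Illustrative sketch: theoretical QPSK BER vs. Eb/N0 and the implied Q factor;
# the 0.05 dB/m green-light attenuation figure is an assumption.
import numpy as np
from scipy.special import erfc, erfcinv

link_length_m = 20.0
attenuation_db_per_m = 0.05          # assumed clear-water attenuation for green light
tx_ebn0_db = np.arange(6, 16, 2)     # Eb/N0 at the transmitter side
rx_ebn0_db = tx_ebn0_db - attenuation_db_per_m * link_length_m

for tx, rx in zip(tx_ebn0_db, rx_ebn0_db):
    ebn0 = 10 ** (rx / 10)
    ber = 0.5 * erfc(np.sqrt(ebn0))            # QPSK per-bit BER over AWGN
    q_factor = np.sqrt(2) * erfcinv(2 * ber)   # Q factor implied by that BER
    print(f"Tx Eb/N0 {tx:4.1f} dB -> BER {ber:.2e}, Q {q_factor:.2f}")
```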
Citations: 0