
Artificial intelligence and applications (Commerce, Calif.): Latest Publications

Spatiotemporal Edges for Arbitrarily Moving Video Classification in Protected and Sensitive Scenes
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia320526
Maryam Asadzadehkaljahi, Arnab Halder, U. Pal, P. Shivakumara
Classification of arbitrarily moving objects, including vehicles and human beings, in real environments (such as protected and sensitive areas) is challenging because of the arbitrary deformations and directions caused by shaky cameras and wind. This work adopts a spatiotemporal approach for classifying arbitrarily moving objects. The intuition behind the approach is that the behavior of objects moved arbitrarily by wind and camera shake is inconsistent and unstable, while the behavior of static objects is consistent and stable. The proposed method segments foreground objects from the background using the frame difference between the median frame and each individual frame. This step outputs several pieces of foreground information. The method then finds static and dynamic edges by subtracting the Canny edges of the foreground information from the Canny edges of the respective input frames. The ratio of the number of static edges to dynamic edges in each frame is used as a feature. The features are normalized to avoid the problems of imbalanced feature scales and irrelevant features. For classification, the work uses 10-fold cross-validation to choose the numbers of training and testing samples, and a random forest classifier performs the final classification of frames into those with static objects and those with arbitrarily moving objects. To evaluate the proposed method, we construct our own dataset containing video of static objects and of objects moved arbitrarily by camera shake and wind. The results on this video dataset show that the proposed method achieves state-of-the-art performance (a 76% classification rate), which is 14% better than the best existing method.
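The pipeline described in the abstract (median-frame background subtraction, Canny edge comparison, and a static/dynamic edge-ratio feature) can be illustrated with a short sketch. This is a minimal reconstruction from the abstract, not the authors' code: the grayscale-frame assumption, the Canny thresholds, the +1 smoothing, and the min-max normalization are all assumptions.

```python
import cv2
import numpy as np

def edge_ratio_features(frames):
    """Per-frame static/dynamic Canny-edge ratio features (normalized)."""
    stack = np.stack(frames)                             # frames: list of grayscale uint8 arrays
    median = np.median(stack, axis=0).astype(np.uint8)   # median background frame
    ratios = []
    for frame in frames:
        fg = cv2.absdiff(frame, median)                  # foreground via frame difference
        frame_edges = cv2.Canny(frame, 100, 200)         # edges of the input frame
        fg_edges = cv2.Canny(fg, 100, 200)               # edges of the foreground map
        static_edges = cv2.subtract(frame_edges, fg_edges)  # edges not explained by motion
        ratio = (np.count_nonzero(static_edges) + 1.0) / (np.count_nonzero(fg_edges) + 1.0)
        ratios.append(ratio)
    feats = np.asarray(ratios, dtype=np.float32)
    # Min-max normalization to keep feature scales balanced
    return (feats - feats.min()) / (np.ptp(feats) + 1e-8)
```

The resulting feature vector would then feed a random forest classifier under 10-fold cross-validation, as the abstract describes.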
Citations: 1
Salary Prediction Model for Non-Academic Staff Using Polynomial Regression Technique
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia3202795
Samuel Iorhemen Ayua, Yusuf Musa Malgwi, James Afrifa
The idea of regression has grown rapidly and significantly in the machine-learning domain. This paper builds a salary prediction model that predicts a justifiable employee salary commensurate with increases or decreases in the exchange rate, using a degree-2 polynomial regression technique in a Jupyter Notebook (Anaconda Navigator). Predicting a feasible salary for an employee is a challenging task for an employer, since every employee has high goals and hopes as the standard of living increases without a corresponding increase in salary. The model uses a salary dataset from Taraba State University, Jalingo, Nigeria, to build and train the model, together with an exchange-rate dataset for predicting employee salaries. The results show that the distribution of the dataset was non-linear and that the major features significant in determining an employee's salary were grade level and exchange rate, which supported the use of a polynomial regression algorithm. The research contributes to the knowledge and understanding of regression techniques. The researchers recommend exploring other machine learning algorithms on various salary datasets and fully incorporating machine learning into financial departments on larger datasets for better performance. Model performance was evaluated using the R2 score, which reached 97.2%, indicating how well the data points fit the regression line and how the model handles unseen data.
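As a rough illustration of the modeling step, the following sketch fits a degree-2 polynomial regression with scikit-learn. The feature choice (grade level and exchange rate) follows the abstract, but the toy values, library choice, and pipeline layout are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Toy rows of [grade_level, exchange_rate] -> salary; values are illustrative only
X = np.array([[6, 410.5], [8, 415.2], [10, 430.0], [12, 455.7], [14, 460.3]])
y = np.array([62_000, 78_000, 101_000, 140_000, 169_000])

# Degree-2 polynomial regression, as described in the abstract
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print("Training R2:", model.score(X, y))           # R2 score, the metric the paper reports
print("Predicted salary:", model.predict([[11, 445.0]])[0])
```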
Citations: 0
University Auto Reply FAQ Chatbot Using NLP and Neural Networks
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia3202631
Harshita Mangotra, Vibhuti Dabas, Bhanu Khetharpal, Abhigya Verma, S. Singhal, A. K. Mohapatra
When new students enter college, they often have similar questions: "Where to study for this subject?", "How to prepare Data Structures and Algorithms?", "How to connect with seniors?", and so on. Chatbots can help them get answers to their questions quickly and efficiently. This study proposes a deep learning (DL) chatbot for addressing common doubts of university students, providing efficient and accurate responses to college-specific questions. A self-curated dataset is used to build the chatbot, and natural language processing (NLP) techniques are used to pre-process the raw data gathered. The study compares two deep learning models: a bidirectional Long Short-Term Memory (LSTM) network and a simple feed-forward neural network model.
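To make the comparison concrete, here is a hedged Keras sketch of the two model families named in the abstract. The vocabulary size, number of intent classes, and layer widths are placeholders, and the pooled-embedding baseline below stands in for the paper's simple feed-forward model; the actual architectures and dataset are not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, CLASSES = 5000, 12  # placeholder vocabulary size and intent count

def bilstm_model():
    # Bidirectional LSTM reads the tokenized question in both directions
    return models.Sequential([
        layers.Embedding(VOCAB, 64),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(64, activation="relu"),
        layers.Dense(CLASSES, activation="softmax"),
    ])

def feedforward_model():
    # Feed-forward baseline: average the word embeddings, then dense layers
    return models.Sequential([
        layers.Embedding(VOCAB, 64),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(CLASSES, activation="softmax"),
    ])

for build in (bilstm_model, feedforward_model):
    model = build()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```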
Citations: 0
Supportive Environment for Better Data Management Stage in the Cycle of ML Process
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia32021224
Lama Alkhaled, Taha Khamis
The objective of this study is to explore the process of developing Artificial Intelligence (AI) and machine learning (ML) applications in order to establish an optimal support environment. The primary stages of ML include problem understanding, data management, model building, model deployment, and maintenance. This paper focuses specifically on the data management stage of ML development and the challenges it presents, as it is crucial for achieving accurate end models. During this stage, the major obstacle encountered was the scarcity of adequate data for model training, particularly in domains where data confidentiality is a concern. The work aimed to construct and enhance a framework that assists researchers and developers in addressing the insufficiency of data during the data management stage. The framework incorporates various data augmentation techniques, enabling the generation of new data from the original dataset along with all the required files for detection challenges. This augmentation process improves the overall performance of ML applications by increasing both the quantity and quality of available data, thereby providing the model with the best possible input. The tool can be accessed at https://github.com/TahaKh99/Image_Augmentor.
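For orientation, the sketch below shows the kind of augmentation such a framework automates: generating flipped, rotated, and brightness-shifted copies of a source image. It uses Pillow and is purely illustrative; it is not the API of the linked Image_Augmentor tool, and the transform set, parameter values, and file naming are assumptions.

```python
from pathlib import Path
from PIL import Image, ImageEnhance

def augment_image(src, out_dir):
    """Save flipped, rotated, and brightness-shifted variants of one image."""
    img = Image.open(src)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    variants = {
        "flip": img.transpose(Image.Transpose.FLIP_LEFT_RIGHT),   # mirror horizontally
        "rot15": img.rotate(15, expand=True),                     # small rotation
        "bright": ImageEnhance.Brightness(img).enhance(1.4),      # brightness shift
    }
    for name, variant in variants.items():
        variant.save(out / f"{Path(src).stem}_{name}.png")
```

In a detection setting, the corresponding annotation files would have to be transformed alongside each image, which is the bookkeeping such a framework handles.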
Citations: 0
A Critical Historic Overview of Artificial Intelligence: Issues, Challenges, Opportunities and Threats
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia3202689
P. Groumpos
Artificial Intelligence (AI) has been considered a revolutionary and world-changing science, although it is still a young field and has a long way to go before it can be established as a viable theory. Every day, new knowledge is created at an unthinkable speed, and the Big Data Driven World is already upon us. AI has developed a wide range of theories and software tools that have shown remarkable success in addressing difficult and challenging societal problems. However, the field also faces many challenges and drawbacks that have led some people to view AI with skepticism. One of the main challenges facing AI is the difference between correlation and causation, which plays an important role in AI studies. Additionally, although the term Cybernetics should be a part of AI, it was ignored for many years in AI studies. To address these issues, the Cybernetic Artificial Intelligence (CAI) field has been proposed and analyzed here for the first time. Despite the optimism and enthusiasm surrounding AI, its future may turn out to be a "catastrophic Winter" for the whole world, depending on who controls its development. The only hope for the survival of the planet lies in the quick development of Cybernetic Artificial Intelligence and the Wise Anthropocentric Revolution. The text proposes specific solutions for achieving these two goals. Furthermore, the importance of differentiating between professional/personal ethics and eternal values is highlighted, and their importance in future AI applications is emphasized for solving challenging societal problems. Ultimately, the future of AI heavily depends on accepting certain ethical values.
Citations: 1
Attention Enhanced Siamese Neural Network for Face Validation
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia32021018
Hongqing Yu
Few-shot computer vision algorithms have enormous potential to produce promising results for innovative applications that have only a small volume of example data for training. Currently, few-shot algorithm research focuses on applying transfer learning to deep neural networks that are pre-trained on big datasets. However, adapting these transformers requires highly costly computation resources. In addition, our research identifies overfitting or underfitting problems and low accuracy on large classes in the face validation domain. Thus, this paper proposes an alternative enhancement solution that adds contrasted attention on negative and positive face pairs to the training process. The extra attention is created through clustering-based face pair creation algorithms. The evaluation results show that the proposed approach sufficiently addresses these problems without requiring high-cost resources.
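For readers who want the basic scaffolding, the following PyTorch sketch shows a generic Siamese pair setup with a contrastive loss over positive and negative pairs. The encoder, margin, and loss form are conventional defaults and assumptions; the paper's attention enhancement and clustering-based pair creation are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared encoder applied to both images of a face pair."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-length embeddings

def contrastive_loss(z1, z2, same, margin=1.0):
    """same is 1 for positive pairs and 0 for negative pairs."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Example: a batch of 8 grayscale 64x64 pairs with random labels
enc = SiameseEncoder()
a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
loss = contrastive_loss(enc(a), enc(b), torch.randint(0, 2, (8,)).float())
```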
Citations: 0
Exploring the Capabilities and Limitations of ChatGPT and Alternative Big Language Models
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia3202820
Shadi AlZu'bi, Ala Mughaid, Fatima Quiam, Samar Hendawi
ChatGPT, an AI-powered chatbot developed by OpenAI, has gained immense popularity since its public launch in November 2022. With its ability to write essays, emails, poems, and even computer code, it has become a useful tool for professionals in various fields. However, ChatGPT's responses are not always rooted in reality and are instead generated by a GAN. This paper aims to build a text classification model for a chatbot using Python. The model is trained on a dataset consisting of customer responses to a survey and their corresponding class labels. Several classifiers are trained and tested, including Naive Bayes, Random Forest, Extra Trees, and Decision Trees. The results show that the Extra Trees classifier performs best, with an accuracy of 90%. The system demonstrates the importance of text preprocessing and of selecting appropriate classifiers for text classification tasks when building an effective chatbot. This paper also explores the capabilities and limitations of ChatGPT and its alternatives in 2023 and presents a comprehensive overview of the alternatives' performance. The work concludes with a discussion of the future directions of large language models and their impact on society and technology.
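A minimal version of the classifier comparison can be sketched with scikit-learn as below. The TF-IDF features, toy survey responses, and labels are assumptions for illustration; the paper's actual dataset and preprocessing steps are not shown.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder survey responses and class labels
texts = ["great service, very happy", "delivery was late again",
         "love the product quality", "support never answered my refund request"]
labels = ["positive", "negative", "positive", "negative"]

for clf in (MultinomialNB(), RandomForestClassifier(),
            ExtraTreesClassifier(), DecisionTreeClassifier()):
    pipe = make_pipeline(TfidfVectorizer(), clf)        # preprocessing + classifier
    pipe.fit(texts, labels)
    print(type(clf).__name__, pipe.score(texts, labels))  # training accuracy only
```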
Citations: 3
Review on Discrimination of Hazardous Gases by Smart Sensing Technology
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia3202434
Gangadhar Bandewad, Kunal P. Datta, Bharti W. Gawali, Sunil N. Pawar
Real-time detection of hazardous gases in ambient and indoor environments has become a prime motive for curbing the problem of air pollution. Keeping the concentration of hazardous gases under control is a main task for human society in maintaining environmental balance. Researchers are concentrating on smart sensors because they can detect and forecast the presence of a gas in real time, provide correct information about gas concentration, and detect a target gas within a mixture of gases. Such smart gas sensor systems have applications in military, space, underwater, indoor, outdoor, factory, vehicle, and wearable smart device settings. This study reviews recent advances in smart sensor technology with respect to material structure, sensing techniques, and discrimination algorithms. Focus is given to reducing the power consumption and area of sensor circuitry with the help of different techniques.
Citations: 1
Cloud Gaming Approach To Learn Programming Concepts
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia22021378
Daniyal Baig, Waseem Akram, H. Burhan ul Haq, Muhammad Asif
Computer science and programming subjects can be overwhelming for new students, presenting them with significant challenges. As programming is considered one of the most important and complex subjects to grasp, it calls for a fresh teaching methodology that makes the learning process more enjoyable and accessible. One approach that has gained traction is the integration of gaming elements, which not only makes programming more engaging but also enhances understanding and retention. In our research, we adopted an innovative educational strategy built around a Role-Playing Game (RPG) centered on programming concepts. The aim of the research is to create an interactive and enjoyable learning experience for students by leveraging the immersive nature of gaming. The RPG provides a platform for students to actively participate in programming challenges, where they apply their knowledge and skills to complete tasks and advance through the game. Our teaching methodology focuses on embedding programming concepts within the game's missions and quests. Additionally, we considered students' overall experience and engagement throughout the research study. By capturing both objective and subjective measures, we gained insights into the impact of our teaching methodology on student learning outcomes and students' overall perception of the educational experience. In the RPG, each student is required to complete a series of tasks within the game in order to advance to the next mission. The sequential nature of the tasks ensures a structured learning process, gradually introducing new concepts and challenges to the students. The game mechanics provide an immersive environment for students to play different missions, answer questions, and learn programming. Through our research, we aim to present a compelling teaching methodology that effectively addresses the challenges new students face in learning computer science and programming subjects. By harnessing the power of gaming, we strive to make programming more accessible, enjoyable, and engaging, ultimately empowering students to become proficient programmers. The evaluation of student performance, task accomplishment, and overall experience provides valuable insights into the effectiveness and potential impact of this innovative approach.
Citations: 0
ERNIE and Multi-Feature Fusion for News Topic Classification
Pub Date : 2023-01-01 DOI: 10.47852/bonviewaia32021743
Weisong Chen, Boting Liu, Weili Guan
Traditional news topic classification methods suffer from inaccurate text semantics, sparse text features, and low classification accuracy. To address this, this paper proposes a news topic classification method based on Enhanced Language Representation with Informative Entities (ERNIE) and multi-feature fusion. A semantically more accurate text embedding is obtained with ERNIE. In addition, the paper extracts word, context, and key-sentence features from the news text. The key sentences of a news item are obtained through the TextRank algorithm, which lets the model focus on the main content points of the news. Finally, the paper uses an attention mechanism to fuse the multiple features. The proposed method is evaluated on BBCNews. The experimental results show classification accuracies superior to those of the compared methods, validating the structural design of the proposed method. This method has a positive effect on advancing research into news topic classification.
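The TextRank step, which ranks sentences by running PageRank over a sentence-similarity graph, can be sketched compactly. The TF-IDF cosine similarity used for edge weights below is a common default and an assumption; the paper's exact similarity measure and parameters may differ.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def key_sentences(sentences, k=2):
    """Rank sentences with PageRank over a TF-IDF cosine-similarity graph."""
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    scores = nx.pagerank(nx.from_numpy_array(sim))   # TextRank scoring
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]       # keep document order

doc = ["The central bank raised interest rates on Tuesday.",
       "Markets reacted sharply to the announcement.",
       "Analysts had expected a smaller increase.",
       "The weather in London was mild."]
print(key_sentences(doc))
```

The selected key sentences would then be encoded alongside the word and context features and combined through the attention-based fusion the abstract describes.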
Citations: 0