
Int. J. Softw. Sci. Comput. Intell.: Latest Publications

The Role of Machine Learning in Creating and Capturing Value
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.312229
Ricardo Costa-Climent
The use of machine learning technologies by the world's most profitable companies to personalise their offerings is commonplace. However, not all companies using machine learning technologies succeed in creating and capturing value. Academic research has studied value creation through the use of information technologies, but this field of research tends to consider information technology as a homogeneous phenomenon, not considering the unique characteristics of machine learning technologies. This literature review aims to study the extent to which value creation and value capture through machine learning technologies are being investigated in the field of information systems. Evidence is found of a paucity of publications focusing on value creation through the use of ML in the enterprise, and none on value capture. This study's contribution is to provide a better understanding of the use of machine learning technologies in information systems as a social and business practice.
Citations: 0
COVID-19 Detection Using Chest X-Ray Images Based on Deep Learning
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.312556
Sudeshna Sani, Abhijit Bera, D. Mitra, Kalyani Maity Das
Global public health has been severely impacted by successive waves of the emerging COVID-19 disease. Since 2019, people have fallen ill and died in large numbers, placing a massive burden on health systems. One of the crucial factors behind the virus's rapid spread is the protracted clinical testing gap before a positive or negative result is known. A detection system based on deep learning was developed using chest X-ray (CXR) images of COVID-19 patients and healthy people; convolutional neural networks (CNNs), along with other DNNs, have proved to produce good results for this task. To improve COVID-19 detection accuracy, we developed a CNN-based deep learning model and observed an accuracy of 96%. We validated this accuracy on the same dataset with a pretrained VGG16 model and an LSTM model, which produced reliable results. The aim of this research is to implement a reliable deep learning model that detects the presence of COVID-19 even when only a limited number of chest X-ray images is available.
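The pipeline described above is essentially transfer learning on chest X-ray images. The sketch below is a minimal illustration of that idea in Keras, assuming a frozen ImageNet-pretrained VGG16 backbone and a binary COVID/normal split; the directory layout, image size, and hyperparameters are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal transfer-learning sketch for binary CXR classification (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)

def build_cxr_classifier() -> tf.keras.Model:
    # Frozen ImageNet-pretrained VGG16 backbone; only the new head is trained.
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=IMG_SIZE + (3,))
    base.trainable = False
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.vgg16.preprocess_input(inputs)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # COVID-positive vs. normal
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Assumed layout: cxr_data/{train,val}/{covid,normal}/*.png
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "cxr_data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "cxr_data/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
    model = build_cxr_classifier()
    model.fit(train_ds, validation_data=val_ds, epochs=5)
```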
Citations: 0
Design of Low-Power High-Speed 8 Bit CMOS Current Steering DAC for AI Applications
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.304801
B. Krishna, S. S. Gill, Amod Kumar
This paper describes a current-steering 8-bit DAC architecture for low-power, high-speed operation in AI networks. The design is well suited to 5G and next-generation high-speed communication systems on chip (SoCs). The DAC keeps a constant load current, which leads to faster operation in wideband portable device applications. The design is based on weighted current transmission through current mirrors, in which the current decreases continuously from MSB to LSB. Choosing a low current for the LSB reduces power dissipation. Power and area are further reduced by using a 2-bit binary-to-thermometer decoder. The DAC's integral nonlinearity (INL) and differential nonlinearity (DNL) are found to be within 0.4 and 0.9 LSB, respectively. Its highest operating speed is 1 GHz, with a power dissipation of around 24.2 mW at a 1.8 V supply in 180 nm CMOS technology.
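The INL and DNL figures quoted above are the standard static-accuracy metrics for a DAC. As a worked illustration, the numpy sketch below computes both (in LSB) from a measured transfer curve; the "measured" levels are synthetic, not data from the paper.

```python
# Compute DNL/INL (in LSBs) for an 8-bit DAC from its measured output levels.
import numpy as np

N_BITS = 8
codes = np.arange(2 ** N_BITS)

# Ideal transfer curve plus small random level errors to mimic a measurement.
ideal_lsb = 1.0
rng = np.random.default_rng(0)
measured = codes * ideal_lsb + rng.normal(0.0, 0.1, size=codes.size)

# DNL: deviation of each actual step from one ideal LSB.
step = np.diff(measured)
dnl = step / ideal_lsb - 1.0

# INL (end-point method): deviation from the straight line joining the end codes.
endpoint_line = np.linspace(measured[0], measured[-1], codes.size)
inl = (measured - endpoint_line) / ideal_lsb

print(f"max |DNL| = {np.max(np.abs(dnl)):.3f} LSB")
print(f"max |INL| = {np.max(np.abs(inl)):.3f} LSB")
```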
Citations: 0
P3 Process for Object-Relational Data Migration to NoSQL Document-Oriented Datastore
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.309994
A. Aggoune, Mohamed Sofiane Namoune
The exponential growth of complex data in object-relational databases (ORDB) raises the need for efficient storage with scalability, consistency, and partition tolerance. Migration towards NoSQL (not only structured query language) datastores is the best fit for distributed complex data. Unfortunately, very few studies provide solutions for ORDB migration to NoSQL. This paper reports on how to migrate complex data from an ORDB to a document-oriented NoSQL database. The proposed approach focuses on the P3 process, which involves three major stages: (P1) the preprocessing stage, which accesses and extracts the database features using SQL queries; (P2) the processing stage, which performs the data mapping using a list of mapping rules between the source and target models; and (P3) the post-processing stage, which stores and queries the migrated data within the NoSQL context. Thorough experiments on two real-life databases verify that the P3 process improves the performance of migrating data with complex schema structures.
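The heart of the P2 (processing) stage is a set of mapping rules from relational structures to documents. The toy Python sketch below illustrates one common rule of this kind, embedding child-table rows as an array inside their parent document; the table names, fields, and the pymongo call mentioned in the comment are illustrative assumptions, not the authors' rule set.

```python
# Toy sketch of one object-relational -> document mapping rule: child rows of
# "order_items" are embedded as an array inside each "orders" document.
# Table and field names are illustrative, not the paper's schema.
from collections import defaultdict

orders = [
    {"order_id": 1, "customer": "Alice", "total": 42.0},
    {"order_id": 2, "customer": "Bob", "total": 17.5},
]
order_items = [
    {"order_id": 1, "product": "pen", "qty": 3},
    {"order_id": 1, "product": "notebook", "qty": 1},
    {"order_id": 2, "product": "stapler", "qty": 2},
]

def to_documents(parents, children, key):
    """Embed child rows under their parent row, joined on a shared key."""
    grouped = defaultdict(list)
    for child in children:
        grouped[child[key]].append({k: v for k, v in child.items() if k != key})
    return [{**parent, "items": grouped.get(parent[key], [])} for parent in parents]

documents = to_documents(orders, order_items, key="order_id")
# Each document can now be stored directly in a document store, e.g. with pymongo:
#   MongoClient().shop.orders.insert_many(documents)
print(documents[0])
```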
Citations: 0
Resource Scheduling in Fog Environment Using Optimization Algorithms for 6G Networks
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.304440
Gaurav Goel, Rajeev Tiwari
In traditional systems, various researchers have suggested different resource scheduling and optimization algorithms. However, there is still scope to reduce bandwidth, latency, energy consumption, and total communication cost in the fog environment. This work discusses the performance challenges encountered in a fog environment based on 6G networks and explores the role of optimization techniques in overcoming them. It focuses on comparing the PSO, GA, and round-robin algorithms on cost, makespan, average execution time, and energy consumption for resource management in the fog environment. The study also examines which type of technique (group-behaviour species, social-behaviour, or pre-emptive) is better for achieving QoS in fog resource management for 6G networks. Finally, it discusses resource scheduling problems that may be faced in the future and the kinds of improvement that can be considered for IoT devices and 6G networks.
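Round-robin is the simplest of the three schedulers compared, and makespan is one of the metrics the metaheuristics try to minimise. The sketch below shows a round-robin assignment of tasks to fog nodes and the resulting makespan; the task lengths and node speeds are made-up values for illustration only.

```python
# Round-robin baseline for fog task scheduling, with makespan as the metric
# that metaheuristics such as PSO/GA then try to minimise. Numbers are illustrative.
from typing import List

def round_robin_makespan(task_lengths: List[float], node_speeds: List[float]) -> float:
    """Assign tasks to fog nodes in round-robin order and return the makespan."""
    finish = [0.0] * len(node_speeds)
    for i, length in enumerate(task_lengths):
        node = i % len(node_speeds)
        finish[node] += length / node_speeds[node]  # execution time on that node
    return max(finish)

if __name__ == "__main__":
    tasks = [400, 120, 300, 250, 90, 600, 180]   # task lengths (e.g., MI)
    nodes = [1000, 500, 750]                     # node speeds (e.g., MIPS)
    print(f"Round-robin makespan: {round_robin_makespan(tasks, nodes):.3f} s")
```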
Citations: 6
An Improvement of Yield Production Rate for Crops by Predicting Disease Rate Using Intelligent Decision Systems
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.291714
M. U. Rani, N. Selvam, L. Deborah
Agriculture is the country's mainstay. Plant diseases reduce production and thus affect product prices; indeed, prices of edible and non-edible goods rose dramatically after the outbreak. Automated disease detection can save plants and correct such pricing inconsistencies. Using light detection and ranging (LIDAR) to identify plant diseases lets farmers handle dense crop volumes with minimal human intervention. LIDAR sensors are used to address the limitations of passive systems, such as sensitivity to climate, light variations, viewing angle, and canopy architecture. DSRC was used to receive an alert signal from the cloud server and convey it to farmers in real time via cluster heads. For each concept, we evaluate its strengths and weaknesses, as well as the potential for future research. This research aims to improve the way deep neural networks identify plant diseases. GoogLeNet, Inception-v3, ResNet-50, and an improved VGG19 are evaluated before the proposed Biased CNN. Finally, the proposed Biased CNN (B-CNN) methodology boosted farmers' production by 93% per unit area.
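Before introducing their Biased CNN (not reproduced here), the authors benchmark several standard backbones. The PyTorch sketch below shows a generic comparison harness of that kind; the weights are randomly initialised and the data are synthetic stand-ins for leaf images, so the printed accuracies are meaningless and only the comparison loop itself is the point.

```python
# Generic harness for comparing CNN backbones on a validation set (illustration only).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def validation_accuracy(model: nn.Module, loader: DataLoader) -> float:
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

num_classes = 9
# Synthetic stand-in data; replace with a DataLoader over real leaf images.
loader = DataLoader(TensorDataset(torch.randn(8, 3, 224, 224),
                                  torch.randint(0, num_classes, (8,))), batch_size=4)

# Two of the backbones named in the abstract; the others follow the same pattern.
candidates = {
    "ResNet-50": models.resnet50(num_classes=num_classes),
    "VGG19": models.vgg19(num_classes=num_classes),
}
for name, model in candidates.items():
    print(f"{name}: accuracy {validation_accuracy(model, loader):.2f}")
```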
Citations: 5
Scalable Edge Computing Environment Based on the Containerized Microservices and Minikube
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.312560
Nitin Rathore, A. Rajavat
The growing number of connected IoT devices and their continuous data collection will generate huge amounts of data in the near future. Edge computing has emerged in recent years as a new paradigm for reducing network congestion and offering real-time IoT applications. Processing the large amount of data generated by such IoT devices requires a scalable edge computing environment, and applications deployed in it need to be scalable enough to handle that data. The performance of the microservices architecture (MSA) and a monolithic architecture is analyzed and compared in order to develop a scalable edge computing environment. An auto-scaling approach is described to handle multiple concurrent requests at runtime, and Minikube is used to perform the auto-scaling of containerized microservices on a resource-constrained edge node. Considering the performance of both architectures, the results and discussion show that MSA is the better choice for building a scalable edge computing environment.
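Kubernetes' Horizontal Pod Autoscaler, which Minikube can run locally, is the usual mechanism for this kind of auto-scaling: it scales replicas in proportion to the ratio of observed to target utilisation. The plain-Python sketch below reproduces that proportional rule as an illustration; the CPU figures and replica caps are assumptions, and the study may implement scaling differently.

```python
# Proportional auto-scaling rule used by the Kubernetes Horizontal Pod Autoscaler:
#   desired = ceil(current_replicas * current_metric / target_metric)
# Replica caps and metric values below are illustrative only.
import math

def desired_replicas(current_replicas: int, current_cpu: float,
                     target_cpu: float, min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Return the replica count a proportional autoscaler would ask for."""
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

# Example: 2 replicas at 90% average CPU against a 50% target -> scale out to 4.
print(desired_replicas(current_replicas=2, current_cpu=90.0, target_cpu=50.0))
```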
Citations: 1
A Robust Intelligent System for Detecting Tomato Crop Diseases Using Deep Learning
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.304439
M. Afify, Mohamed Loey, A. Elsawy
The tomato crop is a strategic crop in the Egyptian market with high commercial value and large production. However, tomato diseases can cause huge losses and reduce yields. This work aims to use deep learning to construct a robust intelligent system for detecting tomato crop diseases to help farmers and agricultural workers by comparing the performance of four different recent state-of-the-art deep learning models to recognize 9 different diseases of tomatoes. In order to maximize the system's generalization ability, data augmentation, fine-tuning, label smoothing, and dataset enrichment techniques were investigated. The best-performing model achieved an average accuracy of 99.12% with a hold-out test set from the original dataset and an accuracy of 71.43% with new images downloaded from the Internet that had never been seen before. Training and testing were performed on a computer, and the final model was deployed on a smartphone for real-time on-site disease classification.
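Label smoothing, one of the generalisation techniques investigated above, replaces hard one-hot targets with slightly softened ones. The numpy sketch below shows the transformation for a 9-class setting; the smoothing factor of 0.1 is a common default, not necessarily the value used in the paper.

```python
# Label smoothing: turn hard one-hot targets into softened targets
#   y_smooth = (1 - eps) * y_onehot + eps / num_classes
import numpy as np

def smooth_labels(class_ids: np.ndarray, num_classes: int, eps: float = 0.1) -> np.ndarray:
    onehot = np.eye(num_classes)[class_ids]
    return (1.0 - eps) * onehot + eps / num_classes

labels = np.array([0, 3, 8])                 # three samples, classes 0, 3 and 8
targets = smooth_labels(labels, num_classes=9)
print(targets[0])   # ~0.911 at class 0, ~0.011 everywhere else
# In Keras the same effect is obtained via
#   tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
```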
Citations: 5
A Comparative Study of Generative Adversarial Networks for Text-to-Image Synthesis
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.300364
M. Chopra, Sunil K. Singh, Akhil Sharma, Shabeg Singh Gill
Text-to-picture refers to the conversion of a textual description into a semantically similar image. The automatic synthesis of high-quality pictures from text descriptions is both exciting and useful. Current AI systems have shown significant advances in the field, but the work is still far from complete. Recent advances in deep learning have introduced generative models capable of producing realistic images when trained appropriately. In this paper, the authors review architectural advances for synthesising images from a text description. They begin with the concepts of the standard GAN and how the DCGAN has been applied to the task, followed by the StackGAN, which uses a stack of two GANs to generate an image through iterative refinement, and StackGAN++, which arranges multiple GANs in a tree-like structure to make text-to-image generation more general. They also look at the AttnGAN, which uses an attentional model to generate sub-regions of an image based on the description.
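All of the surveyed architectures share one conditioning idea: a sentence embedding is combined with a noise vector before entering the generator. The stripped-down PyTorch generator below illustrates just that step; the layer sizes and the 64x64 output resolution are arbitrary choices, and it is far simpler than StackGAN or AttnGAN.

```python
# Minimal text-conditioned generator in the spirit of the surveyed GANs:
# a noise vector and a text embedding are concatenated and upsampled to an image.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    def __init__(self, noise_dim: int = 100, text_dim: int = 256, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # (noise_dim + text_dim) x 1 x 1 -> feat*8 x 4 x 4
            nn.ConvTranspose2d(noise_dim + text_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),   # 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),   # 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),       # 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),              # 64x64 RGB
            nn.Tanh(),
        )

    def forward(self, noise: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        z = torch.cat([noise, text_emb], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(z)

g = TextConditionedGenerator()
fake = g(torch.randn(4, 100), torch.randn(4, 256))
print(fake.shape)   # torch.Size([4, 3, 64, 64])
```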
Citations: 2
Performance Comparison of Machine Learning Algorithms for Dementia Progression Detection
Pub Date : 2022-01-01 DOI: 10.4018/ijssci.312553
Tripti Tripathi, R. Kumar
Dementia is a neurological disease that encompasses a wide range of impairments in verbal communication, problem-solving, and other judgment abilities severe enough to interfere with daily life. It is among the leading causes of vulnerability among the elderly worldwide. A considerable amount of research has been conducted on early detection of the disease, yet further improvement remains an emerging research direction. This article compares the performance of multiple machine learning models for dementia detection and classification using brain MRI data, including support vector machine, random forest, AdaBoost, and XGBoost. The research also conducts a systematic assessment of papers on the clinical categorization of dementia using ML algorithms and neuroimaging data. The authors used 373 participants from the OASIS database. Among the tested models, the RF model exhibited the best performance with 83.92% accuracy, 87.5% precision, 81.67% recall, 84.48% F1-score, 81.67% sensitivity, and 88.46% specificity.
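The reported metrics can all be derived from a confusion matrix. The scikit-learn sketch below fits a random forest and prints accuracy, precision, recall/sensitivity, F1, and specificity; the synthetic features stand in for the OASIS-derived ones, so the printed numbers are not the paper's results.

```python
# Random-forest dementia classifier with the metrics reported in the abstract (illustration only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Synthetic stand-in for the 373-participant OASIS-derived feature table.
X, y = make_classification(n_samples=373, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"accuracy    {accuracy_score(y_te, pred):.3f}")
print(f"precision   {precision_score(y_te, pred):.3f}")
print(f"recall/sens {recall_score(y_te, pred):.3f}")
print(f"F1-score    {f1_score(y_te, pred):.3f}")
print(f"specificity {tn / (tn + fp):.3f}")
```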
Citations: 0