Evaluating The Performance of Feature Extraction Techniques Using Classification Techniques
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131402
Harshit Mittal
Dimensionality reduction techniques are widely used in machine learning to reduce the computational complexity of the model and improve its performance by identifying the most relevant features. In this research paper, we compare various dimensionality reduction techniques, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), Local Linear Embedding (LLE), Local Binary Patterns (LBP), and a Simple Autoencoder, on the Olivetti dataset, a popular benchmark in the field of face recognition. We evaluate the performance of these dimensionality reduction techniques using various classification algorithms, including Support Vector Classifier (SVC), Linear Discriminant Analysis (LDA), Logistic Regression (LR), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). The goal of this research is to determine which combination of dimensionality reduction technique and classification algorithm is the most effective for the Olivetti dataset. Our research provides insights into the performance of various dimensionality reduction techniques and classification algorithms on the Olivetti dataset. These results can be useful for improving the performance of face recognition systems and other applications that deal with high-dimensional data.
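As a concrete illustration of the kind of comparison described above, the following sketch evaluates one such combination, PCA for dimensionality reduction and SVC for classification, on the Olivetti faces using scikit-learn. The component count, SVC hyperparameters, and cross-validation setup are illustrative assumptions, not values reported by the author.

# Minimal sketch (not the paper's exact setup): one dimensionality-reduction /
# classifier pairing (PCA + SVC) evaluated on the Olivetti faces dataset.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = fetch_olivetti_faces(return_X_y=True)   # 400 flattened 64x64 images, 40 subjects

pipeline = make_pipeline(
    PCA(n_components=100, whiten=True, random_state=0),  # reduce 4096 -> 100 dims (assumed)
    SVC(kernel="rbf", C=10, gamma="scale"),               # assumed hyperparameters
)

scores = cross_val_score(pipeline, X, y, cv=5)
print("Mean 5-fold accuracy: %.3f" % scores.mean())

Swapping the PCA step for another reducer (e.g. FastICA) and the SVC for another classifier reproduces the grid of combinations the paper compares.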
Drift Detection in Models Applied to the Recognition of Intentions in Short Sentences Using Convolutional Neural Networks for Classification
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131404
Jairo R. Junior, Leandro A Silva
Significant advancements have been achieved in natural language processing models for text classification with the emergence of pre-trained transformers and deep learning. Despite promising results, deploying these models in production environments still faces challenges. Classification models are continuously evolving, adapting to new data and predictions. However, changes in data distribution over time can lead to a decline in performance, indicating that the model is outdated. This article aims to analyze the lifecycle of a natural language processing model by employing multivariate statistical methods capable of detecting model drift over time; these methods can be integrated into the training and workflow management of machine learning models. Preliminary results show that the Maximum Mean Discrepancy (MMD) statistic performs better than the other evaluated methods at detecting drift in models trained on data from multiple domains, operating on high-dimensional vector representations passed through an untrained autoencoder. The classifier model achieved 93% accuracy in predicting intentions.
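The following is a minimal sketch of the Maximum Mean Discrepancy statistic referenced above, computed with a Gaussian kernel between a reference sample of embeddings and a newer sample. The kernel choice, bandwidth, and the random stand-in data are assumptions for illustration; the article's exact setup is not described in the abstract.

# Hedged sketch of MMD^2 between two samples of embedding vectors.
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    # Pairwise squared Euclidean distances, then Gaussian kernel values.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    # Biased estimator of MMD^2: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean())

# Usage: compare embeddings seen at training time against recent production data;
# a large MMD^2 suggests the input distribution has drifted.
reference = np.random.randn(200, 64)          # stand-in for training-time embeddings
production = np.random.randn(200, 64) + 0.5   # stand-in for drifted production embeddings
print(mmd_squared(reference, production))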
Subverting Two Character Stereotypes at Once: Exploring AI's Role in Subverting Stereotypes
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131401
Xiaohan Feng, Makoto Murakami
The aim of this paper is to explore different ways of using AI to subvert stereotypes more efficiently and effectively. It also enumerates the advantages and disadvantages of each approach, helping creators select the most appropriate method for their specific situations. AI opens up new possibilities, enabling anyone to effortlessly generate visually striking images without the need for artistic skill. However, because it learns from large amounts of data, it also tends to reproduce and amplify stereotypes. Consequently, stereotypes are becoming more prevalent and serious than ever before. Our belief is that this situation can be used in reverse: summarize stereotypes with AI and then subvert them through an exchange of elements. In this study, we attempted to develop a less time-consuming method of challenging character stereotypes built around the concept of "exchange." We selected two character archetypes, the "tyrant" and the "mad scientist," and summarized their stereotypes by generating AI images and asking ChatGPT questions. Additionally, we surveyed real historical tyrants to gain insight into their behavior and characteristics, which helped us understand the reasons behind the stereotyping in artwork depicting tyrants. Based on this understanding, we chose which stereotypes to retain, with the intention of helping the audience better recognize the identity of the character. Finally, the two remaining character stereotypes were exchanged and the design was completed. This paper documents the last and most time-consuming of the methods: by examining a large number of sources and identifying which stereotypical influences were at work, we achieved a stronger subversion of the stereotypes. The other methods are much less time-consuming but somewhat more random; whether one chooses by subjective experience or by the most frequent options, there is no guarantee of the best outcome. The time-consuming method, by contrast, best ensures that the audience can still quickly identify the original character while the two characters are moved furthest away from their original stereotypical images. In conclusion, if the designer has sufficient time, AI portrait + research or ChatGPT + research can be chosen. If time is limited, the remaining methods can be used instead; they take less time, and the designer can try them all to reach the desired result.
Chunker Based Sentiment Analysis for Nepali Text
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131406
A. Yajnik, Sabu Lama Tamang
This article presents sentiment analysis (SA) of Nepali sentences. A Skip-gram model is used for word-to-vector encoding. In the first experiment, the vector representation of each sentence is generated with the Skip-gram model and then classified with a Multi-Layer Perceptron (MLP); an F1 score of 0.6486 is achieved for positive-negative classification, with an overall accuracy of 68%. In the second experiment, verb chunks are extracted using a Nepali parser and the same procedure is applied to the verb chunks; an F1 score of 0.6779 is observed for positive-negative classification, with an overall accuracy of 85%. Hence, chunker-based sentiment analysis proves better than sentiment analysis over whole sentences.
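A minimal sketch of the described pipeline, skip-gram word vectors (gensim, sg=1) followed by an MLP classifier, is shown below. The toy tokenised sentences and labels are placeholders rather than the Nepali corpus, and averaging word vectors to form a sentence vector is an assumption, since the abstract does not specify how sentence vectors are composed.

# Hedged sketch: skip-gram embeddings + MLP for positive/negative classification.
import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

sentences = [["ramro", "cha"], ["naramro", "lagyo"], ["dherai", "ramro", "lagyo"]]  # toy tokens
labels = [1, 0, 1]  # 1 = positive, 0 = negative (placeholder labels)

w2v = Word2Vec(sentences, vector_size=50, sg=1, min_count=1, epochs=50)  # sg=1 -> skip-gram

def sentence_vector(tokens):
    # Assumption: average the word vectors of the tokens present in the vocabulary.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.vstack([sentence_vector(s) for s in sentences])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict(X))

For the chunk-based variant, the same steps would be applied to the verb chunks produced by the Nepali parser instead of whole sentences.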
Unveiling the Power of TAG Using Statistical Parsing for Natural Languages
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131407
Pavan Kurariya, Prashant Chaudhary, Jahnavi Bodhankar, Lenali Singh, Ajai Kumar
The revolution in Artificial Intelligence (AI) began when machines could decipher enigmatic symbols concealed within messages. Subsequently, with the progress of Natural Language Processing (NLP), machines attained the capacity to understand and comprehend human language. Tree Adjoining Grammar (TAG) has become a powerful grammatical formalism for processing large-scale grammars. However, TAG relies largely on grammars created by language experts, and because of structural ambiguity in natural languages, the computational complexity of TAG parsing is very high, O(n^6). We observed that the rule-based approach has serious flaws: first, language evolves over time, and it is impossible to create a grammar extensive enough to represent every structure found in real-world language; second, developing a practical solution requires too much time and too many language resources. These difficulties motivated us to explore an alternative instead of relying completely on the rule-based method. In this paper, we propose a statistical parsing algorithm for natural languages (NL) using the TAG formalism, in which the parser makes crucial use of a data-driven model to identify the syntactic dependencies of complex structures. We observed that a probabilistic model combined with even limited training data can significantly improve both the quality and the performance of the TAG parser. We also demonstrate that the new parser outperforms the previous rule-based parser on a given sample corpus. Our experiments on several Indian languages provide further support for the claim that this approach may be a promising solution for problems that require rich structural analysis of a corpus and the construction of syntactic dependencies for any natural language, without depending heavily on the manual process of grammar creation. Finally, we present results of our ongoing research, in which a probability model is applied to select the appropriate adjunction at any given node of the elementary trees, and state chart representations are shared across derivations.
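The sketch below is only a schematic illustration, not the authors' parser: it shows the general idea of preferring an adjunction at a node of an elementary tree according to corpus-estimated probabilities rather than hand-written rules. The tree names and counts are invented for illustration.

# Schematic sketch: pick the adjunction for a node by corpus-estimated probability.
from collections import Counter

# Hypothetical counts of which auxiliary tree was adjoined at a VP node in a
# treebank-derived sample ("none" = no adjunction).
adjunction_counts = Counter({"beta_adverb": 40, "beta_modal": 25, "none": 35})

def adjunction_probabilities(counts):
    total = sum(counts.values())
    return {tree: c / total for tree, c in counts.items()}

def best_adjunction(counts):
    probs = adjunction_probabilities(counts)
    return max(probs, key=probs.get), probs

choice, probs = best_adjunction(adjunction_counts)
print(choice, probs)   # the parser would prefer 'beta_adverb' at this node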
What's in a Domain? Analysis of URL Features
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131409
John Hawkins
Many data science problems require processing log data derived from web pages, APIs, or other internet traffic sources. URLs are one of the few ubiquitous data fields that describe internet activity, hence they require effective processing for a wide variety of machine learning applications. While URLs are structurally rich, that structure can be both domain-specific and subject to change over time, making feature engineering for URLs an ongoing challenge. In this research we outline the key structural components of URLs and discuss the information available within each. We describe methods for generating features on these URL components and share an open-source implementation of these ideas. In addition, we describe a method for exploring URL feature importance that allows for comparison and analysis of the information available inside URLs. We experiment with a collection of URL classification datasets and demonstrate the utility of these tools. The package and source code are available at https://pypi.org/project/url2features.
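As an illustration of the kind of URL component features discussed above, the sketch below derives a few simple features with the Python standard library. It does not use the authors' url2features package, whose API is not described in the abstract, and the feature names are assumptions.

# Hedged sketch: features over the structural components of a URL.
from urllib.parse import urlparse

def url_features(url):
    parts = urlparse(url)
    domain_labels = parts.netloc.split(".")
    return {
        "scheme": parts.scheme,                      # http / https
        "registered_domain": ".".join(domain_labels[-2:]) if len(domain_labels) >= 2 else parts.netloc,
        "subdomain_count": max(len(domain_labels) - 2, 0),
        "path_depth": len([p for p in parts.path.split("/") if p]),
        "path_length": len(parts.path),
        "query_param_count": len([q for q in parts.query.split("&") if q]),
        "has_port": ":" in parts.netloc,
    }

print(url_features("https://shop.example.com/products/shoes?id=7&ref=home"))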
Umeed: VR Game Using NLP Models and Latent Semantic Analysis for Conversation Therapy for People with Speech Disorders
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131408
Umeed VR Game
UmeedVR aims to create a conversational therapy VR game that uses natural language processing for patients with speech disorders such as autism or aphasia. This study developed 5 psychological task sets and 3 environments via Maya and Unity. The topic-modeling AI, employing 25 live participants' recordings and 980+ TwineAI datasets, generated initial VR grading with a coherence score averaging 6.98 themes in 5-minute conversations across scenarios, forming a foundation for enhancements. Employing latent semantic analysis (gensim corpus, Python) and Term Frequency-Inverse Document Frequency (TF-IDF), grammatical errors and user-specific improvements were addressed. Results were visualized via audio-visual plots, highlighting conversation topics based on occurrence and interpretability. Umeed enhances cognitive and intuitive skills, elevating the average number of topics from 6.98 to 13.56 in a 5-minute conversation, with a 143.12 coherence score. LSA achieved 98.39% accuracy and topic modeling 100%. Significantly, real-time grammatical correction was integrated into the game.
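A minimal sketch of the LSA/TF-IDF step mentioned above, using gensim, is shown below. The toy transcript snippets are placeholders rather than the study's participant recordings, and the topic count is an illustrative assumption.

# Hedged sketch: TF-IDF weighting followed by latent semantic analysis (LSI) in gensim.
from gensim import corpora, models

transcripts = [
    "i went to the park and played with my friend",
    "we cooked dinner together and talked about school",
    "the park was sunny and we played football",
]
texts = [doc.lower().split() for doc in transcripts]

dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(t) for t in texts]

tfidf = models.TfidfModel(bow_corpus)                         # weight terms by TF-IDF
lsi = models.LsiModel(tfidf[bow_corpus], id2word=dictionary, num_topics=2)

for topic_id, topic in lsi.print_topics(num_topics=2, num_words=4):
    print(topic_id, topic)   # dominant terms per latent topic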
Kidney CT Image Analysis Using CNN
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131403
Harshit Mittal
Medical image analysis is a vital component of modern medical practice, and the accuracy of such analysis is critical for accurate diagnosis and treatment. Computed tomography (CT) scans are commonly used to visualize the kidneys and identify abnormalities such as cysts, tumors, and stones. Manual interpretation of CT images can be time-consuming and subject to human error, leading to inaccurate diagnosis and treatment. Deep learning models based on Convolutional Neural Networks (CNNs) have shown promise in improving the accuracy and speed of medical image analysis. In this study, we present a CNN-based model to accurately classify CT images of the kidney into four categories: Normal, Cyst, Tumor, and Stone, using the CT KIDNEY DATASET. The proposed CNN model achieved an accuracy of 99.84% on the test set, with a precision of 0.9964, a recall of 0.9986, and an F1-score of 0.9975 across all categories. The model classified nearly all images in the test set correctly, indicating its high accuracy in identifying abnormalities in CT images of the kidney. The results of this study demonstrate the potential of deep learning models based on CNNs in accurately classifying CT images of the kidney, which could lead to improved diagnosis and treatment outcomes for patients. This study contributes to the growing body of literature on the use of deep learning models in medical image analysis, highlighting the potential of these models in improving the accuracy and efficiency of medical diagnosis.
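As an illustration of the kind of CNN classifier described above, the following Keras sketch defines a small network for the four kidney CT classes. The input resolution, layer sizes, and training settings are assumptions; the paper's exact architecture is not given in the abstract.

# Hedged sketch: small CNN for four-class kidney CT classification.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),                 # assumed grayscale CT slices
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),             # Normal, Cyst, Tumor, Stone
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would look something like:
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)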
Brands, Verticals and Contexts: Coherence Patterns in Consumer Attention
Pub Date: 2023-08-19 | DOI: 10.5121/csit.2023.131410
John Hawkins
Consumers are expected to partially reveal their preferences and interests through the media they consume. The development of visual attention measurement with eye tracking technologies allows us to investigate the consistency of these preferences across the creative executions of a given brand and over all brands within a given vertical. In this study we use a large-scale attention measurement dataset to analyse a collection of digital display advertising impressions across a variety of industry verticals. We evaluate the extent to which the high attention contexts for a given brand's ads remain consistent for that brand, and the extent to which those contexts remain consistent across many brands within an industry vertical. The results illustrate that consumer attention on advertising can vary significantly across creatives for a specific brand, and across a vertical. Nevertheless, there are coherence effects across campaigns that are stronger than random, and that contain actionable information at the level of industry vertical categorisation.