Abstract In this article the concept of domination in signed graphs is examined from an alternate perspective and a new definition is introduced. A vertex subset D of a signed graph S is a dominating set if for each vertex v not in D there exists a vertex u ∈ D such that the sign of the edge uv is positive. The domination number γ(S) of S is the minimum cardinality among all dominating sets of S. We obtain certain bounds on γ(S) and present a necessary and sufficient condition for a dominating set to be a minimal dominating set. Further, we characterise the signed graphs having small and large values of the domination number.
{"title":"On domination in signed graphs","authors":"James Joseph, Mayamma Joseph","doi":"10.2478/ausi-2023-0001","DOIUrl":"https://doi.org/10.2478/ausi-2023-0001","url":null,"abstract":"Abstract In this article the concept of domination in signed graphs is examined from an alternate perspective and a new definition of the same is introduced. A vertex subset D of a signed graph S is a dominating set, if for each vertex v not in D there exists a vertex u ∈ D such that the sign of the edge uv is positive. The domination number γ (S) of S is the minimum cardinality among all the dominating sets of S. We obtain certain bounds of γ (S) and present a necessary and sufficient condition for a dominating set to be a minimal dominating set. Further, we characterise the signed graphs having small and large values for domination number.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"4 1","pages":"1 - 9"},"PeriodicalIF":0.3,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73024192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
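The definition in the abstract is concrete enough to sketch directly. The following is a minimal illustration of our own (not the paper's code): a signed graph is modelled as a vertex set plus the set of positively signed edges, and the domination number is found by brute force over vertex subsets.

```python
from itertools import combinations

def is_dominating(vertices, positive_edges, D):
    """D dominates the signed graph if every vertex outside D is joined
    to some vertex of D by a positively signed edge."""
    return all(
        any(frozenset((u, v)) in positive_edges for u in D)
        for v in vertices if v not in D
    )

def domination_number(vertices, positive_edges):
    """Minimum cardinality over all dominating sets, by exhaustive search.
    The full vertex set is always dominating (vacuously)."""
    for k in range(1, len(vertices) + 1):
        for D in combinations(vertices, k):
            if is_dominating(vertices, positive_edges, set(D)):
                return k
    return len(vertices)

# A triangle on {1, 2, 3} where only the edges 12 and 13 are positive:
# D = {1} dominates both remaining vertices, so the domination number is 1.
V = {1, 2, 3}
pos = {frozenset((1, 2)), frozenset((1, 3))}
print(domination_number(V, pos))  # → 1
```

The exponential search is only for illustration on tiny examples; the paper's interest is in bounds and characterisations, not in computing γ(S) this way.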
Abstract Automatic detection of tissue types on whole-slide images (WSI) is an important task in computational histopathology that can be solved with convolutional neural networks (CNN) with high accuracy. However, the black-box nature of CNNs rightfully raises concerns about using them for this task. In this paper, we reformulate the task of tissue type detection into multiple binary classification problems to simplify the justification of model decisions. We propose an adapted Bag-of-local-Features interpretable CNN for solving this problem, which we train on eight newly introduced binary tissue classification datasets. The performance of the model is evaluated simultaneously with its decision-making process using logit heatmaps. Our model achieves better performance than its non-interpretable counterparts, while also being able to provide human-readable justification for its decisions. Furthermore, the problem of data scarcity in computational histopathology is accounted for by using data augmentation techniques to improve both the performance and the validity of model decisions. The source code and binary datasets can be accessed at: https://github.com/galigergergo/BolFTissueDetect.
{"title":"Explainable patch-level histopathology tissue type detection with bag-of-local-features models and data augmentation","authors":"Gergő Galiger, Z. Bodó","doi":"10.2478/ausi-2023-0006","DOIUrl":"https://doi.org/10.2478/ausi-2023-0006","url":null,"abstract":"Abstract Automatic detection of tissue types on whole-slide images (WSI) is an important task in computational histopathology that can be solved with convolutional neural networks (CNN) with high accuracy. However, the black-box nature of CNNs rightfully raises concerns about using them for this task. In this paper, we reformulate the task of tissue type detection to multiple binary classification problems to simplify the justification of model decisions. We propose an adapted Bag-of-local-Features interpretable CNN for solving this problem, which we train on eight newly introduced binary tissue classification datasets. The performance of the model is evaluated simultaneously with its decision-making process using logit heatmaps. Our model achieves better performance than its non-interpretable counterparts, while also being able to provide human-readable justification for decisions. Furthermore, the problem of data scarcity in computational histopathology is accounted for by using data augmentation techniques to improve both the performance and even the validity of model decisions. 
The source code and binary datasets can be accessed at: https://github.com/galigergergo/BolFTissueDetect.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"51 1","pages":"60 - 80"},"PeriodicalIF":0.3,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90962474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
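The reformulation of multi-class tissue-type detection into several binary problems is a standard one-vs-rest decomposition. A minimal sketch of deriving the binary datasets from a multi-class labelling (our own illustration, not the authors' code; each resulting binary dataset can then train its own classifier):

```python
def one_vs_rest_labels(labels, classes):
    """For each tissue class c, build a binary labelling:
    1 where the sample belongs to c, 0 elsewhere."""
    return {c: [1 if y == c else 0 for y in labels] for c in classes}

# Hypothetical tissue labels for four patches:
tissue_labels = ["tumor", "stroma", "tumor", "mucosa"]
binary = one_vs_rest_labels(tissue_labels, ["tumor", "stroma", "mucosa"])
print(binary["tumor"])   # → [1, 0, 1, 0]
print(binary["stroma"])  # → [0, 1, 0, 0]
```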
Abstract Despite the practical importance of - and the significant need for - accurate long-term electricity price forecasts with high resolution, only a small percentage of the numerous papers on energy price forecasting have attempted to target this topic. The reason may be the high volatility of electricity prices and their hidden - and often unpredictable - relations with influencing factors. In our research, we performed different experiments to predict hourly Hungarian electricity prices using deep neural networks, for both the short and the long term. During this work, we compared the results of different network structures and determined the effect of some environmental factors (meteorological data and date/time, besides the historical electricity prices). Our results were promising, mostly for short-term forecasts, especially when using a deep neural network with one ConvLSTM encoder.
{"title":"Hourly electricity price forecast for short-and long-term, using deep neural networks","authors":"Gergely Dombi, T. Dulai","doi":"10.2478/ausi-2022-0013","DOIUrl":"https://doi.org/10.2478/ausi-2022-0013","url":null,"abstract":"Abstract Despite the practical importance of accurate long-term electricity price forecast with high resolution - and the significant need for that - only small percentage of the tremendous papers on energy price forecast attempted to target this topic. Its reason can be the high volatility of electricity prices and the hidden – and often unpredictable – relations with its influencing factors. In our research, we performed different experiments to predicate hourly Hungarian electricity prices using deep neural networks, for short-term and long-term, too. During this work, investigations were made to compare the results of different network structures and to determine the effect of some environmental factors (meteorologic data and date/time - beside the historical electricity prices). Our results were promising, mostly for short-term forecasts - especially by using a deep neural network with one ConvLSTM encoder.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"17 1","pages":"208 - 222"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75013893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Residual closeness has recently been proposed as a vulnerability measure to characterize the stability of complex networks. Residual closeness is essential in the analysis of complex networks, but costly to compute. Currently, the fastest known algorithms run in polynomial time. Motivated by the fast-growing need to compute vulnerability measures on complex networks, new algorithms for computing node and edge residual closeness are introduced in this paper. The proposed algorithms reduce the running times to Θ(n^3) and Θ(n^4) on unweighted networks, respectively, where n is the number of nodes.
{"title":"Computational complexity of network vulnerability analysis","authors":"M. Berberler","doi":"10.2478/ausi-2022-0012","DOIUrl":"https://doi.org/10.2478/ausi-2022-0012","url":null,"abstract":"Abstract Residual closeness is recently proposed as a vulnerability measure to characterize the stability of complex networks. Residual closeness is essential in the analysis of complex networks, but costly to compute. Currently, the fastest known algorithms run in polynomial time. Motivated by the fast-growing need to compute vulnerability measures on complex networks, new algorithms for computing node and edge residual closeness are introduced in this paper. Those proposed algorithms reduce the running times to Θ(n^3) and Θ(n^4) on unweighted networks, respectively, where n is the number of nodes.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"21 1","pages":"199 - 207"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79489606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
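The abstract does not restate the definition of residual closeness; assuming Dangalchev's formulation, where the closeness of a graph is C(G) = Σ_{i≠j} 2^{-d(i,j)} and the node residual closeness is the minimum closeness over all single-vertex removals, a naive reference implementation looks as follows. This is our own sketch for illustration, not the paper's faster algorithm.

```python
from collections import deque

def closeness(adj):
    """Dangalchev closeness: sum over ordered pairs (i, j), i != j,
    of 2^(-d(i, j)); unreachable pairs contribute 0. adj maps each
    node to its neighbor list."""
    total = 0.0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                       # BFS from s
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(2.0 ** -d for v, d in dist.items() if v != s)
    return total

def node_residual_closeness(adj):
    """Closeness of the worst-case single-vertex removal."""
    best = float("inf")
    for k in adj:
        sub = {u: [w for w in nbrs if w != k]
               for u, nbrs in adj.items() if u != k}
        best = min(best, closeness(sub))
    return best

# Path 0-1-2: C = 2*(1/2 + 1/2 + 1/4) = 2.5; removing the middle vertex
# disconnects the graph, so its node residual closeness is 0.
path = {0: [1], 1: [0, 2], 2: [1]}
print(closeness(path))                 # → 2.5
print(node_residual_closeness(path))   # → 0.0
```

This naive scheme runs one BFS per node per removal, which is what makes the paper's Θ(n^3) node and Θ(n^4) edge algorithms an improvement worth having.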
Abstract Automatic bokeh is one of a smartphone's essential photography effects. This effect enhances the quality of an image by throwing the subject's background out of focus, providing a soft background. Most smartphones have a single rear camera, which cannot determine which effect should be applied to which kind of image; instead, smartphones depend on different software to generate the bokeh effect on images. Blur, Color-point, Zoom, Spin, Big Bokeh, Color Picker, Low-key, High-key, and Silhouette are the popular bokeh effects. With this wide range of bokeh types available, it is difficult for the user to choose a suitable effect for their images. Deep Learning (DL) models (i.e., MobileNetV2, InceptionV3, and VGG16) are used in this work to recommend high-quality bokeh effects for images. Four thousand five hundred images were collected from online resources such as Google Images, Unsplash, and Kaggle to examine the model performance. An accuracy of 85% has been achieved for recommending different bokeh effects using the proposed MobileNetV2 model, which exceeds many of the existing models.
{"title":"Rendering automatic bokeh recommendation engine for photos using deep learning algorithm","authors":"Rakesh Kumar, Meenu Gupta, Jaismeen, Shreya Dhanta, Nishant Kumar Pathak, Yukti Vivek, Ayush Sharma, Deepak, Gaurav Ramola, S. Velusamy","doi":"10.2478/ausi-2022-0015","DOIUrl":"https://doi.org/10.2478/ausi-2022-0015","url":null,"abstract":"Abstract Automatic bokeh is one of the smartphone’s essential photography effects. This effect enhances the quality of the image where the subject background gets out of focus by providing a soft (i.e., diverse) background. Most smartphones have a single rear camera that is lacking to provide which effects need to be applied to which kind of images. To do the same, smartphones depend on different software to generate the bokeh effect on images. Blur, Color-point, Zoom, Spin, Big Bokeh, Color Picker, Low-key, High-Key, and Silhouette are the popular bokeh effects. With this wide range of bokeh types available, it is difficult for the user to choose a suitable effect for their images. Deep Learning (DL) models (i.e., MobileNetV2, InceptionV3, and VGG16) are used in this work to recommend high-quality bokeh effects for images. Four thousand five hundred images are collected from online resources such as Google images, Unsplash, and Kaggle to examine the model performance. 
85% accuracy has been achieved for recommending different bokeh effects using the proposed model MobileNetV2, which exceeds many of the existing models.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"10 1","pages":"248 - 272"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78672850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The Nirmala matrix of a graph and its energy have recently been defined. In this paper, we establish some properties of the Nirmala eigenvalues. We then propose various bounds on the Nirmala spectral radius and energy. Moreover, we derive a bound on the Nirmala energy involving the graph energy and the maximum vertex degree.
{"title":"Bounds on Nirmala energy of graphs","authors":"N. Yalçın","doi":"10.2478/ausi-2022-0017","DOIUrl":"https://doi.org/10.2478/ausi-2022-0017","url":null,"abstract":"Abstract The Nirmala matrix of a graph and its energy have recently defined. In this paper, we establish some features of the Nirmala eigenvalues. Then we propose various bounds on the Nirmala spectral radius and energy. Moreover, we derive a bound on the Nirmala energy including graph energy and maximum vertex degree.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"7 1","pages":"302 - 315"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89505250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
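The abstract does not restate the definition. Assuming the definition from the recent literature, in which the Nirmala matrix has entry √(d_i + d_j) for each edge ij (and 0 otherwise) and the Nirmala energy is the sum of the absolute values of its eigenvalues, a small numerical sketch (our own illustration; the paper should be consulted for the exact definitions):

```python
import math
import numpy as np

def nirmala_matrix(adj):
    """Entry (i, j) is sqrt(d_i + d_j) when ij is an edge, else 0
    (assumed definition)."""
    A = np.asarray(adj, dtype=float)
    deg = A.sum(axis=1)
    return np.where(A > 0, np.sqrt(deg[:, None] + deg[None, :]), 0.0)

def nirmala_energy(adj):
    """Sum of absolute values of the Nirmala eigenvalues
    (eigvalsh is valid here since the matrix is symmetric)."""
    return float(np.abs(np.linalg.eigvalsh(nirmala_matrix(adj))).sum())

# K2: both degrees are 1, the eigenvalues are ±sqrt(2), so the
# Nirmala energy is 2*sqrt(2).
K2 = [[0, 1], [1, 0]]
print(abs(nirmala_energy(K2) - 2 * math.sqrt(2)) < 1e-9)  # → True
```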
Abstract The automated segmentation of brain tissues and lesions represents a widely investigated research topic. The Brain Tumor Segmentation Challenges (BraTS), organized yearly since 2012, have provided standard training and testing data and a unified evaluation framework to the research community, which has intensified work in this research field. This paper proposes a solution to the brain tumor segmentation problem built upon the U-net architecture that is very popular in medical imaging. The proposed procedure involves two identical, cascaded U-net networks with 3D convolution. The first stage produces an initial segmentation of a brain volume, while the second stage applies a post-processing based on the labels provided by the first stage. In the first U-net based classification, each pixel is characterized by the four observed features (T1, T2, T1c, and FLAIR), while the second identical U-net works with four features extracted from the volumetric neighborhood of the pixels, representing the ratio of pixels with positive initial labeling within the neighborhood. Statistical accuracy indexes are employed to evaluate the initial and final segmentation of each MRI record. Tests based on the BraTS 2019 training data set led to average Dice scores over 87%. The post-processing step can increase the average Dice scores by 0.5%, improving most those volumes whose initial segmentation was less successful.
{"title":"A two-stage U-net approach to brain tumor segmentation from multi-spectral MRI records","authors":"Ágnes Győrfi, L. Kovács, L. Szilágyi","doi":"10.2478/ausi-2022-0014","DOIUrl":"https://doi.org/10.2478/ausi-2022-0014","url":null,"abstract":"Abstract The automated segmentation of brain tissues and lesions represents a widely investigated research topic. The Brain Tumor Segmentation Challenges (BraTS) organized yearly since 2012 provided standard training and testing data and a unified evaluation framework to the research community, which provoked an intensification in this research field. This paper proposes a solution to the brain tumor segmentation problem, which is built upon the U-net architecture that is very popular in medical imaging. The proposed procedure involves two identical, cascaded U-net networks with 3D convolution. The first stage produces an initial segmentation of a brain volume, while the second stage applies a post-processing based on the labels provided by the first stage. In the first U-net based classification, each pixel is characterized by the four observed features (T1, T2, T1c, and FLAIR), while the second identical U-net works with four features extracted from the volumetric neighborhood of the pixels, representing the ratio of pixels with positive initial labeling within the neighborhood. Statistical accuracy indexes are employed to evaluate the initial and final segmentation of each MRI record. Tests based on BraTS 2019 training data set led to average Dice scores over 87%. 
The postprocessing step can increase the average Dice scores by 0.5%, it improves more those volumes whose initial segmentation was less successful.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"45 1","pages":"223 - 247"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73865741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
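The second-stage feature described above - the ratio of positively labelled voxels within a volumetric neighborhood - can be sketched as follows. This is our own illustration with an assumed cubic window and zero padding at the volume boundary, not the authors' code:

```python
import numpy as np

def positive_ratio(labels, radius=1):
    """For each voxel, the fraction of voxels in its cubic (2r+1)^3
    neighborhood that carry a positive initial label. Voxels outside
    the volume count as negative (zero padding)."""
    lab = np.asarray(labels, dtype=float)
    p = np.pad(lab, radius)
    acc = np.zeros_like(lab)
    n0, n1, n2 = lab.shape
    # Sum the 27 (for radius 1) shifted copies of the padded volume.
    for dx in range(2 * radius + 1):
        for dy in range(2 * radius + 1):
            for dz in range(2 * radius + 1):
                acc += p[dx:dx + n0, dy:dy + n1, dz:dz + n2]
    return acc / (2 * radius + 1) ** 3

vol = np.ones((4, 4, 4))
r = positive_ratio(vol)
print(r[1, 1, 1])                        # → 1.0 (full neighborhood inside)
print(abs(r[0, 0, 0] - 8 / 27) < 1e-12)  # → True (corner: 8 of 27 voxels)
```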
Abstract We reiterate the theoretical basics of holographic associative memory, and conduct two experiments. During the first experiment, we teach the system many associations, while during the second experiment, we teach it only one association. In both cases, the recalling capability of the system is examined from different aspects.
{"title":"Experiments with holographic associative memory","authors":"G. Román","doi":"10.2478/ausi-2022-0010","DOIUrl":"https://doi.org/10.2478/ausi-2022-0010","url":null,"abstract":"Abstract We reiterate the theoretical basics of holographic associative memory, and conduct two experiments. During the first experiment, we teach the system many associations, while during the second experiment, we teach it only one association. In both cases, the recalling capability of the system is examined from different aspects.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"45 1","pages":"155 - 184"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78299877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ioan-Marius Pisak-Lukáts, L. Kovács, Szilágyi László
Abstract The automatic segmentation of medical images stands at the basis of modern medical diagnosis, therapy planning and follow-up studies after interventions. The accuracy of the segmentation is a key element in assisting the work of the physician, but the efficiency of the process is also relevant. This paper introduces a feature selection strategy that attempts to define reduced feature sets for ensemble learning methods employed in brain tumor segmentation based on MRI data, in such a way that the segmentation outcome hardly suffers any damage. Initially, the full set of observed and generated features is deployed in ensemble training and prediction on testing data, which provides information on all pairs of features from the full feature set. The extracted pairwise data is fed to a Markov clustering (MCL) algorithm, which uses a graph structure to characterize the relations between features. MCL produces connected subgraphs that are totally separated from each other. The largest such subgraph defines the group of features selected for evaluation. The proposed technique is evaluated on the high-grade and low-grade tumor records of the training dataset of the BraTS 2019 challenge, in an ensemble learning framework relying on binary decision trees. The proposed method can reduce the feature set to 30% of its initial size without losing anything in terms of segmentation accuracy, significantly contributing to the efficiency of the segmentation process. A detailed comparison of the full set of 104 features and the reduced set of 41 features is provided, with special attention to highly discriminative and redundant features within the MRI data.
{"title":"A feature selection strategy using Markov clustering, for the optimization of brain tumor segmentation from MRI data","authors":"Ioan-Marius Pisak-Lukáts, L. Kovács, Szilágyi László","doi":"10.2478/ausi-2022-0018","DOIUrl":"https://doi.org/10.2478/ausi-2022-0018","url":null,"abstract":"Abstract The automatic segmentation of medical images stands at the basis of modern medical diagnosis, therapy planning and follow-up studies after interventions. The accuracy of the segmentation is a key element in assisting the work of the physician, but the efficiency of the process is also relevant. This paper introduces a feature selection strategy that attempts to define reduced feature sets for ensemble learning methods employed in brain tumor segmentation based on MRI data such a way that the segmentation outcome hardly suffers any damage. Initially, the full set of observed and generated features are deployed in ensemble training and prediction on testing data, which provide us information on all couples of features from the full feature set. The extracted pairwise data is fed to a Markov clustering (MCL) algorithm, which uses a graph structure to characterize the relation between features. MCL produces connected subgraphs that are totally separated from each other. The largest such subgraph defines the group of features which are selected for evaluation. The proposed technique is evaluated using the high-grade and low-grade tumor records of the training dataset of the BraTS 2019 challenge, in an ensemble learning framework relying on binary decision trees. The proposed method can reduce the set of features to 30% of its initial size without losing anything in terms of segmentation accuracy, significantly contributing to the efficiency of the segmentation process. 
A detailed comparison of the full set of 104 features and the reduced set of 41 features is provided, with special attention to highly discriminative and redundant features within the MRI data.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"5 1","pages":"316 - 337"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79186347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
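The Markov clustering step used above to group features admits a compact sketch: alternate expansion (matrix squaring) and inflation (entrywise power followed by column normalization) on a flow matrix until it stabilizes, then read clusters off the attractor rows. This is generic MCL on an assumed adjacency matrix, our own illustration rather than the authors' pipeline:

```python
import numpy as np

def mcl(adjacency, inflation=2.0, iters=50, eps=1e-6):
    """Markov clustering on an undirected graph given as a 0/1
    adjacency matrix; returns a list of clusters as frozensets."""
    n = len(adjacency)
    M = np.asarray(adjacency, dtype=float) + np.eye(n)  # add self-loops
    M /= M.sum(axis=0)                                  # column-stochastic
    for _ in range(iters):
        M = M @ M                # expansion: spread flow along walks
        M = M ** inflation       # inflation: favor strong flows
        M /= M.sum(axis=0)
        M[M < 1e-12] = 0.0       # prune negligible flow
    clusters = []
    for i in range(n):
        if M[i, i] > eps:        # attractor row: its support is a cluster
            c = frozenset(np.flatnonzero(M[i] > eps).tolist())
            if c not in clusters:
                clusters.append(c)
    return clusters

# Two triangles joined by a single bridge edge (2-3): the weak bridge
# flow is pruned away and the triangles separate into two groups.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(mcl(A))
```

In the paper's setting, the nodes would be the 104 features and the edge weights the extracted pairwise information, with the largest resulting subgraph kept as the selected feature group.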
Pálma Rozália Osztián, Z. Kátai, Ágnes Sántha, Erika Osztián
Abstract In this paper we investigate the comments on the AlgoRythmics YouTube channel using the Comment Term Frequency Comparison social media analytics method. Comment Term Frequency Comparison can be a useful tool for understanding how a social media platform, such as a YouTube channel, is being discussed by users and for identifying opportunities to engage with the audience. Understanding viewer opinions and reactions to a video, identifying trends and patterns in how people discuss a particular topic, and measuring the effectiveness of a video in achieving its intended goals are among the most important considerations for developing a channel. YouTube comment analytics can thus be valuable for understanding how the AlgoRythmics channel videos are being received by viewers and for identifying opportunities for improvement. Our study focuses on the importance of user feedback, based on ten algorithm visualization videos from the AlgoRythmics channel. To find evidence of how the channel works and new ideas for improving it, we used the Comment Term Frequency Comparison method to investigate the main characteristics of user feedback. We analyzed the comments using both YouTube Studio Analytics and the Mozdeh Big Data Analysis tool.
{"title":"Investigating the AlgoRythmics YouTube channel: the Comment Term Frequency Comparison social media analytics method","authors":"Pálma Rozália Osztián, Z. Kátai, Ágnes Sántha, Erika Osztián","doi":"10.2478/ausi-2022-0016","DOIUrl":"https://doi.org/10.2478/ausi-2022-0016","url":null,"abstract":"Abstract In this paper we investigate the comments from the AlgoRythmics YouTube channel using the Comment Term Frequency Comparison social media analytics method. Comment Term Frequency Comparison can be a useful tool to understand how a social media platform, such as a Youtube channel is being discussed by users and to identify opportunities to engage with the audience. Understanding viewer opinions and reactions to a video, identifying trends and patterns in the way people are discussing a particular topic, and measuring the effectiveness of a video in achieving its intended goals is one of the most important points of view for a channel to develop. Youtube comment analytics can be a valuable tool looking to understand how the AlgoRythmics channel videos are being received by viewers and to identify opportunities for improvement. Our study focuses on the importance of user feedback based on ten algorithm visualization videos from the AlgoRythmics channel. In order to find evidence how our channel works and new ideas to improve we used the so-called comment term frequency comparison social media analytics method to investigate the main characteristics of user feedback. 
We analyzed the comments using both Youtube Studio Analytics and Mozdeh Big Data Analysis tool.","PeriodicalId":41480,"journal":{"name":"Acta Universitatis Sapientiae Informatica","volume":"6 1","pages":"273 - 301"},"PeriodicalIF":0.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76319973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
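The term-frequency comparison idea can be illustrated with a small sketch (our own, including the smoothing constant we chose; Mozdeh's exact computation may differ): relative term frequencies are computed per comment set and then compared as ratios.

```python
import re
from collections import Counter

def term_frequencies(comments):
    """Relative frequency of each lower-cased word across the comments."""
    words = re.findall(r"[a-z']+", " ".join(comments).lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def frequency_ratios(freq_a, freq_b, smoothing=1e-6):
    """Ratio > 1 means the term is relatively more frequent in set A;
    smoothing keeps terms absent from one set from dividing by zero."""
    terms = set(freq_a) | set(freq_b)
    return {t: (freq_a.get(t, 0.0) + smoothing) /
               (freq_b.get(t, 0.0) + smoothing) for t in terms}

# Hypothetical comment sets from two videos:
a = term_frequencies(["great explanation", "great video"])
b = term_frequencies(["confusing video"])
ratios = frequency_ratios(a, b)
print(ratios["great"] > 1.0)      # → True
print(ratios["confusing"] < 1.0)  # → True
```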