
Latest publications from Applied Computing and Informatics

Brain tumor classification using ResNet50-convolutional block attention module
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-21 DOI: 10.1108/aci-09-2023-0022
Oladosu Oyebisi Oladimeji, A. Ibitoye
Purpose: Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Deep learning approaches have gained popularity over traditional methods in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach that dynamically refines and amplifies model features to further elevate diagnostic capabilities. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.
Design/methodology/approach: To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
Findings: The ResNet50-CBAM outperformed existing deep learning classification methods such as the convolutional neural network (CNN), achieving 99.43% accuracy, 99.01% recall, 98.7% precision and 99.25% AUC when compared with existing classification methods on the same dataset.
Practical implications: Since the ResNet-CBAM fusion can capture spatial context while enhancing feature representation, it can be integrated into brain classification software platforms for physicians toward enhanced clinical decision-making and improved brain tumor classification.
Originality/value: This research has not been published anywhere else.
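CBAM refines a feature map with a channel-attention step followed by a spatial-attention step. The sketch below is a minimal NumPy illustration of that two-step refinement with randomly initialized weights; the reduction ratio `r = 2`, the 7×7 spatial kernel and all shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def channel_attention(x, w1, w2):
    # x: (C, H, W); w1: (C//r, C), w2: (C, C//r) form a shared MLP
    avg = x.mean(axis=(1, 2))                       # global average pool -> (C,)
    mx = x.max(axis=(1, 2))                         # global max pool -> (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # shared MLP with ReLU
    return sigmoid(mlp(avg) + mlp(mx))              # channel weights in (0, 1)

def spatial_attention(x, kernel):
    # x: (C, H, W); kernel: (2, k, k) conv weights over [avg, max] maps
    stacked = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    k = kernel.shape[-1]
    p = k // 2
    padded = np.pad(stacked, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1:]
    out = np.zeros((H, W))
    for i in range(H):                              # naive same-padding conv
        for j in range(W):
            out[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    return sigmoid(out)                             # spatial weights in (0, 1)

def cbam(x, w1, w2, kernel):
    mc = channel_attention(x, w1, w2)
    x = x * mc[:, None, None]                       # channel refinement
    ms = spatial_attention(x, kernel)
    return x * ms[None, :, :]                       # spatial refinement

rng = np.random.default_rng(0)
C, H, W, r = 8, 5, 5, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
kernel = rng.standard_normal((2, 7, 7)) * 0.1
y = cbam(x, w1, w2, kernel)
print(y.shape)  # (8, 5, 5)
```

Because both attention maps are sigmoid outputs, the refined features are always element-wise attenuations of the input, which is how CBAM suppresses noise while keeping salient responses.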
Citations: 0
Automatic measurement of cardiothoracic ratio in chest x-ray images with ProGAN-generated dataset
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-04-18 DOI: 10.1108/aci-11-2022-0322
Worapan Kusakunniran, P. Saiviroonporn, T. Siriapisith, T. Tongdee, Amphai Uraiverotchanakorn, Suphawan Leesakul, Penpitcha Thongnarintr, Apichaya Kuama, Pakorn Yodprom
Purpose: Cardiomegaly can be determined from the cardiothoracic ratio (CTR), which can be measured in a chest x-ray image. It is calculated from the relationship between the size of the heart and the transverse dimension of the chest. Cardiomegaly is identified when the ratio is larger than a cut-off threshold. This paper aims to propose a solution for calculating the ratio in order to classify cardiomegaly in chest x-ray images.
Design/methodology/approach: The proposed method begins with constructing lung and heart segmentation models based on the U-Net architecture, using publicly available datasets with ground-truth heart and lung masks. The ratio is then calculated from the sizes of the segmented lung and heart areas. In addition, Progressive Growing of GANs (PGAN) is adopted to construct a new dataset containing chest x-ray images of three classes: male normal, female normal and cardiomegaly. This dataset is used to evaluate the proposed solution, and the proposed solution is in turn used to evaluate the quality of the chest x-ray images generated by PGAN.
Findings: In the experiments, the trained models are applied to segment the heart and lung regions in chest x-ray images from a self-collected dataset. The calculated CTR values are compared with values measured manually by human experts; the average error is 3.08%. The models are also applied to segment heart and lung regions for the CTR calculation on the dataset generated by PGAN, and cardiomegaly is then determined under various cut-off threshold values. With the standard cut-off of 0.50, the proposed method achieves 94.61% accuracy, 88.31% sensitivity and 94.20% specificity.
Originality/value: The proposed solution is demonstrated to be robust across unseen datasets for segmentation, CTR calculation and cardiomegaly classification, including the dataset generated by PGAN. The cut-off value can be lowered below 0.50 to increase sensitivity. For example, a sensitivity of 97.04% can be achieved at a cut-off of 0.45; however, the specificity then decreases from 94.20% to 79.78%.
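For illustration, one common definition of the CTR divides the maximal horizontal cardiac width by the maximal thoracic width. Given binary heart and lung masks from a segmentation model, that definition can be sketched as below; the paper states only that the ratio is computed from the segmented areas, so its exact measurement may differ.

```python
import numpy as np

def width(mask):
    # widest horizontal extent of a binary mask, in pixels
    cols = np.where(mask.any(axis=0))[0]
    return 0 if cols.size == 0 else int(cols[-1] - cols[0] + 1)

def cardiothoracic_ratio(heart_mask, lung_mask):
    # CTR = maximal horizontal cardiac width / maximal thoracic width
    return width(heart_mask) / width(lung_mask)

# toy masks standing in for U-Net outputs
heart = np.zeros((10, 10), bool); heart[4:7, 3:8] = True   # 5 px wide
lungs = np.zeros((10, 10), bool); lungs[2:9, 1:10] = True  # 9 px wide
ctr = cardiothoracic_ratio(heart, lungs)
print(round(ctr, 3))  # 0.556
is_cardiomegaly = ctr > 0.50  # standard cut-off; lower it to trade specificity for sensitivity
```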
Citations: 0
Cyber threat: its origins and consequence and the use of qualitative and quantitative methods in cyber risk assessment
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2022-12-26 DOI: 10.1108/aci-07-2022-0178
James R. Crotty, E. Daniel
Purpose: Consumers increasingly rely on organisations for online services and data storage, while these same institutions seek to digitise the information assets they hold to create economic value. Cybersecurity failures arising from malicious or accidental actions can lead to significant reputational and financial loss, which organisations must guard against. Despite having some critical weaknesses, qualitative cybersecurity risk analysis is widely used in developing cybersecurity plans. This research explores these weaknesses, considers how quantitative methods might address the constraints and seeks the insights and recommendations of leading cybersecurity practitioners on the use of qualitative and quantitative cyber risk assessment methods.
Design/methodology/approach: The study is based upon a literature review and thematic analysis of in-depth qualitative interviews with 16 senior cybersecurity practitioners representing financial services and advisory companies from across the world.
Findings: While most organisations continue to rely on qualitative methods for cybersecurity risk assessment, some are also actively using quantitative approaches to enhance their cybersecurity planning efforts. The primary recommendation of this paper is that organisations should adopt both a qualitative and a quantitative cyber risk assessment approach.
Originality/value: This work provides the first insight into how senior practitioners are using and combining qualitative and quantitative cybersecurity risk assessment, and highlights the need for in-depth comparisons of these two different approaches.
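As a concrete example of the kind of quantitative method under discussion, annualized loss expectancy (ALE), a standard quantitative risk formula not specific to this paper, multiplies the single loss expectancy (asset value × exposure factor) by the expected annual rate of occurrence. The figures below are hypothetical.

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    # SLE = asset value x exposure factor; ALE = SLE x ARO
    sle = asset_value * exposure_factor
    return sle * annual_rate_of_occurrence

# hypothetical figures: a $2M data store, 40% loss per breach, 0.1 breaches/year
ale = annualized_loss_expectancy(2_000_000, 0.40, 0.1)
print(ale)  # 80000.0
```

A figure like this lets a security budget be compared directly against expected loss, which is the kind of comparison purely qualitative ratings (low/medium/high) cannot support.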
Citations: 1
Detecting and staging diabetic retinopathy in retinal images using multi-branch CNN
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2022-12-06 DOI: 10.1108/aci-06-2022-0150
Worapan Kusakunniran, Sarattha Karnjanapreechakorn, Pitipol Choopong, T. Siriapisith, N. Tesavibul, N. Phasukkijwatana, S. Prakhunhungsit, Sutasinee Boonsopon
Purpose: This paper aims to propose a solution for detecting and grading diabetic retinopathy (DR) in retinal images using a convolutional neural network (CNN)-based approach. It classifies input retinal images into a normal class or an abnormal class, with the abnormal class further split into four stages of abnormality automatically.
Design/methodology/approach: The proposed solution is developed based on a newly proposed CNN architecture, namely DeepRoot. It consists of one main branch connected to two side branches. The main branch serves as the primary extractor of both high-level and low-level features of retinal images. The side branches then extract more complex and detailed features from the features output by the main branch. They are designed to capture details of small traces of DR in retinal images, using modified zoom-in/zoom-out and attention layers.
Findings: The proposed method is trained, validated and tested on the Kaggle dataset. The generalization of the trained model is evaluated using unseen data samples, self-collected from a real hospital scenario. It achieves a promising performance with a sensitivity of 98.18% under the two-class scenario.
Originality/value: The new CNN-based architecture (i.e. DeepRoot) is introduced with the concept of a multi-branch network. It can assist in solving the problem of an unbalanced dataset, especially when there are common characteristics across different classes (i.e. the four stages of DR). Different classes can be output at different depths of the network.
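The main-branch/side-branch idea amounts to one shared feature extractor feeding multiple output heads at different depths. The toy NumPy forward pass below uses dense layers and invented sizes purely to illustrate that topology; DeepRoot itself is convolutional and its real layer shapes are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda v: np.maximum(v, 0.0)

# hypothetical layer sizes; DeepRoot's actual layers are convolutional
W_main = rng.standard_normal((64, 128)) * 0.05   # main branch: shared extractor
W_side1 = rng.standard_normal((5, 64)) * 0.05    # side head: normal + 4 DR stages
W_side2 = rng.standard_normal((2, 64)) * 0.05    # side head: normal vs abnormal

def forward(x):
    h = relu(W_main @ x)           # shared low/high-level features
    stage_logits = W_side1 @ h     # fine-grained head: five classes
    binary_logits = W_side2 @ h    # coarse head: two classes
    return stage_logits, binary_logits

x = rng.standard_normal(128)       # stand-in for an encoded retinal image
stages, binary = forward(x)
print(stages.shape, binary.shape)  # (5,) (2,)
```

Training both heads against the same trunk is one way a network can exploit characteristics shared across classes while still emitting coarse and fine predictions, which is the unbalanced-dataset benefit the abstract describes.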
Citations: 1
A lightweight deep learning approach to mouth segmentation in color images
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2022-12-05 DOI: 10.1108/aci-08-2022-0225
Kittisak Chotikkakamthorn, P. Ritthipravat, Worapan Kusakunniran, Pimchanok Tuakta, Paitoon Benjapornlert
Purpose: Mouth segmentation is one of the challenging tasks in developing lip-reading applications, due to illumination, low chromatic contrast and complex mouth appearance. Recently, deep learning methods have effectively solved mouth segmentation problems with state-of-the-art performance. This study presents a modified Mobile DeepLabV3-based technique with a comprehensive evaluation on mouth datasets.
Design/methodology/approach: This paper presents a novel approach to mouth segmentation using the Mobile DeepLabV3 technique with integrated decode and auxiliary heads. Extensive data augmentation, online hard example mining (OHEM) and transfer learning were applied. CelebAMask-HQ and a mouth dataset from 15 healthy subjects in the department of rehabilitation medicine, Ramathibodi hospital, are used to validate mouth segmentation performance.
Findings: The technique achieved better performance on CelebAMask-HQ than existing segmentation techniques, with a mean Jaccard similarity coefficient (JSC), mean classification accuracy and mean Dice similarity coefficient (DSC) of 0.8640, 93.34% and 0.9267, respectively. It also achieved better performance on the mouth dataset, with a mean JSC, mean classification accuracy and mean DSC of 0.8834, 94.87% and 0.9367, respectively. The proposed technique achieved an inference time of 48.12 ms per image.
Originality/value: The modified Mobile DeepLabV3 technique was developed with extensive data augmentation, OHEM and transfer learning. It achieves better mouth segmentation performance than existing techniques, making it suitable for implementation in further lip-reading applications.
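The reported JSC and DSC figures are standard overlap measures between a predicted mask and a ground-truth mask; on binary arrays they reduce to a few lines:

```python
import numpy as np

def jaccard(pred, target):
    # JSC = |A ∩ B| / |A ∪ B|
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    # DSC = 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total else 1.0

# toy masks: prediction covers 4 px, ground truth 6 px, overlap 4 px
pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True
target = np.zeros((4, 4), bool); target[1:3, 1:4] = True
print(round(jaccard(pred, target), 3), round(dice(pred, target), 3))  # 0.667 0.8
```

Note that DSC is always at least as large as JSC on the same pair of masks (DSC = 2·JSC / (1 + JSC)), which is consistent with the paper reporting 0.9267 Dice against 0.8640 Jaccard.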
Citations: 0
Use of chatbots for customer service in MSMEs
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2022-11-07 DOI: 10.1108/aci-06-2022-0148
Jorge Cordero, L. Barba-Guaman, Franco Guamán
Purpose: This research arises from the development of new communication channels, such as chatbots, for customer service in micro, small and medium enterprises (MSMEs). In particular, the results of usability testing of three chatbots implemented in MSMEs are presented.
Design/methodology/approach: The methodology covers the participants, the chatbot development platform, the research methodology, the software development methodology and the usability test used to contextualize the study's results.
Findings: Based on the results obtained from the System Usability Scale (SUS), and considering the accuracy of the chatbots' responses, it is concluded that the level of satisfaction in using chatbots is high; therefore, if the chatbot is well integrated with the communication systems/channels of the MSMEs, the client receives an excellent, fast and efficient service.
Originality/value: The paper analyzes chatbots for customer service and presents the usability testing results of three chatbots implemented in MSMEs.
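The SUS questionnaire the study relies on is scored with the standard formula: odd (positively worded) items contribute their 1–5 score minus 1, even (negatively worded) items contribute 5 minus their score, and the sum is scaled to a 0–100 range.

```python
def sus_score(responses):
    # responses: ten 1-5 Likert answers in questionnaire order
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    odd = sum(r - 1 for r in responses[0::2])   # positive items: score - 1
    even = sum(5 - r for r in responses[1::2])  # negative items: 5 - score
    return (odd + even) * 2.5                   # scale 0-40 raw sum to 0-100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible answers)
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A score around 68 is conventionally treated as average usability, so the "high satisfaction" finding corresponds to per-chatbot means above that benchmark.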
Citations: 6
“Automatic” interpretation of multiple correspondence analysis (MCA) results for nonexpert users, using R programming
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2022-10-05 DOI: 10.1108/aci-07-2022-0191
Stratos Moschidis, Angelos Markos, Athanasios C. Thanopoulos
Purpose: The purpose of this paper is to create an automatic interpretation of the results of multiple correspondence analysis (MCA) for categorical variables, so that the nonexpert user can immediately and safely interpret the results, which concern the categories of variables that interact strongly and determine the trends of the subject under investigation.
Design/methodology/approach: This study is a novel theoretical approach to interpreting the results of the MCA method. The classical interpretation of MCA results is based on three indicators: the projection (F) of the category points of the variables onto the factorial axes, the contribution of a point to the creation of an axis (CTR) and the correlation (COR) of a point with an axis. The joint use of these indicators is arduous, particularly for nonexpert users, and frequently results in misinterpretations. The current study achieves a synthesis of these indicators, so that the interpretation of the results rests on a single new indicator, just as the well-known principal component analysis (PCA) for continuous variables rests on a corresponding index.
Findings: Two concepts are proposed in the new theoretical approach: the interpretative axis, corresponding to the classical factorial axis, and the interpretative plane, corresponding to the factorial plane, which offer clear and safe interpretative results in MCA.
Research limitations/implications: In the proposed automatic interpretation of the MCA results, the interpretative axes do not carry the actual projections of the points as the original factorial axes do. However, this is not of interest to the simple user, who only wants to distinguish the categories of variables that determine the interpretation of the most pronounced trends of the phenomenon under examination.
Practical implications: The results of this research can have positive implications for the dissemination of MCA as a method and its use as an integrated exploratory data analysis approach.
Originality/value: Interpreting MCA results presents difficulties for the nonexpert user and sometimes leads to misinterpretations, and this difficulty persists in the MCA's other interpretative proposals. The proposed method allows the results to be interpreted clearly and accurately, and thus contributes to the dissemination of MCA as an integrated method of categorical data analysis and exploration.
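The three classical indicators can be computed from the SVD of the matrix of standardized residuals. The NumPy sketch below does this for simple correspondence analysis on an invented toy table; MCA applies the same formulas to an indicator (or Burt) matrix built from the categorical variables.

```python
import numpy as np

# toy two-way table; in MCA, rows/columns would be variable categories
N = np.array([[20.0, 5.0, 2.0],
              [3.0, 15.0, 4.0],
              [1.0, 6.0, 18.0]])
P = N / N.sum()
r = P.sum(axis=1)                                   # row masses
c = P.sum(axis=0)                                   # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
U, sig, Vt = np.linalg.svd(S, full_matrices=False)
k = 2                                               # keep the nontrivial axes
U, sig = U[:, :k], sig[:k]

F = (U * sig) / np.sqrt(r)[:, None]     # projections of row points (indicator F)
lam = sig ** 2                          # principal inertias (eigenvalues)
CTR = (r[:, None] * F**2) / lam         # contribution of each point to each axis
d2 = (F**2).sum(axis=1, keepdims=True)  # squared chi-square distance to centroid
COR = F**2 / d2                         # quality of representation on each axis
```

Each column of CTR sums to 1 (the points share out the inertia of an axis), and each row of COR sums to 1 across the retained axes; the paper's contribution is precisely a way to read F, CTR and COR jointly without the user juggling three tables.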
“Automatic” interpretation of multiple correspondence analysis (MCA) results for nonexpert users, using R programming
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2022-10-05 DOI: 10.1108/aci-07-2022-0191
Stratos Moschidis, Angelos Markos, Athanasios C. Thanopoulos
Citations: 3
An empirical study on the use of a facial emotion recognition system in guidance counseling utilizing the technology acceptance model and the general comfort questionnaire
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2022-10-04 DOI: 10.1108/aci-06-2022-0154
Dhong Fhel K. Gom-os, Kelvin Y. Yong
PurposeThe goal of this study is to test the real-world use of an emotion recognition system.Design/methodology/approachThe researchers chose an existing algorithm with high accuracy and speed. Four of the six universal emotions (happiness, sadness, anger and surprise) are used, each associated with its own mood marker. The mood-matrix interface is then coded as a web application. Four guidance counselors and 10 students participated in testing the mood-matrix. The guidance counselors answered the technology acceptance model (TAM) questionnaire to assess its usefulness, and the students answered the general comfort questionnaire (GCQ) to assess their comfort levels.FindingsResults from the TAM show that the mood-matrix is of significant use to the guidance counselors, and the GCQ shows that the students were comfortable during testing.Originality/valueNo study has yet tested an emotion recognition system applied to counseling or other mental health or psychological services.
Citations: 0
Subject independent emotion recognition using EEG and physiological signals – a comparative study
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2022-09-29 DOI: 10.1108/aci-03-2022-0080
Manju Priya Arthanarisamy Ramaswamy, Suja Palaniswamy
PurposeThe aim of this study is to investigate the subject-independent emotion recognition capabilities of EEG and peripheral physiological signals, namely: electrooculogram (EOG), electromyography (EMG), electrodermal activity (EDA), temperature, plethysmograph and respiration. The experiments are conducted on both modalities independently and in combination. This study ranks the physiological signals based on the prediction accuracy obtained on test data using time and frequency domain features.Design/methodology/approachThe DEAP dataset is used in this experiment. Time and frequency domain features of the EEG and physiological signals are extracted, followed by correlation-based feature selection. Classifiers, namely Naïve Bayes, logistic regression, linear discriminant analysis, quadratic discriminant analysis, logit boost and stacking, are trained on the selected features. Based on the performance of the classifiers on the test set, the best modality for each dimension of emotion is identified.FindingsThe experimental results with EEG as one modality and all physiological signals as another indicate that EEG signals are better at arousal prediction than physiological signals by 7.18%, while physiological signals are better at valence prediction than EEG signals by 3.51%. The valence prediction accuracy of EOG is superior to zygomaticus electromyography (zEMG) and EDA by 1.75%, at the cost of a higher number of electrodes. This paper concludes that valence can be measured from the eyes (EOG) while arousal can be measured from changes in blood volume (plethysmograph).
The sorted order of physiological signals based on arousal prediction accuracy is plethysmograph, EOG (hEOG + vEOG), vEOG, hEOG, zEMG, tEMG, temperature, EMG (tEMG + zEMG), respiration and EDA, while based on valence prediction accuracy the sorted order is EOG (hEOG + vEOG), EDA, zEMG, hEOG, respiration, tEMG, vEOG, EMG (tEMG + zEMG), temperature and plethysmograph.Originality/valueMany emotion recognition studies in the literature are subject dependent, and the few subject-independent studies report the average leave-one-subject-out (LOSO) validation result as accuracy. The work reported in this paper sets the baseline for subject-independent emotion recognition using the DEAP dataset by clearly specifying the subjects used in the training and test sets. In addition, this work specifies the cut-off score used to classify the scale as low or high in the arousal and valence dimensions. Generally, statistical features are used for emotion recognition with physiological signals as a modality, whereas in this work, time and frequency domain features of the physiological signals and EEG are used. This paper concludes that valence can be identified from EOG while arousal can be predicted from the plethysmograph.
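The leave-one-subject-out protocol mentioned above can be sketched in a few lines. The data below are random stand-ins for the DEAP features, and the nearest-centroid classifier is a deliberately simple substitute for the classifiers named in the abstract; only the cross-validation structure is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the extracted features: 8 subjects, 20 trials
# each, 12 time/frequency-domain features per trial, binary low/high labels.
n_subjects, n_trials, n_feat = 8, 20, 12
X = rng.normal(size=(n_subjects * n_trials, n_feat))
y = rng.integers(0, 2, size=n_subjects * n_trials)
groups = np.repeat(np.arange(n_subjects), n_trials)  # subject id per trial

def nearest_centroid_predict(X_train, y_train, X_test):
    # Simple stand-in classifier: assign each test trial to the class
    # whose training centroid is nearest in feature space.
    centroids = np.stack([X_train[y_train == k].mean(axis=0) for k in (0, 1)])
    dists = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

# Leave-one-subject-out: every subject is held out once as the test set,
# so no trial from the test subject ever appears in training.
accs = []
for subject in np.unique(groups):
    train, test = groups != subject, groups == subject
    pred = nearest_centroid_predict(X[train], y[train], X[test])
    accs.append(float((pred == y[test]).mean()))

print("per-subject accuracy:", np.round(accs, 2))
print(f"mean LOSO accuracy: {np.mean(accs):.3f}")
```

Reporting the mean over held-out subjects, as the last line does, is exactly the "average LOSO validation result" the abstract refers to.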
Citations: 6
Measuring digital transformation in higher education institutions – content validity instrument
Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2022-09-09 DOI: 10.1108/aci-03-2022-0069
Lina María Castro Benavides, Johnny Alexander Tamayo Arias, D. Burgos, A. Martens
PurposeThis study aims to validate the content of an instrument which identifies the organizational, sociocultural and technological characteristics that foster digital transformation (DT) in higher education institutions (HEIs) through the Delphi method.Design/methodology/approachThe methodology is quantitative, non-experimental and descriptive in scope. First, expert judges were selected; second, Aiken's V coefficients were obtained. Nine experts were considered for the validation.FindingsThis study's findings show that the instrument has content validity and that there was strong consensus among the judges. The instrument consists of 29 questions; 13 items were adjusted and 2 were merged.Originality/valueA novel instrument for measuring DT at HEIs was designed and has content validity, evidenced by Aiken's V coefficients of 0.91 at a 0.05 significance level, and consensus among the judges, evidenced by a consensus coefficient of 0.81.
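Aiken's V, the content-validity index reported above, is straightforward to compute per item. A minimal sketch follows, assuming a 1-5 relevance scale; the item ratings themselves are hypothetical, with nine judges matching the number of experts in the study:

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V for one item rated by n judges on a lo..hi scale.

    V = sum(rating - lo) / (n * (hi - lo)); V = 1 means every judge
    gave the item the top rating, V = 0 means every judge gave the lowest.
    """
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Hypothetical ratings for one item from nine expert judges
item_ratings = [5, 5, 4, 5, 5, 4, 5, 5, 5]
print(round(aikens_v(item_ratings), 3))  # prints 0.944
```

An item is typically retained when its V exceeds a critical value for the chosen significance level, which is how a per-item threshold such as the study's 0.05 level is applied in practice.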
Citations: 3