IEEE Computer Society Volunteer Service Awards
Pub Date : 2024-05-01 | DOI: 10.1109/mitp.2024.3390191
Who’s Doing the Work? What C-Suites Should Know About Sourcing
Pub Date : 2024-05-01 | DOI: 10.1109/mitp.2024.3375570
Stephen J. Andriole
Alternative sourcing models include insourcing, cosourcing, and outsourcing. Every C-Suite on the planet should understand the strengths, weaknesses, and appropriateness of each model. The major distinction is “brains” versus “brawn”: decisions must be made about what can be insourced, cosourced, and outsourced. The rule of thumb is to keep the brains in house and leave the brawn to outsourcers, that is, to keep strategy, innovation, and AI in house and to outsource operational requirements.
{"title":"Who’s Doing the Work? What C-Suites Should Know About Sourcing","authors":"Stephen J. Andriole","doi":"10.1109/mitp.2024.3375570","DOIUrl":"https://doi.org/10.1109/mitp.2024.3375570","url":null,"abstract":"Alternative sourcing models include insourcing, cosourcing, and outsourcing. Every C-Suite on the planet should understand the strengths, weaknesses, and appropriateness of each model. The major distinction is “brains” versus “brawn” where decisions must be made about what can be insourced, cosourced, and outsourced. The rule of thumb is to keep brains in house and leave the brawn to outsources, to keep strategy, innovation, and AI inhouse, and outsource operational requirements.","PeriodicalId":49045,"journal":{"name":"IT Professional","volume":"57 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140833608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Data-Driven Classification Framework for Cybersecurity Breaches
Pub Date : 2024-05-01 | DOI: 10.1109/mitp.2024.3374096
Priyanka Rani, Abhijit Kumar Nag, Rifat Shahriyar
Unauthorized access to sensitive or confidential data results in a data breach, which can cause significant harm to an organization. Reporting breaches and reviewing prior records can help reduce damages. To aid in preparation, antivirus and security companies have published data breach reports, but they can be difficult to comprehend and require substantial effort to study. This article proposes a data breach incident classification framework using machine learning algorithms (naive Bayes, logistic regression, support vector machine, and random forest) on a dataset from the Privacy Rights Clearinghouse. The framework’s performance is evaluated using various metrics, including accuracy, F1 score, and confusion matrix. The article also employs topic modeling with latent Dirichlet allocation to enhance the classification’s accuracy.
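As a concrete illustration of the kind of pipeline the abstract describes, the sketch below trains and scores the four mentioned classifiers on TF-IDF features extracted from breach descriptions. The file name, column names, and split parameters are illustrative assumptions, not details taken from the article.

```python
# Illustrative sketch of a breach-incident classification pipeline.
# File path and column names ("description", "breach_type") are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

df = pd.read_csv("prc_breaches.csv")           # hypothetical export of the dataset
X, y = df["description"], df["breach_type"]    # assumed text and label columns
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

models = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": LinearSVC(),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

for name, clf in models.items():
    # Each classifier gets its own TF-IDF + model pipeline.
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
    pipe.fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    print(name,
          "acc=%.3f" % accuracy_score(y_te, pred),
          "f1=%.3f" % f1_score(y_te, pred, average="macro"))
    print(confusion_matrix(y_te, pred))
```

To mirror the article’s topic-modeling step, topic proportions from latent Dirichlet allocation (for example, scikit-learn’s LatentDirichletAllocation fit on bag-of-words counts) could be concatenated with the TF-IDF features; the exact settings would depend on the dataset.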
{"title":"A Data-Driven Classification Framework for Cybersecurity Breaches","authors":"Priyanka Rani, Abhijit Kumar Nag, Rifat Shahriyar","doi":"10.1109/mitp.2024.3374096","DOIUrl":"https://doi.org/10.1109/mitp.2024.3374096","url":null,"abstract":"Unauthorized access to sensitive or confidential data results in a data breach, which can cause significant harm to an organization. Reporting breaches and reviewing prior records can help reduce damages. To aid in preparation, antivirus and security companies have published data breach reports, but they can be difficult to comprehend and require substantial effort to study. This article proposes a data breach incident classification framework using machine learning algorithms (naive Bayes, logistic regression, support vector machine, and random forest) on a dataset from the Privacy Rights Clearinghouse. The framework’s performance is evaluated using various metrics, including accuracy, F1 score, and confusion matrix. The article also employs topic modeling with latent Dirichlet allocation to enhance the classification’s accuracy.","PeriodicalId":49045,"journal":{"name":"IT Professional","volume":"32 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140833544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating the Landscape of Generative AI: Investment Trends, Industry Growth, and Economic Effects
Pub Date : 2024-05-01 | DOI: 10.1109/mitp.2024.3375569
Nir Kshetri
This article examines investment trends and industry development in generative AI (GAI) while also evaluating its economic impact on diverse sectors and economies. It analyzes global variations across regions to provide comprehensive insights into the landscape of GAI adoption.
{"title":"Navigating the Landscape of Generative AI: Investment Trends, Industry Growth, and Economic Effects","authors":"Nir Kshetri","doi":"10.1109/mitp.2024.3375569","DOIUrl":"https://doi.org/10.1109/mitp.2024.3375569","url":null,"abstract":"This article examines investment trends and industry development in generative AI (GAI) while also evaluating its economic impact on diverse sectors and economies. It analyzes global variations across regions to provide comprehensive insights into the landscape of GAI adoption.","PeriodicalId":49045,"journal":{"name":"IT Professional","volume":"64 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140834102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Drift Detection for Black-Box Deep Learning Models
Pub Date : 2024-05-01 | DOI: 10.1109/mitp.2023.3338007
Luca Piano, Fabio Garcea, Andrea Cavallone, Ignacio Aparicio Vazquez, Lia Morra, Fabrizio Lamberti
Dataset drift is a common challenge in machine learning, especially for models trained on unstructured data such as images. In this article, we propose a new approach to detecting data drift in black-box models, based on the Hellinger distance and feature extraction methods. The approach is designed to detect data drift without knowledge of the architecture of the monitored model, the dataset on which it was trained, or both. The article analyzes three use cases to evaluate the effectiveness of the proposed approach, encompassing a variety of tasks including document segmentation, classification, and handwriting recognition. The sources of drift considered are adversarial attacks, domain shifts, and dataset biases. The experimental results show the efficacy of our drift detection approach in identifying changes in distribution under various training settings.
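To make the core idea concrete, the sketch below computes a per-dimension Hellinger distance between binned histograms of reference features and newly observed features, and flags drift when the average distance exceeds a threshold. The feature extractor, bin count, and threshold value are illustrative assumptions and are not taken from the article.

```python
# Illustrative sketch: flag distribution drift between reference and new
# feature sets via per-dimension Hellinger distance over histograms.
import numpy as np

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    """Hellinger distance between two discrete probability distributions."""
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def drift_score(ref_feats: np.ndarray, new_feats: np.ndarray, bins: int = 20) -> float:
    """Average per-dimension Hellinger distance between binned feature histograms."""
    scores = []
    for d in range(ref_feats.shape[1]):
        lo = min(ref_feats[:, d].min(), new_feats[:, d].min())
        hi = max(ref_feats[:, d].max(), new_feats[:, d].max())
        edges = np.linspace(lo, hi, bins + 1)
        p, _ = np.histogram(ref_feats[:, d], bins=edges)
        q, _ = np.histogram(new_feats[:, d], bins=edges)
        p = p / max(p.sum(), 1)   # normalize counts to probabilities
        q = q / max(q.sum(), 1)
        scores.append(hellinger(p, q))
    return float(np.mean(scores))

# Synthetic features standing in for the output of an image feature extractor.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(1000, 64))
shifted = rng.normal(0.5, 1.2, size=(1000, 64))   # simulated domain shift
THRESHOLD = 0.2                                    # illustrative value only
print("drift detected:", drift_score(reference, shifted) > THRESHOLD)
```

In practice, the reference features would come from the extractor applied to data representative of training time, the new features from production traffic, and the threshold could be calibrated on held-out, drift-free batches.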
{"title":"Drift Detection for Black-Box Deep Learning Models","authors":"Luca Piano, Fabio Garcea, Andrea Cavallone, Ignacio Aparicio Vazquez, Lia Morra, Fabrizio Lamberti","doi":"10.1109/mitp.2023.3338007","DOIUrl":"https://doi.org/10.1109/mitp.2023.3338007","url":null,"abstract":"Dataset drift is a common challenge in machine learning, especially for models trained on unstructured data, such as images. In this article, we propose a new approach for the detection of data drift in black-box models, which is based on Hellinger distance and feature extraction methods. The proposed approach is aimed at detecting data drift without knowing the architecture of the model to monitor, the dataset on which it was trained, or both. The article analyzes three different use cases to evaluate the effectiveness of the proposed approach, encompassing a variety of tasks including document segmentation, classification, and handwriting recognition. The use cases considered for the drift are adversarial assaults, domain shifts, and dataset biases. The experimental results show the efficacy of our drift detection approach in identifying changes in distribution under various training settings.","PeriodicalId":49045,"journal":{"name":"IT Professional","volume":"1 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140833666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unveiling the Deepfake Dilemma: Framework, Classification, and Future Trajectories
Pub Date : 2024-05-01 | DOI: 10.1109/mitp.2024.3369948
Vishal Maniyal, Vijay Kumar
Deepfake is a type of artificial intelligence technology that uses deep learning to generate fake multimedia. A large number of images, audio clips, and videos created with deepfake technology have surfaced, particularly on social media. This has raised concerns because such media can be misleading or fraudulent, can spread misinformation and propaganda, and can potentially harm individuals’ reputations. This article presents a comprehensive review of deepfake technology, focusing on its underlying principles and methodologies. The analysis highlights both the positive and negative implications of deepfake technology, shedding light on its potential benefits in filmmaking, digital art, and content creation, alongside its ethical and societal implications, including concerns about misinformation, privacy violations, and cyberthreats.
{"title":"Unveiling the Deepfake Dilemma: Framework, Classification, and Future Trajectories","authors":"Vishal Maniyal, Vijay Kumar","doi":"10.1109/mitp.2024.3369948","DOIUrl":"https://doi.org/10.1109/mitp.2024.3369948","url":null,"abstract":"Deepfake is a type of artificial intelligence technology that makes use of deep learning to generate fake multimedia. A large number of images, audios, and videos have surfaced, particularly on social media, in which deepfake technology is used. This has raised concerns because it can be misleading or fraudulent media, can spread misinformation and propaganda, or potentially cause harm to individuals’ reputations. This article presents a comprehensive review of deepfake technology, focusing on its underlying principles and methodologies. The analysis highlights both the positive as well as negative implications of deepfake technology, shedding light on its potential benefits in filmmaking, digital art, and content creation, alongside its ethical and societal implications, including concerns about misinformation, privacy violations, and cyberthreats.","PeriodicalId":49045,"journal":{"name":"IT Professional","volume":"57 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140833925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}