Clustering approaches are extensively used in many areas, such as information retrieval (IR), data integration, document classification, web mining, and query processing, among other domains and disciplines. Much of the current literature describes clustering algorithms on multivariate data sets, but little of it presents them with exhaustive theoretical analysis together with experimental comparisons. This experimental survey covers the basic principles and techniques of eleven clustering algorithms, including their important characteristics, application areas, run-time performance, and the internal, external, and stability validity of cluster quality, evaluated on five different data sets. It analyses how these algorithms behave on five different multivariate data sets used for data representation. To answer this question, we compared the efficiency of the eleven clustering approaches on the five data sets using three families of validity metrics (internal, external, and stability) and found the optimal score to identify the feasible solution of each algorithm. In addition, we include four popular, modern clustering algorithms with a theoretical discussion only. Our experimental results for the traditional clustering algorithms show that different algorithms behave differently on different data sets in terms of running time (speed), accuracy, and data set size. This study emphasizes the need for more adaptive algorithms and a deliberate balance between running time and accuracy, in both their theoretical and implementation aspects.
{"title":"Experimental Comparisons of Clustering Approaches for Data Representation","authors":"S. Anand, Suresh Kumar","doi":"10.1145/3490384","DOIUrl":"https://doi.org/10.1145/3490384","url":null,"abstract":"Clustering approaches are extensively used by many areas such as IR, Data Integration, Document Classification, Web Mining, Query Processing, and many other domains and disciplines. Nowadays, much literature describes clustering algorithms on multivariate data sets. However, there is limited literature that presented them with exhaustive and extensive theoretical analysis as well as experimental comparisons. This experimental survey paper deals with the basic principle, and techniques used, including important characteristics, application areas, run-time performance, internal, external, and stability validity of cluster quality, etc., on five different data sets of eleven clustering algorithms. This paper analyses how these algorithms behave with five different multivariate data sets in data representation. To answer this question, we compared the efficiency of eleven clustering approaches on five different data sets using three validity metrics-internal, external, and stability and found the optimal score to know the feasible solution of each algorithm. In addition, we have also included four popular and modern clustering algorithms with only their theoretical discussion. Our experimental results for only traditional clustering algorithms showed that different algorithms performed different behavior on different data sets in terms of running time (speed), accuracy and, the size of data set. 
This study emphasized the need for more adaptive algorithms and a deliberate balance between the running time and accuracy with their theoretical as well as implementation aspects.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"21 1","pages":"1 - 33"},"PeriodicalIF":0.0,"publicationDate":"2022-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78630865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
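The internal and external validity metrics the survey evaluates can be made concrete with a minimal sketch (not from the paper, and with invented toy data): a plain k-means run on 1-D points followed by a hand-rolled silhouette-style internal score. The `b` term here is simplified to the distance to the nearest point of another cluster rather than the mean distance to the nearest cluster.

```python
# Minimal illustration of clustering plus an internal validity metric.
# Toy data and fixed centroid initialization are invented for the example.

def kmeans_1d(points, centroids, iters=20):
    """Plain k-means on 1-D data with fixed initial centroids."""
    for _ in range(iters):
        labels = [min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
                  for p in points]
        centroids = [
            sum(p for p, l in zip(points, labels) if l == c) /
            max(1, sum(1 for l in labels if l == c))
            for c in range(len(centroids))
        ]
    return labels, centroids

def silhouette(points, labels):
    """Mean silhouette-style score (b - a) / max(a, b) per point.
    Simplification: b is the distance to the single nearest point in
    another cluster. Assumes every cluster has at least two points."""
    scores = []
    for i, p in enumerate(points):
        same = [abs(p - q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        other = [abs(p - q) for j, q in enumerate(points)
                 if labels[j] != labels[i]]
        a = sum(same) / len(same)
        b = min(other)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [0.8, 1.0, 1.2, 7.8, 8.0, 8.2]
labels, centroids = kmeans_1d(points, [points[0], points[-1]])
```

On these two well-separated groups the score is close to 1, which is the kind of signal an internal metric gives when clusters are compact and far apart.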
The Boolean satisfiability problem (SAT) is a fundamental NP-complete decision problem in automated reasoning and mathematical logic. As evidenced by the results of SAT competitions, the performance of SAT solvers varies substantially between different SAT categories (random, crafted, and industrial). A suggested explanation is that SAT solvers may exploit the underlying structure inherent to SAT instances. There have been attempts to define the structure of SAT in terms of structural measures such as phase transition, backbones, backdoors, small-world, scale-free, treewidth, centrality, community, self-similarity, and entropy. Still, the empirical evidence of structural measures for SAT has been provided for only some SAT categories. Furthermore, the evidence has not been theoretically proven. Also, the impact of structural measures on the behavior of SAT solvers has not been extensively examined. This work provides a comprehensive study on structural measures for SAT that have been presented in the literature. We provide an overview of the works on structural measures for SAT and their relatedness to the performance of SAT solvers. Accordingly, a taxonomy of structural measures for SAT is presented. We also review in detail important applications of structural measures for SAT, focusing mainly on enhancing SAT solvers, generating SAT instances, and classifying SAT instances.
{"title":"On the Structure of the Boolean Satisfiability Problem: A Survey","authors":"Tasniem Alyahya, M. Menai, H. Mathkour","doi":"10.1145/3491210","DOIUrl":"https://doi.org/10.1145/3491210","url":null,"abstract":"The Boolean satisfiability problem (SAT) is a fundamental NP-complete decision problem in automated reasoning and mathematical logic. As evidenced by the results of SAT competitions, the performance of SAT solvers varies substantially between different SAT categories (random, crafted, and industrial). A suggested explanation is that SAT solvers may exploit the underlying structure inherent to SAT instances. There have been attempts to define the structure of SAT in terms of structural measures such as phase transition, backbones, backdoors, small-world, scale-free, treewidth, centrality, community, self-similarity, and entropy. Still, the empirical evidence of structural measures for SAT has been provided for only some SAT categories. Furthermore, the evidence has not been theoretically proven. Also, the impact of structural measures on the behavior of SAT solvers has not been extensively examined. This work provides a comprehensive study on structural measures for SAT that have been presented in the literature. We provide an overview of the works on structural measures for SAT and their relatedness to the performance of SAT solvers. Accordingly, a taxonomy of structural measures for SAT is presented. 
We also review in detail important applications of structural measures for SAT, focusing mainly on enhancing SAT solvers, generating SAT instances, and classifying SAT instances.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"432 1","pages":"1 - 34"},"PeriodicalIF":0.0,"publicationDate":"2022-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82867837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
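To ground the SAT terminology above, here is a minimal DPLL-style solver sketch (illustrative only, not an algorithm from the survey). A formula is a list of clauses, each clause a set of non-zero integers, with a negative integer denoting a negated variable (DIMACS-style literals).

```python
# Tiny DPLL sketch: unit propagation plus branching on an unassigned variable.

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}
    clauses = [c for c in clauses]  # shallow copy of the clause list
    # Unit propagation: repeatedly satisfy forced single-literal clauses.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        new = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}      # literal falsified, shrink clause
                if not c:
                    return None     # empty clause: conflict
            new.append(c)
        clauses = new
    if not clauses:
        return assignment           # every clause satisfied
    # Branch: try the first unassigned variable both ways.
    var = abs(next(iter(clauses[0])))
    for choice in (var, -var):
        result = dpll(clauses + [{choice}], dict(assignment))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x3): satisfiable
model = dpll([{1, 2}, {-1, 3}, {-3}])
```

Structural measures such as backbones and backdoors are defined over exactly this kind of clause/variable view of an instance.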
How to transfer the semantic information in a sentence to a computable numerical embedding form is a fundamental problem in natural language processing. An informative universal sentence embedding can greatly promote subsequent natural language processing tasks. However, unlike universal word embeddings, a widely accepted general-purpose sentence embedding technique has not been developed. This survey summarizes the current universal sentence-embedding methods, categorizes them into four groups from a linguistic view, and ultimately analyzes their reported performance. Sentence embeddings trained from words in a bottom-up manner are observed to have different, nearly opposite, performance patterns in downstream tasks compared to those trained from logical relationships between sentences. By comparing differences of training schemes in and between groups, we analyze possible essential reasons for different performance patterns. We additionally collect incentive strategies handling sentences from other models and propose potentially inspiring future research directions.
{"title":"A Brief Overview of Universal Sentence Representation Methods: A Linguistic View","authors":"Ruiqi Li, Xiang Zhao, M. Moens","doi":"10.1145/3482853","DOIUrl":"https://doi.org/10.1145/3482853","url":null,"abstract":"How to transfer the semantic information in a sentence to a computable numerical embedding form is a fundamental problem in natural language processing. An informative universal sentence embedding can greatly promote subsequent natural language processing tasks. However, unlike universal word embeddings, a widely accepted general-purpose sentence embedding technique has not been developed. This survey summarizes the current universal sentence-embedding methods, categorizes them into four groups from a linguistic view, and ultimately analyzes their reported performance. Sentence embeddings trained from words in a bottom-up manner are observed to have different, nearly opposite, performance patterns in downstream tasks compared to those trained from logical relationships between sentences. By comparing differences of training schemes in and between groups, we analyze possible essential reasons for different performance patterns. We additionally collect incentive strategies handling sentences from other models and propose potentially inspiring future research directions.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"378 1","pages":"1 - 42"},"PeriodicalIF":0.0,"publicationDate":"2022-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76434959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
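The "bottom-up from words" family of sentence embeddings the survey describes can be sketched in a few lines: average the word vectors of a sentence and compare sentences by cosine similarity. The 2-D word vectors below are invented toy values, not from any real embedding model.

```python
# Bag-of-words sentence embedding sketch with invented toy word vectors.
import math

word_vecs = {   # hypothetical 2-D embeddings for illustration
    "cats":   [1.0, 0.1], "dogs": [0.9, 0.2],
    "sleep":  [0.1, 1.0], "run":  [0.2, 0.9],
    "stocks": [-1.0, 0.3],
}

def embed(sentence):
    """Average the known word vectors of a whitespace-split sentence."""
    vecs = [word_vecs[w] for w in sentence.split() if w in word_vecs]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

s1 = embed("cats sleep")
s2 = embed("dogs run")
s3 = embed("stocks run")
```

Semantically close sentences (`s1`, `s2`) end up with higher cosine similarity than unrelated ones (`s1`, `s3`), which is the basic behavior downstream tasks rely on; the survey's point is that embeddings trained from inter-sentence relationships pattern differently from this word-averaging scheme.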
Alexander Plopski, Teresa Hirzle, Nahal Norouzi, Long Qian, G. Bruder, T. Langlotz
With innovations in the field of gaze and eye tracking, a new concentration of research in the area of gaze-tracked systems and user interfaces has formed in the field of Extended Reality (XR). Eye trackers are being used to explore novel forms of spatial human–computer interaction, to understand human attention and behavior, and to test expectations and human responses. In this article, we review gaze interaction and eye tracking research related to XR that has been published since 1985, which includes a total of 215 publications. We outline efforts to apply eye gaze for direct interaction with virtual content and design of attentive interfaces that adapt the presented content based on eye gaze behavior and discuss how eye gaze has been utilized to improve collaboration in XR. We outline trends and novel directions and discuss representative high-impact papers in detail.
{"title":"The Eye in Extended Reality: A Survey on Gaze Interaction and Eye Tracking in Head-worn Extended Reality","authors":"Alexander Plopski, Teresa Hirzle, Nahal Norouzi, Long Qian, G. Bruder, T. Langlotz","doi":"10.1145/3491207","DOIUrl":"https://doi.org/10.1145/3491207","url":null,"abstract":"With innovations in the field of gaze and eye tracking, a new concentration of research in the area of gaze-tracked systems and user interfaces has formed in the field of Extended Reality (XR). Eye trackers are being used to explore novel forms of spatial human–computer interaction, to understand human attention and behavior, and to test expectations and human responses. In this article, we review gaze interaction and eye tracking research related to XR that has been published since 1985, which includes a total of 215 publications. We outline efforts to apply eye gaze for direct interaction with virtual content and design of attentive interfaces that adapt the presented content based on eye gaze behavior and discuss how eye gaze has been utilized to improve collaboration in XR. We outline trends and novel directions and discuss representative high-impact papers in detail.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"8 1","pages":"1 - 39"},"PeriodicalIF":0.0,"publicationDate":"2022-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75042118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alzheimer's Disease (AD) is a devastating neurodegenerative brain disorder with no cure. Early identification helps patients with AD sustain normal living. We have outlined machine learning (ML) methodologies with different schemes of feature extraction to synergize complementary and correlated characteristics of data acquired from multiple modalities of neuroimaging. A variety of feature selection, scaling, and fusion methodologies, along with the challenges confronted, are elaborated for designing an ML-based AD diagnosis system. Additionally, a thematic analysis has been provided to compare the ML workflows of possible diagnostic solutions. This comprehensive report adds value to the further advancement of computer-aided early diagnosis systems based on multi-modal neuroimaging data from patients with AD.
{"title":"A Comprehensive Report on Machine Learning-based Early Detection of Alzheimer's Disease using Multi-modal Neuroimaging Data","authors":"Shallu Sharma, P. Mandal","doi":"10.1145/3492865","DOIUrl":"https://doi.org/10.1145/3492865","url":null,"abstract":"Alzheimer's Disease (AD) is a devastating neurodegenerative brain disorder with no cure. An early identification helps patients with AD sustain a normal living. We have outlined machine learning (ML) methodologies with different schemes of feature extraction to synergize complementary and correlated characteristics of data acquired from multiple modalities of neuroimaging. A variety of feature selection, scaling, and fusion methodologies along with confronted challenges are elaborated for designing an ML-based AD diagnosis system. Additionally, thematic analysis has been provided to compare the ML workflow for possible diagnostic solutions. This comprehensive report adds value to the further advancement of computer-aided early diagnosis system based on multi-modal neuroimaging data from patients with AD.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"12 1","pages":"1 - 44"},"PeriodicalIF":0.0,"publicationDate":"2022-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85176979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
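The feature scaling and fusion step mentioned above can be illustrated with a small early-fusion sketch: scale each modality's features to [0, 1] independently, then concatenate them into one vector for a downstream classifier. The feature values and modality names below are invented for the example.

```python
# Early (feature-level) fusion sketch for two hypothetical modalities.

def minmax_scale(rows):
    """Column-wise min-max scaling of a list of feature rows."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]

# One row per subject; numbers are invented placeholders.
mri = [[1200.0, 3.1], [900.0, 2.5], [1500.0, 3.9]]   # e.g. regional volumes
pet = [[0.7], [0.4], [0.9]]                          # e.g. uptake ratios

# Scale each modality on its own, then concatenate per subject.
fused = [a + b for a, b in zip(minmax_scale(mri), minmax_scale(pet))]
```

Scaling per modality before concatenation keeps one modality's larger numeric range from dominating the fused vector, which is the basic motivation for the scaling-then-fusion pipelines the report compares.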
Ajay Mittal, S. Dhalla, Savita Gupta, Aastha Gupta
Leukemia, the malignancy of blood-forming tissues, becomes fatal if not detected in the early stages. It is detected through a blood smear test that involves the morphological analysis of the stained blood slide. The manual microscopic examination of slides is tedious, time-consuming, error-prone, and subject to inter-observer and intra-observer bias. Several computerized methods to automate this task have been developed to alleviate these problems during the past few years. However, no exclusive comprehensive review of these methods has been presented to date. Such a review shall be highly beneficial for novice readers interested in pursuing research in this domain. This article fills the void by presenting a comprehensive review of 149 papers detailing the methods used to analyze blood smear images and detect leukemia. The primary focus of the review is on presenting the underlying techniques used and their reported performance, along with their merits and demerits. It also enumerates the research issues that have been satisfactorily solved and open challenges still existing in the domain.
{"title":"Automated Analysis of Blood Smear Images for Leukemia Detection: A Comprehensive Review","authors":"Ajay Mittal, S. Dhalla, Savita Gupta, Aastha Gupta","doi":"10.1145/3514495","DOIUrl":"https://doi.org/10.1145/3514495","url":null,"abstract":"Leukemia, the malignancy of blood-forming tissues, becomes fatal if not detected in the early stages. It is detected through a blood smear test that involves the morphological analysis of the stained blood slide. The manual microscopic examination of slides is tedious, time-consuming, error-prone, and subject to inter-observer and intra-observer bias. Several computerized methods to automate this task have been developed to alleviate these problems during the past few years. However, no exclusive comprehensive review of these methods has been presented to date. Such a review shall be highly beneficial for novice readers interested in pursuing research in this domain. This article fills the void by presenting a comprehensive review of 149 papers detailing the methods used to analyze blood smear images and detect leukemia. The primary focus of the review is on presenting the underlying techniques used and their reported performance, along with their merits and demerits. It also enumerates the research issues that have been satisfactorily solved and open challenges still existing in the domain.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"47 1","pages":"1 - 37"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77056425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer games, one of the most popular forms of entertainment in the world, are increasingly online multiplayer, connecting geographically dispersed players in the same virtual world over a network. Network latency between players and the server can decrease responsiveness and increase inconsistency across players, degrading player performance and quality of experience. Latency compensation techniques are software-based solutions that seek to ameliorate the negative effects of network latency by manipulating player input and/or game states in response to network delays. We search, find, and survey more than 80 papers on latency compensation, organizing their latency compensation techniques into a novel taxonomy. Our hierarchical taxonomy has 11 base technique types organized into four main groups. Illustrative examples of each technique are provided, as well as demonstrated use of the techniques in commercial games.
{"title":"A Survey and Taxonomy of Latency Compensation Techniques for Network Computer Games","authors":"Shengmei Liu, Xiaokun Xu, M. Claypool","doi":"10.1145/3519023","DOIUrl":"https://doi.org/10.1145/3519023","url":null,"abstract":"Computer games, one of the most popular forms of entertainment in the world, are increasingly online multiplayer, connecting geographically dispersed players in the same virtual world over a network. Network latency between players and the server can decrease responsiveness and increase inconsistency across players, degrading player performance and quality of experience. Latency compensation techniques are software-based solutions that seek to ameliorate the negative effects of network latency by manipulating player input and/or game states in response to network delays. We search, find, and survey more than 80 papers on latency compensation, organizing their latency compensation techniques into a novel taxonomy. Our hierarchical taxonomy has 11 base technique types organized into four main groups. Illustrative examples of each technique are provided, as well as demonstrated use of the techniques in commercial games.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"204 1","pages":"1 - 34"},"PeriodicalIF":0.0,"publicationDate":"2022-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77026197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
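One widely known member of the technique family this survey taxonomizes is dead reckoning: a client extrapolates a remote player's state from the last authoritative update instead of waiting a full round trip. A minimal sketch (values invented for illustration):

```python
# Dead-reckoning sketch: linear extrapolation of a remote player's
# 2-D position over the network delay.

def dead_reckon(last_pos, velocity, latency_s):
    """Predict the current position from the last update and velocity."""
    return (last_pos[0] + velocity[0] * latency_s,
            last_pos[1] + velocity[1] * latency_s)

# Last server update: player at (10, 5) moving at (3, 0) units/s,
# received 100 ms ago.
predicted = dead_reckon((10.0, 5.0), (3.0, 0.0), 0.100)
```

The trade-off, as with most latency compensation, is consistency: when the remote player changes direction, the extrapolated position is wrong and must be corrected, which is exactly the kind of cost/benefit the survey's taxonomy organizes.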
Simone Antonelli, D. Avola, L. Cinque, Donato Crisostomi, G. Foresti, Fabio Galasso, Marco Raoul Marini, Alessio Mecca, D. Pannone
Deep learning approaches have recently raised the bar in many fields, from Natural Language Processing to Computer Vision, by leveraging large amounts of data. However, they can fail when the retrieved information is not enough to fit the vast number of parameters, frequently resulting in overfitting and therefore in poor generalizability. Few-Shot Learning aims at designing models that can effectively operate in a scarce data regime, yielding learning strategies that need only a few supervised examples to be trained. These procedures are of both practical and theoretical importance, as they are crucial for many real-life scenarios in which data is either costly or even impossible to retrieve. Moreover, they bridge the distance between current data-hungry models and human-like generalization capability. Computer vision includes various tasks that are inherently few-shot, such as person re-identification. This survey, which to the best of our knowledge is the first tackling this problem, is focused on Few-Shot Object Detection, which has received far less attention than Few-Shot Classification because of its intrinsic difficulty.
{"title":"Few-Shot Object Detection: A Survey","authors":"Simone Antonelli, D. Avola, L. Cinque, Donato Crisostomi, G. Foresti, Fabio Galasso, Marco Raoul Marini, Alessio Mecca, D. Pannone","doi":"10.1145/3519022","DOIUrl":"https://doi.org/10.1145/3519022","url":null,"abstract":"Deep learning approaches have recently raised the bar in many fields, from Natural Language Processing to Computer Vision, by leveraging large amounts of data. However, they could fail when the retrieved information is not enough to fit the vast number of parameters, frequently resulting in overfitting and therefore in poor generalizability. Few-Shot Learning aims at designing models that can effectively operate in a scarce data regime, yielding learning strategies that only need few supervised examples to be trained. These procedures are of both practical and theoretical importance, as they are crucial for many real-life scenarios in which data is either costly or even impossible to retrieve. Moreover, they bridge the distance between current data-hungry models and human-like generalization capability. Computer vision offers various tasks that can be few-shot inherent, such as person re-identification. This survey, which to the best of our knowledge is the first tackling this problem, is focused on Few-Shot Object Detection, which has received far less attention compared to Few-Shot Classification due to the intrinsic challenge level. 
In this regard, this review presents an extensive description of the approaches that have been tested in the current literature, discussing their pros and cons, and classifying them according to a rigorous taxonomy.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"232 ","pages":"1 - 37"},"PeriodicalIF":0.0,"publicationDate":"2022-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91458742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Left ventricle segmentation in short-axis cardiac magnetic resonance images is important to diagnose heart disease. However, repetitive manual segmentation of these images requires considerable human effort and can decrease diagnostic accuracy. In recent years, several fully and semi-automatic approaches have been proposed, mainly using image-based, atlas, graph, deformable model, and artificial intelligence methods. This article presents a systematic mapping of left ventricle segmentation, considering 74 studies published in the past decade. The main contributions of this review are the definition of the main segmentation challenges in these images; the proposal of a new schematization, dividing the segmentation process into stages; the categorization and analysis of the segmentation methods, including hybrid combinations; and the analysis of the evaluation process, metrics, and databases. The performance of the methods on the most used public database is assessed, and the main limitations, weaknesses, and strengths of each method category are presented. Finally, trends, challenges, and research opportunities are discussed. The analysis indicates that methods from all categories can achieve good performance, and hybrid methods combining deep learning and deformable models obtain the best results.
{"title":"Left Ventricle Segmentation in Cardiac MR: A Systematic Mapping of the Past Decade","authors":"Matheus A. O. Ribeiro, Fátima L. S. Nunes","doi":"10.1145/3517190","DOIUrl":"https://doi.org/10.1145/3517190","url":null,"abstract":"Left ventricle segmentation in short-axis cardiac magnetic resonance images is important to diagnose heart disease. However, repetitive manual segmentation of these images requires considerable human effort and can decrease diagnostic accuracy. In recent years, several fully and semi-automatic approaches have been proposed, mainly using image-based, atlas, graph, deformable model, and artificial intelligence methods. This article presents a systematic mapping on left ventricle segmentation, considering 74 studies published in the past decade. The main contributions of this review are definition of the main segmentation challenges in these images; proposal of a new schematization, dividing the segmentation process into stages; categorization and analysis of the segmentation methods, including hybrid combinations; and analysis of the evaluation process, metrics, and databases. The performance of the methods in the most used public database is assessed, and the main limitations, weaknesses, and strengths of each method category are presented. Finally, trends, challenges, and research opportunities are discussed. The analysis indicates that methods from all categories can achieve good performance, and hybrid methods combining deep learning and deformable models obtain the best results. 
Methods still fail in specific slices, segment wrong regions, and produce anatomically impossible segmentations.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"42 1","pages":"1 - 38"},"PeriodicalIF":0.0,"publicationDate":"2022-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89837350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bushra Jamil, H. Ijaz, M. Shojafar, K. Munir, R. Buyya
The Internet of Everything (IoE) paradigm is being rapidly adopted in developing applications for different domains like smart agriculture, smart cities, big data streaming, and so on. These IoE applications leverage cloud computing resources for execution. Fog computing, which emerged as an extension of cloud computing, supports mobility, heterogeneity, geographical distribution, context awareness, and services such as storage, processing, networking, and analytics on nearby fog nodes. The resource-limited, heterogeneous, dynamic, and uncertain fog environment makes task scheduling a great challenge that needs to be investigated. Motivated by this consideration, the article presents a systematic, comprehensive, and detailed comparative study, discussing the merits and demerits of different scheduling algorithms, the optimization metrics they focus on, and evaluation tools in fog computing and IoE environments. The goal of this survey article is fivefold. First, we review the fog computing and IoE paradigms. Second, we delineate the optimization metrics used in fog computing and IoE environments. Third, we review, classify, and compare existing scheduling algorithms dealing with fog computing and IoE environments, leveraging some examples. Fourth, we rationalize the scheduling algorithms and point out the lessons learned from the survey. Fifth, we discuss the open issues and future research directions to improve scheduling in fog computing and IoE environments.
{"title":"Resource Allocation and Task Scheduling in Fog Computing and Internet of Everything Environments: A Taxonomy, Review, and Future Directions","authors":"Bushra Jamil, H. Ijaz, M. Shojafar, K. Munir, R. Buyya","doi":"10.1145/3513002","DOIUrl":"https://doi.org/10.1145/3513002","url":null,"abstract":"The Internet of Everything paradigm is being rapidly adopted in developing applications for different domains like smart agriculture, smart city, big data streaming, and so on. These IoE applications are leveraging cloud computing resources for execution. Fog computing, which emerged as an extension of cloud computing, supports mobility, heterogeneity, geographical distribution, context awareness, and services such as storage, processing, networking, and analytics on nearby fog nodes. The resource-limited, heterogeneous, dynamic, and uncertain fog environment makes task scheduling a great challenge that needs to be investigated. The article is motivated by this consideration and presents a systematic, comprehensive, and detailed comparative study by discussing the merits and demerits of different scheduling algorithms, focused optimization metrics, and evaluation tools in the fog computing and IoE environment. The goal of this survey article is fivefold. First, we review the fog computing and IoE paradigms. Second, we delineate the optimization metric engaged with fog computing and IoE environment. Third, we review, classify, and compare existing scheduling algorithms dealing with fog computing and IoE environment paradigms by leveraging some examples. Fourth, we rationalize the scheduling algorithms and point out the lesson learned from the survey. 
Fifth, we discuss the open issues and future research directions to improve scheduling in fog computing and the IoE environment.","PeriodicalId":7000,"journal":{"name":"ACM Computing Surveys (CSUR)","volume":"14 1","pages":"1 - 38"},"PeriodicalIF":0.0,"publicationDate":"2022-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77749045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
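One simple heuristic from the scheduling-algorithm family this survey classifies is greedy minimum-completion-time assignment: each task goes to the fog node that would finish it earliest, given the node's speed and current backlog. The node speeds and task sizes below are invented for illustration.

```python
# Greedy min-completion-time scheduling sketch for heterogeneous fog nodes.

def greedy_schedule(task_sizes, node_speeds):
    """Assign each task to the node that finishes it earliest.
    Returns the per-task node assignment and the overall makespan."""
    finish = [0.0] * len(node_speeds)   # busy-until time per node
    assignment = []
    for size in task_sizes:
        best = min(range(len(node_speeds)),
                   key=lambda n: finish[n] + size / node_speeds[n])
        finish[best] += size / node_speeds[best]
        assignment.append(best)
    return assignment, max(finish)

tasks = [4.0, 2.0, 1.0]   # work units, invented
speeds = [2.0, 1.0]       # node 0 is twice as fast as node 1
plan, makespan = greedy_schedule(tasks, speeds)
```

Such greedy heuristics optimize a single metric (here, completion time); the survey's point is that fog schedulers must also weigh energy, mobility, and network cost, which is where the more elaborate algorithms it compares come in.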