Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420575
Seyyed Mohammad Amir Mirizadeh, Mohammad Mahdi Emadi Kouchak, Mohammad Mahdi Panahi
Quantum computing is an emerging and very promising technology: quantum computers not only substantially accelerate what classical computers can do today, but are also capable of providing answers that classical computers never could. Using reversible logic in quantum circuit design has many advantages, such as lowering power consumption, reducing heat dissipation, and decreasing quantum cost, ancilla inputs, and garbage outputs, all of which lead to even higher performance in quantum computers. Decoders have many uses in digital circuits: any function in SOP or POS form can be implemented using decoders, and counters and ROMs also use decoder modules in their designs. In this article, two novel designs for 2:4 and 3:8 decoders are proposed and shown to have lower quantum cost and fewer unused outputs and ancilla inputs than recent designs reported in this field.
Title: "A Novel Design of Quantum 3:8 Decoder Circuit using Reversible Logic for Improvement in Key Quantum Circuit Design Parameters" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
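As a concrete illustration of the kind of circuit involved (a generic textbook-style construction, not the paper's specific design, whose gate-level details are not reproduced here), a reversible 2:4 decoder can be built from one Toffoli gate, a few CNOTs, and a single ancilla line initialized to 1, and simulated classically:

```python
# Classical simulation of a reversible 2:4 decoder built from CNOT and
# Toffoli gates. This is a generic textbook-style construction, not the
# circuit proposed in the paper.

def cnot(bits, c, t):
    bits[t] ^= bits[c]                   # target flips when control is 1

def toffoli(bits, c1, c2, t):
    bits[t] ^= bits[c1] & bits[c2]       # target flips when both controls are 1

def decoder_2to4(a, b):
    # Lines: [a, b, o3, o2, o1, o0]; the o0 line is the ancilla, constant 1.
    bits = [a, b, 0, 0, 0, 1]
    toffoli(bits, 0, 1, 2)               # o3 = a AND b
    cnot(bits, 0, 3); cnot(bits, 2, 3)   # o2 = a XOR ab = a AND NOT b
    cnot(bits, 1, 4); cnot(bits, 2, 4)   # o1 = b XOR ab = NOT a AND b
    cnot(bits, 0, 5); cnot(bits, 1, 5); cnot(bits, 2, 5)
    # o0 = 1 XOR a XOR b XOR ab = NOT a AND NOT b
    return bits[5], bits[4], bits[3], bits[2]   # (o0, o1, o2, o3)

# Exhaustive check: exactly one output line is high for each input pair.
for a in (0, 1):
    for b in (0, 1):
        outs = decoder_2to4(a, b)
        assert sum(outs) == 1 and outs[2 * a + b] == 1
```

The input lines a and b pass through unchanged, so the circuit is trivially invertible; the single constant-1 line is the only ancilla input in this sketch.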
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420608
A. Rezaeieh, Hoda Roodaki
The most recent video coding standard, Versatile Video Coding (VVC), greatly improves the compression rate compared to its predecessor, High Efficiency Video Coding (HEVC), using several new coding tools. Though these new tools provide appreciable coding gain, their computational complexity is relatively high, since the performance of each tool must be evaluated for each Coding Tree Unit (CTU) through the Rate-Distortion Optimization (RDO) process. To address this issue, this paper first investigates the effectiveness of the coding tools in various parts of the frame, such as the borderline and central CTUs. The results show that the coding efficiency of some of these tools is much higher for the borderline CTUs due to their specific features. Hence, these tools are enabled only for the borderline CTUs in the rate-distortion process, decreasing the computational complexity without considerably affecting the coding gain. Simulation results show that with this method the compression efficiency decreases by only 0.64% on average, while the computational complexity is reduced considerably, by 28.31% on average.
Title: "A Method for Rate-Distortion-Complexity Optimization in Versatile Video Coding Standard" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
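The selection rule described above can be sketched as a position test on each CTU; the CTU size, frame dimensions, and tool names below are illustrative assumptions, not values taken from the paper:

```python
# Sketch of the proposed rate-distortion-complexity trade-off: the costly
# coding tools are evaluated in RDO only for borderline CTUs. The CTU
# size and the tool names are illustrative assumptions.
CTU_SIZE = 128

def is_borderline(x, y, width, height, ctu=CTU_SIZE):
    """True if the CTU whose top-left pixel is (x, y) touches a frame border."""
    return x == 0 or y == 0 or x + ctu >= width or y + ctu >= height

def candidate_tools(x, y, width, height):
    base = ["intra", "inter"]            # always evaluated in RDO
    extra = ["affine", "mip", "isp"]     # hypothetical costly tools
    return base + extra if is_borderline(x, y, width, height) else base

# In a 1920x1080 frame, the top-left CTU gets the full tool set while an
# interior CTU is evaluated with the cheap baseline set only.
assert len(candidate_tools(0, 0, 1920, 1080)) == 5
assert len(candidate_tools(256, 256, 1920, 1080)) == 2
```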
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420594
Fatereh Sadat Mousavi, S. Yousefi, H. Abghari, Ardalan Ghasemzadeh
Floods are complex phenomena that are difficult to predict because of their non-linear and dynamic nature. Gauging stations that transmit measured data to a server are often placed in very harsh and remote environments, which makes the risk of missing data high. The purpose of this study is to develop a reliable real-time flood monitoring and detection system using deep learning. This paper proposes an Internet of Things (IoT) approach that uses LoRaWAN as a reliable, low-power, wide-area communication technology, considering the effect of radius and transmission rate on packet loss. In addition, we evaluate artificial neural network (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) models for flood forecasting. Data from 2013 to 2019 were collected from four gauging stations in the Brandywine-Christina watershed, Pennsylvania. Our results show that the deep learning models are more accurate than the physical and statistical models. These results can help build and deploy flood detection systems able to predict floods in time for rescue efforts and to reduce financial, human, and infrastructural damage.
Title: "Design of an IoT-based Flood Early Detection System using Machine Learning" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
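Forecasting models like the LSTM and GRU above are typically trained on sliding windows over the station readings. A minimal sketch of that preparation step, with a window length chosen purely for illustration (the paper's setting is not given here):

```python
# Turn a gauging-station time series into supervised samples for an
# LSTM/GRU-style forecaster: each sample maps the last `window` readings
# to the next one. The window length is an illustrative assumption.

def make_windows(series, window):
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])   # model input: past readings
        ys.append(series[i + window])     # target: the next reading
    return xs, ys

levels = [1.2, 1.3, 1.5, 2.0, 2.6, 2.4, 1.9, 1.6]   # toy water levels
X, y = make_windows(levels, window=3)
assert len(X) == len(y) == 5
assert X[0] == [1.2, 1.3, 1.5] and y[0] == 2.0
```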
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420629
Amene Naghdipour, S. Hasheminejad, M. Keyvanpour
The software design phase is important and challenging due to its high impact on the other phases of the software development life cycle. A design pattern is a proven solution, distilled from software developers' experience, to a recurring problem, and is used to achieve a quality software design. However, the large number of design patterns makes it difficult to select the right one for a particular design problem. To overcome this difficulty, several approaches based on different methods have been proposed to automate the design pattern selection process. This paper proposes a framework called "DPSA" that classifies the existing approaches, compares them against a set of criteria, and analyzes each approach based on those criteria. DPSA helps future research to a) employ the existing approaches with the specification of each one in mind and b) compare current work with future work.
Title: "DPSA: A Brief Review for Design Pattern Selection Approaches" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420547
Amirhossein Nouranizadeh, Mohammadjavad Matinkia, M. Rahmati
As a generalization of convolutional neural networks to graph-structured data, graph convolutional networks learn feature embeddings based on the information in each node's local neighborhood. However, due to the inherent irregularity of such data, extracting hierarchical representations of a graph becomes a challenging task. Several pooling approaches have been introduced to address this issue. In this paper, we propose a novel topology-aware graph signal sampling method that selects the nodes representing the communities of a graph. Our method builds the sampling set from the local variation of each node's signal while considering the vertex-domain distances between the nodes in the sampling set. In addition to the interpretability of the sampled nodes, experimental results on both stochastic block models and real-world benchmark datasets show that our method achieves competitive results compared to the state of the art in the graph classification task.
Title: "Topology-Aware Graph Signal Sampling for Pooling in Graph Neural Networks" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
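The two ingredients named in the abstract, a local-variation score and a vertex-domain distance constraint, can be sketched as a greedy sampler; the exact scoring function and distance rule used in the paper may differ from the simple ones assumed here:

```python
# Hedged sketch of topology-aware sampling: score each node by the local
# variation of its signal over its neighbours, then greedily pick
# high-variation nodes that are at least `min_hops` apart.
from collections import deque

def hop_distance(adj, s, t):
    """Shortest hop count from s to t via breadth-first search."""
    seen, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return float("inf")

def local_variation(adj, x, u):
    return sum(abs(x[u] - x[v]) for v in adj[u])

def sample_nodes(adj, x, k, min_hops=2):
    order = sorted(adj, key=lambda u: -local_variation(adj, x, u))
    chosen = []
    for u in order:
        if all(hop_distance(adj, u, c) >= min_hops for c in chosen):
            chosen.append(u)
        if len(chosen) == k:
            break
    return chosen

# Two small 'communities' joined by one edge; the signal jumps across it.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
x = {0: 0.0, 1: 0.0, 2: 0.1, 3: 1.0, 4: 1.0, 5: 0.9}
picked = sample_nodes(adj, x, k=2)
assert len(picked) == 2 and hop_distance(adj, picked[0], picked[1]) >= 2
```

The distance constraint is what spreads the sampled nodes across communities instead of clustering them around a single signal discontinuity.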
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420583
Masoud Fathi, S. Khoshnevis
Reusability is one of the most important objectives in software development and especially in software product line (SPL) engineering, spanning analysis, design, implementation, testing, and maintenance activities. Therefore, in software product line testing, as in other activities, it is crucial to pay special attention to reusability. In SPL testing, reusability can be defined and measured in different ways. In this paper, we first introduce four reusability metrics for SPL testing (SPLT); then, as a first step toward improving reusability in SPLT, we experimentally examine how a search-based software testing (SBST) approach for optimizing an existing SPL domain test suite affects (improves) two of the proposed reusability metrics. The results of the experimentation on 20 SPL feature models of size 5000 showed a significant improvement in the two selected test reusability metrics, namely TSRR (test suite reusability regarding test requirements) and TCRR (test case reusability regarding test requirements), in optimized solutions compared with non-optimized solutions.
Title: "Reusability Metrics in Search-Based Testing of Software Product Lines: An Experimentation" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
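The paper's formal definitions of TSRR and TCRR are not reproduced in this abstract, so the following is only one plausible reading of a test-case reusability ratio: a test case counts as reused if the requirements it covers appear in at least two products, and the metric is the fraction of such cases. All names and data are illustrative.

```python
# Hypothetical sketch of a TCRR-like metric: fraction of test cases
# whose covered requirements occur in two or more products. This is an
# assumed definition, not the one from the paper.

def tcrr(test_cases, products):
    """test_cases: {case: set(requirements)}; products: list of requirement sets."""
    def reused(reqs):
        return sum(1 for p in products if reqs & p) >= 2
    hits = sum(1 for reqs in test_cases.values() if reused(reqs))
    return hits / len(test_cases)

cases = {"tc1": {"r1"}, "tc2": {"r2"}, "tc3": {"r3"}, "tc4": {"r1", "r3"}}
prods = [{"r1", "r2"}, {"r1", "r3"}]
assert tcrr(cases, prods) == 0.5   # tc1 and tc4 cover requirements of both products
```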
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420581
Davod Karimpour, M. Z. Chahooki, Ali Hashemi
Today, social networks and messengers attract the attention of many different businesses, and a great deal of information is produced in these environments every day. Analyzing this information is very useful for connecting businesses and very valuable for marketers seeking their target community. Telegram is a cloud-based messenger that is used as a social network in some countries, including Iran, even though it does not offer all the capabilities of one; its main facilities are channels, groups, and bots. A shortcoming of most messengers, including Telegram, is their limited search service for groups and communities of users. In this paper, we recommend groups according to users' interests by using the group-membership graph and analyzing users' membership records. The proposed method models each user's records in every group, taking the user's status into account, and derives users' migration from these records. Migration is analyzed based on the maximum number of users leaving one group and entering another. This study uses information about 70 million users and 700,000 Telegram supergroups. The proposed model was evaluated on 30 high-quality Telegram groups, each with between 5,000 and 15,000 members. The proposed method reduced the RMSE by 0.0237 compared to a baseline method.
Title: "Telegram group recommendation based on users' migration" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
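The migration analysis described above, keeping for each group the destination receiving the most departing users, can be sketched as follows; the record format is an assumption for illustration:

```python
# Sketch of the migration analysis: from users' leave/join records,
# count transitions between groups and keep, for each source group, the
# destination with the maximum flow. The record format is an assumption.
from collections import Counter

def dominant_migrations(records):
    """records: (user, left_group, joined_group) tuples.
    Returns {source_group: (top_destination, count)}."""
    flows = Counter((src, dst) for _, src, dst in records)
    best = {}
    for (src, dst), n in flows.items():
        if src not in best or n > best[src][1]:
            best[src] = (dst, n)
    return best

events = [
    ("u1", "g_a", "g_b"), ("u2", "g_a", "g_b"),
    ("u3", "g_a", "g_c"), ("u4", "g_b", "g_c"),
]
assert dominant_migrations(events)["g_a"] == ("g_b", 2)
```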
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420626
Ali Salehi Najafabadi, A. Ghomsheh
Multimodal attention mechanisms in computer vision applications enable rich feature extraction by attending to specific image regions, highlighted through a second mode of data regarded as auxiliary information. The correspondence between image regions and auxiliary data can be defined as the similarity between parts of the two modes. In this paper, we propose a similarity measure that maximizes the posterior for matching high-level object attributes with image regions. In contrast to previous methods, we rely on attribute space rather than textual descriptions. We evaluate our results on the CUB dataset. The results show that the proposed method better minimizes the similarity loss function compared to the text-image similarity measurement.
Title: "Attribute-Image Similarity Measure for Multimodal Attention Mechanism" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
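The attention machinery that such a similarity measure plugs into can be sketched generically: score each region feature against the attribute embedding and normalize with a softmax. This illustrates only the attending step; the paper's posterior-maximizing measure is not reproduced here.

```python
# Generic multimodal-attention sketch: dot-product scores between an
# attribute embedding and region features, softmax-normalised into
# attention weights. Toy vectors only; not the paper's measure.
import math

def softmax(scores):
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(regions, attribute):
    scores = [sum(r * a for r, a in zip(reg, attribute)) for reg in regions]
    return softmax(scores)

regions = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]   # toy region features
attr = [0.0, 1.0]                                 # toy attribute embedding
w = attend(regions, attr)
assert abs(sum(w) - 1.0) < 1e-9                   # weights form a distribution
assert max(range(3), key=lambda i: w[i]) == 1     # best-matching region wins
```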
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420611
Milad Shoryabi, A. Foroutannia, A. Rowhanimanesh
In this paper, a deep learning approach based on a 3D Convolutional Neural Network is proposed for the classification of gait abnormalities. Six gait classes are considered: Trendelenburg, Steppage, Stiff-legged, Lurching, and Antalgic gait abnormalities, as well as normal gait. The proposed scheme is applied to a recently published dataset from the literature, consisting of gait data recorded by multiple Microsoft Kinect v2 sensors from 25 joints of a person walking along a specified walkway. For each of the six gait classes, ten people took part in the data collection, and 120 walking instances were recorded per participant. Each instance includes the spatial and temporal information of the walk and is converted into two 3D images that display the changes in the coronal (X-Z) and sagittal (Y-Z) views of the originally captured data over time. These two 3D images are used as the input of the proposed 3D convolutional neural network. In total, the dataset contains 14400 3D images. To demonstrate the accuracy of the proposed approach, it is compared with four well-known neural classifiers from the literature.
Title: "A 3D Deep Learning Approach for Classification of Gait Abnormalities Using Microsoft Kinect V2 Sensor" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
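The view-projection step described above can be sketched directly: each frame of 25 (x, y, z) joints is split into a coronal (X-Z) and a sagittal (Y-Z) slice, stacked over time. The joint count and view axes come from the abstract; the image encoding itself is an assumption.

```python
# Sketch of the data preparation: project each walking instance (a
# sequence of frames with 25 (x, y, z) joints) into coronal (X-Z) and
# sagittal (Y-Z) views stacked over time.

def project_views(frames):
    """frames: list of frames, each a list of 25 (x, y, z) joint tuples.
    Returns (coronal, sagittal) as [time][joint] nested lists of 2-tuples."""
    coronal = [[(x, z) for x, _, z in frame] for frame in frames]
    sagittal = [[(y, z) for _, y, z in frame] for frame in frames]
    return coronal, sagittal

# Two toy frames of 25 joints each.
frames = [[(float(j), 2.0 * j, 3.0 * j) for j in range(25)] for _ in range(2)]
cor, sag = project_views(frames)
assert len(cor) == len(sag) == 2 and len(cor[0]) == 25
assert cor[0][1] == (1.0, 3.0) and sag[0][1] == (2.0, 3.0)
```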
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420557
Maryam Sadat Mastoori, Ghazal Rahmanian
Effective resource management, called scheduling, is essential to the performance of large-scale distributed systems. One scheduling technique is gang scheduling, which schedules parallel jobs as gangs. In this paper, a new algorithm for gang scheduling is proposed. It aims to reduce the average response time of gangs by increasing the number of gangs served in the shortest possible execution time. The performance of the proposed algorithm is examined and compared with the basic gang scheduling algorithm in simulation. The simulation results indicate that, compared to the basic method, the proposed modification reduces response time by up to 40% under low degrees of multiprogramming and high workload pressure (short inter-arrival times) with the Adapted First Come First Served and Largest Gang First Served policies.
Title: "The improved greedy gang scheduling by minimizing context switch condition" | Venue: 2021 26th International Computer Conference, Computer Society of Iran (CSICC)
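The basic mechanics of gang scheduling can be sketched as slot-by-slot packing: in each time slot, gangs are admitted in policy order until no further gang fits on the free processors. The 'Largest Gang First Served' ordering is modelled by the sort key below; everything else (slot granularity, single-slot gangs) is a simplifying assumption, not the paper's algorithm.

```python
# Toy gang-scheduling simulation on P processors: each gang runs for one
# time slot, and its response time is the slot in which it is admitted.
# Largest Gang First Served is modelled by the sort; FCFS would simply
# keep arrival order. Simplified sketch, not the paper's algorithm.

def schedule(gang_sizes, processors, largest_first=True):
    assert max(gang_sizes) <= processors      # every gang must fit the machine
    pending = list(enumerate(gang_sizes))     # (gang id, processors needed)
    finish_slot = {}
    slot = 0
    while pending:
        if largest_first:
            pending.sort(key=lambda g: -g[1])
        free = processors
        rest = []
        for gid, size in pending:
            if size <= free:
                free -= size
                finish_slot[gid] = slot       # gang admitted in this slot
            else:
                rest.append((gid, size))      # waits for a later slot
        pending = rest
        slot += 1
    return finish_slot

# 4 processors: a gang of 3 fills slot 0 alone, the two gangs of 2 pack
# together into slot 1.
slots = schedule([3, 2, 2], processors=4)
assert slots == {0: 0, 1: 1, 2: 1}
```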