Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964904
Milad Abaspoor, S. Meshgini, T. Y. Rezaii, A. Farzamnia
The main idea of this article is to provide a numerical diagnostic method for breast cancer diagnosis from MRI images. To achieve this goal, we used the region-growing method to identify the target area. In region growing, the image is subdivided into distinct regions based on the similarity or homogeneity of adjacent pixels, according to the criteria used in the homogeneity analysis to decide whether pixels belong to the corresponding region. In this paper, we compared manual seed selection with the use of FCM as the fitness function of a genetic algorithm. The presented algorithm was evaluated on 212 healthy subjects and 110 patients. Results show that the GA-FCM method selects initial points better than the manual method. The sensitivity of the presented method is 0.67. Comparing the fuzzy fitness function in the genetic algorithm with other techniques shows that the proposed model performs best on the Jaccard index, achieving the highest Jaccard values and the lowest Jaccard distance, with Jaccard values close to 0.9.
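The Jaccard comparison reported above can be made concrete. A minimal sketch (not the paper's implementation) of the Jaccard index and Jaccard distance over two sets of segmented pixel coordinates; the pixel sets are hypothetical:

```python
def jaccard_index(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two pixel sets; 1.0 is a perfect match."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def jaccard_distance(a, b):
    """Jaccard distance 1 - J(A, B); lower means better segmentation agreement."""
    return 1.0 - jaccard_index(a, b)

segmented = [(0, 0), (0, 1), (1, 0), (1, 1)]      # hypothetical region pixels
ground_truth = [(0, 1), (1, 0), (1, 1), (2, 1)]
print(jaccard_index(segmented, ground_truth))     # 3 shared / 5 total = 0.6
print(jaccard_distance(segmented, ground_truth))  # 0.4
```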
{"title":"A Novel Method for Detecting Breast Cancer Location Based on Growing GA-FCM Approach","authors":"Milad Abaspoor, S. Meshgini, T. Y. Rezaii, A. Farzamnia","doi":"10.1109/ICCKE48569.2019.8964904","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964904","url":null,"abstract":"The main idea of this article is to provide a numerical diagnostic method for breast cancer diagnosis of the MRI images. To achieve this goal, we used the region’s growth method to identify the target area. In the area’s growth method, based on the similarity or homogeneity of the adjacent pixels, the image is subdivided into distinct areas according to the criteria used for homogeneity analysis to determine their belonging to the corresponding region. In this paper, we used manual methods and use of FCM as the function of genetic algorithm fitness. The presented algorithm is performed for 212 healthy and 110 patients. Results show that GA-FCM method have better performance than hand method to select initial points. The sensitivity of presented method is 0.67. The results of the comparison of the fuzzy fitness function in the genetic algorithm with other technique show that the proposed model is better suited to the Jaccard index with the highest Jaccard values and the lowest Jaccard distance. Among the techniques, the presented works well because of the similarity of techniques and the lowest Jaccard distance. 
Values close to 0.9 are close to 0.8.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"30 1","pages":"238-242"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81313424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964800
Sadegh Sehhatbakhsh, Yasser Sedaghat
The scheduling of mixed-criticality systems, where multiple functionalities with different criticality levels are integrated on a shared hardware platform, is an important research area. Reconfigurable platforms, which combine the advantages of software flexibility and performance efficiency, are recognized as suitable processing platforms for real-time embedded systems. In this paper, we consider the scheduling of mixed-criticality systems with two criticality levels on reconfigurable platforms. Partitioned fixed-priority preemptive scheduling is used to schedule tasks. Since the context-switch overhead on reconfigurable platforms is not as small as that of multiprocessors, it is taken into account in our schedulability analysis. Furthermore, a context-switch-aware partitioning algorithm is presented to improve the schedulability of tasks on platforms where the context-switch cost cannot be neglected. The experimental results show that our proposed partitioning algorithm achieves higher schedulability ratios than classical partitioning algorithms.
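The idea of a context-switch-aware analysis can be sketched as a fixed-priority response-time test in which each job's execution time is inflated by a switch-in and switch-out overhead, combined with a first-fit partitioner. This is a simplified illustration under implicit deadlines, not the paper's algorithm; the `2 * cs` inflation model and the example task set are assumptions:

```python
def response_time(task, higher, cs):
    """Fixed-priority response-time analysis for integer (C, T) tasks with
    implicit deadlines; every job is charged a switch-in and switch-out cost."""
    c = task[0] + 2 * cs
    r = c
    while True:
        # Each higher-priority task interferes ceil(r / T) times, inflated C each.
        r_new = c + sum(((r + t[1] - 1) // t[1]) * (t[0] + 2 * cs)
                        for t in higher)
        if r_new > task[1]:      # misses its deadline (= period)
            return None
        if r_new == r:
            return r
        r = r_new

def first_fit(tasks, n_units, cs):
    """Assign tasks to the first unit on which the whole set stays schedulable;
    returns None if no feasible assignment exists."""
    units = [[] for _ in range(n_units)]
    for task in sorted(tasks, key=lambda t: t[1]):   # rate-monotonic order
        for unit in units:
            trial = sorted(unit + [task], key=lambda t: t[1])
            if all(response_time(t, trial[:i], cs) is not None
                   for i, t in enumerate(trial)):
                unit.append(task)
                break
        else:
            return None
    return units

tasks = [(1, 4), (1, 5), (2, 10)]             # hypothetical (C, T) pairs
print(first_fit(tasks, 1, cs=0) is not None)  # True: fits on one unit
print(first_fit(tasks, 1, cs=1) is not None)  # False: the overhead breaks it
```

The contrast between the last two calls is the point: the same task set that fits on one processing unit with negligible context-switch cost becomes unschedulable once the overhead is charged, which is why the partitioner must account for it.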
{"title":"Scheduling Mixed-criticality Systems on Reconfigurable Platforms","authors":"Sadegh Sehhatbakhsh, Yasser Sedaghat","doi":"10.1109/ICCKE48569.2019.8964800","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964800","url":null,"abstract":"The scheduling for mixed criticality systems, where multiple functionalities with different criticality levels are integrated into a shared hardware platform, is an important research area. Reconfigurable platforms, which combine the advantages of software flexibility and performance efficiencies, are recognized as a suitable processing platform for real-time embedded systems. In this paper, we consider the scheduling of mixed criticality systems with two criticality levels on reconfigurable platforms. Partitioned fixed-priority preemptive scheduling is used to schedule tasks. Since the context switch overhead in reconfigurable platforms is not as small as that of multiprocessors, it has been taken into account in our schedulability analysis. Furthermore, a context-switch-aware partitioning algorithm is presented to improve the schedulability of tasks in platforms that context switch cost cannot be neglected. The experiments results show that our proposed partitioning algorithm gives higher schedulability ratios when compared to the classical partitioning algorithms.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"31 1","pages":"431-436"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90176664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8965084
Mahnaz Rahmani, F. Razzazi
In this paper, we utilize a set of long short-term memory (LSTM) deep neural networks to distinguish a particular speaker from the rest of the speakers in single-channel recorded speech. The structure of the network is modified to provide suitable results. The proposed architecture models the sequence of spectral data in each frame as the key feature. Each network has two memory cells and accepts an 8-band spectral window as input. The reconstructions of the different bands are merged to rebuild the speaker’s utterance. We evaluated the intended-speaker reconstruction performance of the proposed system with the PESQ and MSE measures. Using all utterances of each speaker in the TIMIT dataset as training data to build an LSTM-based attention auto-encoder model, we achieved a PESQ of 3.66 when rebuilding the intended speaker. In contrast, the PESQ was 1.92 on average for other speakers when the mentioned speaker’s network was used. This test was successfully repeated for different utterances of different speakers.
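The band-wise processing can be illustrated by splitting one frame's spectrum into 8-bin windows and merging the per-band reconstructions back together. This is a structural sketch only; the identity pass-through below stands in for the per-band LSTM auto-encoders, and the 32-bin frame is hypothetical:

```python
def split_bands(spectrum, band_width=8):
    """Split one frame's magnitude spectrum into consecutive band windows."""
    return [spectrum[i:i + band_width]
            for i in range(0, len(spectrum) - band_width + 1, band_width)]

def merge_bands(band_outputs):
    """Concatenate per-band reconstructions back into a full spectrum."""
    merged = []
    for band in band_outputs:
        merged.extend(band)
    return merged

frame = [float(i) for i in range(32)]   # stand-in for a 32-bin spectral frame
bands = split_bands(frame)              # four 8-bin windows
# Each window would be fed to its own per-band model; identity here.
reconstructed = merge_bands(bands)
print(reconstructed == frame)           # True
```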
{"title":"An LSTM Auto-Encoder for Single-Channel Speaker Attention System","authors":"Mahnaz Rahmani, F. Razzazi","doi":"10.1109/ICCKE48569.2019.8965084","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965084","url":null,"abstract":"In this paper, we utilized a set of long short term memory (LSTM) deep neural networks to distinguish a particular speaker from the rest of the speakers in a single channel recorded speech. The structure of this network is modified to provide the suitable result. The proposed architecture models the sequence of spectral data in each frame as the key feature. Each network has two memory cells and accepts an 8 band spectral window as the input. The results of the reconstructions of different bands are merged to rebuild the speaker’s utterance. We evaluated the intended speaker's reconstruction performance of the proposed system with PESQ and MSE measures. Using all utterances of each speaker in TIMIT dataset as the training data to build an LSTM based attention auto-encoder model, we achieved 3.66 in PESQ measure to rebuild the intended speaker. In contrast, the PESQ was 1.92 in average for other speakers when we used the mentioned speaker’s network. This test was successfully repeated for different utterances of different speakers.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"1 1","pages":"110-115"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90197320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964840
Milad Ghahramani, Abolfazl Laakdashti
The differential evolution algorithm is one of the fast, efficient, and strong population-based algorithms, with wide application in solving various problems. Although the speed, power, and efficiency of this algorithm have been demonstrated on many optimization problems, this algorithm, like other metaheuristic algorithms, is not guaranteed to reach the global optimum and may become trapped in local optima. One of the reasons the algorithm stalls at local optima is an imbalance between its exploration and exploitation abilities. The operator of the differential evolution algorithm that plays an essential role in establishing the proper balance between exploration and exploitation is the mutation operator. In this paper, a new mutation method is proposed to improve the efficiency of the differential evolution algorithm by striking an appropriate balance between the algorithm’s exploration and exploitation abilities. Comparing the proposed mutation method with other mutation methods indicates that it has better convergence speed and accuracy than the other methods, and it can be employed to solve large-scale optimization problems.
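For reference, the classical DE/rand/1 mutation that such proposals modify can be sketched as follows; the scale factor F = 0.5 and the toy population are common defaults for illustration, not values from the paper:

```python
import random

def de_rand_1(population, i, f=0.5):
    """Classical DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct indices different from the target index i."""
    candidates = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = random.sample(candidates, 3)
    return [a + f * (b - c) for a, b, c in
            zip(population[r1], population[r2], population[r3])]

random.seed(1)
population = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
mutant = de_rand_1(population, 0)   # a new 2-D donor vector
```

Balancing exploration and exploitation in DE typically comes down to how the base vector and difference vectors in this formula are chosen, which is exactly the lever the proposed mutation method adjusts.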
{"title":"Efficiency Improvement of Differential Evolution Algorithm Using a Novel Mutation Method","authors":"Milad Ghahramani, Abolfazl Laakdashti","doi":"10.1109/ICCKE48569.2019.8964840","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964840","url":null,"abstract":"The differential evolution algorithm is one of the fast, efficient, and strong population-based algorithms, which has extended applications in solving various problems. Although the velocity, power, and efficiency of this algorithm have been demonstrated in solving many optimization problems, this algorithm, like other metaheuristic algorithms, is not guaranteed to achieve the global optimal points of the optimization problems and may be ceased at optimal local points. One of the reasons for stopping the algorithm at the local optimum points is the imbalance between the exploration and exploitation abilities of the algorithm. One of the operators of the differential evolution algorithm, which plays an essential role in establishing the proper balance between the exploitation and exploitation of the algorithm, is the mutation operator. In this paper, a new mutation method is proposed to improve the efficiency of the differential evolution algorithm to make an appropriate balance between the exploitation and exploitation abilities of the algorithm. 
Comparing the results of the proposed mutation method with other mutation methods indicates that the proposed method has better speed and accuracy convergence rather than other methods, and it can be employed to solve large-scale optimization problems.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"97 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83596751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964893
Mohammad Ali Labbaf Khaniki, Amir Hossein Asnavandi, M. Manthouri
Boost converters are among the most important types of DC-DC converters. They increase the voltage level while stabilizing the output and reducing voltage ripple. The system is nonlinear by nature, and uncertainty in its modeling is unavoidable. This study presents a fractional-order fuzzy PI (FOFPI) controller to control the system. The Imperialist Competitive Algorithm (ICA) is used to optimize the parameters of the proposed controllers, and the fractional order of the integral term is also obtained by the ICA. The results are compared with a fuzzy PI (FPI) controller and show that the FOFPI exhibits fewer fluctuations, less overshoot, and a shorter settling time than the FPI. Additionally, the Power Factor Correction (PFC) value is closer to one. In fact, the FOFPI is more flexible and handles uncertainty better than the FPI. The results demonstrate the performance of the proposed method against other methods.
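The fractional-order integral at the heart of a FOFPI control law (u = Kp·e + Ki·I^λ e) can be approximated with Grünwald-Letnikov weights. The sketch below is a generic discretization under an assumed sampling step h, not the paper's ICA-tuned controller; for lam = 1 it reduces to a plain Riemann sum, which is the sanity check used:

```python
def gl_fractional_integral(samples, lam, h):
    """Grünwald-Letnikov approximation of the order-lam fractional integral
    of a sampled signal (oldest sample first), with sampling step h."""
    alpha = -lam                      # integral of order lam = derivative of order -lam
    w, weights = 1.0, []
    for k in range(len(samples)):
        if k > 0:
            w *= 1.0 - (alpha + 1.0) / k   # GL binomial-weight recursion
        weights.append(w)
    # Convolve the GL weights with the sample history, newest sample first.
    newest_first = samples[::-1]
    return (h ** lam) * sum(wk * x for wk, x in zip(weights, newest_first))

e_history = [1.0] * 10                                    # constant error, h = 0.1 s
print(gl_fractional_integral(e_history, lam=1.0, h=0.1))  # 1.0 (Riemann-sum check)
```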
{"title":"Boost PFC Converter Control using Fractional Order Fuzzy PI Controller Optimized via ICA","authors":"Mohammad Ali Labbaf Khaniki, Amir Hossein Asnavandi, M. Manthouri","doi":"10.1109/ICCKE48569.2019.8964893","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964893","url":null,"abstract":"One of the most important types of DC-DC converters is Boost converters. They increase the voltage level, stabilizing and reducing the voltage ripples at output. The nature of this system is nonlinear and uncertainty is unavoidable in modeling it. This study presented a fractional order fuzzy PI (FOFPI) controller to control the system. The Imperialist Competitive Algorithm (ICA) Optimization is used to optimize the parameters of proposed controllers. The fractional order of integral is achieved by ICA. The results are compared with fuzzy PI (FPI) controller. They show the FOFPI has less fluctuations, overshoot and settling time compared to FPI. Additionally, the value of Power Factor Correction (PFC) is closer to one. In fact, FOFPI has more flexibility and good performance in dealing with uncertainty in comparison with FPI. The results reveal the performance of the proposed method against other methods.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"18 1","pages":"131-136"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76379101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964692
Reza Akhoundzade, Kourosh Hashemi Devin
Sentiment analysis is a subfield of natural language processing that aims at opinion mining: analyzing the thoughts, orientation, and evaluations of users within texts. The solution to this problem involves two main steps: extracting aspects and determining users’ positive or negative sentiments with respect to those aspects. Two main challenges of sentiment analysis in the Persian language are the lack of comprehensive tagged datasets and the use of colloquial language in texts. In this paper, we propose a system that identifies and extracts sentiment words in Persian using unsupervised methods and also supports colloquial words. Additionally, we propose and implement a state-of-the-art technique to expand a Persian sentiment lexicon. Our method combines a neural network (the Word2Vec model) with rule-based methods. The F1 measure for sentiment-word extraction with our proposed method is 0.58.
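Lexicon expansion via embedding similarity can be sketched as follows. The toy vectors and the 0.9 threshold are hypothetical, standing in for a trained Word2Vec model and the paper's rule-based filters:

```python
import math

# Toy word embeddings standing in for trained Word2Vec vectors (hypothetical).
embeddings = {
    "good":  [0.9, 0.1, 0.0],
    "great": [0.85, 0.15, 0.05],
    "bad":   [-0.8, 0.2, 0.1],
    "table": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def expand_lexicon(seeds, vocab, threshold=0.9):
    """Add any vocabulary word whose embedding is close to some seed word's."""
    expanded = set(seeds)
    for word, vec in vocab.items():
        if word in expanded:
            continue
        if any(cosine(vec, vocab[s]) >= threshold for s in seeds):
            expanded.add(word)
    return expanded

print(sorted(expand_lexicon({"good"}, embeddings)))  # ['good', 'great']
```

Starting from the seed "good", only "great" clears the similarity threshold, while "bad" and "table" are rejected; a real pipeline would iterate this over a large embedding vocabulary and then apply rule-based cleanup.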
{"title":"Persian Sentiment Lexicon Expansion Using Unsupervised Learning Methods","authors":"Reza Akhoundzade, Kourosh Hashemi Devin","doi":"10.1109/ICCKE48569.2019.8964692","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964692","url":null,"abstract":"Sentiment analysis, is a subfield of natural language processing that aims at opinion mining to analyze thoughts, orientation and, evaluation of users within some texts. The solution to this problem includes two main steps: extracting aspects and determining users’ positive or negative sentiments with respect to the aspects. Two main challenges of sentiment analysis in the Persian language are lack of comprehensive tagged data sets and use of colloquial language in texts. In this paper we propose, a system to specify and extract sentiment words using unsupervised methods in the Persian language that also support colloquial words. Additionally, we also proposed and implemented a state-of-art technique to expand Persian sentiment lexicon. Our proposed method utilized neural network (Word2Vec model) with the help of rule-based methods. F1 measure for sentiment words extraction in our proposed method is 0.58.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"38 1","pages":"461-465"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91324918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964911
Masoumeh Siar, M. Teshnehlab
Estimating individuals’ brain age from brain images can be very useful in many applications. Brain age has contributed greatly to predicting and preventing early deaths in the medical community, and it can also be very useful for diagnosing diseases such as Alzheimer’s. To the best of the authors’ knowledge, this paper is one of the first studies on age detection from brain images using deep learning (DL). In this paper, a convolutional neural network (CNN) is used for age detection from brain magnetic resonance (MRI) images. The images used in this paper were collected by the authors from imaging centers and labeled with age at the centers: 1290 images in total, 941 for training and 349 for testing. The AlexNet model is used as the CNN architecture; it has 5 convolutional layers and 3 sub-sampling layers, and the last layer categorizes the images into five age classes. The accuracy of the CNN is 79% with a Softmax classifier, 75% with a Support Vector Machine (SVM) classifier, and 49% with a Decision Tree (DT) classifier. In addition to the accuracy criterion, we use recall, precision, and F1-score to evaluate network performance.
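The recall/precision/F1 evaluation mentioned above is computed per class from confusion-matrix counts. The counts below are hypothetical, not the paper's results:

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one of the five age classes on a test set.
p, r, f = precision_recall_f1(tp=60, fp=20, fn=15)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.75 0.8 0.774
```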
{"title":"Age Detection from Brain MRI Images Using the Deep Learning","authors":"Masoumeh Siar, M. Teshnehlab","doi":"10.1109/ICCKE48569.2019.8964911","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964911","url":null,"abstract":"Estimating the age of the brains of individuals from brain images can be very useful in many applications. The brain’s age has greatly contributed to predicting and preventing early deaths in the medical community. It can also be very useful for diagnosing diseases, such as Alzheimer’s. According to the authors knowledge, this paper is one of the first researches that have been done in age detection by brain images using Deep Learning (DL). In this paper, the convolution neural network (CNN), used for age detection from brain magnetic resonance images (MRI). The images used in this paper are from the imaging centers and collected by the author of the paper. In this paper 1290 images have been collected, 941 images for train data and 349 images for test images. Images collected at the centers were labeled age. In this paper, the Alexnet model is used in CNN architecture. The used architecture of the architecture has 5 Convolutional layers and 3 Sub-sampling layers that the last layer has been used to categorize the image. The CNN that the last layer has been used to categorize the images into five age classes.The accuracy of the CNN is obtained by the Softmax classifier 79%, Support Vector Machine (SVM) classifier 75% and the Decision Tree (DT) classifier, 49%. 
In addition to the accuracy criterion, we use the benchmarks of Recall, Precision and F1-Score to evaluate network performance.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"34 1","pages":"369-374"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89204174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964846
Masoumeh Siar, M. Teshnehlab
Brain tumors can be classified into two types: benign and malignant. Timely and prompt disease detection and a treatment plan lead to improved quality of life and increased life expectancy for these patients. One of the most practical and important methods is to use a Deep Neural Network (DNN). In this paper, a Convolutional Neural Network (CNN) is used to detect tumors in brain Magnetic Resonance Imaging (MRI) images. The images were first fed to the CNN. The classification accuracy with a Softmax fully connected layer is 98.67%, while the CNN achieves 97.34% with a Radial Basis Function (RBF) classifier and 94.24% with a Decision Tree (DT) classifier. In addition to the accuracy criterion, we use sensitivity, specificity, and precision to evaluate network performance. According to the results obtained on the test images, the Softmax classifier yields the best accuracy in the CNN. This is a new method that combines feature-extraction techniques with a CNN for tumor detection from brain images; the proposed method achieves 99.12% accuracy on the test data. Given the importance of the physician’s diagnosis, this accuracy helps doctors diagnose tumors and treat patients.
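The sensitivity and specificity criteria used above come directly from binary confusion-matrix counts; the tumor/no-tumor counts below are hypothetical, for illustration only:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a binary tumor / no-tumor MRI classifier.
sens, spec = sensitivity_specificity(tp=95, fn=5, tn=90, fp=10)
print(sens, spec)  # 0.95 0.9
```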
{"title":"Brain Tumor Detection Using Deep Neural Network and Machine Learning Algorithm","authors":"Masoumeh Siar, M. Teshnehlab","doi":"10.1109/ICCKE48569.2019.8964846","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964846","url":null,"abstract":"Brain tumor can be classified into two types: benign and malignant. Timely and prompt disease detection and treatment plan leads to improved quality of life and increased life expectancy in these patients. One of the most practical and important methods is to use Deep Neural Network (DNN). In this paper, a Convolutional Neural Network (CNN) has been used to detect a tumor through brain Magnetic Resonance Imaging (MRI) images. Images were first applied to the CNN. The accuracy of Softmax Fully Connected layer used to classify images obtained 98.67%. Also, the accuracy of the CNN is obtained with the Radial Basis Function (RBF) classifier 97.34% and the Decision Tree (DT) classifier, is 94.24%. In addition to the accuracy criterion, we use the benchmarks of Sensitivity, Specificity and Precision evaluate network performance. According to the results obtained from the categorizers, the Softmax classifier has the best accuracy in the CNN according to the results obtained from network accuracy on the image testing. This is a new method based on the combination of feature extraction techniques with the CNN for tumor detection from brain images. The method proposed accuracy 99.12% on the test data. 
Due to the importance of the diagnosis given by the physician, the accuracy of the doctors help in diagnosing the tumor and treating the patient increased.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"1 1","pages":"363-368"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89761090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8965163
Mohammad Kamyar Arbab, Mahmoud Naghibzadeh, S. R. Kamel Tabbakh
Scientific workflows are suitable models in many applications for the recognition and parallel execution of tasks, especially in the cloud. The various aspects of resource provisioning and the resource pricing model in the cloud environment make workflow scheduling very complex. One of the main aims of scheduling algorithms is to satisfy users’ different quality-of-service requirements. Communication delay between tasks is an important factor affecting optimal workflow scheduling, and it can greatly increase the cost of workflow scheduling if proper actions are not taken. To solve this problem, we propose a task-duplication-based list scheduling algorithm called Communication-Critical Task Duplication (CCTD). We first define the concept of a communication-critical task (CCT) for a workflow. Then, using a ranking-based approach, we identify the communication-critical tasks in a workflow as well as candidates for duplication. Tasks are duplicated in idle time slots of the leased virtual machines to which their child tasks are mapped. This idea eliminates the cost and time of data transfer between parent and child tasks, reduces task execution time, and makes effective use of the leased time intervals of resources. In line with the proposed scheduling algorithm, a new heuristic method is proposed for budget distribution; it distributes the overall budget among tasks in proportion to the workload and duplication rank of each task. The proposed algorithm was evaluated and verified using four well-known scientific workflows. The simulation results show that the CCTD algorithm reduces the overall workflow completion time while respecting the user’s budget constraint.
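The proportional budget-distribution heuristic can be sketched as follows. The task names, workloads, and ranks are hypothetical, and the weight workload × duplication rank is one plausible reading of "in proportion to the workload and duplication rank":

```python
def distribute_budget(tasks, total_budget):
    """Split a budget among tasks in proportion to workload * duplication rank.

    Each task is a (name, workload, rank) triple; the returned dict maps task
    names to their budget shares, which sum to total_budget."""
    weights = {name: load * rank for name, load, rank in tasks}
    total = sum(weights.values())
    return {name: total_budget * w / total for name, w in weights.items()}

tasks = [("t1", 10.0, 2.0), ("t2", 5.0, 1.0), ("t3", 5.0, 3.0)]
shares = distribute_budget(tasks, 100.0)
print(shares)  # {'t1': 50.0, 't2': 12.5, 't3': 37.5}
```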
{"title":"Communication-Critical Task Duplication for Cloud Workflow Scheduling with Time and Budget Concerns","authors":"Mohammad Kamyar Arbab, Mahmoud Naghibzadeh, S. R. Kamel Tabbakh","doi":"10.1109/ICCKE48569.2019.8965163","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965163","url":null,"abstract":"scientific workflows are suitable models in many applications for recognition and execution of tasks in parallel, especially in the Cloud. Different aspects of resource provisioning and the resource pricing model in the cloud environment cause the scheduling problem of workflow very complex. One of the main aims of the scheduling algorithms is to satisfy the users’ different quality of service requirements. Communication delay between tasks is an important affecting factor in optimal scheduling of workflows. It can also highly increase the cost of workflow scheduling if proper actions are not taken. To solve this problem, we propose a tasks duplication-based list scheduling algorithm called Communication-Critical Task Duplication (CCTD). We first define the concept of communication critical task (CCT) for a workflow. Then, by presenting a ranking-based approach, we identify communication critical tasks in a workflow as well as duplicating candidates. Task duplication in idle time slots of leased virtual machines which their children tasks are mapped to. This idea, while eliminating the cost and time of data transfer between parent-child tasks, reduces the time of execution of tasks and effectively uses leased time intervals of resources. According to the proposed scheduling algorithm, a new heuristic method has been proposed for the budget distribution. This method distribute overall budget to tasks in proportional to the workload and duplication rank of each task. The proposed algorithm was evaluated and verified using four well-known scientific workflows. 
The simulation results show that the CCTD algorithm, while respecting the user budget constraint, reduces the workflow overall completion time.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"24 1","pages":"255-262"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87529148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01DOI: 10.1109/ICCKE48569.2019.8964824
F. Z. Boroujeni, Simindokht Jahangard, R. Rahmat
Extracting an accurate skeletal representation of the coronary arteries is an important step for subsequent analysis of angiography images, such as image registration and 3D reconstruction of the arterial tree. This step is usually performed by enhancing vessel-like objects in the image, in order to differentiate between blood vessels and background, followed by a thinning algorithm to obtain the final output. Another approach is the direct extraction of centerline points using an exploratory tracing algorithm, preceded by a seed-point detection scheme that provides a set of reliable starting points for the tracing. A large number of methods fall into these two categories, and this paper aims to contrast them through a brief review of their inherent characteristics, associated limitations, and current challenges and issues.
{"title":"A Brief Review on Vessel Extraction and Tracking Methods","authors":"F. Z. Boroujeni, Simindokht Jahangard, R. Rahmat","doi":"10.1109/ICCKE48569.2019.8964824","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964824","url":null,"abstract":"Extracting an accurate skeletal representation of coronary arteries is an important step for subsequent analysis of angiography images such as image registration and 3D reconstruction of the arterial tree. This step is usually performed by enhancing vessel-like objects in the image, in order to differentiate between blood vessels and background, followed by applying the thinning algorithm to obtain the final output. Another approach is direct extraction of centerline points using exploratory tracing algorithm preceded by a seed point detection schema to provide a set of reliable starting points for the tracing algorithm. A large number of methods fall in these two approaches and this paper aims to contrast them through a brief review of their innate characteristics, associated limitations and current challenges and issues.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"8 1","pages":"46-52"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85360074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}