Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964943
Title: A Prior Preference-Based Decision-Making Algorithm in Pareto Optimization
M. Jafari, Lida Daryani, M. Feizi-Derakhshi
In real-life applications, the importance of a flexible optimization algorithm is obvious. Evolutionary Multi-Objective Optimization (EMO) algorithms, and Pareto-based methods in particular, are among the most significant and widely used classes for such problems. Algorithms that use EMO as an a posteriori decision-making (DM) step in weighted-objective problems often suffer from uniform prioritization of the objectives. In this paper, we propose a lightweight angle-based Pareto-front (PF) updating algorithm that incorporates preferences over the desired objectives through a Favorite Region (FR). The FR is created in the objective space according to a prior, fixed angle over the priority objectives, so that the solutions on the PF tend towards the FR during the evolutionary process. Solutions outside the favorite region are not discarded; instead, an update process demotes them to later fronts. During evolution, this updating of the Pareto ranking drives the solutions in the first and second fronts to explore and exploit appropriate solutions in the favorite region, with a uniform distribution on the first Pareto front, while the density of solutions in the undesirable region decays. Experimental results on benchmark multi-objective problems show that, in addition to supporting preference-based decision-making, the proposed algorithm achieves the best convergence performance while providing a tradeoff between convergence quality and computational complexity.
{"title":"A Prior Preference-Based Decision-Making Algorithm in Pareto Optimization","authors":"M. Jafari, Lida Daryani, M. Feizi-Derakhshi","doi":"10.1109/ICCKE48569.2019.8964943","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964943","url":null,"abstract":"In the applications of real-life, the importance of having a flexible optimization algorithm is obvious. Commonly in these issues, Evolutionary Multi-Objective Optimization (EMO) algorithms and particularly Pareto optimization method as one of the most significant and useful classes have been used extensively. Often optimization algorithms that have used the EMO algorithm in their own as posteriori Decision-making (DM) algorithms, in weighted objectives problems, have suffered from the uniform prioritization of objectives. In this paper, we propose a lightweight angle-based updating Pareto front (PF) algorithm which considers the preferences of desired objectives expressed using the Favorite Region (FR). Actually, the FR has been created in the objective space according to the prior-fixed angle of priority objectives. Thus, the solutions in PF will be able to tend towards FR during the evolutionary process. Consequently, other solutions that are not in the favorite region will not go away, but the Fronts levels of solutions via an update process will change rearwardly. The updating process in Pareto method, during the evolution process, causes that the solutions in the first and second fronts lead to the exploration and exploitation of appropriate solutions in the favorite regions with uniform distribution for first Pareto Front, while the solutions’ density in the undesirable region become impaired. The experimental results on benchmark multi-objective problems show that the proposed algorithm in addition to providing preference decision-making, by providing a tradeoff between convergence performance and computational complexity, can give the best convergence performance.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"24 1","pages":"98-103"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81921070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964956
Title: Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation
Mehrdad Noori, Ali Bahri, K. Mohammadi
Gliomas are the most common and aggressive brain tumors, and in their highest grade they lead to a short life expectancy. Treatment assessment is therefore a key stage in enhancing patients' quality of life. Recently, deep convolutional neural networks (DCNNs) have achieved remarkable performance in brain tumor segmentation, but the task remains difficult owing to the highly varying intensity and appearance of gliomas. Most existing methods, especially UNet-based networks, integrate low-level and high-level features in a naive way, which may confuse the model. Moreover, most approaches employ 3D architectures to benefit from the 3D contextual information of input images; these architectures have more parameters and higher computational complexity than 2D architectures, while plain 2D models forgo that 3D context. To address these issues, we design a low-parameter network based on 2D UNet that employs two techniques. The first is an attention mechanism applied after the concatenation of low-level and high-level features; it prevents confusion by weighting each channel adaptively. The second is Multi-View Fusion, which lets us benefit from the 3D contextual information of input images despite using a 2D model. Experimental results demonstrate that our method performs favorably against 2017 and 2018 state-of-the-art methods.
{"title":"Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation","authors":"Mehrdad Noori, Ali Bahri, K. Mohammadi","doi":"10.1109/ICCKE48569.2019.8964956","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964956","url":null,"abstract":"Gliomas are the most common and aggressive among brain tumors, which cause a short life expectancy in their highest grade. Therefore, treatment assessment is a key stage to enhance the quality of the patients’ lives. Recently, deep convolutional neural networks (DCNNs) have achieved a remarkable performance in brain tumor segmentation, but this task is still difficult owing to high varying intensity and appearance of gliomas. Most of the existing methods, especially UNet-based networks, integrate low-level and high-level features in a naive way, which may result in confusion for the model. Moreover, most approaches employ 3D architectures to benefit from 3D contextual information of input images. These architectures contain more parameters and computational complexity than 2D architectures. On the other hand, using 2D models causes not to benefit from 3D contextual information of input images. In order to address the mentioned issues, we design a low-parameter network based on 2D UNet in which we employ two techniques. The first technique is an attention mechanism, which is adopted after concatenation of low-level and high-level features. This technique prevents confusion for the model by weighting each of the channels adaptively. The second technique is the Multi-View Fusion. By adopting this technique, we can benefit from 3D contextual information of input images despite using a 2D model. Experimental results demonstrate that our method performs favorably against 2017 and 2018 state-of-the-art methods.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"18 1","pages":"269-275"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85128299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965177
Title: DEEM: A Decentralized and Energy Efficient Method for Detecting Sinkhole Attacks on the Internet of Things
Saeid Rafiei Taghanaki, K. Jamshidi, Ali Bohlooli
The RPL protocol was designed for routing in Internet of Things (IoT) networks, and it can come under attack. One such attack is the sinkhole attack, in which an attacker tries to attract nearby nodes so that many nodes route their traffic through the attacker node. Previous methods for detecting sinkhole attacks in RPL have focused on detection accuracy. In the present study, we propose a local detection method called DEEM that reduces the energy overhead of detection while still achieving proper detection accuracy. DEEM runs two phases in each node: an Information Gathering phase and a Detection phase. We implemented DEEM on Contiki OS and evaluated it using the Cooja simulator. Our assessment shows that, in the simulated scenarios, DEEM has a low energy overhead, a high true positive rate, and good detection speed, and that it scales well. Its overhead is small enough for deployment on resource-constrained nodes.
{"title":"DEEM: A Decentralized and Energy Efficient Method for detecting sinkhole attacks on the internet of things","authors":"Saeid Rafiei Taghanaki, K. Jamshidi, Ali Bohlooli","doi":"10.1109/ICCKE48569.2019.8965177","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965177","url":null,"abstract":"The RPL protocol was provided for routing in the Internet of Things (IoT) network. This protocol may be under attack. One of the attacks in the RPL protocol is the sinkhole attack that, an attacker tries to attract nearby nodes and, as a result, it causes that many nodes pass their traffic through the attacker node. In the previous methods for detecting a sinkhole attack in the RPL protocol, the accuracy of the detection parameter has been important. In the present study, by providing a local detection method called DEEM and improving the overhead in terms of energy consumption associated with the detection method, also a proper detection accuracy was obtained. DEEM has two phases in each node called Information Gathering and Detection Phases. We implemented DEEM on Contiki OS and evaluated it using the Cooja simulator. Our assessment shows that, in simulated scenarios, DEEM has a low overhead in term of energy consumption, a high true positive rate, and a good detection speed, and this is a scalable method. The cost of DEEM overhead is small enough to be deployed in resource-constrained nodes.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"2 1","pages":"325-330"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78520656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964806
Title: Language Modeling Using Part-of-speech and Long Short-Term Memory Networks
Sanaz Saki Norouzi, A. Akbari, B. Nasersharif
In recent years, neural networks have been widely used for language modeling in various natural language processing tasks. Results show that long short-term memory (LSTM) networks are well suited to language modeling because of their ability to process long sequences. Furthermore, many studies have shown that extra information improves language model (LM) performance. In this research, we propose parallel structures for incorporating part-of-speech tags into language modeling, using both unidirectional and bidirectional LSTMs. Words and part-of-speech tags are fed to the network as parallel inputs, and two different structures for concatenating the two paths are proposed, depending on the type of network used in the parallel part. We evaluate on the Penn Treebank (PTB) dataset using the perplexity measure. Both proposed structures improve on the baseline models. Not only does the bidirectional LSTM variant attain the lowest perplexity, it also has the fewest training parameters among our proposed methods. Perplexity is reduced by 1.5% and 13% for the unidirectional and bidirectional LSTMs, respectively.
{"title":"Language Modeling Using Part-of-speech and Long Short-Term Memory Networks","authors":"Sanaz Saki Norouzi, A. Akbari, B. Nasersharif","doi":"10.1109/ICCKE48569.2019.8964806","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964806","url":null,"abstract":"In recent years, neural networks have been widely used for language modeling in different tasks of natural language processing. Results show that long short-term memory (LSTM) neural networks are appropriate for language modeling due to their ability to process long sequences. Furthermore, many studies are shown that extra information improve language models (LMs) performance. In this research, we propose parallel structures for incorporating part-of-speech tags into language modeling task using both the unidirectional and bidirectional type of LSTMs. Words and part-of-speech tags are given to the network as parallel inputs. In this way, to concatenate these two paths, two different structures are proposed according to the type of network used in the parallel part. We analyze the efficiency on Penn Treebank (PTB) dataset using perplexity measure. These two proposed structures show improvements in comparison to the baseline models. Not only does the bidirectional LSTM method gain the lowest perplexity, but it also has the lowest training parameters among our proposed methods. The perplexity of proposed structures has reduced 1.5% and %13 for unidirectional and bidirectional LSTMs, respectively.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"12 1","pages":"182-187"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77969315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/iccke48569.2019.8965068
Title: ICCKE 2019 Program Committee
{"title":"ICCKE 2019 Program Committee","authors":"","doi":"10.1109/iccke48569.2019.8965068","DOIUrl":"https://doi.org/10.1109/iccke48569.2019.8965068","url":null,"abstract":"","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78899552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964946
Title: A Novel Energy-aware Scheduling and Load-balancing Technique based on Fog Computing
Ahmad Alzeyadi, N. Farzaneh
With the development of modern information technology, the emergence of fog computing has brought computing power to edge equipment and supplied new solutions for traditional industrial applications. Providing communication among the devices of a smart factory is one of the most challenging issues: a smart factory requires the exchange of many messages among its various tools and intelligent agents, and since the connections are wireless, their capacity is limited. If the intelligent agents broadcast their messages, the process is costly with little benefit. Hence, this paper presents an effective solution for obtaining optimal connections among these elements while accounting for energy consumption, network efficiency, traffic, and latency in the exchange of messages. The proposed method is a communication-aware fog scheduling scheme focused on the energy consumption problems of manufacturing clusters. The algorithm considers four criteria: energy, a dynamic threshold, task waiting time, and communication delay among the smart factories. These criteria fall into two categories, and the weighting under which the scheduling and load-adjustment procedures operate depends on the user's preference. Experiments show that the workload in the proposed method is more evenly balanced across robots than in the baseline method; this load balancing reduces each robot's workload, which in turn reduces the waiting time for each product to be packaged. The communication volume in the network also decreases by about 63% compared to ELBS.
{"title":"A Novel Energy-aware Scheduling and Load-balancing Technique based on Fog Computing","authors":"Ahmad Alzeyadi, N. Farzaneh","doi":"10.1109/ICCKE48569.2019.8964946","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964946","url":null,"abstract":"Considering the development of modern information technology, the emergence of fog computing has gained equipment computing power and supplied new solutions for modern traditional industrial applications. Generally, providing communication among devices in smart factories structure is one of the most controversial issues. Since reciprocating lots of messages among existent various tools and intelligent agents is required in the smart factories, and the connections are naturally wireless, they will not have much to offer. If the intelligent agents tend to use broadcasting in sending their messages, the process will be costly with little outcome. Hence, in this paper, an effective solution is presented to gain optimum connections among these elements, while considering the complex issues on energy consumption, network efficiency, traffic, and latency in the exchange of messages. The proposed method is a scheduling awareness of the communicative fog while focusing on complicated energy consumption problems of manufacturing clusters. In the proposed algorithm, four criteria are considered: energy, dynamic threshold, waiting time of tasks, and communication delay among smart factors. These criteria are divided into two categories. The criteria according to which two scheduling and load adjusting procedures are performed depend on the user's opinion. The results of the experiments show that the workload in the proposed method is more balanced than the base method in the robot. This load balancing has reduced the amount of workload in each robot, which reduces the waiting time for each product to be packaged. Also, the amount of communication in the network in the proposed method has decreased about 63% compared to ELBS.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"46 1","pages":"104-109"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87561288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964875
Title: LDSFI: a Lightweight Dynamic Software-based Fault Injection
Hussien Al-haj Ahmad, Yasser Sedaghat, M. Moradiyan
Recently, numerous safety-critical systems have employed a variety of fault tolerance techniques, which are an essential requirement for keeping a system fault-tolerant. While trends in processor technology have increased effectiveness and performance, processors' sensitivity to soft errors has grown significantly, calling their fault tolerance into question. In this context, fault injection is one of the most popular, rapid, and cost-effective techniques for assessing a system's fault tolerance before deployment. In this paper, a pure software fault injection technique called LDSFI (Lightweight Dynamic Software-based Fault Injection) is presented and evaluated. Owing to its dynamic nature, LDSFI injects faults into binary code automatically at runtime; the technique therefore imposes no program runtime overhead and does not require the source code. The effectiveness of LDSFI was validated through exhaustive fault injection experiments on well-known benchmarks, carried out on an Intel x86 Core 2 Duo dual-core PC with 4 GB RAM running Ubuntu Linux 14.04 and the GNU Compiler Collection (GCC) version 4.9. Since LDSFI relies only on the GNU toolchain, it is highly portable and can be adapted to different platforms.
{"title":"LDSFI: a Lightweight Dynamic Software-based Fault Injection","authors":"Hussien Al-haj Ahmad, Yasser Sedaghat, M. Moradiyan","doi":"10.1109/ICCKE48569.2019.8964875","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964875","url":null,"abstract":"Recently, numerous safety-critical systems have employed a variety of fault tolerance techniques, which are considered an essential requirement to keep the system fault-tolerant. While the current trend in processors technology has increased their effectiveness and performance, the sensitivity of processors to soft errors has increased significantly, making their fault tolerance ability questionable. In this context, fault injection is considered as one of the most popular, rapid, and cost-effective techniques which enables the designers to assess the fault tolerance of systems under faults before their deployment. In this paper, a pure software fault injection technique called LDSFI (a Lightweight Dynamic Software-based Fault Injection) is presented and evaluated. Due to the dynamic aspect of LDSFI, faults are automatically injected into binary code at runtime. Thereby, the proposed technique does not impose any program runtime overhead since the intended source code is not required. The effectiveness of LDSFI was validated through performing exhaustive fault injection experiments using well-known benchmarks. The experiments were carried out using a Core 2 Duo processor, as an Intel x86 Dual-Core PC with 4GB RAM running Ubuntu Linux 14.04 with the GNU Compiler Collection (GCC) version 4.9. Since LDSFI relies on the GNU, it is highly portable and can be adapted for different platforms.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"63 1","pages":"207-213"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86588662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965130
Title: An Event-based Approach on Automatic Synchronous-to-Asynchronous Transformation of Web Service Invocations
Alireza Khalilipour, Moharram Challenger
One approach to increasing the performance of distributed systems, such as service-oriented systems, is asynchronous invocation of services. This is applicable because in many cases the clients do not need the result immediately and need not block after the invocation. Without such a mechanism, a developer must resort to a set of complex techniques to obtain the same performance, a procedure that is both error-prone and cumbersome. Moreover, the state-of-the-art tools and languages that provide asynchronous mechanisms expose developers to the details of these invocations, as the mechanism is not transparent. In this paper, a new approach is proposed that generates an event-based middleware from the candidate invocations found in the initial source code and modifies that code to work with the middleware. The middleware plays the role of an interface between clients and web services. With this approach, invocations are transparent to developers: synchronous and asynchronous invocations of web services are written in the same way from the developer's perspective, realized by automatically transforming synchronous invocations into asynchronous ones via the generated middleware. Experimental evaluations show that the approach transforms the invocations successfully and that the performance of asynchronous invocations is by far better than that of synchronous ones.
{"title":"An Event-based Approach on Automatic Synchronous-to-Asynchronous Transformation of Web Service Invocations","authors":"Alireza Khalilipour, Moharram Challenger","doi":"10.1109/ICCKE48569.2019.8965130","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965130","url":null,"abstract":"One of the approaches to increase the performance of Distributed Systems, such as Service Oriented Systems, is the asynchronous invocation of the services. This is applicable because in many cases the clients do not need the result immediately and need not be blocked after the invocation. Without this mechanism, a developer should use a set of complex techniques to gain the mentioned performance. However, this procedure is both error-prone and cumbersome for the developers. Also, the state-of-the-art tools and languages which provide asynchronous mechanisms, involve the developers with the details of these invocations, as the mechanism is not transparent in these languages. In this paper, a new approach is proposed which generates an event-based middleware based on the candidate invocations available in the source/initial code. The approach also modifies the initial code to adapt with the middleware. This middleware plays the role of an interface between clients and web services. Using this approach, the invocations are transparent for the developers. This means that the synchronous and asynchronous invocations of web services are done in the same manner from the developer perspective. This is realized by automatic transformation of synchronous invocations to asynchronous ones via the generated middleware. The experimental evaluations show that the approach transforms the invocations successfully. Also, the results show that the performance of the asynchronous invocations are by far better than the synchronous ones.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"118 1","pages":"162-169"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73637215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965048
Title: InduM: An Accurate Probability Inductance-based Model to Predict Delay in Chips
Z. Shirmohammadi, Masoumeh Taali, H. Sabzi
The reliability of many-core and multicore systems depends on the correct functioning of the communication structure between cores. Data transfer between cores over these on-chip links, however, can be seriously affected by crosstalk faults, whose main effect is timing delay. A model that predicts this delay can therefore save designers time in devising more efficient mechanisms to mitigate crosstalk faults. Accordingly, this paper proposes a probabilistic inductance-based model named InduM for predicting timing delay in on-chip communication. The main advantages of InduM are: 1) it accounts for inductance effects in the model; 2) it is based on a 5-wire model, which is more accurate; and 3) it can be applied to a communication channel of any arbitrary width. To validate the proposed model, SPICE simulations were performed under various working conditions, and the simulated delays were compared with those predicted by InduM. The comparisons show that InduM estimates communication-channel delay efficiently, with a 4-5% error rate.
{"title":"InduM: An Accurate probablity Inductance-based Model to Predict Delay in Chips","authors":"Z. Shirmohammadi, Masoumeh Taali, H. Sabzi","doi":"10.1109/ICCKE48569.2019.8965048","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965048","url":null,"abstract":"The reliability in many-core and multicore systems is dependent on the correct functionality of communication structure between cores. However, data transfer between cores in these on chip communications can seriously face with crosstalk faults. Timing delay is the most effect of crosstalk faults and so providing a model to predict the delay can reduce the time for designers to provide more efficient mechanisms to decrease crosstalk faults. Accordingly, this paper proposes a probability inductance-based model named InduM to reduce the timing delay in the communication of chips. The main advantages of InduM are: 1) it considers the inductance effects in the model;2) it is based on 5-wire that is more accurate 3) it can be applied to a communication channel with any arbitrary width. To validate the proposed model, SPICE simulations are performed in a various working conditions and delays obtained from simulations compared with those resulting from InduM model. Comparisons show that InduM can efficiently estimate the delay of communication channels with 4-5 % error rate.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"8 1","pages":"414-419"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83807500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965014
Title: Ensembles of Deep Neural Networks for Action Recognition in Still Images
S. Mohammadi, Sina Ghofrani Majelan, S. B. Shokouhi
Although notable improvements have been made recently in feature extraction and classification, human action recognition is still challenging, especially in images, where, unlike in videos, there is no motion. Methods proposed for recognizing human actions in videos therefore cannot be applied to still images. A big challenge in still-image action recognition is the lack of sufficiently large datasets, which makes training deep Convolutional Neural Networks (CNNs) prone to overfitting. In this paper, we take advantage of pre-trained CNNs and employ transfer learning to tackle the lack of massive labeled action recognition datasets. Furthermore, since the last layer of a CNN carries class-specific information, we apply an attention mechanism to the CNN's output feature maps to extract more discriminative and powerful features for classifying human actions. We use eight different pre-trained CNNs in our framework and investigate their performance on the Stanford 40 dataset. Finally, we propose using ensemble learning to enhance the overall accuracy of action classification by combining the predictions of multiple models. The best setting of our method achieves 93.17% accuracy on the Stanford 40 dataset.
{"title":"Ensembles of Deep Neural Networks for Action Recognition in Still Images","authors":"S. Mohammadi, Sina Ghofrani Majelan, S. B. Shokouhi","doi":"10.1109/ICCKE48569.2019.8965014","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965014","url":null,"abstract":"Despite the fact that notable improvements have been made recently in the field of feature extraction and classification, human action recognition is still challenging, especially in images, in which, unlike videos, there is no motion. Thus, the methods proposed for recognizing human actions in videos cannot be applied to still images. A big challenge in action recognition in still images is the lack of large enough datasets, which is problematic for training deep Convolutional Neural Networks (CNNs) due to the overfitting issue. In this paper, by taking advantage of pre-trained CNNs, we employ the transfer learning technique to tackle the lack of massive labeled action recognition datasets. Furthermore, since the last layer of the CNN has class-specific information, we apply an attention mechanism on the output feature maps of the CNN to extract more discriminative and powerful features for classification of human actions. Moreover, we use eight different pre-trained CNNs in our framework and investigate their performance on Stanford 40 dataset. Finally, we propose using the Ensemble Learning technique to enhance the overall accuracy of action classification by combining the predictions of multiple models. The best setting of our method is able to achieve 93.17% accuracy on the Stanford 40 dataset.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"27 1","pages":"315-318"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80642828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}