L. Weigang, E. Sandes, Jianya Zheng, A. D. de Melo, L. Uden
Online social networks (OSNs) offer people the opportunity to join communities where they share a common interest or objective. Such communities are useful for studying human behavior, the diffusion of information, and the dynamics of groups. As the members of a community are always changing, an efficient solution is needed to query information in real time. This paper introduces the Follow Model to represent the basic relationship between users in OSNs and combines it with MapReduce to develop new parallel querying algorithms. Two models, for the reverse relation and the high-order relation of users, were implemented in Hadoop. Based on 75 GB of message data and 26 GB of relation network data from Twitter, a case study was conducted on two dynamic discussion communities: #musicmonday and #beatcancer. The querying performance demonstrates that the new Hadoop-based solution significantly improves the ability to extract useful information from OSNs.
{"title":"Querying dynamic communities in online social networks","authors":"L. Weigang, E. Sandes, Jianya Zheng, A. D. de Melo, L. Uden","doi":"10.1631/jzus.C1300281","DOIUrl":"https://doi.org/10.1631/jzus.C1300281","url":null,"abstract":"Online social networks (OSNs) offer people the opportunity to join communities where they share a common interest or objective. This kind of community is useful for studying the human behavior, diffusion of information, and dynamics of groups. As the members of a community are always changing, an efficient solution is needed to query information in real time. This paper introduces the Follow Model to present the basic relationship between users in OSNs, and combines it with the MapReduce solution to develop new algorithms with parallel paradigms for querying. Two models for reverse relation and high-order relation of the users were implemented in the Hadoop system. Based on 75 GB message data and 26 GB relation network data from Twitter, a case study was realized using two dynamic discussion communities: #musicmonday and #beatcancer. The querying performance demonstrates that the new solution with the implementation in Hadoop significantly improves the ability to find useful information from OSNs.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"2 1","pages":"81 - 90"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300281","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67534831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In real applications of inductive learning for classification, labeled instances are often scarce, and labeling them by an oracle is often expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying, improving classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informativeness measures used for selecting queries are commonly based on initial hypotheses sampled from only a few labeled instances. In such a circumstance, the initial hypotheses are not reliable and may deviate from the true distribution underlying the target task. Consequently, the informativeness measures may select irrelevant instances. A promising way to compensate for this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. However, a significant challenge in transfer learning is how to measure the similarity between the source and the target tasks. One must account for differing distributions or label assignments in unrelated source tasks; otherwise, they will degrade performance during transfer. Also, how to design an effective strategy to avoid selecting irrelevant samples to query is still an open question. To tackle these issues, we propose a hybrid algorithm for active learning aided by transfer learning, adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy that eliminates unnecessary instances in the input space and unnecessary models in the model space. Extensive experiments on both synthetic and real data sets show that the proposed algorithm queries fewer instances with higher accuracy and converges faster than state-of-the-art methods.
{"title":"Transfer active learning by querying committee","authors":"Hao Shao, Feng Tao, Rui Xu","doi":"10.1631/jzus.C1300167","DOIUrl":"https://doi.org/10.1631/jzus.C1300167","url":null,"abstract":"In real applications of inductive learning for classification, labeled instances are often deficient, and labeling them by an oracle is often expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying to improve the classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informative measures for selecting queries are commonly based on the initial hypotheses sampled from only a few labeled instances. In such a circumstance, the initial hypotheses are not reliable and may deviate from the true distribution underlying the target task. Consequently, the informative measures will possibly select irrelevant instances. A promising way to compensate this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. However, a significant challenge in transfer learning is how to measure the similarity between the source and the target tasks. One needs to be aware of different distributions or label assignments from unrelated source tasks; otherwise, they will lead to degenerated performance while transferring. Also, how to design an effective strategy to avoid selecting irrelevant samples to query is still an open question. To tackle these issues, we propose a hybrid algorithm for active learning with the help of transfer learning by adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy which could eliminate unnecessary instances in the input space and models in the model space. Extensive experiments on both the synthetic and the real data sets show that the proposed algorithm is able to query fewer instances with a higher accuracy and that it converges faster than the state-of-the-art methods.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"36 1","pages":"107 - 118"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300167","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67533860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model classification is essential to the management and reuse of 3D CAD models. Manual model classification is laborious and error prone, while automatic classification methods remain scarce due to the intrinsic complexity of 3D CAD models. In this paper, we propose an automatic 3D CAD model classification approach based on deep neural networks. Guided by prior knowledge of the CAD domain, features are first selected and extracted from 3D CAD models, then preprocessed into high-dimensional input vectors for category recognition. By analogy with the thinking process of engineers, a deep neural network classifier for 3D CAD models is constructed with the aid of deep learning techniques. To obtain an optimal solution, multiple strategies are appropriately chosen and applied in the training phase, which enables our classifier to achieve better performance. We demonstrate the efficiency and effectiveness of our approach through experiments on 3D CAD model datasets.
{"title":"A deep learning approach to the classification of 3D CAD models","authors":"Fei Qin, Lu-ye Li, Shu-ming Gao, Xiaoling Yang, Xiang Chen","doi":"10.1631/jzus.C1300185","DOIUrl":"https://doi.org/10.1631/jzus.C1300185","url":null,"abstract":"Model classification is essential to the management and reuse of 3D CAD models. Manual model classification is laborious and error prone. At the same time, the automatic classification methods are scarce due to the intrinsic complexity of 3D CAD models. In this paper, we propose an automatic 3D CAD model classification approach based on deep neural networks. According to prior knowledge of the CAD domain, features are selected and extracted from 3D CAD models first, and then preprocessed as high dimensional input vectors for category recognition. By analogy with the thinking process of engineers, a deep neural network classifier for 3D CAD models is constructed with the aid of deep learning techniques. To obtain an optimal solution, multiple strategies are appropriately chosen and applied in the training phase, which makes our classifier achieve better performance. We demonstrate the efficiency and effectiveness of our approach through experiments on 3D CAD model datasets.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"15 1","pages":"91 - 106"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300185","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67533672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Isazadeh, J. Karimpour, Islam Elgedawy, Habib Izadkhah
One way to speed up the execution of sequential programs is to divide them into concurrent segments and execute those segments in parallel over a distributed computing environment. We argue that the execution speedup depends primarily on the degree of concurrency between the identified segments as well as the communication overhead between them. To guarantee the best speedup, we have to obtain the maximum possible concurrency degree between the identified segments while taking communication overhead into consideration. Existing code distributor and multi-threading approaches do not fulfill these requirements; hence, they cannot predict the expected distributability gains in advance. To overcome these limitations, we propose a novel approach for verifying the distributability of sequential object-oriented programs. The proposed approach enables users to see the maximum speedup gains before the actual distributed implementation, as it computes an objective function used to compare different distributions of the same program, taking into consideration both remote and sequential calls. Experimental results show that the proposed approach successfully determines the distributability of different real-life software applications, as validated against their sequential and distributed implementations.
{"title":"An analytical model for source code distributability verification","authors":"A. Isazadeh, J. Karimpour, Islam Elgedawy, Habib Izadkhah","doi":"10.1631/jzus.C1300066","DOIUrl":"https://doi.org/10.1631/jzus.C1300066","url":null,"abstract":"One way to speed up the execution of sequential programs is to divide them into concurrent segments and execute such segments in a parallel manner over a distributed computing environment. We argue that the execution speedup primarily depends on the concurrency degree between the identified segments as well as communication overhead between the segments. To guarantee the best speedup, we have to obtain the maximum possible concurrency degree between the identified segments, taking communication overhead into consideration. Existing code distributor and multi-threading approaches do not fulfill such requirements; hence, they cannot provide expected distributability gains in advance. To overcome such limitations, we propose a novel approach for verifying the distributability of sequential object-oriented programs. The proposed approach enables users to see the maximum speedup gains before the actual distributability implementations, as it computes an objective function which is used to measure different distribution values from the same program, taking into consideration both remote and sequential calls. Experimental results showed that the proposed approach successfully determines the distributability of different real-life software applications compared with their real-life sequential and distributed implementations.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"15 1","pages":"126 - 138"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300066","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67532011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In view of the high energy consumption and low response speed of the traditional hydraulic system of an injection molding machine, a servo-motor-driven constant-pump hydraulic system is designed for a precision injection molding process. It uses a servo motor, a constant pump, and a pressure sensor instead of a common motor, a constant pump, a proportional pressure valve, and a proportional flow valve. A model predictive control strategy based on neurodynamic optimization is proposed to control this new hydraulic system in the injection molding process. Simulation results show that the control method achieves good control precision and quick response.
{"title":"Model predictive control of servo motor driven constant pump hydraulic system in injection molding process based on neurodynamic optimization","authors":"Yong-gang Peng, Jun Wang, Wei Wei","doi":"10.1631/jzus.C1300182","DOIUrl":"https://doi.org/10.1631/jzus.C1300182","url":null,"abstract":"In view of the high energy consumption and low response speed of the traditional hydraulic system for an injection molding machine, a servo motor driven constant pump hydraulic system is designed for a precision injection molding process, which uses a servo motor, a constant pump, and a pressure sensor, instead of a common motor, a constant pump, a pressure proportion valve, and a flow proportion valve. A model predictive control strategy based on neurodynamic optimization is proposed to control this new hydraulic system in the injection molding process. Simulation results showed that this control method has good control precision and quick response.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"15 1","pages":"139 - 146"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300182","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67533578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ye-tian Fan, Wei Wu, Wenyu Yang, Qin-wei Fan, Jian Wang
Compared with traditional learning methods such as the back propagation (BP) method, the extreme learning machine (ELM) provides much faster learning and needs less human intervention, and has therefore been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune the network. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.
{"title":"A pruning algorithm with L1/2 regularizer for extreme learning machine","authors":"Ye-tian Fan, Wei Wu, Wenyu Yang, Qin-wei Fan, Jian Wang","doi":"10.1631/jzus.C1300197","DOIUrl":"https://doi.org/10.1631/jzus.C1300197","url":null,"abstract":"Compared with traditional learning methods such as the back propagation (BP) method, extreme learning machine provides much faster learning speed and needs less human intervention, and thus has been widely used. In this paper we combine the L1/2 regularization method with extreme learning machine to prune extreme learning machine. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"15 1","pages":"119 - 125"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300197","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67534085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-speed, fixed-latency serial links find application in distributed data acquisition and control systems, such as the timing, trigger, and control (TTC) systems of high energy physics experiments. However, most high-speed serial transceivers do not maintain the same chip latency after each power-up or reset, because there is no deterministic phase relationship between the transmitted and received clocks. In this paper, we propose a fixed-latency serial link based on the high-speed transceivers embedded in Xilinx field programmable gate arrays (FPGAs). First, we modify the configuration and clock distribution of the transceiver to eliminate the phase difference between the clock domains within the transmitter/receiver. Second, we use the internal alignment circuit of the transceiver and a digital clock manager (DCM)/phase-locked loop (PLL) based clock generator to eliminate the phase difference between the clock domains of the transmitter and the receiver. Test results for the link latency are presented. Compared with existing solutions, our design not only achieves fixed chip latency but also reduces the average system lock time.
{"title":"High-speed, fixed-latency serial links with Xilinx FPGAs","authors":"Xue Liu, Qing Deng, Bo-ning Hou, Ze-ke Wang","doi":"10.1631/jzus.C1300249","DOIUrl":"https://doi.org/10.1631/jzus.C1300249","url":null,"abstract":"High-speed, fixed-latency serial links find application in distributed data acquisition and control systems, such as the timing trigger and control (TTC) system for high energy physics experiments. However, most high-speed serial transceivers do not keep the same chip latency after each power-up or reset, as there is no deterministic phase relationship between the transmitted and received clocks after each power-up. In this paper, we propose a fixed-latency serial link based on high-speed transceivers embedded in Xilinx field programmable gate arrays (FPGAs). First, we modify the configuration and clock distribution of the transceiver to eliminate the phase difference between the clock domains in the transmitter/receiver. Second, we use the internal alignment circuit of the transceiver and a digital clock manager (DCM)/phase-locked loop (PLL) based clock generator to eliminate the phase difference between the clock domains in the transmitter and receiver. The test results of the link latency are shown. Compared with existing solutions, our design not only implements fixed chip latency, but also reduces the average system lock time.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"15 1","pages":"153 - 160"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300249","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67534339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linjun Fan, Yunxiang Ling, Xing-tao Zhang, Jun Tang
Appropriate maintenance technologies that facilitate model consistency in distributed simulation systems are needed but generally unavailable. To resolve this problem, we analyze the main factors that cause model inconsistency. The analysis methods used for traditional distributed simulations are mostly empirical and qualitative, and disregard the dynamic characteristics of factor evolution during model operation. Furthermore, distributed simulation applications (DSAs) are rapidly evolving toward large-scale, distributed, service-oriented, compositional, and dynamic features. Such developments make it difficult to apply traditional analysis methods in DSAs to analyze the effects of these factors on simulation models. To solve these problems, we construct a dynamic evolution mechanism of model consistency, called the connected model hyper-digraph (CMH). CMH is developed using formal methods that accurately specify the evolutionary processes and activities of models (i.e., self-evolution, interoperability, compositionality, and authenticity). We also develop an algorithm of model consistency evolution (AMCE) based on CMH to quantitatively and dynamically evaluate the influencing factors. Experimental results demonstrate that non-combination (33.7% on average) is the most influential factor, non-single-directed understanding (26.6%) is the second most influential, and non-double-directed understanding (5.0%) is the least influential. Unlike previous analysis methods, AMCE provides good feasibility and effectiveness. This research can serve as guidance for designers of consistency maintenance technologies toward achieving a high level of consistency in future DSAs.
{"title":"Quantitative evaluation of model consistency evolution in compositional service-oriented simulation using a connected hyper-digraph","authors":"Linjun Fan, Yunxiang Ling, Xing-tao Zhang, Jun Tang","doi":"10.1631/jzus.C1300089","DOIUrl":"https://doi.org/10.1631/jzus.C1300089","url":null,"abstract":"Appropriate maintenance technologies that facilitate model consistency in distributed simulation systems are relevant but generally unavailable. To resolve this problem, we analyze the main factors that cause model inconsistency. The analysis methods used for traditional distributed simulations are mostly empirical and qualitative, and disregard the dynamic characteristics of factor evolution in model operational running. Furthermore, distributed simulation applications (DSAs) are rapidly evolving in terms of large-scale, distributed, service-oriented, compositional, and dynamic features. Such developments present difficulty in the use of traditional analysis methods in DSAs, for the analysis of factorial effects on simulation models. To solve these problems, we construct a dynamic evolution mechanism of model consistency, called the connected model hyper-digraph (CMH). CMH is developed using formal methods that accurately specify the evolutional processes and activities of models (i.e., self-evolution, interoperability, compositionality, and authenticity). We also develop an algorithm of model consistency evolution (AMCE) based on CMH to quantitatively and dynamically evaluate influencing factors. Experimental results demonstrate that non-combination (33.7% on average) is the most influential factor, non-single-directed understanding (26.6%) is the second most influential, and non-double-directed understanding (5.0%) is the least influential. Unlike previous analysis methods, AMCE provides good feasibility and effectiveness. This research can serve as guidance for designers of consistency maintenance technologies toward achieving a high level of consistency in future DSAs.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"15 1","pages":"1 - 12"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67533388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exponential stability and robust exponential stability of switched systems consisting of stable and unstable nonlinear subsystems are considered in this study. At each switching instant, impulsive increments that are nonlinear functions of the states are considered, extending results from switched linear systems to switched nonlinear systems. Using the average dwell time method and a piecewise Lyapunov function approach, exponential stability of the switched system is guaranteed when the ratio of the total activation time of the unstable subsystems to that of the stable subsystems is below a certain proportion. A switching law is designed that specifies the average dwell time of the switched system. Switched systems with uncertainties are also studied. Sufficient conditions for exponential stability and robust exponential stability are provided for switched nonlinear systems. Finally, simulations show the effectiveness of the results.
{"title":"Exponential stability of nonlinear impulsive switched systems with stable and unstable subsystems","authors":"Xiao-li Zhang, Anhui Lin, Jianjian Zeng","doi":"10.1631/jzus.C1300123","DOIUrl":"https://doi.org/10.1631/jzus.C1300123","url":null,"abstract":"Exponential stability and robust exponential stability relating to switched systems consisting of stable and unstable nonlinear subsystems are considered in this study. At each switching time instant, the impulsive increments which are nonlinear functions of the states are extended from switched linear systems to switched nonlinear systems. Using the average dwell time method and piecewise Lyapunov function approach, when the total active time of unstable subsystems compared to the total active time of stable subsystems is less than a certain proportion, the exponential stability of the switched system is guaranteed. The switching law is designed which includes the average dwell time of the switched system. Switched systems with uncertainties are also studied. Sufficient conditions of the exponential stability and robust exponential stability are provided for switched nonlinear systems. Finally, simulations show the effectiveness of the result.","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"15 1","pages":"31 - 42"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67533500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bin Chen, Lao-bing Zhang, Xiao-cheng Liu, H. Vangheluwe
Improving simulation performance using activity tracking has attracted attention in the modeling field in recent years. The notion of activity has been successfully used to predict and improve simulation performance. Activity tracking, however, uses only the inherent performance information contained in the models. To extend activity prediction to modeling, we propose activity-enhanced modeling with an activity meta-model at the meta-level. The meta-model provides a set of interfaces for modeling activity in a specific domain. An activity model transformation is subsequently devised to deal with the simulation differences caused by heterogeneous activity models. Finally, a resource-aware simulation framework is implemented to integrate the activity models in activity-based simulation. A case study shows the improvement brought by activity-based simulation using the discrete event system specification (DEVS).
{"title":"Activity-based simulation using DEVS: increasing performance by an activity model in parallel DEVS simulation","authors":"Bin Chen, Lao-bing Zhang, Xiao-cheng Liu, H. Vangheluwe","doi":"10.1631/jzus.C1300121","DOIUrl":"https://doi.org/10.1631/jzus.C1300121","url":null,"abstract":"Improving simulation performance using activity tracking has attracted attention in the modeling field in recent years. The reference to activity has been successfully used to predict and promote the simulation performance. Tracking activity, however, uses only the inherent performance information contained in the models. To extend activity prediction in modeling, we propose the activity enhanced modeling with an activity meta-model at the meta-level. The meta-model provides a set of interfaces to model activity in a specific domain. The activity model transformation in subsequence is devised to deal with the simulation difference due to the heterogeneous activity model. Finally, the resource-aware simulation framework is implemented to integrate the activity models in activity-based simulation. The case study shows the improvement brought on by activity-based simulation using discrete event system specification (DEVS).","PeriodicalId":49947,"journal":{"name":"Journal of Zhejiang University-Science C-Computers & Electronics","volume":"49 1","pages":"13 - 30"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1631/jzus.C1300121","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67533458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}