Study on constructing forensic procedure of digital evidence on smart handheld device
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614664
Chih-Pai Chang, Chun-Te Chen, Tsung-Hui Lu, I. Lin, Po-Tsun Huang, Hua-Shyun Lu
As Information and Communication Technology grows, mobile communication devices have become necessary instruments in our daily lives. According to DIGITIMES Research in 2012, smartphone shipments reached 635 million units a year, a number expected to grow by another 30% in 2013, and smartphones accounted for 43.9% of the overall cellular phone market. By integrating an operating system, camera, recorder, network connectivity, Bluetooth, calendar, multimedia messaging, and millions of apps, the powerful hardware and software of mobile devices deliver high mobility and rich functionality for every kind of service, and also invite modern criminals to use mobile devices as tools for their activities. Based on a guideline recommended by the NSLEC (British National Specialist Law Enforcement Centre), this study proposes a standard operating procedure for digital evidence forensics on smartphones. Meanwhile, it uses the handheld-device forensic tools commonly employed by law enforcement departments as the primary instruments to preserve the reliability of digital evidence throughout the preservation, collection, validation, identification, analysis, interpretation, documentation, and presentation processes. The aim is to strengthen the procedures used by forensic operators so the forensic process can be completed quickly, and to enhance the provability and authenticity of digital evidence.
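The abstract gives no implementation detail, but the requirement it names, keeping digital evidence provable and authenticable through the preservation and documentation phases, is commonly met by hashing every acquired image and recording the result in a custody log. The Python sketch below is a minimal illustration of that idea under assumed file names and log format; it is not the procedure defined in the paper.

```python
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large forensic images do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_acquisition(image_path: Path, operator: str, log_path: Path) -> dict:
    """Record hash, time, and operator for one acquired image (a chain-of-custody entry)."""
    entry = {
        "file": str(image_path),
        "sha256": sha256_of(image_path),
        "operator": operator,
        "acquired_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    with log_path.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage:
# log_acquisition(Path("phone_image.dd"), "examiner-01", Path("custody_log.jsonl"))
```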
{"title":"Study on constructing forensic procedure of digital evidence on smart handheld device","authors":"Chih-Pai Chang, Chun-Te Chen, Tsung-Hui Lu, I. Lin, Po-Tsun Huang, Hua-Shyun Lu","doi":"10.1109/ICSSE.2013.6614664","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614664","url":null,"abstract":"As Information and Communication Technology grows the mobile communication devices became to necessary instrument in our daily living. According to DIGITIMES Research on 2012: the smart phone reached 635 millions a year and this number will be growing up 30% on 2013. It took 43.9% market share on overall cellular phone. By integrate operating system, camera, recorder, network, Bluetooth, calendar, multi-media texting and millions of Apps.; the powerful hardware and software of mobile devices provide highly mobility with powerful function to meet every kind of services and invite modern criminal use mobile devices as tools to achieve their activities. This study based on a guideline recommended by NSLEC (British National Specialist Law Enforcement Centre) and tried to propose a standard operating procedure of digital evidence forensic on smart phone; mean while, uses common handheld device forensic tools used by law enforcement departments as primary instruments to optimize and liabilities of digital evidences during preservation, collection, validation, identification, analysis, interpretation, documentation and presentation processes. Therefore strengthen the procedures used by forensic operator to complete the forensic processes quickly; and enhance the provable and authenticable of digital evidences.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"52 15","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131604508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Admissibility of fuzzy support vector machine through loss function
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614636
Chan-Yun Yang, G. Jan, Kuo-Ho Su
In statistical decision theory, admissibility is the first requirement for the feasibility of a decision rule; without it, the decision rule is impractical for discrimination. The study first decomposes the fuzzy support vector machine (fuzzy SVM), a notable innovation for its robustness against noise-contaminated inputs, into the regularized optimization expression $\arg\min_{f \in \mathcal{H}} \Omega[f] + \lambda R_{\mathrm{emp}}[f]$ and mathematically examines the regularization role of the loss function in that expression. The decomposition is beneficial to the implementation of empirical risk minimization, which learns a hypothesis from the empirical risk rather than the true expected risk. The empirical risk, built element by element from the loss function, is indeed the key to the success of the fuzzy SVM. Because of this important causality, the study gives a preliminary examination of the admissibility of the loss functions recruited to form the fuzzy SVM. The examination starts from a loss-function-associated risk, called the □-risk. Through a step-by-step derivation of a necessary and sufficient condition for the □-risk to agree with an unbiased Bayes risk, the admissibility of the loss function can be confirmed and condensed into a simple rule. An experimental chart examination is also provided for easy and clear observation, validating the admissibility of the loss-function-regularized fuzzy SVM.
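For readers unfamiliar with the notation, the reconstruction below spells out the regularized objective quoted in the abstract using the standard membership-weighted empirical risk of the fuzzy SVM; it is an assumption about the intended formulation, not a quotation from the paper.

```latex
\[
f^{*} \;=\; \arg\min_{f \in \mathcal{H}} \; \Omega[f] \;+\; \lambda\, R_{\mathrm{emp}}[f],
\qquad
R_{\mathrm{emp}}[f] \;=\; \frac{1}{n}\sum_{i=1}^{n} s_{i}\, L\bigl(y_{i}, f(x_{i})\bigr),
\]
% \Omega[f] is the regularizer (typically \tfrac{1}{2}\lVert f\rVert_{\mathcal{H}}^{2}),
% s_i \in (0,1] is the fuzzy membership that down-weights noisy samples, and
% L is the loss function whose admissibility is examined; for the classical SVM,
% L(y, f(x)) = \max\{0,\, 1 - y f(x)\} (the hinge loss).
```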
{"title":"Admissibility of fuzzy support vector machine through loss function","authors":"Chan-Yun Yang, G. Jan, Kuo-Ho Su","doi":"10.1109/ICSSE.2013.6614636","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614636","url":null,"abstract":"In statistical decision theory, the admissibility is the first issue to fulfill the feasibility of a decision rule. Without the admissibility, the decision rule is impractical for discriminations. The study decomposes first the fuzzy support vector machine (fuzzy SVM), which is a crucial innovation due to its robust capability to resist the input contaminated noise, into a regularized optimization expression arg minf∈H Ω[f]+λRRemp[f] and exploits the regularization of loss function from the expression mathematically. The decomposition is beneficial to the programming of empirical risk minimization which uses the empirical risk instead of the true expected risk to learn a hypothesis. The empirical risk, composed elementally by the loss function, here indeed is the key for achieving the success of the fuzzy SVM. Because of the important causality, the study examines preliminarily the admissibility of loss functions which is recruited to form the fuzzy SVM. The examination is issued first by a loss function associated risk, called □-risk. By a step-by-step derivation of a sufficient and necessary condition for the □-risk to agree equivalently an unbiased Bayes risk, the admissibility of the loss function can then be confirmed and abbreviated as a simple rule in the study. Experimental chart examination is also issued simultaneously for an easy and clear observation to validate the admissibility of the loss function regularized fuzzy SVM.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121641060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time sequence based lane-marking identification
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614677
Jiun-Hung Li, Chih-Li Huo, Yu-Hsiang Yu, Tsung-Ying Sun
In this paper, a time-sequence-based lane-marking identification method is proposed to improve the classification accuracy and robustness of the lane-marking detection mechanism proposed in our previous work. The proposed method collects information from several consecutive image frames to perform the identification. Experimental results show that the developed system can effectively identify lane markings in various driving environments.
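The abstract only states that information from several consecutive frames is collected. One common way to realize that is a sliding-window majority vote over per-frame detections; the sketch below follows that assumption, with an illustrative window length and label set that are not taken from the paper.

```python
from collections import Counter, deque

class TemporalLaneIdentifier:
    """Keep the last N per-frame lane-marking labels and report the majority."""

    def __init__(self, window: int = 7):
        self.history = deque(maxlen=window)

    def update(self, frame_label: str) -> str:
        """frame_label: single-frame result, e.g. 'solid', 'dashed', or 'none'."""
        self.history.append(frame_label)
        label, count = Counter(self.history).most_common(1)[0]
        # Commit to a label only once it dominates the window; otherwise stay uncertain.
        return label if count > len(self.history) // 2 else "uncertain"

# Example:
# ident = TemporalLaneIdentifier(window=5)
# for lbl in ["dashed", "dashed", "none", "dashed", "dashed"]:
#     print(ident.update(lbl))
```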
{"title":"Time sequence based lane-marking identification","authors":"Jiun-Hung Li, Chih-Li Huo, Yu-Hsiang Yu, Tsung-Ying Sun","doi":"10.1109/ICSSE.2013.6614677","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614677","url":null,"abstract":"In this paper, time sequence based lane-marking identification method is proposed to deal with the classification and robust improvement of lane-marking detection mechanism which is proposed in our previous works. The proposed method collects information of several consecutive image frames to perform identification. The experimental results show that the developed system can effectively identify lane-marking in various driving environment.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123829475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive dynamic surface control for fault-tolerant multi-robot systems
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614696
Yeong-Hwa Chang, W. Chan, Cheng-Yuan Yang, C. Tao, S. Su
This paper presents a new robust adaptive control method for multi-robot systems, where the kinematic model of a differentially driven wheeled mobile robot is considered. In particular, situations involving partial loss of actuator effectiveness are addressed. Distributed controllers are derived based on dynamic surface control techniques over networked multiple robots. In addition, adaptive mechanisms are applied to estimate the bounds of the effectiveness factor and of the uncertainties. The robust stability of the multi-robot system is established using the Lyapunov theorem. The proposed controller can make the robots reach a desired formation while following a designated trajectory. Simulation results indicate that the proposed control scheme yields superior responses compared to conventional dynamic surface control.
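As background for the controller described above, the kinematic model of a differentially driven wheeled mobile robot is usually written as the unicycle model below; the multiplicative loss-of-effectiveness term is the standard fault model and is an assumption about the paper's exact formulation.

```latex
\[
\dot{x} = v\cos\theta, \qquad
\dot{y} = v\sin\theta, \qquad
\dot{\theta} = \omega,
\]
% (x, y) is the robot position, \theta the heading, and (v, \omega) the linear and
% angular velocity commands. Partial loss of actuator effectiveness is commonly
% modeled as u_{\mathrm{applied}} = \rho\, u_{\mathrm{commanded}} with an unknown
% factor \rho \in (0, 1], whose bound the adaptive mechanism estimates.
```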
{"title":"Adaptive dynamic surface control for fault-tolerant multi-robot systems","authors":"Yeong-Hwa Chang, W. Chan, Cheng-Yuan Yang, C. Tao, S. Su","doi":"10.1109/ICSSE.2013.6614696","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614696","url":null,"abstract":"This paper presents a new robust adaptive control method for multi-robot systems, where the kinematic model of a differentially driven wheeled mobile robot is considered. Particularly, the situations involving partial loss of actuator effectiveness are addressed. Distributed controllers are derived based on dynamic surface control techniques over networked multiple robots. In addition, adaptive mechanisms are applied to estimate the bounds of effectiveness factor and uncertainty bounds. The robust stability of the multi-robot systems are preserved by using the Lyapunov theorem. The proposed controller can make the robots reach a desired formation following a designate trajectory. Simulation results indicate that the proposed control scheme has superior responses compared to conventional dynamic surface control.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115220186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A classification system of lung nodules in CT images based on fractional Brownian motion model
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614710
P. Huang, P. Lin, Cheng-Hsiung Lee, C. Kuo
In this paper, we present a classification system for differentiating malignant pulmonary nodules from benign nodules in computed tomography (CT) images, based on a set of fractal features derived from the fractional Brownian motion (fBm) model. On a set of 107 CT images obtained from 107 different patients, each image containing a solitary pulmonary nodule, our experimental results show that the classification accuracy and the area under the Receiver Operating Characteristic (ROC) curve are 83.11% and 0.8437, respectively, using the proposed fractal-based feature set and a support vector machine classifier. This result demonstrates that our classification system achieves highly satisfactory diagnostic performance by analyzing the fractal features of lung nodules in CT images taken from a single post-contrast CT scan.
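The paper's exact feature set is not given in the abstract. As a rough illustration of the pipeline, the sketch below estimates a generic fBm/Hurst-type feature from a nodule patch and feeds it to a support vector machine; the estimator, patch handling, and labels are assumptions for the example, not the authors' method.

```python
import numpy as np
from sklearn.svm import SVC

def fbm_features(patch: np.ndarray, max_lag: int = 8) -> np.ndarray:
    """Generic fBm-style features from a grayscale nodule patch.

    For an fBm surface, E|I(p) - I(q)| scales like ||p - q||^H, so the slope of the
    log-log curve over several pixel lags gives a Hurst-type estimate; the fractal
    dimension then follows as D = 3 - H.
    """
    patch = patch.astype(float)
    lags = np.arange(1, max_lag + 1)
    mean_abs_diff = []
    for k in lags:
        dx = np.abs(patch[:, k:] - patch[:, :-k]).mean()
        dy = np.abs(patch[k:, :] - patch[:-k, :]).mean()
        mean_abs_diff.append((dx + dy) / 2.0)
    hurst = np.polyfit(np.log(lags), np.log(mean_abs_diff), 1)[0]
    return np.array([hurst, 3.0 - hurst])

# Hypothetical usage with pre-extracted nodule patches and benign/malignant labels:
# X = np.array([fbm_features(p) for p in patches])
# clf = SVC(kernel="rbf").fit(X, labels)
```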
{"title":"A classification system of lung nodules in CT images based on fractional Brownian motion model","authors":"P. Huang, P. Lin, Cheng-Hsiung Lee, C. Kuo","doi":"10.1109/ICSSE.2013.6614710","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614710","url":null,"abstract":"In this paper, we present a classification system for differentiating malignant pulmonary nodules from benign nodules in computed tomography (CT) images based on a set of fractal features derived from the fractional Brownian motion (fBm) model. In a set of 107 CT images obtained from 107 different patients with each image containing a solitary pulmonary nodule, our experimental result show that the accuracy rate of classification and the area under the Receiver Operating Characteristic (ROC) curve are 83.11% and 0.8437, respectively, by using the proposed fractal-based feature set and a support vector machine classifier. Such a result demonstrates that our classification system has highly satisfactory diagnostic performance by analyzing the fractal features of lung nodules in CT images taken from a single post-contrast CT scan.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127362662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trajectory optimization and optimal control of vehicle dynamics under critically stable driving conditions
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614644
S. Khatab, A. Traechtler
A method is presented that not only optimizes the trajectory of a road vehicle but also yields a gain-scheduled feedback controller. A major difficulty when applying this method (known as differential dynamic programming, DDP) to complex systems is the calculation of the first and second derivatives of the system equations. Here this calculation is carried out by automatic differentiation, thereby increasing the flexibility and the range of application of the DDP algorithm. The control policy was then successfully tested on a multi-body simulation of a road vehicle.
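The key implementation point in the abstract is that the derivatives of the system equations needed by DDP come from automatic differentiation. A minimal sketch of that step is shown below, using the jax library as one possible automatic-differentiation tool (the paper does not name a specific tool), with a toy dynamics function standing in for the multi-body vehicle model.

```python
import jax
import jax.numpy as jnp

def dynamics(x, u):
    """Placeholder discrete-time system x_{k+1} = f(x_k, u_k); the real model in the
    paper is a multi-body vehicle simulation, not this toy example."""
    return x + 0.01 * jnp.stack([x[1], u[0] - 0.1 * x[1]])

# First derivatives (Jacobians) of f with respect to state and input.
f_x = jax.jacfwd(dynamics, argnums=0)
f_u = jax.jacfwd(dynamics, argnums=1)

# Second derivatives, as required by the full DDP backward pass.
f_xx = jax.jacfwd(jax.jacfwd(dynamics, argnums=0), argnums=0)
f_uu = jax.jacfwd(jax.jacfwd(dynamics, argnums=1), argnums=1)
f_ux = jax.jacfwd(jax.jacfwd(dynamics, argnums=1), argnums=0)

x0 = jnp.array([0.0, 1.0])
u0 = jnp.array([0.5])
A, B = f_x(x0, u0), f_u(x0, u0)   # used in the Riccati-like backward recursion
```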
{"title":"Trajectory optimization and optimal control of vehicle dynamics under critically stable driving conditions","authors":"S. Khatab, A. Traechtler","doi":"10.1109/ICSSE.2013.6614644","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614644","url":null,"abstract":"A method will be presented which not only optimizes the trajectory of a road vehicle but also develops a gain-scheduled feedback controller. A major difficulty when applying this method (known as differential dynamic programming-DDP) to complex systems is the calculation of the first and second derivatives of the system equations. This calculation is here carried out by automatic differentiation, thereby increasing the flexibility and area of application of the DDP algorithm. The control policy was then successfully tested on a multi-body simulation of a road vehicle.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125991291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traffic condition monitoring using complex event processing
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614645
B. Târnauca, D. Puiu, D. Damian, V. Comnac
As the number of vehicles on today's roads constantly increases, along with the number of intelligent traffic infrastructure elements capable of providing traffic-related data, the need to make the best use of this data for proper traffic and resource management grows at the same pace. Traffic Management System designers thus face the problem of handling vast amounts of traffic-related data, coming from a variety of sources, in the least time-consuming manner. Complex Event Processing is one of the latest techniques that provides the ability to deal with these constraints. In this paper we present a Complex Event Processing based method for traffic condition monitoring, together with its evaluation in the simulation environment we have developed to support our work.
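As an illustration of the kind of rule a Complex Event Processing engine evaluates for traffic condition monitoring, the sketch below raises a congestion event when the mean speed reported by a sensor stays below a threshold over a sliding window. The rule, window length, and threshold are illustrative assumptions; the paper itself builds on a CEP engine and its own simulation environment.

```python
from collections import deque

class CongestionRule:
    """Sliding-window rule: emit a CONGESTION event when mean speed < threshold."""

    def __init__(self, window_size: int = 10, speed_threshold: float = 20.0):
        self.speeds = deque(maxlen=window_size)
        self.threshold = speed_threshold

    def on_speed_event(self, speed_kmh: float):
        self.speeds.append(speed_kmh)
        if len(self.speeds) == self.speeds.maxlen:
            mean_speed = sum(self.speeds) / len(self.speeds)
            if mean_speed < self.threshold:
                return {"event": "CONGESTION", "mean_speed": mean_speed}
        return None

# rule = CongestionRule()
# for s in [45, 40, 18, 15, 12, 14, 16, 13, 15, 17]:
#     complex_event = rule.on_speed_event(s)   # fires once the window fills with slow readings
```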
{"title":"Traffic condition monitoring using complex event processing","authors":"B. Târnauca, D. Puiu, D. Damian, V. Comnac","doi":"10.1109/ICSSE.2013.6614645","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614645","url":null,"abstract":"As the number of vehicles present on today's roads constantly increases along with the number of intelligent traffic infrastructure elements capable of providing traffic related data, the need for making the best use of this data in order to provide proper traffic and resources management increases at the same pace. Traffic Management System designers are facing thus the problem of handling vast amounts of traffic related data, coming from a variety of sources in the least time consuming manners. Complex Event Processing is one of the latest techniques which provides the ability to deal with this kind of constraints. In this paper we present a Complex Event Processing based method for traffic condition monitoring along with its evaluation based on the simulation environment we have developed to support our work.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121963795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification of Slovak municipalities by neural networks with regard to the degree of digital literacy index
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614667
A. Michalíková, S. Volentierova
Digital literacy includes the ability to understand information and to use it in various formats from various sources presented by modern information and communication technologies. It is known that different regions of a country have different values of the digital literacy index, and the state tends to support the development of digital literacy in the communities with the lowest index. Because of the large amount of data that must be processed to determine the digital literacy index, research has so far been carried out only at the level of whole regions. In this paper we present a program that can determine the value of the digital literacy index for territorial units smaller than a region. We studied the value of this parameter for individual municipalities, using neural networks for the data processing.
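The abstract does not specify the network architecture or indicators used. As a rough illustration of the classification step, the sketch below trains a small feed-forward network on municipality-level indicator vectors and predicts a digital-literacy class; the indicators, labels, and the use of scikit-learn are assumptions for the example only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per municipality (e.g. age structure, education, internet access rate);
# y: digital-literacy class derived from the index. Placeholder data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                   # 300 municipalities, 6 indicators
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # placeholder binary low/high labels

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000))
model.fit(X, y)
print(model.predict(X[:5]))
```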
{"title":"Classification of slovak municipalities by neural networks with regard to the degree of digital literacy index","authors":"A. Michalíková, S. Volentierova","doi":"10.1109/ICSSE.2013.6614667","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614667","url":null,"abstract":"Digital literacy includes the ability to understand the information and use it in various formats from various sources presented by modern information and communication technologies. It is known that different regions of the country have different value of digital literacy index. The state tends to support the development of digital literacy in communities with the lowest index. Due to the large number of data that must be processed to determine the digital literacy index up to now research were done only in particular regions. In this paper we presented the program that can determine the value of digital literacy index in territorial unit smaller than the region. We studied the value of this parameter for any municipality and for the data processing we used the neural networks.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128244445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An instrument-based testing platform and fuel control algorithm verification for direct methanol fuel cell
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614651
Sheng-Hua Chen, Ya-Chien Chung, Tzyy-Yih Yang, Yu-Jen Chiu, Jin-Yih Lin, C. Chi
System integration is crucial to fuel cells. In addition to the relevant control rules and the treatment of the reactants, the design and control of the balance of plant (BOP) are vitally important for a self-sustaining and optimized fuel cell system. In this study, a fuel cell control system is developed using an instrument-based system integration platform together with sensor-less fuel concentration estimation and verification. The control measure is determined from both the consumption and the concentration change of the fuel. The feasibility of the sensor-less fuel concentration estimation is verified by experimental results. A testing platform integrating software and hardware for the direct liquid fuel cell system has been developed in this study. It is a beneficial and flexible tool for developing control algorithms, evaluating cell efficiency, and designing the BOP of fuel cells.
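The abstract states that the control measure is determined from fuel consumption and concentration change but gives no equations. One common sensor-less approach is a mass-balance estimate in which electrochemical consumption is computed from the stack current via Faraday's law (six electrons per methanol molecule); the sketch below follows that idea and should be read as an assumed illustration, not the paper's algorithm.

```python
F = 96485.0  # Faraday constant, C/mol

def estimate_concentration(c_prev_mol_per_l, tank_volume_l, current_a, n_cells,
                           dt_s, crossover_mol_per_s=0.0, fuel_feed_mol=0.0):
    """One mass-balance update of the methanol concentration estimate.

    Electrochemical consumption rate: n_dot = I * n_cells / (6 F) mol/s.
    Crossover loss and fresh fuel feed are included as externally supplied terms.
    """
    consumed_mol = current_a * n_cells / (6.0 * F) * dt_s
    moles = c_prev_mol_per_l * tank_volume_l
    moles += fuel_feed_mol - consumed_mol - crossover_mol_per_s * dt_s
    return max(moles, 0.0) / tank_volume_l

# Example: 1 M solution, 0.5 L tank, 5 A over a 10-cell stack, 60 s control step
# c_new = estimate_concentration(1.0, 0.5, 5.0, 10, 60.0)
```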
{"title":"An instrument-based testing platform and fuel control algorithm verification for direct methanol fuel cell","authors":"Sheng-Hua Chen, Ya-Chien Chung, Tzyy-Yih Yang, Yu-Jen Chiu, Jin-Yih Lin, C. Chi","doi":"10.1109/ICSSE.2013.6614651","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614651","url":null,"abstract":"System integration is crucial to fuel cells. In addition to considering relative control rules and treatment of the reactants, the design and control of the balance of plant (BOP) are vitally important for a self-sustainable and optimized fuel cell system. In this study, a fuel cell control system is developed by utilizing an instrument-based system integration platform and sensor-less fuel concentration estimation and verification. A control measure is determined by both consumption and concentration change of the fuel. The feasibility of the sensor-less fuel concentration estimation is verified by experimental results. A testing platform, integrating software and hardware, which is used for the direct liquid fuel cell system, has developed in this study. It is an beneficial and flexible tool for developing relative controlling algorithms, evaluating efficiency of cells, or designing BOP of fuel cells.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130567238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Petri nets and genetic algorithm based optimal scheduling for job shop manufacturing systems
Pub Date: 2013-07-04 | DOI: 10.1109/ICSSE.2013.6614640
A. Yao, Y. Pan
An optimal production scheduling solution that meets the orders is a must for an enterprise to gain profit. This paper presents a novel Petri nets and Genetic Algorithm (PNGA) optimal scheduling method for job shop manufacturing systems. Using the job shop production of a mold factory as a case study, we examined the capability of the proposed PNGA method and compared its results with the ordinary Genetic Algorithm (GA) and the Hybrid Taguchi-Genetic Algorithm (HTGA). MATLAB was used to model the Petri nets, and Taguchi's method was used to optimize the experiment parameters. The optimal parameter settings were then programmed into the PNGA program, and the process time was estimated in conjunction with the Petri net model. The simulation results show that the average process time of the PNGA is about 287 (unit time), which is less than the 289.55 of the GA and the 288.8 of the HTGA. The standard deviation of the process time of the PNGA is about 5.20, which is less than the 6.0 of the GA and the 5.88 of the HTGA. That is, the proposed PNGA provides a better production scheduling solution.
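As a rough illustration of the genetic-algorithm side of the proposed PNGA (the Petri net modelling itself was done in MATLAB and is not reproduced here), the sketch below encodes a job shop solution as a repeated-job-index permutation and evaluates its makespan with a mutation-only GA loop; the instance data and operators are placeholders, not the mold-factory case study or the authors' PNGA.

```python
import random

# jobs[j] is a list of (machine, duration) operations in technological order.
jobs = [
    [(0, 3), (1, 2), (2, 2)],
    [(1, 2), (0, 1), (2, 4)],
    [(2, 4), (1, 3), (0, 1)],
]

def makespan(chromosome):
    """Decode a repeated-job-index chromosome (e.g. [0, 1, 0, 2, ...]) into a schedule."""
    next_op = [0] * len(jobs)
    job_ready = [0] * len(jobs)
    machine_ready = [0] * (1 + max(m for ops in jobs for m, _ in ops))
    for j in chromosome:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready[machine])
        job_ready[j] = machine_ready[machine] = start + duration
        next_op[j] += 1
    return max(job_ready)

def random_chromosome():
    genes = [j for j, ops in enumerate(jobs) for _ in ops]
    random.shuffle(genes)
    return genes

# Minimal GA loop: tournament selection plus swap mutation (crossover omitted for brevity).
population = [random_chromosome() for _ in range(30)]
for _ in range(200):
    parents = [min(random.sample(population, 3), key=makespan) for _ in range(len(population))]
    children = []
    for p in parents:
        c = p[:]
        i, k = random.sample(range(len(c)), 2)
        c[i], c[k] = c[k], c[i]   # swapping preserves each job's operation count
        children.append(c)
    population = children
print(min(makespan(c) for c in population))
```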
{"title":"A Petri nets and genetic algorithm based optimal scheduling for job shop manufacturing systems","authors":"A. Yao, Y. Pan","doi":"10.1109/ICSSE.2013.6614640","DOIUrl":"https://doi.org/10.1109/ICSSE.2013.6614640","url":null,"abstract":"An optimal production scheduling solution to meet the order is a must for enterprise to gain profit. This paper presents a novel Petri nets and Genetic Algorithm (PNGA) optimal scheduling method for job shop manufacturing systems. Using the job shop production of a mold factory as a case study, we examined the capability of the proposed PNGA method and compared its results with the ordinary Genetic Algorithm (GA) and Hybrid Taguchi-Genetic Algorithm (HTGA) methods. The MATLAB software was adopted to model the Petri nets in this study. Taguchi's method was used to optimize these experiment parameters. The optimal parameter settings were then programmed into the PNGA program. In conjunction with the Petri nets model, the process time was then estimated. The simulation results show that the average process time of PNGA is about 287 (unit time). It is less than 289.55 of the GA and 288.8 of the HTGA. The standard deviation of process time of PNGA is about 5.20. It is less than 6.0 of the GA and 5.88 of the HTGA. That is, the proposed PNGA is able to provide a better production scheduling solution.","PeriodicalId":124317,"journal":{"name":"2013 International Conference on System Science and Engineering (ICSSE)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128348354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}