{"title":"Evaluating the Best of Knowledge Management Technology for Small Medium Enterprise Based on Fuzzy Analytic Hierarchy Process","authors":"F. M. Hamundu, L. Siregar, R. Budiarto","doi":"10.1109/AMS.2009.118","DOIUrl":"https://doi.org/10.1109/AMS.2009.118","url":null,"abstract":"Nowadays, interest in Knowledge Management (KM) from both industry and academia has been growing rapidly. There is no doubt that KM has come to play an important role in organizations. However, if organizations, especially small and medium enterprises (SMEs), do not fully comprehend what drives the need for KM and how to select the necessary technological infrastructure, they may fall into the trap of creating an inefficient KM strategy and face investment risks such as losing money. This paper presents an application of extent analysis using the Analytic Hierarchy Process (AHP) as a model for selecting Knowledge Management Technology (KMT) in a fuzzy environment. In addition, the model considers the positive and negative aspects of selection by incorporating the Benefits, Costs and Risks (BCR) concept.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"225 1","pages":"200-205"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77285474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
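The extent-analysis step this abstract refers to (Chang's method over triangular fuzzy pairwise judgments) can be sketched as follows; the comparison matrix values are illustrative assumptions, not taken from the paper:

```python
# Sketch of Chang's extent analysis for fuzzy AHP. Judgments are
# triangular fuzzy numbers (l, m, u); values below are illustrative.

def synthetic_extents(matrix):
    """Fuzzy synthetic extent S_i for each row of a fuzzy comparison matrix."""
    row_sums = [tuple(sum(t[k] for t in row) for k in range(3)) for row in matrix]
    total = tuple(sum(r[k] for r in row_sums) for k in range(3))
    # Multiply each row sum (l, m, u) by the inverse (1/U, 1/M, 1/L) of the grand total.
    return [(r[0] / total[2], r[1] / total[1], r[2] / total[0]) for r in row_sums]

def possibility(a, b):
    """Degree of possibility V(a >= b) for triangular fuzzy numbers a and b."""
    la, ma, ua = a
    lb, mb, ub = b
    if ma >= mb:
        return 1.0
    if lb >= ua:
        return 0.0
    return (lb - ua) / ((ma - ua) - (mb - lb))

def weights(matrix):
    """Normalized crisp weights: d(A_i) = min_j V(S_i >= S_j)."""
    s = synthetic_extents(matrix)
    d = [min(possibility(si, sj) for j, sj in enumerate(s) if j != i)
         for i, si in enumerate(s)]
    total = sum(d)
    return [x / total for x in d]

# Three criteria (e.g. Benefits, Costs, Risks) compared pairwise:
fuzzy_matrix = [
    [(1, 1, 1),       (2, 3, 4),      (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),      (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1),  (1, 1, 1)],
]
print(weights(fuzzy_matrix))
```

The dominant first row yields a weight near 1 for the first criterion, showing how extent analysis collapses fuzzy judgments into a crisp ranking.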
{"title":"Invited Paper: Modeling of Nanoscale MOSFET Using MATLAB","authors":"V. Arora","doi":"10.1109/AMS.2009.21","DOIUrl":"https://doi.org/10.1109/AMS.2009.21","url":null,"abstract":"Quantum and high-field effects present in a nanoscale MOSFET are modeled and data processed using MATLAB. The drift response to the electric field is modeled after the intrinsic velocity that is shown to be the ultimate limit to the saturation velocity in a very high electric field. The ballistic intrinsic velocity arises from the fact that randomly oriented velocity vectors in zero electric field are streamlined and become unidirectional. The presence of a quantum emission lowers the saturation velocity. The drain carrier velocity is revealed to be smaller than the saturation velocity due to the presence of the finite electric field at the drain of a MOSFET. The velocity so obtained is considered in modeling the current-voltage characteristics of a MOSFET channel in the inversion regime and excellent agreement is obtained with the experimental data on an 80-nm channel.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"1 1","pages":"739-744"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81450416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
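As a hedged illustration of the velocity-saturation behaviour the abstract describes (not the paper's actual MATLAB model), a common empirical velocity-field relation can be sketched as:

```python
# Illustrative sketch: drift velocity rises linearly with field at low E
# (ohmic, slope = mobility mu) and saturates at v_sat in very high field.
# The tanh form and the parameter values are assumptions for illustration.
import math

def drift_velocity(E, mu=0.04, v_sat=1.0e5):
    """v(E) = v_sat * tanh(mu * E / v_sat); mu in m^2/(V s), v_sat in m/s."""
    return v_sat * math.tanh(mu * E / v_sat)

for E in (1e4, 1e6, 1e8):  # field strength in V/m
    print(f"E = {E:.0e} V/m -> v = {drift_velocity(E):.3e} m/s")
```

At 1e4 V/m the result is essentially mu*E (ohmic regime); at 1e8 V/m it has saturated at v_sat, mirroring the ultimate-limit behaviour the abstract attributes to the intrinsic velocity.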
{"title":"Design of an Intelligent Route Planning System Using an Enhanced A*-search Algorithm","authors":"P. L. Wong, M. A. Osman, Maziani Sabudin","doi":"10.1109/AMS.2009.132","DOIUrl":"https://doi.org/10.1109/AMS.2009.132","url":null,"abstract":"Traffic congestion has become a major problem in many countries. One of the main causes is the failure to manage a journey. Vehicles tend to choose the shortest paths, which ends up congesting certain areas. If a journey is well managed, using alternative paths may lead to the same destination in a shorter period of time. Therefore, given the importance of a sustainable environment, intelligent route planning systems have become a popular research agenda among researchers and industries. In this paper, an enhanced algorithm based on the A*-search algorithm is proposed. The enhanced algorithm is used to study the traffic congestion level as well as to suggest the best route to a destination. The algorithm takes the traffic congestion level into consideration, which makes it possible to choose the best route with minimal travel time.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"76 1","pages":"40-44"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83862563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
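A congestion-aware A*-search of the kind the abstract describes can be sketched as follows; the graph, congestion factors, and zero heuristic are illustrative assumptions, not the paper's enhancement:

```python
# Minimal A*-search sketch where edge cost is base travel time scaled by a
# per-edge congestion factor, so a congested shortest path loses to a
# clearer alternative. All numbers are illustrative.
import heapq

def a_star(graph, congestion, start, goal, heuristic):
    """graph: node -> {neighbor: base_time}; congestion: (u, v) -> factor >= 1."""
    open_set = [(heuristic(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nbr, base in graph[node].items():
            cost = g + base * congestion.get((node, nbr), 1.0)
            if cost < best.get(nbr, float("inf")):
                best[nbr] = cost
                heapq.heappush(open_set, (cost + heuristic(nbr), cost, nbr, path + [nbr]))
    return float("inf"), []

graph = {"A": {"B": 5, "C": 4}, "B": {"D": 5}, "C": {"D": 7}, "D": {}}
congestion = {("A", "B"): 3.0}  # heavy traffic on the A->B link
time, route = a_star(graph, congestion, "A", "D", heuristic=lambda n: 0)
print(time, route)  # the congested A->B->D (cost 20) loses to A->C->D (cost 11)
```

With a zero heuristic this degenerates to Dijkstra's algorithm; a real deployment would use an admissible travel-time estimate (e.g. straight-line distance over the free-flow speed limit).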
{"title":"Performance Simulation of Pyramidal and Wedge Microwave Absorbers","authors":"Hassan Nornikman, P. Soh, A. Azremi, M. S. Anuar","doi":"10.1109/AMS.2009.13","DOIUrl":"https://doi.org/10.1109/AMS.2009.13","url":null,"abstract":"An anechoic chamber's signal absorption capability is directly correlated to its performance. One of the main components enabling signal absorption is the design of, and material used in, the absorbers. Two of the main types are pyramidal and wedge-shaped absorbers. In this work, truncated pyramidal and truncated wedge absorbers for anechoic chamber applications have been designed to operate effectively at microwave frequencies from 1 to 10 GHz. The absorbers were simulated using a fusion of carbon-based materials with different features and coating thicknesses to improve their performance and achieve significant material cost savings. Material considerations for the basic pyramidal and wedge structures have also been taken into account. The designs were simulated using CST Microwave Studio. Simulated results showed that coating half the absorber height produced the best performance in terms of signal absorption.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"10 1","pages":"649-654"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81467007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Processing Using Eyelid Blinking and Mouth Yawning to Measure Human's Fatigue Level","authors":"H. Noor, R. Ibrahim","doi":"10.1109/AMS.2009.138","DOIUrl":"https://doi.org/10.1109/AMS.2009.138","url":null,"abstract":"Matlab is a well-known tool for processing images. It provides engineers, scientists, and researchers with an intuitive, flexible environment for solving complex imaging problems. To measure a human's fatigue level, images of eyelid blinking or mouth yawning can be processed using Matlab in order to determine whether the person is fatigued or not. This paper discusses a framework that combines two factors (eyelid blinking and mouth yawning) to measure a human's fatigue level. The purpose of combining these two factors is to obtain a better measurement of the level of human fatigue due to drowsiness. The processes and activities in this framework are elaborated in detail as a guide and reference for building a prototype in the Matlab programming language. Insight acquired through this study is expected to be useful for the development of a simulation. This research and invention can provide a technological solution to human problems such as accident prevention and safety in transportation, and can also be applied in education.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"38 1","pages":"326-331"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82450179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
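As a purely hypothetical sketch of how the two factors might be blended into a single score (the function name, thresholds, and weights are assumptions, not the paper's framework):

```python
# Hypothetical two-factor fatigue score: a PERCLOS-style eye-closure ratio
# blended with a normalized yawn rate. All constants are illustrative.

def fatigue_level(eye_closed_frames, total_frames, yawns_per_minute,
                  w_eye=0.6, w_yawn=0.4):
    """Return ('fatigued'|'alert', score in [0, 1])."""
    perclos = eye_closed_frames / total_frames     # fraction of closed-eye frames
    yawn_score = min(yawns_per_minute / 3.0, 1.0)  # saturate at 3 yawns/min
    score = w_eye * perclos + w_yawn * yawn_score
    return ("fatigued" if score > 0.5 else "alert"), score

state, score = fatigue_level(eye_closed_frames=120, total_frames=300,
                             yawns_per_minute=2)
print(state, round(score, 2))
```

The point of the weighted blend is the one the abstract makes: either cue alone (eyes 40% closed, or two yawns a minute) is ambiguous, but together they cross the decision threshold.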
{"title":"Advances in Supply Chain Simulation","authors":"A. Bruzzone, Y. Merkuryev","doi":"10.1109/AMS.2009.151","DOIUrl":"https://doi.org/10.1109/AMS.2009.151","url":null,"abstract":"The lecture aims to provide an insight into modern approaches to simulation-based analysis of supply chains. It gives an overview of using modeling and simulation (M&S) in areas including: supply chain resilience; sustainable logistics; logistics security; logistics safety; logistics bottleneck saturation; supply chain dynamics; and supply chain management simulation-based training.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"132 1","pages":"5-6"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73003754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object-Process Methodology: A Hidden Secret for SOA Service Design","authors":"Atif Farid Mohammad","doi":"10.1109/AMS.2009.36","DOIUrl":"https://doi.org/10.1109/AMS.2009.36","url":null,"abstract":"We design and develop a software system to answer the specific needs of stakeholders. As software engineers, it is our very first job to document their needs. This initial document of requirements is only the beginning of the development of the software system. There are many stakeholders involved in designing complex systems, each of them holding only part of the needs-based information required for system design. As software engineers, it is our prime job to extract data on all of the needs and knowledge from all of the involved stakeholders and implement this knowledge in the system design. This paper presents a software engineering methodology, called Object-Process Methodology or OPM. It is a complete system engineering, architecting, and system lifecycle modeling approach that can be used by the software engineering community to achieve the desired results for stakeholders expeditiously and as accurately as possible.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"34 1","pages":"170-175"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73127759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodology of the Taylor Series Based Computations","authors":"J. Kunovsky, Martina Drozdová, J. Kopriva, Milan Pindryc","doi":"10.1109/AMS.2009.73","DOIUrl":"https://doi.org/10.1109/AMS.2009.73","url":null,"abstract":"In this paper an outline is presented of historical and current developments in the application of recurrent Taylor series to the integration of systems of ordinary differential equations. The idea of an extremely accurate and fast method for the numerical solution of differential equations is presented. In general, Taylor series are not included, or not even mentioned, in surveys on numerical integration techniques, as the programs were written by mathematicians with the main objective of demonstrating the feasibility of the concept and with the goal of providing integration algorithms of very high accuracy. For this reason, the programs should be looked upon as a stimulus for writing more advanced software employing Taylor series that is better able to compete with programs using other methods. An attempt in this direction is TKSL, a program whose results are discussed in this paper.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"50 1","pages":"206-211"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75272328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
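The recurrence that makes high-order Taylor integration cheap can be illustrated for the textbook case y' = y, y(0) = 1 (a sketch in the spirit of TKSL-style solvers, not their actual code):

```python
# Recurrent Taylor-series step for y' = y: each Taylor term follows from the
# previous one as term_k = term_{k-1} * h / k, so order 20 costs only 20
# multiply-adds per step and the truncation error is negligible.
import math

def taylor_step(y, h, order=20):
    term = y        # k = 0 term is y itself
    total = y
    for k in range(1, order + 1):
        term = term * h / k   # h^k * y^(k) / k!  via the recurrence for y' = y
        total += term
    return total

y, h = 1.0, 0.1
for _ in range(10):           # integrate from t = 0 to t = 1
    y = taylor_step(y, h)
print(abs(y - math.e))        # error near machine precision
```

For a general ODE system the same idea applies, but the recurrence for the higher derivatives is derived from the right-hand side rather than being a single multiplication.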
{"title":"Automatic Gap Identification towards Efficient Contour Line Reconstruction in Topographic Maps","authors":"B. Sandhya, A. Agarwal, Raghavendra Rao, R. Wankar","doi":"10.1109/AMS.2009.25","DOIUrl":"https://doi.org/10.1109/AMS.2009.25","url":null,"abstract":"Automatic extraction and vectorization of contour lines from color topographic maps is an important precursor to obtaining useful information for many vector-based GIS applications. In this work, a novel hybridized algorithm is developed for reconstructing the contour lines extracted from a color topographic map. The extraction of contour lines from a topographic map leads to broken contour lines due to inherent characteristics of the map, thus posing the challenging problem of identifying gaps and then filling them. This has been addressed by developing algorithms based on connected components, graph theory, Expectation Maximization (EM) and numerical methods. Our algorithm operates by isolating the segments of those contours which have gaps, and reduces the complexity of matching such segments by employing the EM algorithm. We also present a new scheme for filling gaps in thick contours without applying thinning algorithms.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"34 1","pages":"309-314"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73408424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
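The gap-identification step that precedes the EM-based matching can be sketched roughly as follows; this is an illustrative reconstruction (segment data, distance threshold, and function names are assumptions), not the authors' algorithm:

```python
# Illustrative gap identification: find endpoints of broken contour segments
# (pixels with exactly one 8-connected neighbor) and pair nearby endpoints
# across segments as gap candidates for later matching.
from itertools import combinations

def endpoints(segment):
    """Pixels of an 8-connected contour segment having exactly one neighbor."""
    pts = set(segment)
    def degree(p):
        x, y = p
        return sum((x + dx, y + dy) in pts
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))
    return [p for p in segment if degree(p) == 1]

def gap_candidates(segments, max_dist=6.0):
    """(seg_i, seg_j, endpoint_p, endpoint_q, distance), closest pairs first."""
    cands = []
    for (i, a), (j, b) in combinations(enumerate(segments), 2):
        for p in endpoints(a):
            for q in endpoints(b):
                d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                if d <= max_dist:
                    cands.append((i, j, p, q, d))
    return sorted(cands, key=lambda c: c[-1])

seg1 = [(0, 0), (1, 0), (2, 0), (3, 0)]   # horizontal run broken after (3, 0)
seg2 = [(6, 0), (7, 0), (8, 0)]
print(gap_candidates([seg1, seg2])[0][:4])  # nearest endpoint pair across the gap
```

In the paper's pipeline a step like this would only nominate candidate pairs; EM then resolves which nominations are genuine continuations of the same contour.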
{"title":"Efficient High Performance Computing Framework for Short Rate Models","authors":"T. P. Dampahala, H. D. D. D. Premadasa, P. W. W. Ranasinghe, J. N. P. Weerasinghe, K. Wimalawarne","doi":"10.1109/AMS.2009.27","DOIUrl":"https://doi.org/10.1109/AMS.2009.27","url":null,"abstract":"Many mathematical calculations in the field of computational finance consume a lot of time and resources. Some of the short rate models used in quantitative finance have been considered in this paper and optimized for performance within a cluster computing environment. The back-end cluster has been seamlessly integrated with an easy-to-use front-end which can be used by finance professionals who are unaware of the details of the computational and database cluster. Furthermore, many techniques that have been utilized to improve the efficiency of the models are also described. This paper also describes the generalization of a High Performance Computing cluster designed for one-factor short rate models and how it can easily be extended to other mathematical models in quantitative finance. The ultimate objective is to develop a generalized framework for quantitative finance.","PeriodicalId":6461,"journal":{"name":"2009 Third Asia International Conference on Modelling & Simulation","volume":"17 1","pages":"608-613"},"PeriodicalIF":0.0,"publicationDate":"2009-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76027157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
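A Monte Carlo run of the Vasicek model, dr = a(b - r)dt + sigma dW, is a representative one-factor short-rate workload for such a cluster. This sketch (illustrative parameters, plain Python rather than the authors' framework) shows why the work shards naturally — each path is independent:

```python
# Euler-scheme Monte Carlo for the Vasicek one-factor short rate model.
# Parameter values are illustrative; in a cluster deployment, path batches
# would be distributed across nodes since paths are independent.
import random

def vasicek_paths(r0, a, b, sigma, T, steps, n_paths, seed=42):
    """Simulate n_paths terminal short rates r(T) under the Vasicek model."""
    rng = random.Random(seed)
    dt = T / steps
    rates = []
    for _ in range(n_paths):
        r = r0
        for _ in range(steps):
            r += a * (b - r) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        rates.append(r)
    return rates

final = vasicek_paths(r0=0.03, a=0.5, b=0.05, sigma=0.01, T=5.0,
                      steps=250, n_paths=2000)
mean = sum(final) / len(final)
print(round(mean, 3))   # mean-reverting toward b = 0.05
```

The sample mean should land near the analytic value b + (r0 - b)e^{-aT} ≈ 0.048, a quick sanity check before scaling the path count up on the cluster.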