Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514292
V. Tiwari, Arti Arya, Sudha Chaturvedi
This paper uses location data traces (from GPS, mobile signals, etc.) of past vehicle trips to develop an algorithm for predicting the end-to-end route of a vehicle. The focus is on overall route prediction rather than short-term prediction of road segments. Past research on route prediction uses raw location traces decomposed into trips. This paper introduces an additional step that converts trips composed of location trace points into trips of road-network edges, which requires the algorithm to make use of the road network. We show that doing so improves storage and time complexity without sacrificing accuracy. Moreover, it is well known that location trace data has inherent inaccuracies due to hardware limitations of the devices, which most prior work does not handle. This paper presents the results of route prediction algorithms under such data inaccuracies.
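The conversion step the abstract describes — snapping raw trace points onto road-network edges — can be sketched as a minimal nearest-edge matcher. Everything below (the planar distance approximation, the edge IDs, the sample trace) is illustrative, not the paper's actual map-matching method:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b (planar approximation)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def trace_to_edge_trip(trace, edges):
    """Snap each trace point to its nearest edge, then collapse repeats,
    turning a trip of points into a (much smaller) trip of edge IDs."""
    trip = []
    for p in trace:
        eid = min(edges, key=lambda e: point_segment_dist(p, *edges[e]))
        if not trip or trip[-1] != eid:
            trip.append(eid)
    return trip

# Hypothetical two-edge road network and a noisy four-point trace.
edges = {"e1": ((0, 0), (1, 0)), "e2": ((1, 0), (1, 1))}
trace = [(0.1, 0.02), (0.6, -0.01), (0.98, 0.4), (1.02, 0.9)]
print(trace_to_edge_trip(trace, edges))  # → ['e1', 'e2']
```

Note how four noisy points compress to two edge IDs — the storage saving the abstract claims comes from exactly this collapse.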
Title: Route prediction using trip observations and map matching
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514235
A. Banerjee, P. Dutta, Subhankar Ghosh
A mobile ad hoc network is an infrastructure-less network whose nodes can move freely in any direction. Since the nodes have limited battery power, energy-efficient route discovery mechanisms are critical for the proper performance of such networks. The Experience-based Energy-efficient Routing Protocol (EXERP) [1] is one protocol that intelligently addresses this issue; it requires two caches, namely a history cache (H-cache) and a packet cache (P-cache), which depend on one another. In this article, we propose a fuzzy-controlled cache management (FCM) technique for EXERP in ad hoc networks. Simulation results establish that the proposed scheme achieves a higher hit ratio at lower complexity than other cache management schemes.
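The abstract does not specify the fuzzy rules FCM uses, but the general idea of fuzzy-controlled cache management can be sketched with triangular membership functions over recency and frequency. The membership breakpoints, the single keep-rule, and the cache contents below are all invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership of x over breakpoints (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def retention(recency_s, hits):
    """Fuzzy retention score; min() acts as the AND of the two antecedents
    in the rule: IF entry is fresh AND entry is popular THEN keep it."""
    fresh = tri(recency_s, -1.0, 0.0, 60.0)   # seconds since last hit
    popular = tri(hits, 0.0, 10.0, 1e9)       # hit count in current window
    return min(fresh, popular)

def evict(cache):
    """cache: {key: (seconds_since_last_hit, hit_count)}; drop lowest score."""
    return min(cache, key=lambda k: retention(*cache[k]))

cache = {"routeA": (5.0, 9), "routeB": (55.0, 2), "routeC": (30.0, 8)}
print(evict(cache))  # → routeB
```

A crisp LRU or LFU policy would rank these entries by one signal only; the fuzzy score blends both, which is the kind of trade-off a fuzzy controller is meant to capture.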
Title: Fuzzy-controlled cache management (FCM) in Experience-based Energy-efficient Routing Protocol in ad hoc networks
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514334
S. Vernekar, A. Buchade
Log files are the primary source of information for identifying system threats and problems that occur at any point in time. These threats and problems can be identified by analyzing the log files and finding patterns of possible suspicious behavior. The concerned administrator can then be given appropriate alerts or warnings about these security threats and problems, generated after the log files are analyzed, and can take appropriate action based on them. Many tools and approaches, both proprietary and open source, are available for this purpose. This paper presents a new approach that uses a MapReduce algorithm for log file analysis, producing appropriate security alerts or warnings. The results of this system can then be compared with the available tools.
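The MapReduce shape of such an analysis is easy to sketch in miniature: a map phase emits a count for every suspicious pattern a log line matches, and a reduce phase sums counts per pattern. The pattern list and sample log lines are hypothetical; a real deployment would use site-specific signatures and an actual MapReduce runtime such as Hadoop:

```python
import re
from collections import defaultdict

# Hypothetical signatures of suspicious behavior.
SUSPICIOUS = [r"authentication failure", r"Failed password", r"segfault"]

def mapper(line):
    """Map phase: emit (pattern, 1) for each pattern the line matches."""
    for pat in SUSPICIOUS:
        if re.search(pat, line):
            yield pat, 1

def reducer(pairs):
    """Reduce phase: sum the counts emitted for each pattern."""
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n
    return dict(counts)

log = [
    "sshd[311]: Failed password for root from 10.0.0.5",
    "kernel: app[99]: segfault at 0 ip 0000",
    "sshd[312]: Failed password for admin from 10.0.0.7",
]
alerts = reducer(kv for line in log for kv in mapper(line))
print(alerts)  # → {'Failed password': 2, 'segfault': 1}
```

An alerting layer would then threshold these counts (e.g. repeated failed passwords from one host) to decide which warnings reach the administrator.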
Title: MapReduce based log file analysis for system threats and problem identification
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514368
R. Bhushan, R. Nath
Nowadays, users rely on the web for information, but currently available search engines often return a long list of results, many of which are not relevant to the user's requirement. Web logs are important information repositories that record user activity on search results. Mining these logs can improve the performance of search engines, since a user has a specific goal when searching for information, and an optimized search can return results that accurately satisfy that goal. In this paper, we propose a web recommendation approach that learns from web logs and recommends to the user a list of pages relevant to them by comparison with their historical pattern. Finally, the search result list is optimized by re-ranking the result pages. The proposed system proves efficient because the pages the user desires appear at the top of the result list, reducing search time.
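The re-ranking step can be sketched as sorting the engine's result list by a per-user visit count mined from the web logs, falling back to the original rank to break ties. The history scores and URLs below are invented; the paper's actual scoring function is not given in the abstract:

```python
def rerank(results, history):
    """Re-rank search results so pages this user visited more often rise.

    results: URLs in the engine's original order.
    history: {url: visit_count} mined from the user's web logs.
    """
    # Sort by descending visit count; the original index breaks ties,
    # preserving the engine's ordering for unseen pages.
    return sorted(results, key=lambda u: (-history.get(u, 0), results.index(u)))

history = {"wiki/python": 7, "docs/python": 3}
results = ["blog/python", "docs/python", "wiki/python"]
print(rerank(results, history))
# → ['wiki/python', 'docs/python', 'blog/python']
```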
Title: Recommendation of optimized web pages to users using Web Log mining techniques
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514293
S. Samreen, G. Narasimha
Mobile ad hoc networks (MANETs) are susceptible to having their effective operation compromised by a variety of security attacks because of features such as unreliable wireless links between nodes, constantly changing topology, restricted battery power, and lack of centralized control. Nodes may misbehave either because they are malicious and deliberately wish to disrupt the network, or because they are selfish and wish to conserve their own limited resources, such as power. In this paper, we present a mechanism that enables the detection of nodes that exhibit packet-forwarding misbehavior. The approach uses two techniques in tandem, with the results generated by one processed further by the other to produce the final list of misbehaving nodes. The first part detects misbehaving links using the 2ACK technique; this information is fed into the second part, which uses the principle of conservation of flow (PFC) to detect the misbehaving node. The limitation of the 2ACK algorithm is that it can detect a misbehaving link but cannot decide which of the nodes associated with that link is misbehaving. Hence, we use the principle of conservation of flow in the second part, which detects the misbehaving nodes associated with the misbehaving link.
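The second stage rests on a simple invariant: a well-behaved node should forward roughly as many packets as it receives. Given a link flagged by 2ACK, the endpoint that violates this conservation-of-flow check is the suspect. The counters and the tolerance threshold below are illustrative, not values from the paper:

```python
def misbehaving_nodes(link, flow_stats, tolerance=0.9):
    """Decide which endpoint of a 2ACK-flagged link drops packets.

    link: (u, v) pair flagged as misbehaving.
    flow_stats: {node: (packets_received, packets_forwarded)} observed
                by neighbors, per the conservation-of-flow principle.
    A node forwarding less than `tolerance` of what it receives is suspect.
    """
    suspects = []
    for node in link:
        received, forwarded = flow_stats[node]
        if received and forwarded / received < tolerance:
            suspects.append(node)
    return suspects

# Hypothetical counters: n1 forwards nearly everything, n2 drops ~60%.
flow_stats = {"n1": (120, 118), "n2": (100, 41)}
print(misbehaving_nodes(("n1", "n2"), flow_stats))  # → ['n2']
```

This is why the two techniques compose: 2ACK narrows the search to one link, and the flow check resolves the ambiguity between its two endpoints.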
Title: An efficient approach for the detection of node misbehaviour in a MANET based on link misbehaviour
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514222
Kiran P. Singh, A. Paliwal, M. D. Upadhyay
This paper gives an approach for loss reduction in the design of a maximally flat low-pass filter (LPF) using two different methods: first using a defected ground structure (DGS), and second using a series of grounded patches (SGP) for surface-wave compensation. The fourth-order maximally flat low-pass filter is designed for a cut-off frequency of 3 GHz. To meet the arbitrary cut-off frequency and impedance-level specification, the prototype is converted to the final low-pass filter using frequency scaling and an impedance transform. Loss reduction is investigated first with the defected ground structure and then with the series-of-grounded-patches (SGP) structure of small 2 mm × 2 mm square patches on a microstrip line. The simulation is performed using the PUFF and IE3D software, and the simulation results are reported. The simulation results show an improvement in reflection coefficient at the 3 GHz cut-off frequency of −7.18 dB for the filter with DGS and −2.56 dB for the filter with SGP, in comparison with the simple LPF; at 2.68 GHz, however, the LPF with SGP shows a −20.18 dB improvement. All three filters are fabricated: without defected ground, with defected ground, and with the series of grounded patches. Measured results for all three fabricated low-pass filters, obtained using a vector network analyser, are presented. Insertion loss, radiation loss, transmission loss, and return loss are calculated from the measured and simulated parameters, and the group delay of each filter is reported. A comparative view of the losses, group delays, and theoretical and practical values for the three filters is presented and shows good agreement.
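The frequency-scaling and impedance-transform step mentioned above is standard filter synthesis: the normalized fourth-order Butterworth (maximally flat) prototype element values g₁…g₄ = 0.7654, 1.8478, 1.8478, 0.7654 are scaled to the target impedance Z₀ and cut-off ωc via C = g/(Z₀ωc) for shunt elements and L = gZ₀/ωc for series elements. The choice of a 50 Ω system and a shunt-first ladder is an assumption here, not stated in the abstract:

```python
import math

# 4th-order maximally flat (Butterworth) low-pass prototype g-values.
G = [0.7654, 1.8478, 1.8478, 0.7654]

def scale_prototype(g, z0=50.0, fc=3e9, first_shunt=True):
    """Impedance/frequency scaling of the normalized prototype:
    shunt C_k = g_k / (z0 * wc), series L_k = g_k * z0 / wc."""
    wc = 2 * math.pi * fc
    elems, shunt = [], first_shunt
    for gk in g:
        if shunt:
            elems.append(("C", gk / (z0 * wc)))
        else:
            elems.append(("L", gk * z0 / wc))
        shunt = not shunt
    return elems

for kind, val in scale_prototype(G):
    unit, scale = ("pF", 1e12) if kind == "C" else ("nH", 1e9)
    print(f"{kind} = {val * scale:.3f} {unit}")
# → C = 0.812 pF, L = 4.901 nH, C = 1.961 pF, L = 2.030 nH
```

These lumped values are then realized as microstrip sections (and modified with DGS or SGP), which is where the loss behavior the paper studies comes in.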
Title: Novel approach for loss reduction in LPF for satellite communication system
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514219
V. Deotare, D. Padole, V. K. Mohanty
This paper emphasizes the necessity of fast and cost-effective services in the embedded domain and a method of accomplishing this. Here the service provider accesses the system from a remote end over the internet, runs diagnostic software, and takes corrective action such as software updates and notifications. Some commonly known corrective actions are preloaded by the manufacturer onto the system's ROM; newly occurring issues receive rapid upgrades from the remote end.
Title: Rapid upgradation and remote service station facitity for embedded based systems
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514472
Sulabh Kumra, S. Mehta, R. Singh
We developed a robotic arm for a master-slave system to support tele-operation of an anthropomorphic robot that performs remote dexterous manipulation tasks. In this paper, we describe the design and specifications of the experimental master-slave arm setup to demonstrate the feasibility of tele-operation using an exoskeleton. The paper explores the design decisions and trade-offs made in achieving this combination of price and performance. We developed the 6-degree-of-freedom slave arm and the exoskeleton master for control of the robotic arm.
Title: Development of anthropomorphic multi-D.O.F master-slave manipulator
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514290
M. Morshed, Md. Rafiqul Islam
The Cluster Based Secure Routing Protocol (CBSRP) is a MANET routing protocol that ensures secure key management and communication between mobile nodes. It uses digital signatures and one-way hashing for secure communication. CBSRP forms a group of small clusters consisting of 4-5 nodes each, after which communication takes place between the mobile nodes. Inside a cluster there is always a cluster head; the head is not permanent, as the other nodes wait in a queue and a new cluster head is elected from the remaining nodes based on priority. Inside a cluster, mobile nodes are authenticated using one-way hashing, and digital signatures are not necessary for intra-cluster communication; for cluster-to-cluster authentication we propose using digital signatures. CBSRP ensures secure communication and is energy efficient because the whole network is segmented into a small set of clusters.
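The priority-based head election with a standby queue can be sketched as below. The abstract does not define the priority metric, so residual battery (with node id as tie-break) is assumed purely for illustration:

```python
def elect_cluster_head(cluster):
    """Order cluster members by priority; the first becomes cluster head
    and the rest form the standby queue for the next election.
    Priority here (an assumption) favours residual battery, then node id."""
    queue = sorted(cluster, key=lambda n: (-n["battery"], n["id"]))
    return queue[0]["id"], [n["id"] for n in queue[1:]]

# Hypothetical 3-node cluster with residual battery fractions.
cluster = [
    {"id": "n1", "battery": 0.42},
    {"id": "n2", "battery": 0.81},
    {"id": "n3", "battery": 0.67},
]
head, standby = elect_cluster_head(cluster)
print(head, standby)  # → n2 ['n3', 'n1']
```

When the head's priority drops (e.g. battery depletion), re-running the election promotes the front of the standby queue, which matches the non-permanent-head behavior the abstract describes.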
Title: CBSRP: Cluster Based Secure Routing Protocol
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514229
N. K. Gupta, M. K. Rohil
Genetic algorithms have been successfully applied in the area of software testing, and the demand for automated test case generation in object-oriented software testing is increasing. Extensive testing can only be achieved through a test automation process, whose benefits include lowering the cost of testing and, consequently, of the whole software development process. Several studies have applied this technique to automated test data generation, but it is expensive and cannot be applied properly to programs with complex structures, and previous approaches in object-oriented testing are limited in terms of test case feasibility due to call dependences and runtime exceptions. This paper proposes a strategy for evaluating the fitness of both feasible and infeasible test cases, improving the evolutionary search by achieving higher coverage and evolving more infeasible test cases into feasible ones.
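The key idea — scoring infeasible test cases instead of discarding them — can be sketched with a two-tier fitness function. The tiering scheme, field names, and sample population below are illustrative, not the paper's actual formulation:

```python
def fitness(test_case):
    """Two-tier fitness (illustrative): feasible cases score above 1.0,
    graded by branch coverage; infeasible cases score below 1.0, graded by
    how far the call sequence executed before a dependence or runtime
    exception broke it, so the search can evolve them toward feasibility."""
    if test_case["feasible"]:
        return 1.0 + test_case["covered"] / test_case["total_branches"]
    return test_case["calls_completed"] / test_case["calls_total"]

pop = [
    {"feasible": True, "covered": 12, "total_branches": 16},
    {"feasible": False, "calls_completed": 3, "calls_total": 4},
    {"feasible": False, "calls_completed": 1, "calls_total": 4},
]
print(sorted(fitness(t) for t in pop))  # → [0.25, 0.75, 1.75]
```

Under this scheme an infeasible case that almost completes its call sequence outscores one that fails early, giving selection pressure toward feasibility — the improvement the paper targets.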
Title: Improving GA based automated test data generation technique for object oriented software