Title: Fast Estimation of Nonparametric Kernel Density Through PDDP, and its Application in Texture Synthesis
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.19
A. Sinha, Sumana Gupta
In this work, a new algorithm is proposed for fast estimation of nonparametric multivariate kernel density, based on principal direction divisive partitioning (PDDP) of the data space. The goal of the proposed algorithm is to use the finite support property of kernels for fast estimation of density. Compared to earlier approaches, this work explains the need to use boundaries (for partitioning the space) instead of centroids (used in earlier approaches), giving a more unsupervised procedure (less user involvement) at equal or lower computational complexity. In earlier approaches, the finite support of a fixed kernel varies within the space due to the use of cluster centroids. It is argued that if one uses boundaries (for partitioning) rather than centroids, the finite support of a fixed kernel does not change for a constant precision error, which makes the estimation framework less dependent on user supervision. The main contribution of this work is the insight gained into kernel density estimation through the incorporation of a clustering algorithm, and its application in texture synthesis. Texture synthesis through a nonparametric, noncausal Markov random field (MRF) has previously been implemented through estimation of, and sampling from, a nonparametric conditional density. Incorporating the proposed kernel density estimation algorithm into the earlier texture synthesis algorithm reduces the computational complexity while producing perceptually equivalent results. These results demonstrate the efficacy of the proposed algorithm in the context of natural texture synthesis.
{"title":"Fast Estimation of Nonparametric Kernel Density Through PDDP, and its Application in Texture Synthesis","authors":"A. Sinha, Sumana Gupta","doi":"10.14236/EWIC/VOCS2008.19","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.19","url":null,"abstract":"In thiswork, a newalgorithmis proposed for fast estimation of nonparametricmultivariate kernel density, based on principal direction divisive partitioning (PDDP) of the data space. The goal of the proposed algorithm is to use the finite support property of kernels for fast estimation of density. Compared to earlier approaches, this work explains the need of using boundaries (for partitioning the space) instead of centroids (used in earlier approaches), for better unsupervised nature (less user incorporation), and lesser (or atleast same) computational complexity. In earlier approaches, the finite support of a fixed kernel varies within the space due to the use of cluster centroids. It has been argued that if one uses boundaries (for partitioning) rather than centroids, the finite support of a fixed kernel does not change for a constant precision error. This fact introduces better unsupervision within the estimation framework. Themain contributionof thiswork is the insight gained in the kernel density estimation with the incorporation of clustering algortihm and its application in texture synthesis. \u0000 \u0000Texture synthesis through nonparametric, noncausal, Markov random field (MRF), has been implemented earlier through estimation of and sampling from nonparametric conditional density. The incorporation of the proposed kernel density estimation algorithm within the earlier texture synthesis algorithm reduces the computational complexity with perceptually same results. These results provide the efficacy of the proposed algorithm within the context of natural texture synthesis.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"289 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124161762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: On the Use of Real-Time Maude for Architecture Description and Verification: A Case Study
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.26
Chadlia Jerad, Kamel Barkaoui, A. Touzi
Real-Time Maude is an executable rewriting logic language particularly well suited to the specification of object-oriented, open and distributed real-time systems. In this paper we explore the possibility of using Real-Time Maude as a formal notation for software architecture description and verification of real-time systems. The system model is composed of two kinds of descriptions: static and dynamic. The static description consists of identifying the different elements composing the architecture, while the dynamic description defines the rules governing the system behaviour in terms of the possible actions allowed. The correspondence between software architecture concepts and Real-Time Maude concepts is developed for this purpose. The step towards verifying a system architecture is realized by applying Real-Time Maude simulation and analysis techniques to the described model and the properties that must be satisfied. An example is used to illustrate our proposal and to compare it with other architecture description languages.
{"title":"On the Use of Real-Time Maude for Architecture Description and Verification: A Case Study","authors":"Chadlia Jerad, Kamel Barkaoui, A. Touzi","doi":"10.14236/EWIC/VOCS2008.26","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.26","url":null,"abstract":"Real-Time Maude is an executable rewriting logic language particularly well suited for the specification of object-oriented open and distributed real time systems. In this paper we explore the possibility of using Real-Time Maude as a formal notation for software architecture description and verification of real time systems. The system model is composed of two kinds of descriptions: static and dynamic. The static description consists in identifying the different elements composing the architecture, while the dynamic description is the definition of the rules governing the system behaviour in terms of the possible actions allowed. The correspondence between software architecture concepts and the Real-Time Maude concepts are developed for this purpose. The step towards verifying system architecture is realized by applying Real-Time Maude simulation and analysis techniques to the described model and the properties that must be satisfied. An example is used to illustrate our proposal and to compare it with other architecture description languages.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125199022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Incremental Connectivity-Based Outlier Factor Algorithm
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.18
D. Pokrajac, N. Reljin, N. Pejcic, A. Lazarevic
Outlier detection has recently become an important problem in many industrial and financial applications. Often, outliers have to be detected from data streams that continuously arrive from data sources. Incremental outlier detection algorithms, aimed at detecting outliers as soon as they appear in a database, have recently become an emerging research field. In this paper, we develop an incremental version of the connectivity-based outlier factor (COF) algorithm and discuss its computational complexity. The proposed incremental COF algorithm has detection performance equivalent to that of the iterated static COF algorithm (applied after insertion of each data record), with a significant reduction in computational time. The paper provides theoretical and experimental evidence that the number of updates per such insertion/deletion does not depend on the total number of points in the data set, which makes the algorithm viable for very large dynamic datasets. Finally, we also illustrate an application of the proposed algorithm to motion detection in video surveillance.
{"title":"Incremental Connectivity-Based Outlier Factor Algorithm","authors":"D. Pokrajac, N. Reljin, N. Pejcic, A. Lazarevic","doi":"10.14236/EWIC/VOCS2008.18","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.18","url":null,"abstract":"Outlier detection has recently become an important problem in many industrial and financial applications. Often, outliers have to be detected from data streams that continuously arrive from data sources. Incremental outlier detection algorithms, aimed at detecting outliers as soon as they appear in a database, have recently become emerging research field. In this paper, we develop an incremental version of connectivity-based outlier factor (COF) algorithm and discuss its computational complexity. The proposed incremental COF algorithm has equivalent detection performance as the iterated static COF algorithm (applied after insertion of each data record), with significant reduction in computational time. The paper provides theoretical and experimental evidence that the number of updates per such insertion/deletion does not depend on the total number of points in the data set, which makes algorithm viable for very large dynamic datasets. Finally, we also illustrate an application of the proposed algorithm on motion detection in video surveillance applications.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116191872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Hardware Relaxation Paradigm for Solving NP-Hard Problems
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.8
W. Cockshott, Andreas Koltes, J. O'Donnell, P. Prosser, W. Vanderbauwhede
Digital circuits with feedback loops can solve some instances of NP-hard problems by relaxation: the circuit will either oscillate or settle down to a stable state that represents a solution to the problem instance. This approach differs from using hardware accelerators to speed up the execution of deterministic algorithms, as it exploits stabilisation properties of circuits with feedback, and it allows a variety of hardware techniques that do not have counterparts in software. A feedback circuit that solves many instances of Boolean satisfiability problems is described, with experimental results from a preliminary simulation using a hardware accelerator.
{"title":"A Hardware Relaxation Paradigm for Solving NP-Hard Problems","authors":"W. Cockshott, Andreas Koltes, J. O'Donnell, P. Prosser, W. Vanderbauwhede","doi":"10.14236/EWIC/VOCS2008.8","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.8","url":null,"abstract":"Digital circuits with feedback loops can solve some instances of NP-hard problems by relaxation: the circuit will either oscillate or settle down to a stable state that represents a solution to the problem instance. This approach differs from using hardware accelerators to speed up the execution of deterministic algorithms, as it exploits stabilisation properties of circuits with feedback, and it allows a variety of hardware techniques that do not have counterparts in software. A feedback circuit that solves many instances of Boolean satisfiability problems is described, with experimental results from a preliminary simulation using a hardware accelerator.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130788918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Beatbox - A Computer Simulation Environment for Computational Biology of the Heart
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.10
Ross McFarlane, I. Biktasheva
Despite over a century's study, the trigger mechanisms of cardiac arrhythmias are poorly understood. Even modern experimental methods do not provide sufficient temporal and spatial resolution to trace the development of fibrillation in samples of cardiac tissue, let alone the heart in vivo. Advances in human genetics provide information on the impact of certain genes on cellular activity, but do not explain the resulting mechanisms by which fibrillation arises. Thus, for some genetic cardiac diseases, the first presenting symptom is death.

Computer simulations of electrical activity in cardiac tissue offer increasingly detailed insight into these phenomena, providing a view of cellular-level activity on the scale of a whole tissue wall. Already, advances in this field have led to developments in our understanding of heart fibrillation and sudden cardiac death, and their impact is expected to increase significantly as we approach the ultimate goal of whole-heart modelling.

Modelling the propagation of the action potential through cardiac tissue is computationally expensive due to the huge number of equations per cell and the vast spatial and temporal scales required. The complexity of the problem ranges from the description of the ionic currents underlying excitation of a single cell, through the inhomogeneity of the tissue, to the complex geometry of the whole heart. The timely running of computational models of cardiac tissue is increasingly dependent on the effective use of High Performance Computing (HPC), i.e. systems with parallel processors. Current state-of-the-art cardiac simulation tools are limited either by the availability of modern, detailed models, or by their hardware portability or ease of use. The miscellany of current model implementations leads many researchers to develop their own ad hoc software, preventing them from both utilising the power of HPC effectively and collaborating fluidly. It is, arguably, impeding scientific progress.

This paper presents a roadmap for the development of Beatbox, a computer simulation environment for computational biology of the heart -- an adaptable and extensible framework with which High Performance Computing may be harnessed by researchers.
{"title":"Beatbox - A Computer Simulation Environment for Computational Biology of the Heart","authors":"Ross McFarlane, I. Biktasheva","doi":"10.14236/EWIC/VOCS2008.10","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.10","url":null,"abstract":"Despite over a century's study, the trigger mechanisms of cardiac arrhythmias are poorly understood. Even modern experimental methods do not provide sufficient temporal and spacial resolution to trace the development of fibrillation in samples of cardiac tissue, not to mention the heart in vivo. Advances in human genetics provide information on the impact of certain genes on cellular activity, but do not explain the resultant mechanisms by which fibrillation arises. Thus, for some genetic cardiac diseases, the first presenting symptom is death. \u0000 \u0000Computer simulations of electrical activity in cardiac tissue offer increasingly detailed insight into these phenomena, providing a view of cellular-level activity on the scale of a whole tissue wall. Already, advances in this field have led to developments in our understanding of heart fibrillation and sudden cardiac death and their impact is expected to increase significantly as we approach the ultimate goal of whole-heart modelling. \u0000 \u0000Modelling the propagation of Action Potential through cardiac tissue is computationally expensive due to the huge number of equations per cell and the vast spacial and temporal scales required. The complexity of the problem encompasses the description of ionic currents underlying excitation of a single cell through the inhomogeneity of the tissue to the complex geometry of the whole heart. The timely running of computational models of cardiac tissue is increasingly dependant on the effective use of High Performance Computing (HPC), i.e. systems with parallel processors. Current state of the art cardiac simulation tools are limited either by the availability of modern, detailed models, or by their hardware portability or ease of use. The miscellany of current model implementations leads many researchers to develop their own ad-hoc software, preventing them from both utilising the power of HPC effectively, and from collaborating fluidly. It is, arguably, impeding scientific progress. \u0000 \u0000This paper presents a roadmap for the development of Beatbox, a computer simulation environment for computational biology of the heart--an adaptable and extensible framework with which High Performance Computing may be harnessed by researchers.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"594 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122938050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Web Engineering Revisited
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.4
Erik Wilde, M. Gaedke
Web Engineering has become one of the core disciplines for building Web-oriented applications. This paper proposes to reposition Web engineering to be more specific to what the Web is, by which we mean not only an interface technology but an information system into which Web-oriented applications have to be embedded. More traditional Web applications are often just user interfaces to data silos, whereas recent years have shown that well-designed Web-oriented applications can essentially start with no data and derive all their value from being open and attracting users on a large scale. We propose "Web Engineering 2.0", which focuses not on how to engineer for the Web, but on how to engineer the Web. Such an approach not only leads to a more disciplined way of engineering the Web; it also allows computer science to better integrate the special properties of the Web, most importantly its loosely coupled nature and the importance of the social systems driving it.
{"title":"Web Engineering Revisited","authors":"Erik Wilde, M. Gaedke","doi":"10.14236/EWIC/VOCS2008.4","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.4","url":null,"abstract":"Web Engineering has become one of the core disciplines for building Web-oriented applications. This paper proposes to reposition Web engineering to be more specific to what the Web is, by which we mean not only an interface technology, but an information system, into which Web-oriented applications have to be embedded. More traditional Web applications often are just user interfaces to data silos, whereas the last years have shown that well-designed Web-oriented applications can essentially start with no data, and derive all their value from being open and attracting users on a large scale. We propose \"Web Engineering 2.0\" to not focus anymore on how to engineer for the Web, but how to engineer the Web. Such an approach to Web engineering not only leads to a more disciplined way of engineering the Web, it also allows computer science to better integrate the special properties of the Web, most importantly the loosely coupled nature of the Web, and the importance of the social systems driving the Web.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131773591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Spontaneous Pain Expression Recognition in Video Sequences
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.17
Z. Hammal, M. Kunz, M. Arguin, F. Gosselin
Automatic recognition of Pain expression has potential medical significance. In this paper we present results of applying an automatic facial expression recognition system to sequences of spontaneous Pain expression. Twenty participants were videotaped while undergoing thermal heat stimulation at non-painful and painful intensities. Pain was induced experimentally using a Peltier-based, computerized thermal stimulator with a 3 × 3 cm² contact probe. Our aim is to automatically recognize the videos in which Pain was induced. We chose a machine learning approach based on the Transferable Belief Model, previously used successfully to categorize the six basic facial expressions in posed datasets [1, 2]. For this paper, we extended this model to the recognition of sequences of spontaneous Pain expression. The originality of the proposed method lies in the use of dynamic information for the recognition of spontaneous Pain expression and in the combination of different sensors: facial feature behaviour, transient features, and the context of the expression study. Experimental results show good classification rates for spontaneous Pain sequences, especially when the contextual information is used. Moreover, the system's behaviour compares favourably with that of the human observer in the other case, which opens promising perspectives for the future development of the proposed system.
{"title":"Spontaneous Pain Expression Recognition in Video Sequences","authors":"Z. Hammal, M. Kunz, M. Arguin, F. Gosselin","doi":"10.14236/EWIC/VOCS2008.17","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.17","url":null,"abstract":"Automatic recognition of Pain expression has potential medical significance. In this paper we present results of the application of an automatic facial expression recognition system on sequences of spontaneous Pain expression. Twenty participants were videotaped while undergoing thermal heat stimulation at nonpainful and painful intensities. Pain was induced experimentally by use of a Peltierbased, computerized thermal stimulator with a 3 × 3 cm2 contact probe. Our aim is to automatically recognize the videos where Pain was induced. We chose a machine learning approach, previously used successfully to categorize the six basic facial expressions in posed datasets [1, 2] based on the Transferable Belief Model. For this paper, we extended this model to the recognition of sequences of spontaneous Pain expression. The originality of the proposed method is the use of the dynamic information for the recognition of spontaneous Pain expression and the combination of different sensors: facial features behavior, transient features and the context of the expression study. Experimental results show good classification rates for spontaneous Pain sequences especially when we use the contextual information. Moreover the system behaviour compares favourably to the human observer in the other case, which opens promising perspectives for the future development of the proposed system.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117307352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Provenance-Based Auditing of Private Data Use
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.13
Rocío Aldeco-Pérez, L. Moreau
Across the world, organizations are required to comply with regulatory frameworks dictating how to manage personal information. Despite this, several cases of data leaks and exposure of private data to unauthorized recipients have been publicly and widely reported. For authorities and system administrators to check compliance with regulations, auditing of private data processing becomes crucial in IT systems. Finding the origin of some data, determining how some data is being used, and checking that the processing of some data is compatible with the purpose for which the data was captured are typical functions that an auditing capability should support, but they are difficult to implement in a reusable manner. Such questions are so-called provenance questions, where provenance is defined as the process that led to some data being produced. The aim of this paper is to articulate how data provenance can be used as the underpinning approach of an auditing capability in IT systems. We present a case study based on requirements of the Data Protection Act and an application that audits the processing of private data, which we apply to an example manipulating private data in a university.
{"title":"Provenance-Based Auditing of Private Data Use","authors":"Rocío Aldeco-Pérez, L. Moreau","doi":"10.14236/EWIC/VOCS2008.13","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.13","url":null,"abstract":"Across the world, organizations are required to comply with regulatory frameworks dictating how to manage personal information. Despite these, several cases of data leaks and exposition of private data to unauthorized recipients have been publicly and widely advertised. For authorities and system administrators to check compliance to regulations, auditing of private data processing becomes crucial in IT systems. Finding the origin of some data, determining how some data is being used, checking that the processing of some data is compatible with the purpose for which the data was captured are typical functionality that an auditing capability should support, but difficult to implement in a reusable manner. Such questions are so-called provenance questions, where provenance is defined as the process that led to some data being produced. The aim of this paper is to articulate how data provenance can be used as the underpinning approach of an auditing capability in IT systems. We present a case study based on requirements of the Data Protection Act and an application that audits the processing of private data, which we apply to an example manipulating private data in a university.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122811991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Contexts for Human Action
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.5
G. White
We argue that the mathematics developed for the semantics of computer languages can be fruitfully applied to problems in human communication and action.
{"title":"Contexts for Human Action","authors":"G. White","doi":"10.14236/EWIC/VOCS2008.5","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.5","url":null,"abstract":"We argue that the mathematics developed for the semantics of computer languages can be fruitfully applied to problems in human communication and action.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122927721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Two-Tier, Location-Aware and Highly Resilient Key Predistribution Scheme for Wireless Sensor Networks
Pub Date: 2008-09-22    DOI: 10.14236/EWIC/VOCS2008.30
A. Ünlü, A. Levi
We propose a probabilistic key predistribution scheme for wireless sensor networks, in which keying materials are distributed to sensor nodes for secure communication. We use a two-tier approach with two types of nodes: regular nodes and agent nodes. Agent nodes are more capable than regular nodes. Our node deployment model is zone-based: nodes that are likely to end up close to each other on the ground are grouped together. The keying material of nodes that belong to different zones is non-overlapping. However, it is still possible for nodes that belong to different zones to communicate with each other via agent nodes when needed. We give a comparative analysis of our scheme through simulations and show that it provides good connectivity figures at reasonable communication cost. Most importantly, simulation results show that our scheme is highly resilient to node capture.
{"title":"Two-Tier, Location-Aware and Highly Resilient Key Predistribution Scheme for Wireless Sensor Networks","authors":"A. Ünlü, A. Levi","doi":"10.14236/EWIC/VOCS2008.30","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.30","url":null,"abstract":"We propose a probabilistic key predistribution scheme for wireless sensor networks, where keying materials are distributed to sensor nodes for secure communication. We use a two-tier approach in which there are two types of nodes: regular nodes and agent nodes. Agent nodes are more capable than regular nodes. Our node deployment model is zone-based such that the nodes that may end up with closer positions on ground are grouped together. The keying material of nodes that belong to different zones is non-overlapping. However, it is still possible for nodes that belong to different zones to communicate with each other via agent nodes when needed. We give a comparative analysis of our scheme through simulations and show that our scheme provides good connectivity figures at reasonable communication cost. Most importantly, simulation results show that our scheme is highly resilient to node captures.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"87 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131487288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}