D. Tominaga, Fukumi Iguchi, K. Horimoto, Yutaka Akiyama
In many biological observations, such as cell arrays, DNA microarrays, or tissue microscopy, the primary data are obtained as photographs. Because these photographs vary widely, specialized processing methods are needed for each kind, and modern high-throughput observations often require automated systems. We developed a fully automated image processing system for cell arrays, a high-throughput time-series observation system for living cells, to evaluate the gene expression level and phenotypic changes of each cell over time.
"High-throughput Automated Image Processing System for Cell Array Observations." IPSJ Digital Courier, 2007-11-15. doi:10.2197/IPSJDC.3.728
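The abstract does not detail the pipeline itself; as an illustration of the kind of per-cell processing such a system performs, here is a minimal flood-fill segmentation sketch in Python. The fixed threshold, 4-connectivity, and list-of-lists image format are illustrative assumptions, not the authors' method:

```python
from collections import deque

def segment_cells(image, threshold):
    """Label connected bright regions ("cells") in a grayscale image.

    A minimal flood-fill segmentation sketch: pixels at or above the
    threshold that touch (4-connectivity) get the same integer label.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] >= threshold and labels[sy][sx] == 0:
                next_label += 1
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                while q:  # breadth-first flood fill of one region
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```

A real cell-array system would follow this with per-region measurements (area, intensity) per time frame; the sketch shows only the labeling step.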
Immersive virtual reality (VR) has long been considered an excellent environment in which to manipulate 3D virtual objects. However, currently used immersive VR user interfaces have limitations. For example, while direct manipulation by hand is easy to understand and to use for approximate positioning, it is not suitable for making fine adjustments to virtual objects in an immersive environment because it is difficult to hold an unsupported hand in midair and then release an object at a fixed point. We therefore propose a method that combines direct 3D manipulation by hand with a virtual 3D gearbox widget that we recently designed. With this method, hand manipulation is used first to move virtual objects into an approximate position, and the widget is then used to move them into a precise position. An experimental evaluation showed that this combination of direct manipulation by hand and the proposed gearbox is the best of five tested methods in terms of task completion ratio and subjective preference.
"A Study on Approximate and Fine Adjustments by Hand Motion in an Immersive Environment" — Noritaka Osawa, Xiangshi Ren. IPSJ Digital Courier, 2007-11-15. doi:10.2197/IPSJDC.3.719
The matching of a bipartite graph is a structure that appears in various assignment problems and has long been studied. The semi-matching is an extension of the matching for a bipartite graph G = (U ∪ V, E). It is defined as a set of edges, M ⊆ E, such that each vertex in U is an endpoint of exactly one edge in M. The load-balancing problem is the problem of finding a semi-matching in which the degrees of the vertices in V are balanced. This problem has been studied in the context of task scheduling, to find a "balanced" assignment of tasks to machines, and an O(|E||U|) time algorithm has been proposed. In some practical problems, however, a balanced assignment alone is not sufficient, e.g., the assignment of wireless stations (users) to access points (APs) in wireless networks. In wireless networks, the quality of a transmission depends on the distance between a user and its AP; shorter distances are more desirable. In this paper, we formulate the min-weight load-balancing problem of finding a balanced semi-matching that minimizes the total weight of a weighted bipartite graph. We then give an optimality condition for weighted semi-matchings and propose an O(|E||U||V|) time algorithm.
"Optimal Balanced Semi-Matchings for Weighted Bipartite Graphs" — Y. Harada, H. Ono, K. Sadakane, M. Yamashita. IPSJ Digital Courier, 2007-10-15. doi:10.2197/IPSJDC.3.693
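For intuition, the semi-matching constraint above (each u ∈ U matched to exactly one v ∈ V) can be illustrated with a simple greedy pass that prefers lightly loaded, low-weight partners. This is only a heuristic sketch, not the paper's optimal algorithm:

```python
def greedy_semi_matching(U, weight):
    """Greedy semi-matching sketch for a weighted bipartite graph.

    weight[u] maps each neighbor v of u to the weight of edge (u, v).
    Every u gets exactly one partner v; we pick the currently
    least-loaded neighbor, breaking ties by smaller edge weight.
    """
    load = {}      # current degree of each v in the semi-matching
    matching = {}  # u -> chosen v
    for u in U:
        v = min(weight[u], key=lambda w: (load.get(w, 0), weight[u][w]))
        matching[u] = v
        load[v] = load.get(v, 0) + 1
    return matching, load
```

Unlike this one-pass heuristic, the algorithm in the paper guarantees an optimal balanced semi-matching of minimum total weight.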
In this paper, the architecture of a software environment for offloading user-defined software modules to the Maestro2 cluster network, named the Maestro dynamic offloading mechanism (MDO), is described. Maestro2 is a high-performance network for clusters. The network interface and the switch of Maestro2 each have a general-purpose processor tightly coupled with dedicated communication hardware. MDO enables users to offload software modules to both the network interface and the switch. MDO includes a wrapper library with which offload modules can be executed on a host machine without rewriting the program. The overhead and the effectiveness of MDO are evaluated by offloading collective communications.
"Architecture and Performance of Dynamic Offloading Mechanism for Maestro2 Cluster Network" — K. Aoki, K. Wada, Hiroki Maruoka, M. Ono. IPSJ Digital Courier, 2007-10-15. doi:10.2197/IPSJDC.3.683
This work proposes a method to control the dominance area of solutions in order to induce an appropriate ranking of solutions for the problem at hand, enhance selection, and improve the performance of MOEAs on combinatorial optimization problems. The proposed method can control the degree of expansion or contraction of the dominance area of solutions using a user-defined parameter S. Modifying the dominance area of solutions changes their dominance relation, inducing a ranking of solutions that differs from conventional dominance. In this work we use multiobjective 0/1 knapsack problems to analyze the effects on solution ranking caused by contracting and expanding the dominance area of solutions, and its impact on the search performance of a multiobjective optimizer when the number of objectives, the size of the search space, and the feasibility of the problems vary. We show that either convergence or diversity can be emphasized by contracting or expanding the dominance area. We also show that the optimal value of the dominance area depends strongly on all factors analyzed here: the number of objectives, the size of the search space, and the feasibility of the problems.
"Controlling Dominance Area of Solutions in Multiobjective Evolutionary Algorithms and Performance Analysis on Multiobjective 0/1 Knapsack Problems" — Hiroyuki Sato, H. Aguirre, Kiyoshi Tanaka. IPSJ Digital Courier, 2007-10-15. doi:10.2197/IPSJDC.3.703
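The parameter S in the abstract modifies each objective value before the dominance comparison. A sketch of one such contraction/expansion transform, as we understand this line of work (S = 0.5 should reproduce conventional Pareto dominance; treat the exact formula as an assumption rather than the paper's definitive method):

```python
import math

def modified_fitness(f, S=0.5):
    """Contract or expand the dominance area of a solution.

    f : positive objective values (maximization), S : user parameter.
    Each objective f_i = r*cos(omega_i) is replaced by
    r*sin(omega_i + S*pi)/sin(S*pi); S = 0.5 leaves f unchanged,
    S < 0.5 expands the dominance area, S > 0.5 contracts it.
    """
    r = math.sqrt(sum(v * v for v in f))  # norm of the objective vector
    out = []
    for v in f:
        omega = math.acos(v / r)          # angle to the i-th axis
        out.append(r * math.sin(omega + S * math.pi) / math.sin(S * math.pi))
    return out

def dominates(a, b, S=0.5):
    """Pareto dominance (maximization) on the modified objectives."""
    fa, fb = modified_fitness(a, S), modified_fitness(b, S)
    return (all(x >= y for x, y in zip(fa, fb))
            and any(x > y for x, y in zip(fa, fb)))
```

Ranking a population with `dominates(·, ·, S)` instead of plain dominance is what lets S trade off convergence pressure against diversity.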
T. Takenaka, H. Mineno, Yuichi Tokunaga, N. Miyauchi, T. Mizuno
Node localization, the estimation of node positions, is an essential technique for wireless multi-hop networks. In this paper, we present an optimized link state routing (OLSR)-based localization (ROULA) that satisfies the following key design requirements: (i) independence from anchor nodes, (ii) robustness for non-convex network topologies, and (iii) compatibility with the network protocol. ROULA is independent of anchor nodes and can obtain correct node positions in non-convex network topologies. In addition, ROULA is compatible with the OLSR network protocol and uses the inherent distance characteristic of multipoint relay (MPR) nodes. We reveal the characteristics of the MPR selection and the farthest 2-hop node selection used in ROULA, and describe how these node selections contribute to reducing the distance error of a localization scheme that does not use ranging devices. We used simulation to determine an appropriate MPR_COVERAGE, a parameter defined to control the number of MPR nodes in OLSR, and give a comparative performance evaluation of ROULA for various scenarios, including non-convex network topologies and various deployment radii of anchor nodes. Our evaluation shows that ROULA achieves desirable performance in various network scenarios.
"Performance Analysis of Optimized Link State Routing-based Localization." IPSJ Digital Courier, 2007-09-15. doi:10.2197/IPSJDC.3.541
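ROULA's MPR-based estimation is beyond an abstract-sized sketch, but the basic idea of range-free localization (inferring a position from neighbors' known positions without any distance measurement) can be illustrated by a simple centroid estimate. This heuristic is a generic stand-in, not the paper's algorithm:

```python
def centroid_estimate(neighbor_positions):
    """Range-free position estimate: centroid of known neighbor positions.

    neighbor_positions is a list of (x, y) tuples for neighbors whose
    positions are already known; no ranging hardware is involved.
    """
    n = len(neighbor_positions)
    x = sum(p[0] for p in neighbor_positions) / n
    y = sum(p[1] for p in neighbor_positions) / n
    return (x, y)
```

Schemes like ROULA improve on such crude estimates by exploiting which neighbors are selected (MPR nodes, farthest 2-hop nodes) as an implicit distance signal.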
Searching a spatial database for 3D objects that are similar to a given object is an important task that arises in a number of database applications, for example in the medical and CAD fields. Most existing similarity search methods are based on global features of 3D objects; developing a feature set or feature vector for 3D objects from their partial features is challenging. In this paper, we propose a novel segment weight vector for matching 3D objects rapidly. We also describe a solution, based on partial and geometrical similarity, to the problem of searching for similar 3D objects. As the first step, we split each 3D object into parts according to its topology. Next, we introduce a new method to extract the thickness feature of each part of every 3D object to generate its feature vector, and a novel search algorithm using the new feature vector. Finally, we present a novel solution for improving the accuracy of similarity queries. We also present a performance evaluation of our strategy. The experimental results and discussion indicate that the proposed approach offers a significant performance improvement over the existing approach. Since the proposed method is based on partial features, it is particularly suited to searching for objects having distinct part structures and is invariant to part architecture.
"Using a Partial Geometric Feature for Similarity Search of 3D Objects" — Yingliang Lu, K. Kaneko, A. Makinouchi. IPSJ Digital Courier, 2007-09-15. doi:10.2197/IPSJDC.3.674
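The segment weight vector above encodes per-part thickness features; once objects are reduced to such vectors, the final matching step can be sketched generically with cosine similarity. The vector contents and ranking policy here are illustrative assumptions, not the paper's specific algorithm:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_by_similarity(query, database):
    """Return database keys ordered from most to least similar to query."""
    return sorted(database,
                  key=lambda k: cosine_similarity(query, database[k]),
                  reverse=True)
```

Because each vector component describes one part of the object, two objects sharing distinctive parts score highly even when their overall shapes differ, which is the appeal of partial-feature matching.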
H. Ota, Kazuki Yoneyama, S. Kiyomoto, Toshiaki Tanaka, K. Ohta
In large-scale networks, users want to be able to communicate securely with each other over an unreliable channel. When the existing 2- and 3-party protocols are used in this situation, several problems arise: a client must hold many passwords, and the load on the server for password management is heavy. In this paper, we define a new ideal client-to-client general authenticated key exchange functionality, in which arbitrary 2-party key exchange protocols are applicable between the client and server and between servers. We also propose a client-to-client general authenticated key exchange protocol, C2C-GAKE, as a general form of the client-to-client model, and a client-to-client hybrid authenticated key exchange protocol, C2C-HAKE, as an example protocol of C2C-GAKE that solves the above problems. In C2C-HAKE, a server shares passwords only with the clients in its own realm, public/private keys are used between servers, and two clients in different realms share a final session key via their respective servers. Thus, the password-management load in C2C-HAKE can be distributed over several servers. In addition, we prove that C2C-HAKE securely realizes the above functionality. C2C-HAKE is the first client-to-client hybrid authenticated key exchange protocol that is secure in a universally composable framework with a security-preserving composition property.
"Universally Composable Client-to-Client General Authenticated Key Exchange." IPSJ Digital Courier, 2007-09-15. doi:10.2197/IPSJDC.3.555
This paper presents a series of empirical analyses of information-security investment based on a reliable survey of Japanese enterprises. To begin with, after presenting our methodology for representing the vulnerability level with respect to the threat of computer viruses, we verify the relation between the vulnerability level and the effects of information-security investment. Although the first section provides only weak empirical support for the investment model, it shows that the representation methodology is worth applying in empirical analyses in this research field. In the second section, we verify the relations between the probability of computer-virus incidents and the adoption of a set of information-security countermeasures. We show that a "Defense Measure" associated with an "Information Security Policy" and "Human Cultivation" has a remarkable effect on virus incidents. In the last step, we analyze the effect of continuous investment in the three security countermeasures. The empirical results suggest that virus incidents were significantly reduced in those enterprises that adopted the three countermeasures in both 2002 and 2003.
"Empirical-Analysis Methodology for Information-Security Investment and Its Application to Reliable Survey of Japanese Firms" — Wei Liu, Hideyuki Tanaka, Kanta Matsuura. IPSJ Digital Courier, 2007-09-15. doi:10.2197/IPSJDC.3.585
ID-based encryption (IBE) is a public key cryptosystem in which a user's public key is given by the user's ID. In IBE, a single center generates all user secret keys, which may impose a burdensome workload on the center. Hierarchical ID-based encryption (HIBE) is a kind of IBE that overcomes this problem by delegating user-secret-key generation to lower-level centers, which form a hierarchical structure. However, all ancestor nodes in HIBE act as centers. That is, any ancestor, as well as the root, can generate a secret key for any descendant node; thus, a ciphertext sent to a node can be decrypted by any ancestor node, even if the ancestor does not hold the same secret key as the target node. In this paper, we propose the concept of ancestor-excludable HIBE, denoted AE-HIBE, in which ancestors below a designated level can be excluded from the set of privileged ancestors with the right to decrypt a ciphertext sent to a target node. We also give the functional definition together with the security definition. We present a concrete example of AE-HIBE that works with constant-size ciphertext and decryption time, independent of the hierarchy level. We prove that our AE-HIBE is selective-ID-CPA secure in the standard model, and it can be converted to be selective-ID-CCA secure by applying a general conversion method. Furthermore, AE-HIBE can be naturally applied to broadcast encryption to realize an efficient public-key version with a user-key size of O(log² N) and a transmission rate of O(r) for N users and r revoked users. This user-key size is the smallest to date for a transmission rate of O(r).
"Ancestor Excludable Hierarchical ID-based Encryption and Its Application to Broadcast Encryption" — A. Miyaji. IPSJ Digital Courier, 2007-09-15. doi:10.2197/IPSJDC.3.610