To reduce receiver cost and complexity without losing much diversity gain, we propose an antenna selection scheme for MIMO systems employing non-constant envelope mapping. By detecting the received signal and applying a simplified process that requires no channel state information, the scheme approaches the ideal selection strategy. Both a single-carrier system and an STC-OFDM (space-time-coded orthogonal frequency division multiplexing) system with the Alamouti STBC (space-time block code) are considered. Theoretical analysis and simulation results indicate that the proposed scheme provides significant diversity gain with fairly low complexity.
{"title":"Antenna Selection Scheme for Application in MIMO Systems with Non-constant Envelope Mapping","authors":"Shuyun Jia, Jian Wang, Jintao Wang, B. Ai","doi":"10.1109/CSE.2010.41","DOIUrl":"https://doi.org/10.1109/CSE.2010.41","url":null,"abstract":"To reduce the cost and complexity of receiver without losing much diversity gain, we propose an antenna selection scheme for MIMO systems employing non-constant envelope mapping. By detection of the received signal and a simplified process without any knowledge of channel statement information, this scheme can approach the ideal selection strategy. Single carrier system and STC-OFDM (space-time-code orthogonal frequency division multiplexing) system with Alamouti STBC (space time block code) are both considered. Theoretical analysis and simulation results indicate that the proposed scheme provides significant diversity gain with fairly low complexity.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128010348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RAID reconstruction performance has a significant impact on the availability of RAID-structured storage systems because of the high disk failure rate. Most existing cache management schemes for RAID-structured storage systems focus on improving performance or energy efficiency and do not attempt to improve RAID availability by speeding up the reconstruction process. In this paper, we propose a novel and practical Availability-aware Cache Management scheme that makes the reconstruction process more sequential on the physical disks and reduces the extra reconstruction I/O requests. We implement a prototype of Availability-aware Cache Management, called Shaper, to verify its effect on RAID reconstruction. Shaper does not affect I/O performance in normal mode, yet has the potential to significantly reduce both the RAID reconstruction time and the average user response time during reconstruction.
{"title":"Availability-Aware Cache Management with Improved RAID Reconstruction Performance","authors":"Suzhen Wu, Bo Mao, D. Feng, Jianxi Chen","doi":"10.1109/CSE.2010.38","DOIUrl":"https://doi.org/10.1109/CSE.2010.38","url":null,"abstract":"The RAID reconstruction performance has a significant impact on the availability of RAID-structured storage systems due to the high disk failure rate. Most existing cache managements for RAID-structured storage systems focus on improving the performance or the energy efficiency, while they do not intent to improve the RAID availability by boosting the RAID reconstruction process. In this paper, we propose a novel and practical Availability-aware Cache Management to make the reconstruction process more sequential in the physical disks and reduce the extra reconstruction I/O requests. We implement a prototype of Availability-aware Cache Management, called Shaper, to verify its effectiveness on the RAID reconstruction. Shaper does not affect the I/O performance in the normal mode, but has potential to significantly reduce the RAID reconstruction time and the average user response time during reconstruction simultaneously.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115285227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Artificial Fish-swarm Algorithm (AFA) is an intelligent population-based optimization algorithm inspired by the behaviors of fish swarms. Unfortunately, it sometimes fails to maintain an appropriate balance between exploration and exploitation, and it suffers from blind search. In this paper, a novel cultured AFA with a crossover operator, named CAFAC, is proposed to enhance its optimization performance. The crossover operator is used to promote the diversification of the artificial fish and let offspring inherit their parents' characteristics. The Cultural Algorithm (CA) is also combined with the AFA so that blind search can be mitigated. A total of 10 high-dimensional, multi-peak functions are employed to investigate the optimization properties of our CAFAC. Numerical simulation results demonstrate that the proposed CAFAC can indeed outperform the original AFA.
{"title":"A Knowledge-Based Artificial Fish-Swarm Algorithm","authors":"X. Gao, Ying Wu, K. Zenger, Xianlin Huang","doi":"10.1109/CSE.2010.49","DOIUrl":"https://doi.org/10.1109/CSE.2010.49","url":null,"abstract":"The Artificial Fish-swarm Algorithm (AFA) is an intelligent population-based optimization algorithm inspired by the behaviors of fish swarm. Unfortunately, it sometimes fails to maintain an appropriate balance between exploration and exploitation, and has a drawback of blind search. In this paper, a novel cultured AFA with the crossover operator, namely CAFAC, is proposed to enhance its optimization performance. The crossover operator utilized is to promote the diversification of the artificial fish and make them inherit their parents’ characteristics. The Culture Algorithms (CA) is also combined with the AFA so that the blind search can be combated with. A total of 10 high-dimension and multi-peak functions are employed to investigate the optimization property of our CAFAC. Numerical simulation results demonstrate that the proposed CAFAC can indeed outperform the original AFA.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"09 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127197742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attaching a reconfigurable loop accelerator to a processor is a promising way to improve system performance and efficiency, and the benefit can be further enhanced by unrolling the loop to expose more parallelism. The more a loop is unrolled, the more reconfigurable area is exposed. However, the utilization of a loop accelerator depends strongly on the input, and in some situations over-unrolling the loop simply wastes area. Focusing on the balance between area and performance, this paper proposes a dynamically adaptive reconfigurable accelerator framework for the processor/RL architecture, in which reconfiguration of the accelerator is driven by the input. An accelerator selection model is presented for choosing an accelerator at run time among those built for predefined input patterns. A detailed bzip2 case study and experimental results demonstrate the feasibility of the approach, showing that up to 69.21% of the reconfigurable area is saved at a cost of a 2.63% performance slowdown in the best case.
{"title":"Input-Driven Reconfiguration for Area and Performance Adaption of Reconfigurable Accelerators","authors":"Like Yan, Y. Wen, Tianzhou Chen","doi":"10.1109/CSE.2010.64","DOIUrl":"https://doi.org/10.1109/CSE.2010.64","url":null,"abstract":"Attaching a reconfigurable loop accelerator to a processor for improving the performance and the efficiency of the system, which can be further enhanced by unrolling the loop to change its parallelism in a better way, is a promising development. The more a loop is unrolled, the wider the reconfigurable area that is exposed. However, the utilization of a loop accelerator is highly linked with the input. Also, in some situations, one will be wasting area to overunroll the loop. With a focus on the area and the performance balance, this paper proposes a dynamically adaptive reconfigurable accelerator framework for the processor/RL architecture. In the framework, reconfiguration of the accelerator is driven by the input. An accelerator selection model is presented for selecting an accelerator at run time among the predefined input patterns. Also, with the help of a detailed illustration of a bzip2 case study, experimental results were provided for the feasibility of the approach, which showed that up to 69.21% reconfigurable area is saved at a cost of 2.63% performance slowdown in the best case.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127499642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The IETF Differentiated Services (DiffServ) architecture allows for establishing modern large-scale networks that guarantee quality of service. To realize the multiple levels of packet drop precedence required by the Assured Forwarding (AF) framework of DiffServ, a multi-level RED algorithm is needed. RIO (RED with In/Out) is suitable for the AF scheme, and two major RIO variants, RIO-D and RIO-C, have been proposed and widely used. Both use the average lengths of virtual queues to determine the multiple levels of drop precedence, but they differ in the coupling strength between virtual queues: RIO-C uses full coupling, whereas RIO-D uses zero coupling. In this paper, a novel algorithm called RIO-FEC (RIO based on Fractional Exponent Coupling for determining the coupling strength) is proposed; it achieves partial coupling with a controllable coupling power when calculating each average virtual queue length. The effects of the fractional-exponent-based coupling on the drop rates and throughputs of the color-labeled virtual queues are analyzed, and a fractional power of 1/3 is found to be optimal. The queue weight factor used to calculate the EWMAs (Exponentially Weighted Moving Averages) of the virtual queue lengths, which in turn affects the drop probability, is also presented. The results show that the proposed algorithm achieves a total drop rate and a total throughput as good as RIO-D. Furthermore, RIO-FEC outperforms RIO-C and RIO-D in preventing the lowest-priority virtual queue from bandwidth starvation, and it can effectively adjust the coupling strength to gain the desired control over priority allocation and fairness.
{"title":"Fractional Exponent Coupling of RIO","authors":"Wen-Ping Lai, Zhen Liu","doi":"10.1109/CSE.2010.35","DOIUrl":"https://doi.org/10.1109/CSE.2010.35","url":null,"abstract":"The IETF Differentiated Service (DiffServ) architecture can allow for establishing a modern large scale network which guarantees the quality of service. In order to realize the multiple levels of packet drop precedence required for the Assured Forwarding (AF) framework of DiffServ, a multi-level RED algorithm is needed. RIO (RED with In/Out) is suitable for the AF scheme, and two major RIO variants such as RIO-D and RIO-C have been proposed and widely used. Both use the average lengths of virtual queues to determine the multiple levels of drop precedence and have different coupling strengths between virtual queues. The key difference between RIO-C and RIO-D lies with the fact that RIO-C takes full coupling and RIO-D has zero coupling. In this paper, a novel algorithm called RIO-FEC (RIO based on Fractional Exponent Coupling for determining the coupling strength) is proposed, and can achieve partial coupling with a controllable coupling power for calculating each average virtual queue length. The effects of the fractional exponent based coupling on the drop rates and throughputs of color-labeled virtual queues are analyzed, and a fractional power 1/3 is found to be optimal. The queue weight factor for calculating the EWMAs (Exponential Weighted Moving Averages) of virtual queue lengths, thus affecting the drop probability, is also presented. The results show that the proposed algorithm can achieve a total drop rate and a total throughput as good as RIO-D. Furthermore, RIO-FEC outperforms RIO-C and RIO-D in terms of preventing the lowest-priority virtual queue from bandwidth starvation, and can effectively adjudge the coupling strengths to gain the desired control on the allocation of priority and fairness.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"2018 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114699286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Service-oriented architecture (SOA) promotes highly standardized, loosely coupled and Web-enabled services to foster the rapid, low-cost and easy composition of distributed applications. Critical to SOA software development is acquiring users' requirements and providing service analysis and design techniques for identifying, conceptualizing, profiling, and rationalising service-enabled applications. Semantic-enabled requirements engineering (SRE) for developing service-oriented applications helps to transform disordered user needs into ordered requirement specifications. It facilitates requirements acquisition with the participation of multiple stakeholders. It further enhances semantically guided service aggregation and eventually provides customized service production according to the process semantics of the requirements. This paper concentrates on user-centric service customization, with a paradigm shift from classical provider-centric service provision to user-centric active service production. ATOM/RSS subscription and push technology is adopted to actively push users' needs to service providers. A semantic description of service requirements is proposed, characterized by the identification of processes (workflows) and services. A customized service production platform is designed to demonstrate its practicability and effectiveness.
{"title":"Process Semantic-Enabled Customisation for Active Service Provisioning","authors":"Bin Wen, K. He, Peng Liang, Lai Xu","doi":"10.1109/CSE.2010.56","DOIUrl":"https://doi.org/10.1109/CSE.2010.56","url":null,"abstract":"Service oriented architecture (SOA) promotes highly standardized, loosely coupled and Web-enabled services to foster rapid, low-cost and easy composition of distribute applications. Critical to SOA software development is to acquire users¡¯ requirements and to provide service analysis and design techniques for identifying, conceptualizing, profiling, and rationalising service-enable applications. Semantic-enabled requirements engineering (SRE) for developing service-oriented applications helps to transform disordered users¡¯ needs into ordered requirement specifications. It facilitates requirements acquisition with multiple stakeholders¡¯ participation. It further enhances semantic-conducted services aggregation and eventually provides customized service productions according to process semantics of requirements. This paper concentrates on user-centric service customization with a paradigm shift from classical provider-centric service provision to user-centric active service production. Subscription and pushing technology of ATOM/RSS is adopted to actively push users¡¯ needs to service providers. Semantic description of service requirements is proposed, and characterizes in identifying process (workflow) and service. A customized service production platform is designed to demonstrate its practicability and effectiveness.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116839300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a novel pipelined architecture for the competitive learning (CL) algorithm with k-winners-take-all activation. The architecture employs a codeword swapping scheme so that neurons failing the competition for a training vector are immediately available for the competitions for subsequent training vectors. An efficient pipeline architecture is then designed on top of the codeword swapping scheme to enhance throughput. The CPU time of a NIOS processor executing the CL training with the proposed architecture as an accelerator is measured. Experimental results show that this CPU time is lower than that of other hardware or software implementations running the CL training program with or without the support of custom hardware.
{"title":"Hardware Implementation of k-Winner-Take-All Neural Network with On-chip Learning","authors":"Hui-Ya Li, C. Ou, Yi-Tsan Hung, Wen-Jyi Hwang, Chia-Lung Hung","doi":"10.1109/CSE.2010.51","DOIUrl":"https://doi.org/10.1109/CSE.2010.51","url":null,"abstract":"This paper presents a novel pipelined architecture of the competitive learning (CL) algorithm with k-winners-take-all activation. The architecture employs a codeword swapping scheme so that neurons failing the competition for a training vector are immediately available for the competitions for the subsequent training vectors. An efficient pipeline architecture is then designed based on the codeword swapping scheme for enhancing the throughput. The CPU time of the NIOS processor executing the CL training with the proposed architecture as an accelerator is measured. Experiment results show that the CPU time is lower than that of other hardware or software implementations running the CL training program with or without the support of custom hardware.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117264485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
During the computer-aided physical design cycle, and specifically for high-performance VLSI circuits, on-chip power density plays a major role. The driving factors are the increased scaling of technology, the growing number of components, and higher frequencies and bandwidths. The consumed power is usually converted into dissipated heat, affecting the performance and reliability of a chip. Moreover, recent trends in VLSI design entail the stacking of multiple active layers into a monolithic chip, and these 3D chips have significantly larger power densities than their 2D counterparts. In this paper, we consider the placement of standard cells and gate arrays (modules) under thermal considerations. Our contribution is a novel algorithm for placing the gates or cells in the different active layers of a 3D IC such that: (i) the temperatures of the modules in each active layer are uniformly distributed, (ii) the maximum temperature of each active layer is not too high, and (iii) the maximum temperatures of the layers vary in a non-increasing manner from the bottom layer to the top layer, ensuring efficient heat dissipation for the whole chip. Experimental results on randomly generated instances and on standard MCNC and ISPD benchmarks are quite encouraging.
{"title":"Minimizing Thermal Disparities during Placement in 3D ICs","authors":"P. Ghosal, H. Rahaman, P. Dasgupta","doi":"10.1109/CSE.2010.28","DOIUrl":"https://doi.org/10.1109/CSE.2010.28","url":null,"abstract":"During the Computer Aided Physical Design Cycle and specifically for high-performance VLSI circuits, on-chip power density plays a major role. The catalyst factors are increased scaling of technology, increasing number of components, higher frequency and bandwidth. The consumed power is usually converted into dissipated heat, affecting the performance and reliability of a chip. Moreover, recent trends in VLSI design entails the stacking of multiple active layers into a monolithic chip. These 3D chips have significantly larger power densities than their 2D counterparts. In this paper, we consider the placement of standard cells and gate arrays (modules) under thermal considerations. Our contribution includes a novel algorithm for placement of the gates or cells in the different active layers of a 3D IC such that: (i) the temperatures of the modules in each of the active layers is uniformly distributed, (ii) the maximum temperatures of each of the active layers is not too high, (iii) the maximum temperatures of the layers vary in a non-increasing manner from bottom layer to top layer to ensure an efficient heat dissipation of the whole chip. Experimental results on randomly generated and standard MCNC and ISPD benchmark instances are quite encouraging.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"307 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122697563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P2P e-commerce systems are commonly perceived as environments offering both opportunities and threats: the presence of malicious peers can damage their correctness and availability. One way to minimize such threats is to use a trust model to evaluate the trustworthiness of peers. This paper presents a dynamic, self-adaptive trust model for P2P e-commerce systems, the AdaptTrust model, which involves two kinds of trust, namely direct trust and global reputation. The direct trust from one peer to another is calculated from the transaction feedback between them. The global reputation of a peer is calculated from the trustworthiness ratings given by rating peers to the reference peer. Finally, a series of experiments verifies the effectiveness and attack resistance of the proposed trust model.
{"title":"A Dynamic Self-Adaptive Trust Model for P2P E-Commerce System","authors":"Jie Wang, Miaomiao Li, Yang Yu, Zhenguang Huang","doi":"10.1109/CSE.2010.73","DOIUrl":"https://doi.org/10.1109/CSE.2010.73","url":null,"abstract":"P2P e-commerce systems are commonly perceived as an environment offering both opportunities and threats. The existence of malicious peers could damage the correctness and availability of them. One way to minimize such threats is to use trust model to evaluate the trustworthiness of peers. This paper presents a dynamic self-adaptive trust model for p2p e-commerce system—AdaptTrust model, which involves in two kinds of trust, namely direct trust and global reputation. The direct trust from a peer to another peer is calculated based on transaction feedbacks between them. The global reputation of a peer is calculated based on trustworthiness ratings from rating peers to the reference peer. Finally, a series of experiments have been done to verify the effectiveness and resistibility of the proposed trust model.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134011110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We formulate the materials budget allocation problem for academic libraries as a mathematical programming model and design an effective algorithm based on discrete particle swarm optimization (DPSO) to solve it. The objective is to maximize the average preference of the selected materials, subject to constraints on material costs and on the required amounts in specified categories. For comparison, CPLEX, a linear programming software package, and a greedy algorithm are applied to obtain optimal or approximate solutions. The computational results demonstrate the effectiveness and robustness of the proposed DPSO algorithm in dealing with the materials budget allocation problem.
{"title":"Discrete Particle Swarm Optimization for Materials Budget Allocation in Academic Libraries","authors":"Tsu-Feng Ho, S. Shyu, Yi-Ling Wu, B. Lin","doi":"10.1109/CSE.2010.33","DOIUrl":"https://doi.org/10.1109/CSE.2010.33","url":null,"abstract":"We formulate the problem of materials budget allocation for academic libraries by way of the mathematical programming model and design an effective algorithm using discrete particle swarm optimization to resolve the problem. The objective function is to maximize the average preferences of materials selection subjected to the constraints of material costs and required amounts in specified categories. For the comparison purpose, CPLEX, a linear programming software package, and a greedy algorithm are applied to obtain optimal or approximate solutions. The computation results demonstrate the effectiveness and robustness of the proposed DPSO algorithm in dealing with the materials budget allocation problem.","PeriodicalId":342688,"journal":{"name":"2010 13th IEEE International Conference on Computational Science and Engineering","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127614419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}