Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.049
S. Oh, H. La, Soo Dong Kim
In cloud computing, service providers develop and deploy services whose features are common and reusable across applications, and service consumers locate and reuse those services when building their own applications. Reusability is therefore a key intrinsic characteristic of cloud services, and services with high reusability yield a high return on investment. Because cloud services have characteristics that do not appear in conventional programming paradigms, existing quality models for software reusability are not directly applicable to them. In this paper, we propose a reusability evaluation suite for cloud services, which includes quality attributes and metrics. A case study is presented to show its applicability.
Title: Method to Evaluate and Enhance Reusability of Cloud Services
Journal: The KIPS Transactions: Part D
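The abstract describes the suite only at the level of quality attributes and metrics. As a minimal sketch of how per-attribute metric values might be aggregated into one reusability score (the attribute names and weights below are hypothetical, not taken from the paper):

```python
def reusability_score(metric_values, weights):
    """Weighted average of per-attribute metric values, each normalized to 0..1."""
    total_weight = sum(weights.values())
    return sum(metric_values[a] * w for a, w in weights.items()) / total_weight

# Hypothetical attribute names and weights, for illustration only.
metric_values = {"commonality": 0.8, "modularity": 0.6, "standard_conformance": 0.9}
weights = {"commonality": 2.0, "modularity": 1.0, "standard_conformance": 1.0}
score = reusability_score(metric_values, weights)  # (1.6 + 0.6 + 0.9) / 4
```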
Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.001
Hyoung-Geun An, Jae-Jin Koh
Because data warehouses are large, the choice of indices strongly affects query-processing efficiency. Indices lower query-processing cost, but they occupy large storage areas and incur maintenance cost whenever the database is updated. Bitmap join indices are well suited to optimizing star-join queries, which join a fact table with many dimension tables and apply selections on the dimension tables. Although their binary representation keeps storage cost low, selecting the indexing attributes from the huge set of generated candidates is difficult. Index selection therefore proceeds in two steps: first reduce the number of candidate attributes to be indexed, then select the indexing attributes among them. In this paper, we reduce the number of candidate attributes for the bitmap join index selection problem with data mining techniques. Whereas existing techniques prune candidates using only attribute frequencies, we also consider the sizes of the dimension tables, the sizes of their tuples, and the disk page size. We use frequent itemset mining to prune the large candidate set, and we build the bitmap join indices with the lowest cost and smallest storage area under the storage constraints by applying cost functions to the candidate attributes' bitmap join indices. We compare our technique with existing ones and analyze the results to evaluate its efficiency.
Title: A Study on Selecting Bitmap Join Index to Speed up Complex Queries in Relational Data Warehouses
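The candidate-pruning step rests on frequent itemset mining over the query workload. As an illustrative sketch (a brute-force miner, not the paper's implementation, with a made-up workload), attribute combinations referenced by enough star-join queries survive as indexing candidates:

```python
from itertools import combinations
from collections import Counter

def frequent_attribute_sets(workload, min_support, max_size=2):
    """Return attribute combinations referenced by at least `min_support`
    queries. `workload` is a list of sets, each holding the dimension
    attributes referenced by one star-join query."""
    counts = Counter()
    for attrs in workload:
        for k in range(1, max_size + 1):
            for combo in combinations(sorted(attrs), k):
                counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n >= min_support}

# Hypothetical workload over dimension-table attributes.
workload = [
    {"region", "year"}, {"region", "brand"}, {"region", "year"},
    {"brand"}, {"region", "year", "brand"},
]
frequent = frequent_attribute_sets(workload, min_support=3)
```

A real selector would then rank the survivors with the cost functions (table size, tuple size, page size) the abstract mentions.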
Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.105
Jung-Hyoun Bae, Seung-Ju Lim, Jeong-Ju Kim, Sung-Dae Park, Jeong-Do Kim
Most methods for detecting PVC (premature ventricular contraction) and APC (atrial premature contraction) require accurate measurement of the QRS complex, P wave, and T wave. In this study, we propose a new algorithm that detects PVC and APC without such complex parameters and algorithms. The proposed algorithm applies not only to all kinds of normal ECG waveforms but also to abnormal waveforms that vary with individual differences. To achieve this, we separate the ECG signal into unit patterns and build a standard unit pattern using only the unit patterns with a normal R-R interval. We then detect PVC and APC by similarity analysis, matching each unit pattern against the standard unit pattern.
Title: The Classification of Arrhythmia Using Similarity Analysis Between Unit Patterns at ECG Signal
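The template-matching idea above can be sketched with correlation as the similarity measure (an assumption; the paper does not specify which similarity analysis it uses, and the synthetic beats below are illustrative):

```python
import numpy as np

def classify_beats(beats, normal_rr_mask, threshold=0.9):
    """Flag unit patterns whose correlation with the standard unit
    pattern falls below `threshold` (candidate PVC/APC beats)."""
    # Standard unit pattern: mean of the beats with a normal R-R interval.
    standard = beats[normal_rr_mask].mean(axis=0)
    flags = []
    for beat in beats:
        r = np.corrcoef(beat, standard)[0, 1]  # similarity to the template
        flags.append(bool(r < threshold))
    return flags

# Synthetic demo: three normal-shaped beats and one inverted (abnormal) beat.
t = np.linspace(0.0, 1.0, 50)
base = np.sin(2 * np.pi * t)
beats = np.stack([base, 1.1 * base, 0.9 * base, -base])
flags = classify_beats(beats, np.array([True, True, True, False]))
```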
Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.029
Seiyoung Lee
Software globalization is becoming more important worldwide, but little is known about how globalization practices are actually carried out in the Korean software industry. In this paper, we design a Globalization Quality Management (GQM) framework and apply it to the domestic industry for the first time. GQM provides a structured, effective way for software organizations to adopt globalization practices and evaluate the results. It consists of three main components: 1) a software quality management process, 2) a globalization support model, and 3) a globalization assessment model. The framework supports both plan-driven and iterative/incremental development methods. On the basis of GQM, we surveyed software engineering professionals, gathering data from 31 IT companies and 7 large-scale projects in Korea. The results indicate evaluation scores of 2.47 out of 5 for globalization capability and 2.55 for global readiness, and show that internationalized product design (32.9%) and global/local product requirements analysis (28%) need to be addressed first.
Title: Design and Implementation of Software Globalization Quality Management Framework
Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.015
Se-Mi Hwang, Duck-Ho Bae, Sang-Wook Kim
In this paper, to counter the vested advantage of older papers in scientific literature ranking, we propose a novel method that considers not only the current citations from already-published papers but also the latent citations from papers yet to be published. The method also considers the content relevance between citing and cited papers. Finally, we verify the superiority of the proposed method through extensive experiments.
Title: Scientific Literature Ranking Considering Latent Citations
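The abstract gives the ingredients (observed citations, relevance weighting, extrapolated future citations) but not the formula. As a toy sketch under those assumptions only (not the authors' model; all numbers below are invented), a score might combine them as:

```python
def latent_citation_score(relevance_weights, recent_citation_rate,
                          horizon_years=3, latent_weight=0.5):
    """Score = relevance-weighted observed citations plus a discounted
    estimate of citations yet to be published (the 'latent' term)."""
    observed = sum(relevance_weights)              # one weight per citing paper
    latent = recent_citation_rate * horizon_years  # extrapolated future citations
    return observed + latent_weight * latent

# An old paper with many low-relevance citations vs. a newer paper with
# fewer but highly relevant citations and a higher recent citation rate.
old_score = latent_citation_score([0.2] * 10, recent_citation_rate=0.5)
new_score = latent_citation_score([0.9, 0.9, 0.8], recent_citation_rate=2.0)
```

Under this scoring the newer paper outranks the older one despite fewer accumulated citations, which is the bias the paper aims to correct.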
Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.039
Jin-Wook Kwon, Yunja Choi, W. Lee
To quickly understand what changes have been made to source code and to maintain a system effectively, it is important to visualize the changed parts. Although there are many works on analyzing software changes, few visualize both the change types and the change quantities for Java-based systems. In this paper, we propose a change analysis technique based on class diagrams and a change visualization technique that uses change quantification information. To check for structural changes, the source code is transformed into class diagrams by reverse engineering; on these diagrams, the changes are analyzed and quantified. Based on the quantification, the changes are visualized on the class diagram with a color spectrum. Using this visualization, maintainers can easily recognize code changes, reducing maintenance cost and time.
Title: Development of Analysis and Visualization Tool for Java Source Code Changes using Reverse Engineering Technique
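Mapping change quantities onto a color spectrum can be sketched as follows (an illustration, assuming a simple blue-to-red linear gradient; the paper does not specify its palette, and the class names are hypothetical):

```python
def change_color(changes, max_changes):
    """Map a class's change count onto a blue (unchanged) -> red
    (heavily changed) spectrum, returned as an RGB hex string."""
    t = min(changes / max_changes, 1.0) if max_changes else 0.0
    r = int(255 * t)        # red channel grows with change quantity
    b = int(255 * (1 - t))  # blue channel fades correspondingly
    return f"#{r:02x}00{b:02x}"

# Hypothetical per-class change counts from a diff of two versions.
counts = {"OrderService": 12, "Invoice": 3, "Util": 0}
max_c = max(counts.values())
colors = {cls: change_color(n, max_c) for cls, n in counts.items()}
```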
Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.095
Dal-Soo Weon, Moon-seog Jun
The main purpose of this paper is to propose a macro-perspective direction for financial IT development. It also provides a theoretical basis for the future financial IT system via an empirical model built on the transformation of the domestic financial IT environment. In the process, this research extracts and analyzes meaningful patterns that have significantly influenced financial IT development over 40 years, and backtracks the life cycle of the core-banking model. The findings can be summarized as follows. First, the life cycle of Korean financial IT systems is analyzed in 10-year periods. Second, the life cycle of a core-banking model averages 11 years, and that of a long-lived model 33 years. Third, long-surviving core-banking models have, from the earliest days, been designed and developed through objective analysis and benchmarking. Lastly, the financial IT field should develop into an integrated industry, with more systematic core-banking model research and more professionals. This research contributes new frameworks through analysis of the core-banking model, which has received little explicit study for a long time. The paper has two related sections: the first discusses the significance of backtracking the core-banking model and focuses on its key components from the perspective of financial IT management strategy; building on that, the second identifies the life cycles of actual core-banking models.
Title: A Study on Developing Trend of Core-Banking Model through Tracking of Financial IT Development
Pub Date: 2012-02-29 | DOI: 10.3745/KIPSTD.2012.19D.1.081
Kwanwoo Lee
FORM (Feature-Oriented Reuse Method) is one of the representative product line engineering methods. Its essence is the FORM architecture models, which can be reused when developing the multiple products of a software product line. In practice, however, the FORM architecture models have two problems. First, they are not standardized models like UML (Unified Modeling Language), so they can be constructed only with a specific modeling tool. Second, they do not represent architectural variability explicitly; their variability is managed only through a mapping from a feature model. To address these problems, we first developed a method for representing the FORM architecture models in UML, which allows them to be constructed with the various available UML modeling tools. We also developed an effective method for representing and managing the variability of the FORM architecture models through a mapping from a feature model.
Title: Managing and Modeling Variability of UML Based FORM Architectures Through Feature-Architecture Mapping
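The feature-to-architecture mapping idea can be sketched as a table that resolves a product's architecture from its feature selection (purely illustrative; the feature and component names are hypothetical, and FORM's actual mapping is model-based, not a dictionary):

```python
# Hypothetical feature-model -> architecture mapping for a product line.
base_components = ["Controller", "Scheduler"]  # common to every product
feature_to_components = {
    "RemoteDiagnosis": ["DiagnosisManager", "RemoteLink"],
    "LocalLogging": ["LogStore"],
}

def derive_architecture(selected_features):
    """Resolve one product's architecture from a feature selection:
    start from the common base and add the components mapped from
    each selected optional feature."""
    components = list(base_components)
    for feature in selected_features:
        components.extend(feature_to_components.get(feature, []))
    return components
```

The point of such a mapping is that the variability lives in the feature model, while the architecture model itself stays free of variant annotations.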
Pub Date: 2012-02-01 | DOI: 10.3745/KIPSTD.2012.19D.2.161
Jungkeol Lee, Hongchan Roh, Sanghyun Park
Recently, flash memory has steadily increased in capacity while its price has fallen, making mass-storage SSDs (Solid State Drives) popular. Flash memory, however, has several limitations, and a special layer, the FTL (Flash Translation Layer), is needed to compensate for them. To handle the hardware's restrictions efficiently, the FTL translates the logical sector numbers used by file systems into the physical sector numbers of the flash memory. Poor performance stems in particular from the erase-before-write restriction, and although there are many log-block-based studies, problems remain for operating mass-storage flash memory. In FAST, a log-block-based FTL, random writes with wide locality trigger merge operations even when sectors in the data block are unused; this ineffective block thrashing degrades flash memory performance. When overwrites are directed to log blocks, the log blocks act like a cache, which improves flash memory performance. To improve random writes, this study operates the log blocks as a cache over the entire flash memory, reducing merge and erase operations by means of a separate mapping table, the offset mapping table. We define this new FTL as XAST (eXtensively-Associative Sector Translation). XAST manages the offset mapping table efficiently by exploiting spatial and temporal locality.
Title: The Efficient Merge Operation in Log Buffer-Based Flash Translation Layer for Enhanced Random Writing
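The role of an offset mapping table can be sketched as follows (a minimal illustration of the general idea, not XAST's actual data structure): overwrites update a sector-level mapping into the log blocks instead of forcing an immediate erase-before-write merge.

```python
class OffsetMappingTable:
    """Sector-level offset mapping: a logical (block, offset) pair maps
    to the most recent copy in a log block, so an overwrite just
    redirects the mapping instead of triggering a merge right away."""

    def __init__(self):
        self.table = {}  # (logical_block, offset) -> (log_block, log_offset)

    def write(self, lbn, offset, log_block, log_offset):
        # Stale copies are reclaimed later, during merge/garbage collection.
        self.table[(lbn, offset)] = (log_block, log_offset)

    def lookup(self, lbn, offset):
        # Fall back to the original data block if the sector was never logged.
        return self.table.get((lbn, offset), ("data", offset))

omt = OffsetMappingTable()
omt.write(0, 1, log_block=7, log_offset=0)
omt.write(0, 1, log_block=7, log_offset=1)  # overwrite: no merge needed yet
```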
Pub Date: 2011-12-31 | DOI: 10.3745/KIPSTD.2011.18D.6.481
Jae-jin Lee, Junseok Oh, B. Lee
Recently, the spread of wireless network infrastructure and smart devices has enabled a variety of mobile services, and advances in cloud computing technologies have increased interest in enterprise mobile cloud services across IT companies. With this growing interest, it is necessary to evaluate the use of enterprise mobile cloud services. This research therefore analyzes the factors affecting user acceptance of enterprise mobile cloud services on the basis of Davis's technology acceptance model. The analysis shows that four external variables significantly affect the perceived ease of use of mobile cloud services and indirectly affect the attitude toward using them. Security is the most important factor for the attitude toward using enterprise mobile cloud services, and users also consider interoperability an important factor in acceptance. Perceived ease of use contributes more than perceived usefulness to the attitude toward use. This research makes both industrial and academic contributions: it gives companies a guideline for introducing enterprise mobile cloud services and applies the technology acceptance model to new IT services.
Title: Significant Factors for Building Enterprise Mobile Cloud