Achievement of real haptic sensation with tendon driven segregated jaws for laparoscopic forceps
K. Prasanga, Y. Saito, T. Nozaki, K. Ohnishi
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419883
As technology advances, almost every field of society develops for the benefit of mankind. Surgery is one such field, and considerable effort has gone into developing surgical tools and instruments over the past couple of decades. As a result, robot-assisted minimally invasive surgeries are very popular in the modern world, and laparoscopic forceps robots are widely used in them. Most of these forceps robots are only position controlled, so the user cannot feel the environment. In surgery, however, it is necessary to feel the stiffness of tissues, so the transmission of force sensation is required; in remote operation especially, bilateral control is essential. Moreover, most forceps robots follow the traditional forceps mechanism, with a crank arrangement at the tip. This mechanism prevents the two jaws from moving independently and transmits force to the user through a single mechanical channel. This paper proposes a method to operate the forceps jaws independently using a bilaterally controlled tendon arrangement, while at the same time linearizing the force applied by the forceps tip to the object. Experimental results confirm the validity of the proposed method.
{"title":"Achievement of real haptic sensation with tendon driven segregated jaws for laparoscopic forceps","authors":"K. Prasanga, Y. Saito, T. Nozaki, K. Ohnishi","doi":"10.1109/ICIAFS.2012.6419883","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419883","url":null,"abstract":"As the technology advances, almost all the fields of the society gets developed for the benefit of the mankind. Surgery is one such field where a lot of focus is made to develop the surgical tools and instruments for past couple of decades. As a result, the robot assisted minimal invasive surgeries are very popular in the modern world. Laparoscopic forceps robots are widely used in these types of surgeries. Most of these forceps robots can only be position controlled where the user cannot feel the environment. However in surgeries it is necessary to feel the stiffness of the tissues. Therefore the transmission of force sensation is required. Especially in the case of a remote operation, bilateral control is essential. Also, most of the forceps robots are manufactured according to the traditional forceps mechanism with a crank arrangement at the tip of the forceps. This mechanism restricts the independent move of the two jaws and transmits the force in a single mechanical channel to the user. This paper proposes a method to operate the forceps tip independently with the use of bilaterally controlled tendon arrangement. Also at the same time it linearizes the force applied by the forceps tip to the object. Experimental results confirm the validity of the proposed method.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130388169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Research on the secondary air position for the one-dimensional model of low NOx combustion
X. Gou, Jinxiang Wu, Liansheng Liu, E. Wang, Junhu Zhou, Jianzhong Liu, K. Cen
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419920
This study developed a new model of low-NOx combustion for the direct flow of pulverized coal, enabling the relationships among gas temperature, oxygen concentration, time consumption, and distance to be investigated. A model of the ignition and combustion zone is discussed in detail, showing how gas temperature and oxygen concentration change with distance. The proper position for secondary air injection can be obtained from the gas temperature and oxygen concentration, which significantly affect NOx formation. The model can be used to determine the secondary air injection position in the design or modification of boilers for low-NOx coal combustion. Applying the model to a power plant boiler, together with a pre-ignition model and reburning technology, achieved an overall 48% reduction in NOx.
{"title":"Research on the secondary air position for the one-dimensional model of low NOx combustion","authors":"X. Gou, Jinxiang Wu, Liansheng Liu, E. Wang, Junhu Zhou, Jianzhong Liu, K. Cen","doi":"10.1109/ICIAFS.2012.6419920","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419920","url":null,"abstract":"This study developed a new model of low NOx combustion for the direct flow of pulverized coal, which enabled the relationships among gas temperature, oxygen concentration, time consumption, and distance to be investigated. A model of the ignition and combustion zone was discussed in detail. It was shown that the gas temperature and the oxygen concentration change with the distance. The proper position of the secondary air injection can be obtained according to the gas temperature and the oxygen concentration which significantly affect NOx formation. The model can be used to determine the injection position of the secondary air for the design or modification of boilers for low NOx coal combustion. Application of the model to a power plant boiler together with pre-ignition model and reburning technology enabled an overall 48% reduction of NOx to be obtained.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114316724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Division of load for operating system kernel
Sena Seneviratne, S. Witharana
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419887
Current operating system (OS) kernels calculate the load average as a single lump sum; the algorithm does not separate CPU load from disk load. This produces an incorrect measurement when disk-bound and CPU-bound tasks run simultaneously. In this paper a new algorithm is proposed to calculate, store, and display each user's CPU and disk loads separately. Separating user load at the kernel level matters for collecting historical load signals, which are useful for load prediction. In grids and clusters, users have usage patterns that can easily be traced in historical load profile collections, and such patterns are useful for predicting load profiles.
{"title":"Division of load for operating system kernel","authors":"Sena Seneviratne, S. Witharana","doi":"10.1109/ICIAFS.2012.6419887","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419887","url":null,"abstract":"The current Operating System (OS) kernels calculate the load average value as a lump sum. Also the algorithm for the calculation of load average does not separate CPU load from Disk load. This leads to the presentation of an incorrect measurement when both disk bound tasks and CPU bound tasks run simultaneously. In this paper a new algorithm is proposed to calculate, store and display each user's CPU and Disk loads separately. The separation of user load at the kernel level has an importance in the collection of historical load signals as they can be useful for load prediction. In Grids and Clusters the users have certain usage patterns that can be easily traced back in the historical load profile collections. Such selected patterns are useful in the prediction of load profiles.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124922170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Fusion of GPS and image data for accurate geocoding of street-level fisheye images
M. Zouqi, J. Samarabandu, Yanbo Zhou
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419898
Geospatial tools and techniques are becoming increasingly important for land surveyors carrying out off-location inspections of urban areas, and accurately geocoded street-level images are the basis of these tools. For these applications an error of 2.5 meters is tolerable, yet the geographic coordinates provided by GPS can be off by up to 10 meters. In this paper we propose an automatic method to improve the geocoding accuracy of street-level images by registering them to an accurately geocoded reference image, namely a satellite image. The proposed technique uses an unconstrained nonlinear optimization method to find locally optimal solutions by matching high-level features and their relative locations; a global optimization over all local solutions then applies a geometric constraint. We used our algorithm to correct the geographic information of more than 2500 fisheye images and show that it achieves an average error of 1.19 meters along both the x and y directions.
{"title":"Fusion of GPS and image data for accurate geocoding of street-level fisheye images","authors":"M. Zouqi, J. Samarabandu, Yanbo Zhou","doi":"10.1109/ICIAFS.2012.6419898","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419898","url":null,"abstract":"Geospatial tools and techniques are becoming more important for land surveyors to do their off-location inspections of the urban areas. Accurate geocoded street-level images are the base of these tools. For these applications, an error of 2.5 meters is tolerable. However, the geographic coordinates provided by GPS have error up to 10 meters. In this paper we propose an automatic method to improve the accuracy of geocoding of street-level images by registering them to the accurate geocoded reference image, which is the satellite image. The proposed technique uses an unconstrained nonlinear optimization method to find local optimal solutions by matching high-level features and their relative locations. A global optimization method is then employed over all of the local solutions by applying a geometric constraint. We used our algorithm for correcting the geographic information of more than 2500 fisheye images and show that the proposed algorithm can achieve an average error of 1.19 meters along both x and y directions.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128574653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Who does what where? Advanced earth observation for humanitarian crisis management
C. Witharana
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6420035
This study investigated the performance of data fusion algorithms applied to very high spatial resolution (VHSR) satellite images of ongoing- and post-crisis scenes. The evaluation covered twelve fusion algorithms, applied to GeoEye-1 satellite images of three geographical settings representing recent natural and anthropogenic crises: earthquake-damaged sites in Haiti, flood-impacted sites in Pakistan, and armed-conflict areas in Sri Lanka. Fused images were assessed subjectively and objectively. Spectral quality metrics included the correlation coefficient, peak signal-to-noise ratio, mean structural similarity index, spectral angle mapper, and relative dimensionless global error in synthesis; the spatial integrity of fused images was assessed using Canny edge correspondence and the high-pass correlation coefficient. Under each metric, the fusion methods were ranked and the best competitors identified. The Ehlers fusion, wavelet principal component analysis (WV-PCA) fusion, and high-pass filter fusion algorithms reported the best values for the majority of spectral quality indices; under the spatial metrics, the University of New Brunswick and Gram-Schmidt fusion algorithms reported the optimum values. The color normalization sharpening and subtractive resolution merge algorithms exhibited the highest spectral distortions, whereas the WV-PCA algorithm showed the weakest spatial improvement. In conclusion, for pansharpening VHSR satellite images of ongoing- and post-crisis sites, this study recommends the University of New Brunswick algorithm where visual image interpretation is involved and high-pass filter fusion where semi- or fully-automated feature extraction is involved.
{"title":"Who does what where? Advanced earth observation for humanitarian crisis management","authors":"C. Witharana","doi":"10.1109/ICIAFS.2012.6420035","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6420035","url":null,"abstract":"This study investigated the performances of data fusion algorithms when applied to very high spatial resolution satellite images that encompass ongoing- and post-crisis scenes. The evaluation entailed twelve fusion algorithms. The candidate algorithms were applied to GeoEye-1 satellite images taken over three different geographical settings representing natural and anthropogenic crises that had occurred in the recent past: earthquake-damaged sites in Haiti, flood-impacted sites in Pakistan, and armed-conflicted areas in Sri Lanka. Fused images were assessed subjectively and objectively. Spectral quality metrics included correlation coefficient, peak signal-to-noise ratio index, mean structural similarity index, spectral angle mapper, and relative dimensionless global error in synthesis. The spatial integrity of fused images was assessed using Canny edge correspondence and high-pass correlation coefficient. Under each metric, fusion methods were ranked and best competitors were identified. In this study, The Ehlers fusion, wavelet principle component analysis (WV-PCA) fusion, and the high-pass filter fusion algorithms reported the best values for the majority of spectral quality indices. Under spatial metrics, the University of New Brunswick and Gram-Schmidt fusion algorithms reported the optimum values. The color normalization sharpening and subtractive resolution merge algorithms exhibited the highest spectral distortions where as the WV-PCA algorithm showed the weakest spatial improvement. In conclusion, this study recommends the University of New Brunswick algorithm if visual image interpretation is involved, whereas the high-pass filter fusion is recommended if semi- or fully-automated feature extraction is involved, for pansharpening VHSR satellite images of on-going and post crisis sites.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132952129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Using multidimensional views of photographs for flood modelling
Vorawit Meesuk, Zoran Vojinovic, A. Mynett
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6420037
Physically based computational models coupled with remote sensing technologies, photogrammetry techniques, and GIS applications are important tools for flood hazard mapping and flood disaster prevention, and processing massive input data with refined accuracy allows urban flood modeling to be developed and improved at a detailed level. Topographical information from a digital surface model (DSM) or digital terrain model (DTM) is essential for flood managers, who require input data of high accuracy and resolution to set up practical applications. Light detection and ranging (LiDAR) techniques are mainly used, but they are costly in equipment, maintenance, and operations, which include aircraft. Recent advances in photogrammetry and computer vision, such as the structure from motion (SfM) technique, offer a cost-effective approach to reconstructing 3D topographical information from simple 2D photos, so-called 3D reconstruction. As input data for flood modeling, SfM can be comparable to other acquisition techniques. This paper covers one experiment and two case studies. First, the experiment showed the similarity between flood maps produced via the SfM 3D-reconstruction process and those based on benchmark information. The 3D reconstruction started from 2D photos of virtual scenes taken with a multidimensional-view approach; these photos were used to generate 3D information, from which a DSM was created by multidimensional fusion of views (MFV-DSM). That DSM was then used as input to set up a 2D flood model. With the benchmark DSM and the MFV-DSM as topographical input, the resulting flood maps are similar in both flood depths and flood extents. Second, the two real-world cases also showed the potential of SfM as an alternative acquisition tool for 3D information, which can serve as input data for model setup and may be comparable to, or even outcompete, other acquisition techniques such as LiDAR. SfM can thus be extended into a promising, practicable method for modeling real flood events in real-world scenes.
{"title":"Using multidimensional views of photographs for flood modelling","authors":"Vorawit Meesuk, Zoran Vojinovic, A. Mynett","doi":"10.1109/ICIAFS.2012.6420037","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6420037","url":null,"abstract":"Using physically based computational models coupled with remote sensing technologies, photogrammetry techniques, and GIS applications are important tools for flood hazard mapping and flood disaster prevention. Also, information processing of massive input data with refined accuracy allows us to develop and to improve urban-flood-modeling at a detailed level. The topographical information from digital surface model (DSM) or digital terrain model (DTM) is essential for flood managers who actually require this high accuracy and resolution of input data to set up their practical applications. Light detecting and ranging (LiDAR) techniques are mainly used, but these costly techniques can be appraised by equipments, maintenance, and operations which include aircraft. Recent advances in photogrammetry and computer vision technologies like structure form motion (SfM) technique are widely used and offer cost-effective approaches to reconstruct 3D-topographical information from simple 2D photos, so-called 3D reconstruction. In terms of input data for flood modeling, the SfM technique can be comparable to other acquisition-techniques. In this paper, there are one experimental and two case studies. Firstly, a result of the experiment showed a similarity between flood maps by applying the SfM process form the 3D-reconstruction and using benchmark information. These 3D-reconstruction processes started from 2D photos, which were taken from virtual scenes by using multidimensional-view approach. These photos can be used to generate 3D information which is later used to create the DSM from multidimensional fusion of views (MFV-DSM). Then, the DSM was used as input data to set up 2D flood modeling. Thereafter, when using the DSMs as topographical input data, comparison between a benchmark DSM and MFV-DSM shows similarity flood-map results in both flood depths and flood extends. Secondary, the two cases from real world scenes also showed possibilities of using the SfM technique as an alternative acquisition tool, providing 3D information. This information can be used as input data for setting up modeling and can possibly be comparable or even outcompete with other acquisition techniques, such as LiDAR. As a result, using the SfM technique can be extended to become promising methods in practicable applications for modeling real flood events in real world scenes.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117104640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Development and evaluation of simulation model for force-free control strategies
A. Pallegedara, Y. Matsuda, N. Egashira, T. Sugi, S. Goto
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419896
This paper presents the construction and evaluation of two different force-free control strategies for an industrial robot arm. First, the two-link dynamic model of the robot arm and the basic structure of the force-free control method are described. Two force-free control architectures are then illustrated: force-free control by dynamic external torque and force-free control by dynamic torque-independent compensation. Each strategy is analyzed for single-link and two-link configurations of an industrial robot arm through simulations in the Matlab/Simulink environment, and the model characteristics of force-free control are exploited to discuss application scenarios. Moreover, the analysis uses real robot parameters throughout the simulations. Since force-free control deals with external forces applied to the robot arm, it can illustrate interactive force control between a human and a robot arm through passive motion under an external force.
{"title":"Development and evaluation of simulation model for force-free control strategies","authors":"A. Pallegedara, Y. Matsuda, N. Egashira, T. Sugi, S. Goto","doi":"10.1109/ICIAFS.2012.6419896","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419896","url":null,"abstract":"Construction and evaluation of the two different strategies of force-free control of industrial type robot arm is presented in this paper. First, robot arm dynamic model for two link model and basic structure of force-free control method are described. Then two different force-free control architectures are illustrated. Two different force-free control strategies are force-free control by dynamic external torque and force-free control by dynamic torque independent compensation, respectively. Analysis of the each type of force-free control strategy is carried out for the single link and two links perspectives of industrial robot arm configurations by means of simulations under Matlab/Simulink environment. The model characteristics of the force-free control are exploited to discuss the application scenarios. Moreover, analysis of the force-free control is carried out by using real robot parameters throughout the simulations. Since the force-free control deals with external forces applied on the robot arm, it can be used to illustrate the interactive force control between a human and a robot arm by passive motion over an external force.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127407904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Suspended nanoparticles as a way to improve thermal energy transfer efficiency
S. Witharana, J. Weliwita
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419922
Nanoparticle suspensions have demonstrated superior heat transfer properties and hence appear to be a strong contender to become next-generation coolants. While the presence of particles enhances thermal conductivity, it also increases fluid viscosity. The latter demands more pumping power in convective systems, calling the overall economy of the concept into question. This paper presents recently obtained thermal conductivity and rheology data for alumina (Al2O3) and titania (TiO2) nanoparticles suspended in ethylene glycol over the temperature interval 20-90°C and particle concentrations of 0-8 wt%. Although thermal conductivity was enhanced by up to 14%, the simultaneous increase in viscosity dampens the net advantage of using nanoparticle suspensions as convective heat transfer fluids.
{"title":"Suspended nanoparticles as a way to improve thermal energy transfer efficiency","authors":"S. Witharana, J. Weliwita","doi":"10.1109/ICIAFS.2012.6419922","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419922","url":null,"abstract":"Nanoparticle suspensions have demonstrated superior heat transfer properties and hence appear to be a strong contender to become next generation coolants. While the presence of particles enhances thermal conductivity, they also contribute to increase the fluid viscosity. The latter will lead to demand more pumping power in convective systems, hence questioning the overall economy of the concept. This paper presents the recently obtained thermal conductivity and rheology data for alumina (Al2O3) and titania (TiO2) nanoparticles suspended in ethylene glycol in the temperature interval of 20-90°C and particle concentrations of 0-8wt%. Although the thermal conductivity enhanced by up to 14%, a simultaneous increase in viscosity dampens the net advantage of using nanoparticle suspensions as convective heat transfer fluids.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132435920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Can a data center heat-flow model be scaled down?
Heshan Fernando, J. Siriwardana, Saman K. Halgamuge
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419916
Data centers require vast amounts of energy to keep servers at optimal operating temperatures. Recent research has focused on improving cooling efficiency, and thereby lowering energy consumption, through different rack arrangements and modified airflow patterns. Thus far this has been done with computational fluid dynamics (CFD) models, as access to real data centers is often restricted; the next step in this research is to build a physical model for testing purposes. The viability of building a scaled model of an actual data center is investigated using scale modeling theory for airflow experiments. A full-scale prototype and a half-scale model are created in CFD software and simulated to see whether similarity can be achieved in the scaled model for the temperature distribution as well as the airflow velocities. Our results show that thermal similarity can be achieved within a 5% error margin, while airflow similarity cannot be achieved with reasonable accuracy.
{"title":"Can a data center heat-flow model be scaled down?","authors":"Heshan Fernando, J. Siriwardana, Saman K. Halgamuge","doi":"10.1109/ICIAFS.2012.6419916","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419916","url":null,"abstract":"Data centers require vast amounts of energy for keeping the servers cool at optimal operating temperatures. Recent research has focused on improving the cooling efficiency, and thereby lowering the energy consumption, through different rack arrangements and modifying the air-flow patterns. Thus far, this has been done using computational fluid dynamics (CFD) models as access to a real data centers is often restricted. The next step in this research is to build a physical model for testing purposes. The viability of building a scaled model of an actual data center is investigated using the scale modeling theory for airflow experiments. A full-scale prototype and a half-scale model are created using CFD software and simulated to see if similarity can be achieved in the scaled model for the temperature distribution as well as the airflow velocities. Our results show that the thermal similarity can be achieved within 5% error margin while the airflow similarity cannot be achieved with reasonable accuracy.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"437 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132591613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Energy storage sizing for office buildings based on short-term load forecasting
Xiaohui Yan, Haisheng Chen, Xuehui Zhang, Chunqing Tan
Pub Date: 2012-09-01 | DOI: 10.1109/ICIAFS.2012.6419919
This paper presents a three-layer artificial neural network as a short-term load forecasting model, adopting a fast and robust back-propagation algorithm, Levenberg-Marquardt optimization, with a momentum factor included in the learning process. Based on the data predicted by this model, the size of an energy storage system, in terms of power rating and capacity, is determined according to the desired level of peak-demand shaving. In an illustrative example using the weather and power load data of an office building from July to August 2011, the model achieves an average relative error of -0.7% and a root-mean-square error of 2.79%, showing that the forecasting model works effectively, with an attractive 87.5% of errors falling within the acceptable bound of 2.79%. Furthermore, an energy storage system based on battery storage technology is sized at 7.03 kW/36.42 kWh to meet the desired peak-shaving demand.
{"title":"Energy storage sizing for office buildings based on short-term load forecasting","authors":"Xiaohui Yan, Haisheng Chen, Xuehui Zhang, Chunqing Tan","doi":"10.1109/ICIAFS.2012.6419919","DOIUrl":"https://doi.org/10.1109/ICIAFS.2012.6419919","url":null,"abstract":"This paper presents a three-layer Artificial Neural Network as the short-term load forecasting model adopting the fastest back-propagation algorithm with robustness, i.e., Levenberg-Marquardt optimization, and moreover, the momentum factor is considered during the learning process. Based on predicted data by aforementioned model, size determination of energy storage system in terms of power rating and capacity is undertaken according to the desired level of shaving peak demand. The illustrative example in reference to the weather and power load data of office building from July to August in 2011 gets the results that the average relative error -0.7% and the root-mean-square error 2.79% which show aforementioned forecasting model can work effectively with the attractive percentage, i.e. 87.5%, of error within the acceptable one 2.79%; Furthermore, size determination of energy storage system adopting battery energy storage technology, i.e. 7.03kW/36.42kWh, is carried out to meet the desired peak shaving demand.","PeriodicalId":151240,"journal":{"name":"2012 IEEE 6th International Conference on Information and Automation for Sustainability","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133579196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}