Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.08.018
Bing-Jhih Yao , Shaw-Hwa Hwang , Cheng-Yu Yeh
Network address translation (NAT) traversal is a critical technology for real-time video streaming between two endpoints. Predicting the ports assigned by the NAT mapping rule is the core of NAT traversal. However, most port prediction algorithms used in NAT traversal methods are too simplistic or poorly specified. The port mapping behavior of NAT devices in real network environments is complex and cannot be accurately predicted by such algorithms. In this study, NAT port mapping behavior was examined and a mathematical model was established to enhance the predictability of NAT ports and increase the success rate of NAT traversal.
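The paper's model is not reproduced in the abstract; as a hedged, stdlib-only sketch, the snippet below shows the simplest family of port prediction heuristics the abstract alludes to — estimating a constant allocation step from previously observed external port mappings. The function name and wrap-around convention are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of linear port-delta prediction, a common NAT
# traversal heuristic; the paper's actual mathematical model is richer.

def predict_next_port(observed_ports, k=1):
    """Predict the k-th future external port from observed mappings,
    assuming the NAT allocates ports with a roughly constant increment."""
    if len(observed_ports) < 2:
        raise ValueError("need at least two observed mappings")
    deltas = [b - a for a, b in zip(observed_ports, observed_ports[1:])]
    # Use the most frequent delta as the estimated allocation step.
    step = max(set(deltas), key=deltas.count)
    # Keep the prediction inside the ephemeral range 1024..65535.
    return (observed_ports[-1] + k * step - 1024) % (65536 - 1024) + 1024

print(predict_next_port([62000, 62002, 62004]))  # → 62006
```

In practice two STUN-like probes to different servers supply the observed mappings, and the predicted port is offered to the peer for hole punching.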
Title: Mathematical Model of Network Address Translation Port Mapping. AASRI Procedia, Vol. 8, pp. 105-111.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.09.007
Martina Zachariasova, Patrik Kamencay, Robert Hudec, Miroslav Benco, Slavomir Matuska
This paper presents research on a novel imaging approach for web documents based on the semantic inclusion of textual and non-textual information. The main idea was to create a robust method for displaying relevant search-engine results for keyword- or image-based queries. We therefore propose a method called Semantic Inclusion of Images and Textual (SIIT) segments. The output of the SIIT method is a short web document containing semantically linked image and textual segments. Creating the short web document involves three steps. First, all images and textual segments are extracted from the main content of the web document. Second, the extracted images are analyzed to obtain semantic descriptions of the objects they contain. Finally, the images and textual segments are linked using linguistic analysis.
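The linking step can be pictured as matching each image's semantic labels against candidate text segments. This is an assumed illustration of the idea, not the authors' linguistic analysis: keyword overlap stands in for the real linguistic matching.

```python
# Illustrative sketch (not the SIIT implementation): pair each image,
# represented by a set of semantic labels, with the text segment that
# shares the most keywords with it.

def link_segments(image_labels, text_segments):
    """image_labels: dict image name -> set of labels.
    Returns dict image name -> index of the best-matching segment."""
    links = {}
    for img, labels in image_labels.items():
        scores = {i: len(labels & set(seg.lower().split()))
                  for i, seg in enumerate(text_segments)}
        links[img] = max(scores, key=scores.get)
    return links

texts = ["a red car parked on the street", "two dogs playing in the park"]
print(link_segments({"img1.jpg": {"dog", "park"}}, texts))  # → {'img1.jpg': 1}
```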
Title: A Novel Imaging Approach of Web Documents based on Semantic Inclusion of Textual and Non-Textual Information. AASRI Procedia, Vol. 9, pp. 31-36.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.09.018
Ikuo Tanabe, Hideki Sakamoto, Kazuhide Miyamoto
Software implementing an innovative tool based on the Taguchi methods is developed and evaluated. The tool runs two trials: the first selects the important control factors and their optimum regions, and the second determines the optimum combination of those factors through a more detailed trial using only the important control factors. To evaluate the tool, the optimum cooling condition for cutting was investigated experimentally. It is concluded from the results that (1) the innovative tool using the Taguchi methods was useful for short-term, lower-cost development, and (2) the tool could quickly and accurately determine the optimum cooling condition.
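The factor-ranking step of a Taguchi trial is driven by the signal-to-noise (S/N) ratio of each experimental condition. As a minimal sketch (the temperature values below are invented for illustration, and the paper's software automates this over an orthogonal array):

```python
import math

# Smaller-is-better S/N ratio from the Taguchi methods:
# S/N = -10 * log10(mean of squared responses).

def sn_smaller_is_better(results):
    """Compute the S/N ratio for a characteristic where smaller values
    are better, e.g. cutting-zone temperature under a cooling condition."""
    return -10 * math.log10(sum(y * y for y in results) / len(results))

# Two hypothetical cooling conditions, three repetitions each; the
# condition with the higher S/N ratio is the more robust choice.
for name, ys in {"dry": [41.0, 43.0, 40.0], "mist": [22.0, 21.0, 23.0]}.items():
    print(name, round(sn_smaller_is_better(ys), 2))
```

The first trial compares S/N ratios across factor levels to keep only the influential factors; the second trial repeats the computation on a finer grid of those factors.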
Title: Development of Innovative Tool Using Taguchi-methods. AASRI Procedia, Vol. 9, pp. 107-113.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.09.015
Mohammad Farukh Hashmi, Vijay Anand, Avinas G. Keskar
In the present digital world, digital images and videos are the main carriers of information. However, these sources can easily be tampered with using readily available software, making the authenticity and integrity of digital images an important concern. In most cases, copy-move forgery is used to tamper with digital images. As a solution, we propose a method for copy-move forgery detection that can withstand various pre-processing attacks by combining the Dyadic Wavelet Transform (DyWT) and the Scale Invariant Feature Transform (SIFT). First, DyWT is applied to the image to decompose it into four sub-bands: LL, LH, HL, and HH. Since the LL sub-band contains most of the information, SIFT is applied to the LL sub-band only to extract keypoints and compute their descriptor vectors; similarities between descriptor vectors then indicate copy-move tampering in the image. By using DyWT with SIFT, a larger number of matched keypoints is extracted, enabling more reliable detection of copy-move forgery.
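The paper's detector builds on DyWT and SIFT, which need image libraries; as a self-contained illustration of the underlying idea — finding duplicated regions inside one image — here is the simplest copy-move detector, exact matching of small blocks via hashing, on a grayscale image stored as a 2-D list. This is a simplified stand-in, not the authors' method.

```python
# Naive copy-move detection: hash every b-by-b block and report pairs of
# identical blocks at different positions (candidate copied regions).

def find_copied_blocks(img, b=2):
    """Return pairs of top-left (row, col) coordinates of identical
    b-by-b blocks in a grayscale image given as a list of rows."""
    seen, pairs = {}, []
    h, w = len(img), len(img[0])
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            key = tuple(tuple(row[x:x + b]) for row in img[y:y + b])
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

img = [[0, 0, 9, 9],
       [5, 6, 9, 9],
       [5, 6, 0, 0],
       [7, 8, 5, 6]]
print(find_copied_blocks(img))
```

Exact matching breaks down as soon as the copied region is compressed, scaled, or rotated — which is precisely why the paper moves to wavelet sub-bands and SIFT descriptors, both robust to such pre-processing attacks.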
Title: Copy-move Image Forgery Detection Using an Efficient and Robust Method Combining Un-decimated Wavelet Transform and Scale Invariant Feature Transform. AASRI Procedia, Vol. 9, pp. 84-91.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.09.005
Peter Sykora, Patrik Kamencay, Robert Hudec
In this paper, a comparison between two popular feature extraction methods is presented: the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). The two methods are tested on a set of depth maps containing ten defined gestures of the left hand, captured with a Microsoft Kinect camera [1]. A support vector machine (SVM) is used as the classifier, and the results report the accuracy of the SVM predictions on the selected images.
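SIFT and SURF differ in how descriptors are computed, but both pipelines match descriptors the same way; a standard criterion is Lowe's ratio test. A stdlib-only sketch with toy 2-D descriptors (real SIFT descriptors are 128-dimensional, SURF typically 64):

```python
import math

# Lowe's ratio test: accept a match only when the nearest training
# descriptor is clearly closer than the second nearest.

def ratio_test_matches(query, train, ratio=0.75):
    """query/train: lists of descriptor vectors (tuples of floats).
    Returns a list of (query index, train index) accepted matches."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((math.dist(q, t), ti) for ti, t in enumerate(train))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

query = [(1.0, 0.0), (0.5, 0.5)]
train = [(1.0, 0.1), (0.0, 1.0), (5.0, 5.0)]
print(ratio_test_matches(query, train))  # → [(0, 0)]
```

The second query descriptor is rejected because its two nearest neighbours are nearly equidistant — an ambiguous match that would otherwise add noise to the classifier's input.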
Title: Comparison of SIFT and SURF Methods for Use on Hand Gesture Recognition based on Depth Map. AASRI Procedia, Vol. 9, pp. 19-24.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.05.024
Chuan Qin, Wenguang Liu, Wenfu He
In this paper, a new friction damper isolation system (FDIS) is proposed for base-isolated nuclear power plants (NPPs). Seismic responses of the NPPs are computed using the finite element approach with a representative multi-particle model. Time-domain analysis shows that a structure supported by the FDIS responds like a fixed-base structure under small earthquakes and behaves like a conventional isolated structure under large earthquakes. The yield force of the friction damper is one of the key parameters governing the responses and the energy absorbed from the seismic input in the new isolated structure. Comparing cases with different yield levels, the responses of the superstructure increase with the yield force, while the displacements of the isolation layer decrease effectively. The proposed isolation system could therefore enhance the seismic safety of isolated NPPs.
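The role of the yield force can be seen in the idealized hysteresis of a friction damper: elastic response up to the yield force, then slipping at constant force. A minimal sketch, with stiffness and yield values chosen purely for illustration (not from the paper):

```python
# Elastic-perfectly-plastic friction damper: the force grows elastically
# with stiffness k until it reaches +/- f_y (the yield force), then the
# damper slips at constant force, dissipating energy.

def damper_force_history(displacements, k, f_y):
    """March through a displacement history (metres) and return the
    damper force at each step."""
    forces, f, prev = [], 0.0, 0.0
    for u in displacements:
        f = f + k * (u - prev)          # elastic trial force
        f = max(-f_y, min(f_y, f))      # slip: cap at the yield force
        forces.append(f)
        prev = u
    return forces

hist = damper_force_history([0.01, 0.02, 0.03, 0.02], k=1000.0, f_y=15.0)
print([round(f, 2) for f in hist])  # → [10.0, 15.0, 15.0, 5.0]
```

A larger f_y transmits more force into the superstructure (larger response) but restrains the isolation layer more (smaller displacement) — the trade-off the paper's parametric study quantifies.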
Title: Seismic Response Analysis of Isolated Nuclear Power Plants with Friction Damper Isolation System. AASRI Procedia, Vol. 7, pp. 26-31.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.08.003
Navin R. Singh , Edith M. Peters
The aim of this study was to assess the efficacy of using artificial neural networks (ANNs) to classify hydration status and predict the fluid requirements of endurance athletes. Hydration classification models were built using a total of 237 data sets obtained from 148 participants (106 males, 42 females) in field and laboratory studies involving running or cycling. 116 data sets obtained from athletes who completed endurance events euhydrated (plasma osmolality: 275-295 mmol.kg-1) following ad libitum replenishment of fluid intake were used to design the prediction models. A filtering algorithm was used to determine the optimal inputs to the models from a selection of 13 anthropometric, exercise performance, fluid intake and environmental factors. The combination of gender, body mass, exercise intensity and environmental stress index in the prediction model generated a root mean square error of 0.24 L.h-1 and a correlation of 0.90 between predicted and actual drinking rates of the euhydrated participants. Additional inclusion of actual fluid intake resulted in a model that was 89% accurate in classifying the post-exercise hydration status of athletes. These findings suggest that the ANN modelling technique has merit in the prediction of fluid requirements and as a supplement to ad libitum fluid intake practices.
Title: Artificial Neural Networks in the Determination of the Fluid Intake Needs of Endurance Athletes. AASRI Procedia, Vol. 8, pp. 9-14.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.05.032
L.C. Ribeiro , E.L. Bonaldi , L.E.L. de Oliveira , L.E. Borges da Silva , C.P. Salomon , W.C. Santana , J.G. Borges da Silva , G. Lambert-Torres
This paper presents equipment for predictive maintenance of large hydrogenerators. The equipment applies digital signal processing to the electrical variables involved in the operation of the generator: the generator's current and voltage signals are monitored, and electric signature analysis techniques are applied. The central idea is to combine current signature analysis (CSA), voltage signature analysis (VSA) and the Enhanced Park's Vector Approach (EPVA) to separate the signal spectra and detect frequencies related to electrical and mechanical defects of the generator-turbine set. This is possible because the generator is essentially a device handling magnetic fields, so it is reasonable to infer that any operating condition somehow influences the behavior of the magnetic field, which is noticeably reflected in variations of the voltage and current signals the generator provides. The challenge is to detect these variations, some of which lie below the existing noise level, and relate them to the defects they represent. The paper presents a real implementation on a hydrogenerator at the Itapebi Power Plant, Brazil.
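At its core, electric signature analysis inspects the spectrum of the current or voltage signal for fault-related components near the supply frequency. A stdlib-only DFT sketch on a synthetic 50 Hz current carrying a 40 Hz sideband (frequencies and amplitudes are illustrative, not from the paper):

```python
import cmath
import math

# Plain O(n^2) DFT returning magnitudes for the first n/2 bins; with one
# second of data each bin k corresponds to k Hz.

def dft_magnitudes(samples):
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))) / n
            for k in range(n // 2)]

fs, n = 200, 200                       # 200 Hz sampling, 1 s of data
signal = [math.sin(2 * math.pi * 50 * t / fs) +
          0.2 * math.sin(2 * math.pi * 40 * t / fs) for t in range(n)]
mags = dft_magnitudes(signal)
peaks = sorted(range(len(mags)), key=mags.__getitem__, reverse=True)[:2]
print(sorted(peaks))                   # → [40, 50]
```

Production systems use the FFT and much longer records for frequency resolution, and EPVA additionally combines the three phase currents before the spectral step; the principle of locating defect frequencies in the spectrum is the same.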
Title: Equipment for Predictive Maintenance in Hydrogenerators. AASRI Procedia, Vol. 7, pp. 75-80.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.09.012
Kusuma Mohanchandra , Snehanshu Saha
EEG is an extensively used and powerful tool for brain-computer interfaces due to its good temporal resolution and ease of use. Multichannel EEG recordings produce huge volumes of data and often impose a high computational burden. An optimal subset of electrodes that captures the brain signals relevant to the task can be used instead, excluding redundant and non-contributing electrodes. In this study, we propose an optimization technique based on the common spatial pattern for channel selection. The optimization is implemented as a sequential quadratic programming problem with fast convergence. Extensive experiments show that the proposed method yields a large variance between two brain-activity tasks related to subvocalized speech.
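Common spatial patterns discriminate two tasks by finding directions where one task's signal variance is large and the other's is small; channel selection keeps the electrodes that dominate those directions. As a much-simplified stand-in for the paper's SQP formulation (an assumed illustration, not their method), channels can be ranked by their per-channel between-task variance ratio:

```python
import statistics

# Rank EEG channels by how differently they vary between two tasks:
# a ratio far from 1 (in either direction) means the channel separates
# the tasks well; a ratio near 1 means it contributes little.

def rank_channels(task_a, task_b):
    """task_a/task_b: dict channel name -> list of samples per task.
    Returns channel names sorted from most to least discriminative."""
    ratio = {ch: statistics.pvariance(task_a[ch]) /
                 statistics.pvariance(task_b[ch]) for ch in task_a}
    return sorted(ratio, key=lambda ch: max(ratio[ch], 1 / ratio[ch]),
                  reverse=True)

a = {"C3": [1.0, -1.0, 1.0, -1.0], "Cz": [0.1, -0.1, 0.1, -0.1]}
b = {"C3": [0.1, -0.1, 0.1, -0.1], "Cz": [0.1, -0.1, 0.1, -0.1]}
print(rank_channels(a, b))            # → ['C3', 'Cz']
```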
Title: Optimal Channel Selection for Robust EEG Single-trial Analysis. AASRI Procedia, Vol. 9, pp. 64-71.
Pub Date: 2014-01-01 | DOI: 10.1016/j.aasri.2014.05.017
Riccardo Dodi , Federica Ferraguti , Asko Ristolainen , Cristian Secchi , Alberto Sanna
New technological methods to assist percutaneous cryoablation procedures are presented here: planning software and a simulation algorithm. The former calculates a feasible placement of the cryoprobes to ensure effective ablation of the lesion while satisfying well-specified procedural constraints. Starting from intra-operative CT scans of the patient, a virtual model of the anatomical site is built and loaded. The placement of the cryoprobes is computed so that the developed iceball covers the whole volume of the tumour while minimizing damage to the surrounding healthy renal tissue. The simulation algorithm, in turn, is a graphical tool for assessing the temperature distribution throughout the procedure. A discrete iterative function calculates the heat transfer from the probes to the surrounding tissue within a specified three-dimensional grid; isolating significant isotherms helps assess whether the whole tumour will be frozen. Using a real intra-operative dataset from a successful percutaneous cryoablation, the volume of the real iceball was matched against that generated by the simulator, showing good accuracy in dimension and shape. Although designed to be integrated within a robotic system, the method is usable and extensible for different purposes and can be adapted to simulate other scenarios or procedures.
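The discrete iterative heat-transfer update the paper describes can be sketched with an explicit finite-difference scheme. For brevity this sketch is 1-D (the paper uses a 3-D grid), and the diffusion coefficient, probe temperature and node count are illustrative assumptions, not the paper's values:

```python
# One explicit finite-difference step of the heat equation on a 1-D grid
# of tissue nodes, with the cryoprobe node held at a fixed temperature.

def step(T, alpha=0.2, probe_idx=0, probe_t=-40.0):
    """T_i += alpha * (T_{i-1} - 2*T_i + T_{i+1}); boundary nodes and
    the probe node are held fixed. alpha must stay <= 0.5 for stability."""
    T = T[:]
    T[probe_idx] = probe_t
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + alpha * (T[i - 1] - 2 * T[i] + T[i + 1])
    return new

T = [37.0] * 10            # tissue initially at body temperature, deg C
for _ in range(200):
    T = step(T)
# Nodes near the probe cool toward the lethal isotherm; checking which
# nodes fall below e.g. -20 C answers "will this voxel be frozen?".
print([round(t) for t in T[:4]])
```

The simulator's 3-D version applies the same update over every voxel with one term per grid axis, then traces the isotherm surfaces to compare against the tumour volume.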
Title: Planning and Simulation of Percutaneous Cryoablation. AASRI Procedia, Vol. 6, pp. 118-122.