Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.244966
M. Bentum, Robert G. J. Arendsen, C. Slump, C. Mistretta, W. Peppler, F. Zink
At the Department of Medical Physics at the University of Wisconsin-Madison, research on dual energy chest imaging, including algorithm development and patient studies, is carried out on a Pixar image processor computer. In the project described here, a study was made to determine which low-cost system could replace the Pixar and provide high-speed dual energy image processing. The dual energy algorithm was analyzed, and the user and system requirements were obtained. A single workstation (e.g. a Sun Sparc Station 2) does not provide enough processing power, so accelerator boards for the workstation were reviewed. A prototype system was developed using an i860-based accelerator board, the CSPI SuperCard-1, in a Sun 3/150 host computer. Bare computation time for the dual energy algorithm was reduced from 25 minutes on the Pixar image computer to less than three minutes on the SuperCard-1 processor board.
Title: "Design and realization of high speed single exposure dual energy image processing" (in: [1992] Proceedings Fifth Annual IEEE Symposium on Computer-Based Medical Systems)
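The abstract does not spell out the dual energy algorithm itself; a common single-exposure dual energy idea is weighted log subtraction of the high- and low-energy images, with the weight chosen to cancel one material. The sketch below is a minimal illustration under assumed, made-up attenuation coefficients (the `mu` values and the single-material beam model are not from the paper):

```python
import numpy as np

# Illustrative (not measured) linear attenuation coefficients per cm,
# as (high-energy, low-energy) pairs for two materials.
mu = {"tissue": (0.20, 0.25), "bone": (0.50, 0.90)}

def transmitted(material, thickness_cm):
    """Transmitted intensity fractions at the two beam energies."""
    mh, ml = mu[material]
    return np.exp(-mh * thickness_cm), np.exp(-ml * thickness_cm)

# Weight chosen so that soft tissue cancels in the log-subtracted image.
w = mu["tissue"][0] / mu["tissue"][1]

def bone_signal(i_high, i_low):
    """Weighted log subtraction: zero for tissue, positive where bone is."""
    return np.log(i_high) - w * np.log(i_low)

ih_t, il_t = transmitted("tissue", 3.0)   # 3 cm of tissue only
ih_b, il_b = transmitted("bone", 1.0)     # 1 cm of bone only
```

Here `bone_signal(ih_t, il_t)` vanishes by construction, while the bone path leaves a positive residual; real dual energy processing additionally has to handle scatter, noise, and beam hardening.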
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.245008
E. Miskiel, Ö. Özdamar, D. Oller, R. Eilers
An innovative feature detection tactile vocoder based on linear predictive coding (LPC) processing algorithms is described. The device utilizes the 80386 microprocessor and the TMS320C30 floating-point digital signal processor. Speech processing is implemented in real time. The vocoder algorithms allow the detection of spectral peaks corresponding to the first three formants of the vocal tract, as well as the fundamental frequency. This LPC approach enhances the ability of vocoders to track formant transitions and fundamental frequency, and it minimizes spatiotemporal masking.
Title: "Digital signal processor-based feature extraction vocoder for the deaf"
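As a sketch of the LPC idea behind such a vocoder (not the paper's implementation): an autocorrelation-method LPC fit turns spectral peaks into root angles of the predictor polynomial. Here an order-2 fit recovers the frequency of a synthetic tone; the sampling rate and tone frequency are assumed values.

```python
import numpy as np

fs = 8000.0                      # assumed sampling rate (Hz)
f0 = 1000.0                      # assumed tone frequency (Hz)
n = np.arange(4096)
x = np.cos(2 * np.pi * f0 / fs * n)

# Biased autocorrelation at lags 0..2; order 2 suffices for one resonance.
r = np.array([np.dot(x[:x.size - k], x[k:]) for k in range(3)]) / x.size

# Autocorrelation (Yule-Walker) method: solve R a = r for the predictor.
R = np.array([[r[0], r[1]],
              [r[1], r[0]]])
a = np.linalg.solve(R, r[1:])

# Roots of A(z) = 1 - a1/z - a2/z^2 sit at the spectral peak's angle.
roots = np.roots(np.concatenate(([1.0], -a)))
f_est = abs(np.angle(roots[0])) / (2 * np.pi) * fs
```

A real formant tracker would use a higher order (for instance 10 to 14 at 8 kHz sampling) and read off the angles of the complex root pairs with the largest radii as formant candidates.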
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.244929
G. Rom, G. Jöchtl, G. Pfurtscheller
A system is described which is intended to assist the physician during an EMG (electromyographic) examination and in the diagnostic process, and to help reduce the examination time. The system is realized on a multiprocessor environment (a transputer network) as a fully integrated system on which tasks such as data acquisition, real-time signal display, signal processing and analysis with different methods, the expert system, and general program management run in parallel and communicate easily with one another. The prototype of the expert system consists of about 200 rules, which are responsible for planning the examination process, performing the parameter classification, and giving information about the type of lesion.
Title: "A transputer-based EMG-system with integrated knowledge-base for diagnostic-support"
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.245035
T. Masuzawa, Y. Taenaka, M. Kinoshita, T. Nakatani, H. Takano, Y. Fukui
An electrohydraulic totally implantable artificial heart with a motor-integrated regenerative pump was developed. The system consists of left and right diaphragm-type blood pumps, implanted in the thorax, with a separately placed electrohydraulic actuator in the abdominal region. The blood pump was designed for an appropriate anatomical fit in a human thorax. The actuator is a regenerative pump able to pump fluid against a high head; its height, diameter, and weight are 32.5 mm, 73 mm, and 350 g, respectively. The rotor magnet of the brushless DC motor is embedded in the impeller of the regenerative pump to miniaturize the actuator and to increase durability by reducing the number of moving parts. A 32-bit microcomputer controls the motor of the actuator. A detection algorithm for the pumping condition was developed using the TMSTR (time-sequential multiple-state transition representation) linguistic technique to control the artificial heart correctly and safely. The feasibility of the devices was confirmed by in vivo and in vitro experiments.
Title: "An electrohydraulic totally implantable artificial heart with a motor-integrated regenerative pump and its computer control"
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.244968
W. Geckle, Z. Szabo
Physiologic factor analysis (PFA) has been applied to a set of dynamic positron emission tomography (PET) images to extract fundamental kinetic functions useful in compartmental modeling of PET data. The study was conducted to investigate PFA as a means of improving compartment models of tracer kinetics for the estimation of neuroreceptor binding characteristics from dynamic PET data. PFA-derived factors avoid the problem of overlapping tissue types in compartmental estimates and, because PFA is an automated method, also avoid errors in operator definition of regions of interest. Three factors were estimated: the first was identified as a sample of the free-tracer-in-tissue compartment and accounted for a mean contribution of 41% to the total factor representation of the data; the second was identified as bound radioligand, with a 33% mean contribution; and the third was identified as free tracer in blood, with a 26% mean contribution. The PFA results obtained from the 14 human PET studies were compared to published results from animal studies using the same radioligand, in which tissue samples were analyzed for radioactivity. The time-dependent behavior of compartmental activity in the two cases was similar.
Title: "Physiologic factor analysis (PFA) and parametric imaging of dynamic PET images"
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.245023
A. Abdel-Malek, J. Bloomer, F. Yassa
Presents an approach for reducing the X-ray absorbed dose during cardiac fluoroscopic interventional procedures. The approach hinges on two main concepts: (1) adapting the X-ray pulse rate to the activity of the organ under investigation (the heart); and (2) maintaining the appearance of a 30-frame/s display rate to the viewer. The first concept was accomplished by processing multiple-sensor information to determine the onset of the various phases of ventricular motion within the cardiac cycle. For each detected phase of the cardiac cycle, a specific tube pulse rate is assigned or automatically determined (after a learning period), such that high-activity phases receive a higher pulse rate than low-activity phases. To maintain a 30-frame/s display rate to the viewer, a last-frame-hold approach was used; the resulting sequence shows minimal jerkiness artifacts from the adaptive, motion-dependent sampling strategy. Preliminary results of the proposed system indicate the possibility of a three-to-one reduction in the tube pulse rate, which translates to a dose reduction of a similar ratio.
Title: "Adaptive pulse rate scheduling for reduced dose X-ray cardiac interventional fluoroscopic procedures"
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.244958
Mengyang Liao, Xinliang Li, Jiamei Qin, Sixian Wang
SGLD (spatial gray level dependence) matrices are used to analyze the B-scan images of 23 samples of normal stomach coats and 14 samples of cancerous stomach coats. From these matrices, the values of eight texture features are computed for each sample image, giving two groups of conditional frequency distributions. On the basis of these distributions, the authors evaluate a quality measure for each feature that reflects its error probability in discriminating between the pattern classes. By comparing these quality measurements, the authors select the features most effective in discriminating between a normal stomach and a cancerous stomach. The evaluation methods include normal-distribution hypothesis testing and t-testing. The experimental results indicate that the selected texture features can be applied to an automatic diagnosis system in the near future.
Title: "The extraction of the best SGLD texture features in the ultrasound B-scan images of cancered stomach coats"
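For readers unfamiliar with SGLD (co-occurrence) matrices, a minimal sketch: count how often gray levels i and j co-occur at a fixed pixel offset, normalize to a joint probability, and derive texture features from it. The two features below (energy and contrast) are standard examples; the paper's exact eight-feature set is not given in the abstract.

```python
import numpy as np

def sgld_matrix(img, dx=1, dy=0, levels=4):
    """Co-occurrence counts of gray-level pairs at offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()               # normalize to a joint probability

def texture_features(p):
    """Two classic SGLD features: energy (uniformity) and contrast."""
    i, j = np.indices(p.shape)
    return float(np.sum(p ** 2)), float(np.sum((i - j) ** 2 * p))

flat = np.zeros((8, 8), dtype=int)           # uniform patch
board = np.indices((8, 8)).sum(axis=0) % 2   # checkerboard patch
e_flat, c_flat = texture_features(sgld_matrix(flat))
e_board, c_board = texture_features(sgld_matrix(board))
```

The uniform patch concentrates all probability in a single matrix cell (maximal energy, zero contrast), while the checkerboard's horizontal neighbors always differ by one gray level, so its contrast is high.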
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.244944
N. Tavakoli
The information content of magnetic resonance images is analyzed with the aim of achieving more compression with existing algorithms. It is found that the first five bits of all images contain no information and can therefore be compressed away entirely. The experiments also showed that segmenting and transforming the image can change the entropy, and therefore the compression rate. A vertical (along the z axis) segmentation method is presented which can improve the compression rate by 10%; other segmentation methods (along the x or y axes) can also be studied. The transformation method used was a simple differential coding scheme, which yielded almost no improvement in compression. This is an interesting observation, since the same transformation can drastically improve the compression of other types of data, such as satellite imagery.
Title: "Analyzing information content of MR images"
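The entropy-based reasoning can be reproduced in a few lines: estimate the Shannon entropy of the sample values, then of their first differences. For data with strong neighbor correlation the difference signal has far lower entropy, which is exactly the effect differential coding exploits (the synthetic ramp below stands in for image rows; it is not MR data):

```python
import numpy as np

def entropy_bits(values):
    """Shannon entropy (bits/symbol) of a discrete sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# A slowly increasing signal: neighboring samples are highly correlated.
row = np.cumsum(rng.integers(0, 3, size=4096))
h_raw = entropy_bits(row)            # many distinct values -> high entropy
h_diff = entropy_bits(np.diff(row))  # differences live in {0, 1, 2}
```

For MR images the paper reports the opposite outcome, almost no entropy gain from differencing, which is what makes its observation noteworthy.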
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.244923
Mengyang Liao, Jiamei Qin, Y. Tan
The simultaneous autoregressive (SAR) model is used to describe texture. The authors also propose using the least-squares method to estimate six SAR parameters. Based on the SAR model and the parameter estimation method, experiments have been done to classify and segment images of various natural textures and human B-scan images. Excellent results have been obtained.
Title: "Texture classification and segmentation using simultaneous autoregressive random model"
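A least-squares SAR fit reduces to ordinary linear regression of each pixel on its neighbors. The sketch below uses two causal neighbors instead of the paper's six parameters, on a synthetic texture with known weights, purely to illustrate the estimation step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthesize a texture from a causal autoregression with known weights.
true_w = np.array([0.5, 0.3])            # left neighbor, upper neighbor
img = rng.standard_normal((64, 64)) * 0.1
for y in range(1, 64):
    for x in range(1, 64):
        img[y, x] += true_w[0] * img[y, x - 1] + true_w[1] * img[y - 1, x]

# Least squares: regress each interior pixel on its left and upper neighbors.
target = img[1:, 1:].ravel()
A = np.column_stack([img[1:, :-1].ravel(), img[:-1, 1:].ravel()])
w_hat, *_ = np.linalg.lstsq(A, target, rcond=None)
```

With six neighbors the design matrix simply gains four more columns; classification and segmentation then compare fitted parameter vectors across image regions.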
Pub Date: 1992-06-14 | DOI: 10.1109/CBMS.1992.244948
S. Dharanipragada, K. Arun
The problem of tomographic reconstruction from a few discrete projections is addressed. When the projection data are discrete and few in number, the image formed by the convolution back-projection algorithm may not be consistent with the observed projections and is known to exhibit artifacts. Hence, the problem formulated here is one of finding an image that is closest to a nominal image and is consistent with the projection data and other convex constraints, such as positivity. The measure of closeness used is a Hilbert space norm, typically a weighted sum or integral of squares, with weights chosen to reflect the expected deviation from the nominal in different regions. In the absence of constraints, this approach leads to a direct, noniterative algorithm (based on a simple matrix-vector computation) for constructing the image. When additional convex constraints such as positivity and upper bounds must be enforced on the reconstructed image to improve resolution, a quadratically convergent Newton algorithm is suggested.
Title: "Minimum square-deviation tomographic reconstruction from few projections"
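In the unconstrained case the minimum-deviation reconstruction is a direct linear computation. A toy sketch (a 2x2 image observed through row and column sums, an illustrative geometry rather than the paper's) uses the pseudo-inverse to produce the minimum-norm image consistent with the projections:

```python
import numpy as np

x_true = np.array([1.0, 2.0, 3.0, 4.0])   # flattened [[1, 2], [3, 4]]
P = np.array([
    [1, 1, 0, 0],   # row 0 sum
    [0, 0, 1, 1],   # row 1 sum
    [1, 0, 1, 0],   # column 0 sum
    [0, 1, 0, 1],   # column 1 sum
], dtype=float)
b = P @ x_true      # the four observed projections (rank 3: sums overlap)

# Minimum-norm image consistent with the projection data.
x_hat = np.linalg.pinv(P) @ b
```

`x_hat` always satisfies `P @ x_hat = b`; in this particular toy case the true image happens to lie in the row space of `P`, so the minimum-norm solution recovers it exactly. The paper's weighted closest-to-nominal variant replaces the plain pseudo-inverse with a weighted one, and positivity or upper-bound constraints call for its iterative Newton step.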