Colorectal cancer is a complex disease in which uncontrolled growth of abnormal cells occurs in the large intestine (colon or rectum). Tumor-specific antigens (neoantigens), molecules that interact with the immune system, have been extensively explored as the basis of a possible therapy known as a cancer vaccine, which can be designed in silico. Cancer vaccine studies have been enabled by current high-throughput DNA sequencing technologies; however, there is no universal bioinformatic protocol for studying tumor antigens with DNA sequencing data. We propose a bioinformatic protocol to detect tumor-specific antigens associated with single nucleotide variants (SNVs), or “mutations”, in colorectal cancer and their interaction with HLA alleles (complexes that present antigens to immune cells) that are frequent in the Costa Rican Central Valley population. We used public human exome data (DNA regions that produce functional products, including proteins). A variant calling analysis was implemented to detect tumor-specific SNVs in comparison to healthy tissue. We then predicted and analyzed the peptides (protein fragments, the tumor-specific antigens) derived from these variants in the context of their affinity with frequent HLA class I alleles of the Costa Rican population. We found 28 non-silent SNVs, present in 26 genes. The protocol yielded 23 strong-binder peptides derived from the SNVs for alleles frequent in the Costa Rican population (frequency greater than 8%) at the HLA-A, B, and C loci. We conclude that the standardized protocol was able to identify neoantigens, a first step toward the eventual design of a colorectal cancer vaccine for Costa Rican patients. To our knowledge, this is the first in silico cancer vaccine study using DNA sequencing data in the context of Costa Rican HLA alleles.
Diego Morazán-Fernández & J. Molina-Mora (2022). Colorectal cancer vaccines: in silico identification of tumor-specific antigens associated with frequent HLA-I alleles in the Costa Rican Central Valley population. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6458
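A minimal sketch of the final filtering step described above: keeping only "strong binder" peptides by predicted binding rank. The thresholds follow the common NetMHCpan convention (percentile rank ≤ 0.5% strong, ≤ 2% weak); the peptide sequences, alleles, and ranks below are invented placeholders, not the study's results.

```python
# Toy prediction table: (peptide, HLA allele, predicted %rank).
predictions = [
    {"peptide": "KLDFGTSYV", "allele": "HLA-A*02:01", "rank": 0.3},
    {"peptide": "QMRESTAVL", "allele": "HLA-B*35:01", "rank": 1.7},
    {"peptide": "YLWDQTSRF", "allele": "HLA-C*04:01", "rank": 8.2},
]

def classify(rank, strong=0.5, weak=2.0):
    """Label a peptide by its percentile rank (NetMHCpan-style cutoffs)."""
    if rank <= strong:
        return "strong"
    if rank <= weak:
        return "weak"
    return "non-binder"

strong_binders = [p for p in predictions if classify(p["rank"]) == "strong"]
print([p["peptide"] for p in strong_binders])  # ['KLDFGTSYV']
```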
Alejandro Chacón-Vargas, Daniel Pérez-Conejo, Marvin Coto-Jiménez
Speaker diarization is the task of automatically identifying speaker identities and detecting their speaking times in an audio recording. Several algorithms have improved the performance of this task in recent years. However, performance challenges remain in interaction scenarios, such as between a child and an adult, where interruptions, fillers, laughs, and other elements may affect the detection and clustering of segments. In this work, we perform an exploratory study with two diarization algorithms on child-adult interactions recorded in a studio and assess the effectiveness of the algorithms across age groups and genders. All participants are native Costa Rican Spanish speakers. The children are between 3 and 14 years old, and the interactions combine guided repetition of words or short phrases with natural speech. The results demonstrate how age affects diarization performance, in both cluster purity and speaker purity, in a direct but non-linear fashion.
Alejandro Chacón-Vargas, Daniel Pérez-Conejo & Marvin Coto-Jiménez (2022). Assessing the effectiveness of diarization algorithms in Costa Rican children-adult speech according to age group and gender. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6443
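The two metrics named in the abstract above, cluster purity and speaker purity, can be sketched frame-by-frame as follows. The reference and hypothesis label sequences are toy data, not the study's recordings.

```python
from collections import Counter

# Toy frame labels: who really spoke vs. which cluster the system assigned.
ref = ["child", "child", "adult", "adult", "adult", "child"]
hyp = [0, 0, 1, 1, 0, 0]

def cluster_purity(ref, hyp):
    """For each hypothesis cluster, count frames of its dominant speaker."""
    total = 0
    for c in set(hyp):
        speakers = Counter(r for r, h in zip(ref, hyp) if h == c)
        total += speakers.most_common(1)[0][1]
    return total / len(ref)

def speaker_purity(ref, hyp):
    """Symmetric view: for each speaker, frames in its dominant cluster."""
    return cluster_purity(hyp, ref)

print(cluster_purity(ref, hyp), speaker_purity(ref, hyp))  # both 5/6 here
```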
The use of footsteps as a biometric has a short history of about two decades. Identification of a person is based on the study of footstep signals captured while walking over a sensing area, registering sound, pressure, vibration, or a combination of these measures. Applications of this biometric arise in security systems that identify persons entering or leaving a space, and in assisting elderly and disabled persons. In this paper, we focus on pure audio signals of footsteps and the robustness of person classification under noisy conditions. We present a comparison between four well-known classifiers and three kinds of noise, applied at different signal-to-noise ratios. Results are reported in terms of accuracy in detecting users, showing different levels of sensitivity according to the kind and level of noise.
Marisol Zeledón-Córdoba, Carolina Paniagua Peñaranda & Marvin Coto-Jiménez (2022). An experimental study on footsteps sound recognition as biometric under noisy conditions. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6467
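Mixing noise into a clean recording at a controlled signal-to-noise ratio, as in the experiments above, can be sketched as follows. The sinusoidal "footstep" signal and Gaussian noise are synthetic stand-ins for the real audio.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale the noise so that 10*log10(P_signal / P_noise) equals snr_db."""
    ps = np.mean(signal ** 2)
    pn = np.mean(noise ** 2)
    scale = np.sqrt(ps / (pn * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 8000))  # stand-in signal
noise = rng.standard_normal(8000)                         # white noise
noisy = mix_at_snr(clean, noise, snr_db=10)               # 10 dB mixture
```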
Capability maps are an important tool for enabling robots to understand their bodies by representing the dexterity of their arms. They are usually treated as static data structures because of how computationally intensive they are to generate. We present a method for generating capability maps that takes advantage of the parallelization modern GPUs offer, so that these maps are generated approximately 50 times faster than in previous implementations. This system could be used in situations where the robot has to generate these maps quickly, for example when using unknown tools.
Daniel García Vaglio & Federico Ruiz Ugalde (2022). GPU based approach for fast generation of robot capability representations. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6449
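The batched idea behind fast capability-map generation can be illustrated on the CPU with vectorized NumPy as a stand-in for GPU kernels: sample many joint configurations at once, push them through forward kinematics, and bin end-effector positions into workspace voxels. The 2-link planar arm and grid resolution are illustrative choices, not the paper's setup.

```python
import numpy as np

# Illustrative 2-link planar arm (link lengths are invented).
L1, L2 = 1.0, 0.7
rng = np.random.default_rng(42)
q1, q2 = rng.uniform(-np.pi, np.pi, (2, 100_000))  # batch of joint samples

# Vectorized forward kinematics for the whole batch at once.
x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)

# Count samples per workspace voxel; normalized counts give a crude
# reachability score per cell of the map.
counts, _, _ = np.histogram2d(x, y, bins=40, range=[[-1.8, 1.8], [-1.8, 1.8]])
capability = counts / counts.max()
print(capability.shape)  # (40, 40)
```

On a GPU the same per-sample arithmetic maps naturally onto one thread per configuration, which is where the reported speedup comes from.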
Parkinson’s Disease (PD) is one of the most serious neurodegenerative diseases and generates considerable discussion on social networks. Building on medical lexicons, few approaches have been extended to leverage sentiment information, which clearly reflects a patient’s health status as observed in related narratives. It has become crucial to analyze online narratives and detect sentiment in patients’ self-reports. In this paper, we propose an automatic concept-level neural network method for distilling genuine sentiment in patients’ notes, classifying medical polar facts into true positives and true negatives. Toward building an emotion-aware assistive method from digests of Parkinson’s Disease daily narratives, we characterize polar facts of a defined medical configuration space through distributed biomedical representations at the concept level, associated with real-world entities, which are used to quantify the emotional status of the speaker’s context. We conduct comparisons with state-of-the-art neural network algorithms and biomedical distributed systems. Finally, we achieve 85.3% accuracy, and the approach shows a good understanding of medical natural-language concepts.
Hanane Grissette & El Habib Nfaoui (2022). The impact of social media messages on Parkinson’s disease treatment: detecting genuine sentiment in patient notes. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6441
Chandrasen Pandey, Neeraj Baghel, Malay Kishore-Dutta, Carlos M. Travieso González
Back pain is a common condition that affects people of all ages and is associated with disorders such as obesity, slipped disc, scoliosis, and osteoporosis. Diagnosing back pain disorders is difficult due to the extent affected by the disorder and the exact biomechanical factors involved. This work presents a machine learning method to diagnose these disorders using a gait monitoring system. It employs support vector machines that classify between lower back pain and normal gait on the basis of three gait measures: integrated pressure, direction of progression, and CISP-ML. The proposed method uses 13 features, such as mean and standard deviation, recorded from 62 subjects (30 normal and 32 with lower back pain). These features yielded a leave-one-out cross-validation (LOOCV) accuracy of 92%. The proposed method can be used to automatically diagnose lower back pain and its effects on a person’s gait, and the model can be ported to small computing devices for self-diagnosis of lower back pain in remote areas.
Chandrasen Pandey, Neeraj Baghel, Malay Kishore-Dutta & Carlos M. Travieso González (2022). Automatic diagnosis of lower back pain using gait patterns. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6459
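The evaluation scheme described above, an SVM scored with leave-one-out cross-validation, can be sketched with scikit-learn. The gait features here are random placeholders with the study's shape (62 subjects, 13 features), so the resulting score is meaningless; only the procedure is illustrated.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((62, 13))   # placeholder gait features
y = np.array([0] * 30 + [1] * 32)   # 0 = normal, 1 = lower back pain

# LOOCV: train on 61 subjects, test on the held-out one, 62 times.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.2f}")
```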
Manoj Kaushik, Divyanshu Singh, Malay Kishore-Dutta, Carlos M. Travieso
Electroencephalography (EEG) is an effective non-invasive way to detect sudden changes in neural brain activity, which generally occur due to excessive electric discharge in brain cells. EEG signals could be helpful for imminent seizure prediction if a machine could detect changes in EEG patterns. In this study, we propose a one-dimensional convolutional neural network (CNN) for the automatic detection of epileptic seizures. The automated process might be convenient in situations where a neurologist is unavailable, and could also help neurologists properly analyze EEG signals and diagnose cases. We used two publicly available EEG datasets, collected in two African countries, Guinea-Bissau and Nigeria, containing EEG signals from 318 subjects. We trained and verified the performance of our model on both datasets, obtaining a highest accuracy of 82.818%.
Manoj Kaushik, Divyanshu Singh, Malay Kishore-Dutta & Carlos M. Travieso (2022). A deep learning approach for epilepsy seizure detection using EEG signals. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6461
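The core building block of a 1-D CNN over an EEG channel, convolution followed by ReLU and global average pooling, can be illustrated in plain NumPy. The filter weights and the sinusoidal stand-in trace are invented; the authors' actual architecture would stack several learned layers and end in a classifier.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (no padding, stride 1)."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

eeg = np.sin(np.linspace(0, 20 * np.pi, 512))     # stand-in EEG trace
kernel = np.array([-1.0, 0.0, 1.0])               # hand-picked edge filter
feature_map = np.maximum(conv1d(eeg, kernel), 0)  # ReLU non-linearity
pooled = feature_map.mean()                       # global average pooling
print(feature_map.shape)  # (510,)
```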
Automatic segmentation and classification of audio streams is a challenging problem with many applications, such as indexing multimedia digital libraries, information retrieval, and the building of speech (spoken) corpora for particular languages and accents. Such a corpus is a database of speech audio files and the corresponding text transcriptions. Among the several steps and tasks required for any of those applications, speaker diarization is one of the most relevant, because it aims to find boundaries in audio recordings according to who speaks in each fragment. Speaker diarization can be performed in a supervised or unsupervised way and is commonly applied to audio consisting of pure speech. In this work, a first annotated dataset and analysis of speaker diarization for Costa Rican radio broadcasting is presented, using two approaches: a classic one based on k-means clustering, and the more recent Fischer Semi Discriminant. We chose publicly available radio broadcasts and compared the systems’ applicability on the complete audio files, which also contain segments of music and challenging acoustic conditions. Results show a dependence on the number of speakers in each broadcast, especially in average cluster purity. They also show the need for further exploration and for combining these approaches with other classification and segmentation algorithms to better extract useful information from the dataset and enable further development of speech corpora.
Roberto Sánchez Cárdenas & Marvin Coto-Jiménez (2022). Application of Fischer semi discriminant analysis for speaker diarization in Costa Rican radio broadcasts. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6464
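The classic k-means baseline mentioned above can be sketched as follows: cluster per-frame acoustic features, then read speaker-change boundaries off the frame labels. The random vectors stand in for MFCC features of two clearly distinct speakers; real broadcast audio is far less separable.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
frames = np.vstack([
    rng.normal(0.0, 1.0, (200, 13)),  # frames resembling speaker A
    rng.normal(5.0, 1.0, (200, 13)),  # frames resembling speaker B
])

# Unsupervised diarization: assign each frame to one of k clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(frames)

# Boundaries = frame indices where the assigned cluster changes.
boundaries = np.flatnonzero(np.diff(labels)) + 1
print(len(boundaries))  # well-separated toy data -> a single speaker change
```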
Rakesh Chandra-Joshi, Malay Kishore-Dutta, Carlos M. Travieso
Many countries are struggling with COVID-19 screening resources, which raises the need for automatic, low-cost diagnosis systems that allow a large number of tests to be conducted rapidly. Instead of relying on a single method, artificial intelligence and multi-sensor approaches can be used to predict a patient’s health condition. Temperature, oxygen saturation level, chest X-rays, and cough sounds can be analyzed for rapid screening. The multi-sensor approach is more reliable because a person is analyzed along multiple feature dimensions. Deep learning models can be trained with chest X-ray images corresponding to different health conditions, i.e., healthy, COVID-19 positive, pneumonia, tuberculosis, etc. The deep learning model extracts features from the input images, and based on these, test images are classified into the different categories. Similarly, a convolutional neural network can be trained on cough sounds and short talk, and after proper training, input voice samples can be differentiated into the categories. Artificial intelligence based approaches can help to develop a system that works efficiently at low cost.
Rakesh Chandra-Joshi, Malay Kishore-Dutta & Carlos M. Travieso (2022). Artificial Intelligence based Multi-sensor COVID-19 Screening Framework. Tecnologia en Marcha, 35(8). https://doi.org/10.18845/tm.v35i8.6460
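One simple way to combine the modalities described above is late fusion: each sensor pipeline outputs class probabilities, and a weighted average decides. The class list, weights, and probabilities below are illustrative inventions, not values from the paper.

```python
import numpy as np

classes = ["healthy", "covid-19", "pneumonia", "tuberculosis"]

# Per-modality class probabilities (each row sums to 1; toy values).
modality_probs = {
    "chest_xray":  np.array([0.10, 0.70, 0.15, 0.05]),
    "cough_sound": np.array([0.20, 0.60, 0.15, 0.05]),
    "temperature": np.array([0.30, 0.50, 0.10, 0.10]),
}
# Hypothetical reliability weights, summing to 1.
weights = {"chest_xray": 0.5, "cough_sound": 0.3, "temperature": 0.2}

# Late fusion: weighted average of the modality distributions.
fused = sum(weights[m] * p for m, p in modality_probs.items())
print(classes[int(np.argmax(fused))])  # covid-19
```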
Alexánder Campos-Quirós, Kenneth Paniagua-Murillo, Gerardo Valladares-Castrillo, Jorge M. Cubero-Sesin, Luis Cordero-Arias
Calcium phosphates are bioceramic materials of great importance, used in bioactive coatings for metallic implants. In the present study, a calcium phosphate sample was synthesized by the chemical precipitation method from Ca(NO3)2 and (NH4)2HPO4. X-ray diffraction was used to perform a qualitative and quantitative analysis of the crystalline phases present in the material by means of the Scherrer, Williamson-Hall, and Rietveld methods. The sample was determined to consist of 75% by mass of monetite (CaHPO4) and 25% by mass of brushite (CaHPO4·2H2O), with average crystallite sizes in the submicrometer range. Scanning electron microscopy analysis shows that the particles are highly agglomerated, with an average size of 2.8 ± 1 µm and varied morphology. Elemental analysis by energy-dispersive X-ray spectroscopy revealed an average calcium/phosphorus (Ca/P) molar ratio of 0.95, which is consistent with the monetite and brushite crystalline phases. Finally, the presence of both phases is mainly due to the low pH levels during the reaction and to the drying of the sample after the synthesis process.
Alexánder Campos-Quirós, Kenneth Paniagua-Murillo, Gerardo Valladares-Castrillo, Jorge M. Cubero-Sesin & Luis Cordero-Arias (2022). Análisis cualitativo y cuantitativo de fosfatos de calcio por difracción de rayos-X mediante los métodos de Scherrer, Williamson-Hall y refinamiento de Rietveld. Tecnologia en Marcha, 35(4). https://doi.org/10.18845/tm.v35i4.5664
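The Scherrer method named in the abstract above estimates crystallite size from peak broadening as D = Kλ / (β cos θ). The worked example below uses invented peak values (the study's actual reflections are not reproduced here) with the standard Cu K-alpha wavelength.

```python
import math

K = 0.9                     # dimensionless shape factor (typical value)
wavelength = 0.15406        # Cu K-alpha wavelength, in nm
beta = math.radians(0.30)   # peak FWHM of 0.30 degrees, in radians
theta = math.radians(13.0)  # Bragg angle = half of the 2-theta position

# Scherrer equation: crystallite size in the same units as the wavelength.
D = K * wavelength / (beta * math.cos(theta))
print(f"crystallite size ≈ {D:.1f} nm")  # ≈ 27.2 nm
```

A size of a few tens of nanometers is consistent with the submicrometer crystallites the abstract reports.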