Pub Date: 2023-03-31. DOI: 10.7171/3fc1f5fe.97b03a79
Xiang-Ning Li, Franziska Grieder
The National Institutes of Health (NIH) offers many types of funding programs and opportunities to support biomedical research. The best known of these programs, the NIH Research Project Grant Program, or R01, supports investigator-initiated research projects. Another well-known funding mechanism is the NIH Shared Instrumentation Grant Program, also known as SIG or S10. This year marks the S10's 40th anniversary. To commemorate this milestone and a successful 40 years, let's first review how this highly impactful program started.
Title: "Happy 40th, NIH Shared Instrumentation Program! The NIH Shared Instrumentation Grant Program Embraces a Promising Future." Journal of Biomolecular Techniques 34(1).
Abdulghafor Khudhaer Abdullah, Saleem Lateef Mohammed, Ali Al-Naji, Mohammed Sameer Alsabah
The tongue reflects abnormal conditions of the body's internal organs, such as problems of the heart, liver, pancreas, stomach, and intestines, as well as blood diseases, which lead to changes in some of the tongue's features and characteristics. The most important of these is tongue color, which can be adopted as a biometric for use in computerized tongue diagnostic systems (CTDS). Quantitative tongue diagnosis requires several components: image acquisition hardware such as cameras, light sources, filters, and color checkers; image analysis and processing software applying suitable algorithms and color correction; and a computer. This study proposes a real-time imaging system that analyzes tongue color and diagnoses diseases using a webcam under controlled conditions. The proposed system was designed in a Matlab GUI environment. After testing the system on a data set of more than 100 images, preliminary results showed that the proposed system gives a disease diagnosis with an accuracy rate of no less than 86.667%. The proposed system diagnosed several diseases in real time with an accuracy of 95.45%, and it is easy to use, simple to implement, and low in cost. This motivates further studies applying computerized diagnosis in medical applications to monitor patient health and deliver accurate diagnoses.
Title: "Tongue Color Analysis and Diseases Detection Based on a Computer Vision System". DOI: 10.51173/jt.v5i1.868. Pub Date: 2023-03-31.
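The color-classification step described in this abstract can be sketched in a few lines: average the color over the segmented tongue region, then assign the nearest reference color. This is an illustrative sketch only; the reference colors, category names, and nearest-color rule below are our assumptions, not the calibrated Matlab CTDS the authors built.

```python
# Hypothetical reference colors; a real CTDS would calibrate these against
# a color checker under the controlled light source.
REFERENCE = {
    "pale":      (220, 190, 190),
    "light red": (200, 120, 120),
    "red":       (170, 60, 60),
    "deep red":  (120, 30, 30),
}

def mean_color(pixels):
    # Average RGB over the pixels of a segmented tongue region.
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify_tongue(pixels):
    # Assign the nearest reference color by squared Euclidean distance.
    avg = mean_color(pixels)
    return min(REFERENCE, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(avg, REFERENCE[name])))
```

In practice the region of interest would come from a segmentation step, and color constancy would be enforced by the light source and color-correction stage described in the abstract.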
Recently, the movement of data between smart devices has attracted worldwide attention because both important and unimportant data are transmitted via the Internet. Important data must therefore be encrypted while passing over a network so that information can be accessed and processed only by its intended receiver. As a result, information security has become even more critical than before. Our proposal secures data in three stages using cryptography and steganography. The message is divided into two parts: one part is encrypted with the Caesar cipher and the other with the Vigenère cipher. The ciphertext is then encoded in Morse code and hidden in a cover image using the least significant bit (LSB) technique. According to the peak signal-to-noise ratio (PSNR) values obtained in this work, our proposal offers an extra level of security and robustness. Finally, our approach provides greater security because it combines cryptography and steganography.
Title: "Multistage Encryption for Text Using Steganography and Cryptography". Authors: Mohammed Majid Msallam, Fayez Aldoghan. DOI: 10.51173/jt.v5i1.1087. Pub Date: 2023-03-31.
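A minimal sketch of the building blocks this abstract names (Caesar and Vigenère encryption, then LSB embedding) is below. The function names, the byte-level cover, and the bit ordering are our illustrative assumptions, not the authors' implementation; a real system would embed into image pixel data and add the Morse-coding stage.

```python
def caesar_encrypt(text, shift=3):
    # Shift each letter within the alphabet; leave other characters as-is.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def vigenere_encrypt(text, key):
    # Shift each letter by the corresponding key letter (key repeats).
    out, ki = [], 0
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[ki % len(key)].lower()) - ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            ki += 1
        else:
            out.append(ch)
    return ''.join(out)

def lsb_embed(cover, payload):
    # Hide each payload bit (MSB first) in the LSB of one cover byte.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return bytes(stego)

def lsb_extract(stego, n_bytes):
    # Recover n_bytes of payload from the LSBs, MSB first.
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes))
```

Because only the least significant bit of each cover byte changes, distortion is small, which is why schemes like this report high PSNR values.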
Mudhar A. Al-Obaidi, Alanood A. Alsarayreh, I. M. Mujtaba
The reverse osmosis (RO) process is widely used to produce fresh water from brackish water sources. However, RO is characterized by high specific energy consumption (SEC) owing to its high-pressure pumps. The current study focuses on reducing the SEC of a brackish water RO desalination plant using model-based optimisation. The inlet conditions of the RO process, such as feed pressure, flow rate (for individual membrane modules and the total plant), and temperature, have a substantial influence on the performance indicators, namely water productivity, product concentration, and SEC. The optimisation in this study was therefore directed at determining optimal inlet conditions within feasible limits to minimise SEC. The Arab Potash Company (APC) brackish water RO desalination plant was considered as the case study. The optimal inlet conditions yielded a significant energy saving of up to 27.97%, depending on the set of decision variables considered, at a fixed brackish water feed concentration.
Title: "Reduction of Energy Consumption of Brackish Water Reverse Osmosis Desalination System Via Model Based Optimisation". DOI: 10.51173/jt.v5i1.1166. Pub Date: 2023-03-31.
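For intuition about the SEC objective in this abstract: specific energy consumption can be approximated as pump hydraulic power divided by permeate flow. The sketch below uses this textbook relation with assumed units and a nominal pump efficiency; it is not the plant model used in the study.

```python
def specific_energy_consumption(feed_pressure_bar, feed_flow_m3h,
                                permeate_flow_m3h, pump_efficiency=0.8):
    """Approximate SEC in kWh per m^3 of permeate.

    Hydraulic pump power (kW) = pressure (bar) * flow (m^3/h) / 36 / efficiency,
    since 1 bar * 1 m^3/h = 100 kJ / 3600 s = 1/36 kW.
    """
    pump_power_kw = feed_pressure_bar * feed_flow_m3h / 36.0 / pump_efficiency
    return pump_power_kw / permeate_flow_m3h
```

This makes visible why feed pressure and flow rate are natural decision variables: SEC falls directly as either is reduced, subject to meeting the productivity and product-concentration constraints.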
Pub Date: 2023-03-27. eCollection Date: 2023-03-31. DOI: 10.7171/3fc1f5fe.0b74b9db
Charles A Whittaker, Alper Kucukural, Chris Gates, Owen Michael Wilkins, George W Bell, John N Hutchinson, Shawn W Polson, Julie Dragon
The functional annotation of gene lists is a common analysis routine required for most genomics experiments, and bioinformatics core facilities must support these analyses. In contrast to methods such as the quantitation of RNA-Seq reads or differential expression analysis, our research group noted a lack of consensus in our preferred approaches to functional annotation. To investigate this observation, we selected 4 experiments that represent a range of experimental designs encountered by our cores and analyzed those data with 6 tools used by members of the Association of Biomolecular Resource Facilities (ABRF) Genomic Bioinformatics Research Group (GBIRG). To facilitate comparisons between tools, we focused on a single biological result for each experiment. These results were represented by a gene set, and we analyzed these gene sets with each tool considered in our study to map the result to the annotation categories presented by each tool. In most cases, each tool produces data that would facilitate identification of the selected biological result for each experiment. For the exceptions, Fisher's exact test parameters could be adjusted to detect the result. Because Fisher's exact test is used by many functional annotation tools, we investigated input parameters and demonstrate that, while background set size is unlikely to have a significant impact on the results, the numbers of differentially expressed genes in an annotation category and the total number of differentially expressed genes under consideration are both critical parameters that may need to be modified during analyses. In addition, we note that differences in the annotation categories tested by each tool, as well as the composition of those categories, can have a significant impact on results.
Title: "Functional Annotation Routines Used by ABRF Bioinformatics Core Facilities - Observations, Comparisons, and Considerations." Journal of Biomolecular Techniques 34(1).
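The abstract's point about Fisher's exact test parameters can be made concrete. Below is a minimal one-sided (enrichment) test built from the hypergeometric upper tail; the parameter names are ours, and real annotation tools differ in how they choose the background set.

```python
from math import comb

def fisher_enrichment_p(k, n, K, N):
    """One-sided Fisher's exact test p-value (hypergeometric upper tail).

    k: differentially expressed (DE) genes in the annotation category
    n: total DE genes under consideration
    K: background genes in the category
    N: background set size
    """
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / total
```

Varying n and k changes the p-value sharply, while changing N (e.g., whole genome versus expressed genes) shifts the expected overlap n*K/N and hence how extreme the observed count looks, which matches the abstract's observation about which parameters matter most.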
Sama Hayder Abdulhussein AlHakeem, Nashaat Jasim Al-Anber, Hayfaa Abdulzahra Atee
Stock prediction is one of the most important inputs to an investor's decisions, and the financial literature relies heavily on predicting future events because of their exceptional importance in financial work: they determine profit or loss, and since market participants seek profit, researchers have developed forecasting techniques to provide the tools to achieve it. The choice of a proper time series model affects the precision of the predictions, and stock market data are typically random and turbulent across industries. For forecast models to portray reality accurately and yield reliable future forecasts, they must account for linear and nonlinear trends, different influences, and other data factors; hence the research problem is to obtain a method that gives accurate and reliable predictions of Iraq's stock market indicators. In this paper, two models were proposed to predict the Iraqi stock market index using artificial neural networks (ANN) and a long short-term memory (LSTM) algorithm. Iraqi stock market data from 2017 to 2021 were used, and good predictive results were achieved: the LSTM algorithm reached a mean square error (MSE) as low as 0.0016, while the ANN algorithm reached an error of 0.0055.
Title: "Iraqi Stock Market Prediction Using Artificial Neural Network and Long Short-Term Memory". DOI: 10.51173/jt.v5i1.846. Pub Date: 2023-03-23.
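The MSE figures quoted in this abstract (0.0016 for LSTM versus 0.0055 for ANN) follow the standard definition, which is simple to compute. The sketch below is generic and not tied to the authors' data.

```python
def mse(actual, predicted):
    # Mean squared error between two equal-length series.
    if len(actual) != len(predicted):
        raise ValueError("series must have the same length")
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
```

When comparing models on the same (typically normalized) test series, the model with the smaller MSE, here the LSTM, makes predictions closer to the observed index on average.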
Recently, researchers have intensified their efforts on datasets with large numbers of features, known as Big Data, because of the technological revolution and developments in data science. Dimensionality reduction offers efficient and influential methods for analyzing such data, which contain many variables, and it is important in several fields, including data processing, pattern recognition, machine learning, and data mining. This paper compares the two essential approaches to dimensionality reduction, feature extraction and feature selection, which machine learning models frequently employ. We applied several classifiers (support vector machines, k-nearest neighbors, decision tree, and naive Bayes) to the data of the anthropometric survey of US Army personnel (ANSUR 2) to classify the data and test the relevance of features by predicting a specific feature. The results show that k-nearest neighbors achieved high accuracy (83%) in prediction. We then reduced the dimensions with several techniques (highly correlated filter, recursive feature elimination, and principal component analysis); among these, recursive feature elimination gave the best accuracy (66%). These results make clear that the efficiency of dimensionality reduction techniques varies with the nature of the data: some techniques are more efficient for text data, and others are more efficient for images.
Title: "Comparison of Feature Selection and Feature Extraction Role in Dimensionality Reduction of Big Data". Authors: Haidar Khalid Malik, Nashaat Jasim Al-Anber. DOI: 10.51173/jt.v5i1.1027. Pub Date: 2023-03-23.
Recognizing and transcribing human speech has become an increasingly important task. Recently, researchers have been more interested in automatic speech recognition (ASR) using end-to-end models. Previous choices for Arabic ASR architectures have been time-delay neural networks, recurrent neural networks (RNN), and long short-term memory (LSTM). Previous end-to-end approaches have suffered from slow training and inference because of limited training parallelization, and they require a large amount of data to achieve acceptable results in recognizing Arabic speech. This research presents an Arabic speech recognition system based on a transformer encoder-decoder architecture with self-attention that transcribes Arabic audio segments into text and can be trained faster and more efficiently. The proposed model exceeds the performance of previous end-to-end approaches on the Common Voice dataset from Mozilla. We introduce a speech-transformer model trained over 110 epochs using only 112 hours of speech. Although Arabic is considered one of the languages most difficult for speech recognition systems to interpret, we achieved a best word error rate (WER) of 3.2 compared to other systems whose training requires a very large amount of data. The proposed system was evaluated on the Common Voice 8.0 dataset without using a language model.
Title: "Arabic Speech Recognition Based on Encoder-Decoder Architecture of Transformer". Authors: Mohanad Sameer, Ahmed Talib, Alla Hussein. DOI: 10.51173/jt.v5i1.749. Pub Date: 2023-03-21.
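The word error rate reported in this abstract is the word-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal generic implementation (not the authors' evaluation code):

```python
def wer(reference, hypothesis):
    # Word error rate: (substitutions + insertions + deletions) / reference words,
    # computed with dynamic-programming edit distance over words.
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # delete all reference words
    for j in range(len(h) + 1):
        d[0][j] = j                      # insert all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution/match
    return d[len(r)][len(h)] / len(r)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is a rate rather than a bounded accuracy.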
Pub Date: 2023-03-08. eCollection Date: 2023-03-31. DOI: 10.7171/3fc1f5fe.2f22458d
Diane B Smith, Amrina Ferdous, Julia Thom Oxford
We analyzed co-authorship patterns within the National Institutes of Health Center of Biomedical Research Excellence in Matrix Biology program from 2014 to 2022. In this study, we analyzed junior investigators, senior researchers, and research scientists within a shared core facility. Social network analysis techniques were applied to evaluate the co-authorship network based on journal publications from members of the center. The results indicated that co-authorship network visualization and analysis is a useful tool for understanding the relationship between a shared core facility and young investigators within a research center. Young investigators collaborated with and relied upon the individual research scientists of the shared core facility to serve as contributing members of their extended research team. This reliance on the shared core facility effectively increases the size and productivity of the research team led by the young investigator. Our results indicate that shared core facility staff may serve as hubs within the network of biomedical researchers, particularly at institutions with a growing research emphasis.
Title: "Shared Core Facilities Serve as Hubs for Biomedical Research Network at Institutions of Emerging Excellence." Journal of Biomolecular Techniques 34(1).
Pub Date: 2023-03-01. DOI: 10.7171/3fc1f5fe.6db6338a
Lily Birx, Marla Popov, Ron Orlando
Immunoglobulin G (IgG) is the main immunoglobulin in human serum, and its biological activity is modulated by glycosylation on its fragment crystallizable region. Glycosylation of IgGs has been shown to be related to aging, disease progression, protein stability, and many other vital processes. A common approach to analyze IgG glycosylation involves the release of the N-glycans by PNGase F, which cleaves the linkage between the asparagine residue and the innermost N-acetylglucosamine (GlcNAc) of all N-glycans except those containing a 3-linked fucose attached to the core GlcNAc. The biological significance of these glycans necessitates the development of accurate methods for their characterization and quantification. Currently, researchers perform PNGase F deglycosylation on either intact or trypsin-digested IgGs. Those who perform PNGase F deglycosylation on trypsin-digested IgGs argue that proteolysis is needed to reduce steric hindrance, whereas the other group states that this step is not needed and only adds time. There is minimal experimental evidence supporting either assumption. The importance of obtaining complete glycan release for accurate quantitation led us to investigate the kinetics of this deglycosylation reaction for intact IgGs and IgG glycopeptides. Statistically significant differences in the rate of deglycosylation performed on intact IgGs and trypsin-digested IgGs were determined, and the rate of PNGase F deglycosylation on trypsin-digested IgGs was found to be 3- to 4-times faster than on intact IgG.
Title: "Do I Need to Trypsin Digest Before Releasing IgG Glycans With PNGase-F?" Journal of Biomolecular Techniques.
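The practical consequence of the reported 3- to 4-fold rate difference can be illustrated with a simple first-order kinetics model. The functional form here is our assumption for illustration; the study measures the actual reaction kinetics.

```python
from math import exp

def fraction_deglycosylated(k_per_hour, hours):
    # First-order model: fraction of glycans released after a given time,
    # 1 - e^(-k t), where k is the deglycosylation rate constant.
    return 1.0 - exp(-k_per_hour * hours)
```

Under this model, a 3-fold faster rate constant reaches any given release fraction in one-third the incubation time, which is why digesting first can matter when complete glycan release is required for accurate quantitation.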