Pub Date: 2014-12-10 | eCollection Date: 2014-01-01 | DOI: 10.1186/s13029-014-0027-x
Thuy Tuong Nguyen, Kyungmin Song, Yury Tsoy, Jin Yeop Kim, Yong-Jun Kwon, Myungjoo Kang, Michael Adsetts Edberg Hansen
Background and method: Automating sigmoidal curve fitting successfully is highly challenging when applied to large data sets. In this paper, we describe a robust algorithm for fitting sigmoid dose-response curves by estimating four parameters (floor, window, shift, and slope), together with the detection of outliers. We propose two improvements over current curve-fitting methods. The first is outlier detection, performed during the initialization step with corresponding adjustments of the derivative and error estimation functions. The second is an enhanced weighting of data points using mean calculation in Tukey's biweight function.
Results and conclusion: Automatic curve fitting of 19,236 dose-response experiments shows that our proposed method outperforms the current fitting methods provided by MATLAB's nlinfit function and GraphPad's Prism software.
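The four-parameter model and Tukey-biweight reweighting described above can be illustrated with a short sketch. This is not the authors' code: the logistic parameterization, the tuning constant c = 4.685, and the iteration count are common defaults assumed here for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, floor, window, shift, slope):
    """Four-parameter logistic: floor + window / (1 + exp(-slope * (x - shift)))."""
    return floor + window / (1.0 + np.exp(-slope * (x - shift)))

def tukey_biweight(residuals, c=4.685):
    """Tukey's biweight: weights fall smoothly to zero beyond c * robust scale."""
    scale = np.median(np.abs(residuals)) / 0.6745 + 1e-12  # MAD-based scale
    u = residuals / (c * scale)
    return np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)

def robust_fit(x, y, n_iter=5):
    """Iteratively reweighted fit; points with weight 0 are treated as outliers."""
    p, _ = curve_fit(sigmoid, x, y,
                     p0=[y.min(), y.max() - y.min(), np.median(x), 1.0])
    for _ in range(n_iter):
        w = tukey_biweight(y - sigmoid(x, *p))
        keep = w > 0
        p, _ = curve_fit(sigmoid, x[keep], y[keep],
                         p0=p, sigma=1.0 / np.sqrt(w[keep]))
    return p

# Synthetic dose-response data with one injected outlier
rng = np.random.default_rng(0)
x = np.linspace(-4, 4, 25)
y = sigmoid(x, 0.1, 1.0, 0.5, 2.0) + rng.normal(0, 0.01, x.size)
y[3] += 0.8  # outlier
floor, window, shift, slope = robust_fit(x, y)
```

Because the outlier receives zero weight after the first reweighting pass, the recovered parameters stay close to the generating values despite the contaminated point.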
Robust dose-response curve estimation applied to high content screening data analysis. Source Code for Biology and Medicine 9:27 (2014).
Pub Date: 2014-12-05 | eCollection Date: 2014-01-01 | DOI: 10.1186/s13029-014-0025-z
Mike Gavrielides, Simon J Furney, Tim Yates, Crispin J Miller, Richard Marais
Background: Whole genomes, whole exomes and transcriptomes of tumour samples are sequenced routinely to identify the drivers of cancer. The systematic sequencing and analysis of tumour samples, as well as other oncogenomic experiments, necessitates the tracking of relevant sample information throughout the investigative process. These meta-data of the sequencing and analysis procedures include information about the samples and projects, as well as the sequencing centres, platforms, data locations, results locations, alignments, analysis specifications and further information relevant to the experiments.
Results: The current work presents a sample tracking system for oncogenomic studies (Onco-STS) to store these data and make them easily accessible to the researchers who work with the samples. The system is a web application comprising a database and a front-end web page that allows remote access, submission and updating of the sample data in the database. The system was developed and implemented with the Grails web application framework.
Conclusions: The resulting Onco-STS solution is efficient, secure and easy to use, and is intended to replace the manual data handling of text records. Onco-STS allows simultaneous remote access to the system, making collaboration among researchers more effective. The system stores both information on the samples in oncogenomic studies and details of the analyses conducted on the resulting data. Onco-STS is based on open-source software, is easy to develop and can be modified according to a research group's needs. Hence it is suitable for laboratories that do not require a commercial system.
Onco-STS: a web-based laboratory information management system for sample and analysis tracking in oncogenomic experiments. Source Code for Biology and Medicine 9:25 (2014).
Pub Date: 2014-12-05 | eCollection Date: 2014-01-01 | DOI: 10.1186/s13029-014-0026-y
Robert V Baron, Charles Kollar, Nandita Mukhopadhyay, Daniel E Weeks
Background: In a typical study of the genetics of a complex human disease, many different analysis programs are used to test for linkage and association. This requires extensive and careful data reformatting, as many of these analysis programs use differing input formats. Writing scripts to facilitate this can be tedious, time-consuming, and error-prone. To address these issues, the open source Mega2 data reformatting program provides validated and tested data conversions from several commonly-used input formats to many output formats.
Results: Mega2, the Manipulation Environment for Genetic Analysis, facilitates the creation of analysis-ready datasets from data gathered as part of a genetic study. It transparently allows users to process genetic data for family-based or case/control studies accurately and efficiently. In addition to data validation checks, Mega2 provides analysis setup capabilities for a broad choice of commonly-used genetic analysis programs. First released in 2000, Mega2 has recently been significantly improved in a number of ways. We have rewritten it in C++ and have reduced its memory requirements. Mega2 can now read input files in LINKAGE, PLINK, and VCF/BCF formats, as well as its own specialized annotated format. It supports conversion to many commonly-used formats including SOLAR, PLINK, Merlin, Mendel, SimWalk2, Cranefoot, IQLS, FBAT, MORGAN, BEAGLE, Eigenstrat, Structure, and PLINK/SEQ. When controlled by a batch file, Mega2 can be used non-interactively in data reformatting pipelines. Support for genetic data from several species besides humans has been added.
Conclusions: By providing tested and validated data reformatting, Mega2 facilitates more accurate and extensive analyses of genetic data, avoiding the need to write, debug, and maintain one's own custom data reformatting scripts. Mega2 is freely available at https://watson.hgen.pitt.edu/register/.
Mega2: validated data-reformatting for linkage and association analyses. Source Code for Biology and Medicine 9:26 (2014).
Pub Date: 2014-11-15 | eCollection Date: 2014-01-01 | DOI: 10.1186/1751-0473-9-24
David Keith Williams, Zoran Bursac
Background: When designing studies, researchers commonly propose to measure several independent variables in a regression model, a subset of which are identified as the main variables of interest while the rest are retained in the model as covariates or confounders. Power for linear regression in this setting can be calculated using SAS PROC POWER. There is a void in estimating power for logistic regression models in the same setting.
Methods: Currently, an approach that calculates power for only one variable of interest in the presence of other covariates for logistic regression is in common use and works well for this special case. In this paper we propose three related algorithms, along with corresponding SAS macros, that extend power estimation to one or more primary variables of interest in the presence of confounders.
Results: The three proposed empirical algorithms employ the likelihood ratio test to provide the user with a power estimate for a given sample size, a quick sample size estimate for a given power, or an approximate power curve for a range of sample sizes. A user can specify odds ratios for a combination of binary, uniform and standard normal independent variables of interest and/or the remaining covariates/confounders in the model, along with a correlation between variables.
Conclusions: These user-friendly algorithms and macro tools are a promising solution that can fill the void in power estimation for logistic regression when multiple independent variables are of interest, in the presence of additional covariates in the model.
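The SAS macros themselves are not reproduced here, but the empirical approach the abstract describes — simulate data under assumed odds ratios, fit full and reduced logistic models, and count likelihood-ratio rejections — can be sketched as follows. The variable mix, effect sizes, and simulation count are illustrative assumptions, and Python stands in for SAS:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def logit_loglik(X, y):
    """Maximized log-likelihood of a logistic regression, fit by BFGS."""
    def nll(beta):
        eta = X @ beta
        return np.sum(np.logaddexp(0.0, eta)) - y @ eta  # stable negative log-lik
    return -minimize(nll, np.zeros(X.shape[1]), method="BFGS").fun

def lrt_power(n, or_binary, or_normal, beta_covar=0.3,
              n_sims=200, alpha=0.05, seed=1):
    """Empirical power of the likelihood-ratio test for two variables of
    interest (one binary, one standard normal), adjusted for one confounder."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1 - alpha, df=2)  # two variables tested jointly
    rejections = 0
    for _ in range(n_sims):
        x1 = rng.binomial(1, 0.5, n)     # binary variable of interest
        x2 = rng.standard_normal(n)      # continuous variable of interest
        z = rng.standard_normal(n)       # covariate / confounder
        eta = np.log(or_binary) * x1 + np.log(or_normal) * x2 + beta_covar * z
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
        ones = np.ones(n)
        ll_full = logit_loglik(np.column_stack([ones, x1, x2, z]), y)
        ll_reduced = logit_loglik(np.column_stack([ones, z]), y)
        rejections += 2.0 * (ll_full - ll_reduced) > crit
    return rejections / n_sims

power = lrt_power(n=200, or_binary=2.0, or_normal=1.5)
```

Running the same loop over a grid of sample sizes yields the approximate power curve the abstract mentions.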
Three algorithms and SAS macros for estimating power and sample size for logistic models with one or more independent variables of interest in the presence of covariates. Source Code for Biology and Medicine 9:24 (2014).
Pub Date: 2014-10-21 | DOI: 10.1186/1751-0473-9-22
Daan Wagenaar
Publication quality 2D graphs with less manual effort due to explicit use of dual coordinate systems. Source Code for Biology and Medicine 9:22 (2014).
Pub Date: 2014-09-24 | DOI: 10.1186/1751-0473-9-21
C. Wiedemann, Peter Bellstedt, M. Görlach
PREdator: a python based GUI for data analysis, evaluation and fitting. Source Code for Biology and Medicine 9:21 (2014).
Pub Date: 2014-09-18 | DOI: 10.1186/1751-0473-9-20
H. Garner, Ashwin Puthige
BioFlow: a web based workflow management software for design and execution of genomics pipelines. Source Code for Biology and Medicine 9:20 (2014).
Pub Date: 2014-09-02 | eCollection Date: 2014-01-01 | DOI: 10.1186/1751-0473-9-19
Aleksey Porollo
Background: Next-generation sequencing and metagenome projects yield a large number of new genomes that need further annotation, such as identification of enzymes and metabolic pathways, or analysis of the metabolic strategies of newly sequenced species in comparison to known organisms. While methods for enzyme identification are available, the development of command-line tools for high-throughput comparative analysis and visualization of identified enzymes is lagging.
Methods: A set of Perl scripts has been developed to perform automated data retrieval from the KEGG database using its new REST application programming interface. Enrichment or depletion of metabolic pathways is evaluated using the two-tailed Fisher exact test followed by the Benjamini-Hochberg correction.
Results: Comparative analysis of a given set of enzymes with a specified reference organism includes mapping to known metabolic pathways, finding shared and unique enzymes, generating links to visualize maps at KEGG Pathway, computing enrichment of the pathways, and listing the non-mapped enzymes.
Conclusions: EC2KEGG provides a platform-independent toolkit for automated comparison of identified sets of enzymes from newly sequenced organisms against annotated reference genomes. The tool can be used both for manual annotations of individual species and for high-throughput annotations as part of a computational pipeline. The tool is publicly available at http://sourceforge.net/projects/ec2kegg/.
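The statistical core of the method — a two-tailed Fisher exact test per pathway followed by Benjamini-Hochberg adjustment — can be sketched as follows. This is a Python illustration, not the Perl source, and the enzyme counts are hypothetical:

```python
from scipy.stats import fisher_exact

def bh_correction(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end                      # 1-based rank of p-value i
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

def pathway_enrichment(hits_in_path, hits_total, bg_in_path, bg_total):
    """Two-tailed Fisher exact p-value on the 2x2 contingency table:
    enzymes in/out of a pathway, query set vs. reference genome."""
    table = [[hits_in_path, hits_total - hits_in_path],
             [bg_in_path, bg_total - bg_in_path]]
    return fisher_exact(table, alternative="two-sided")[1]

# Hypothetical counts for three pathways (query set of 100 enzymes,
# reference annotation of 1000 enzymes)
raw = [pathway_enrichment(20, 100, 30, 1000),
       pathway_enrichment(5, 100, 40, 1000),
       pathway_enrichment(8, 100, 80, 1000)]
adj = bh_correction(raw)
```

The first pathway (20% of the query vs. 3% of the reference) remains significant after adjustment; the other two do not.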
EC2KEGG: a command line tool for comparison of metabolic pathways. Source Code for Biology and Medicine 9:19 (2014).
Pub Date: 2014-08-17 | eCollection Date: 2014-01-01 | DOI: 10.1186/1751-0473-9-18
Michel Petitjean, Anne Vanet
Background: For over 400 years, influenza viruses have evolved extremely quickly and caused devastating epidemics, owing to the reassortment of their segmented genomes. Reassortment arises because two flu viruses can infect the same cell, so the new virions' genomes are composed of segments drawn from both parental strains. A treatment developed against the parents may then be ineffective if the virions' genomes differ enough from their parents' genomes. It is therefore essential to simulate such reassortment phenomena to assess the risk of emergence of new flu strains.
Findings: We therefore upgraded the forward simulator VIRAPOPS, which already contained the options necessary to handle non-segmented viral populations. The new version can mimic single or successive reassortments in birds, humans and/or swine. Other options, such as the ability to handle populations of positive- or negative-sense viral RNAs, were also added. Finally, we provide output options that give statistics on the results.
Conclusion: In this paper we present a new version of VIRAPOPS, which now manages viral segment reassortment and negative-sense single-stranded RNA viruses, two issues that cause serious public health problems.
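VIRAPOPS itself is not reproduced here, but the combinatorics of segment reassortment during coinfection can be illustrated with a minimal sketch. The segment names follow standard influenza A nomenclature; everything else is an assumption for illustration:

```python
import itertools
import random

SEGMENTS = ("PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS")

def reassort(parent_a, parent_b, rng):
    """Draw one progeny genome: each of the 8 segments is inherited
    independently from either coinfecting parent."""
    return tuple(rng.choice(pair) for pair in zip(parent_a, parent_b))

parent_a = tuple(f"{s}-A" for s in SEGMENTS)
parent_b = tuple(f"{s}-B" for s in SEGMENTS)

# Every possible reassortant: 2 choices per segment -> 2**8 = 256 genomes
all_genomes = set(itertools.product(*zip(parent_a, parent_b)))

rng = random.Random(42)
progeny = [reassort(parent_a, parent_b, rng) for _ in range(1000)]
novel = [g for g in progeny if g not in (parent_a, parent_b)]
```

Even this toy model shows why coinfection is dangerous: almost every progeny genome is a non-parental combination that prior treatments may not cover.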
VIRAPOPS2 supports the influenza virus reassortments. Source Code for Biology and Medicine 9:18 (2014).
Pub Date: 2014-07-10 | eCollection Date: 2014-01-01 | DOI: 10.1186/1751-0473-9-16
Guillaume Lamour, Julius B Kirkegaard, Hongbin Li, Tuomas Pj Knowles, Jörg Gsponer
Background: A growing spectrum of applications for natural and synthetic polymers, whether in industry or in biomedical research, demands fast and universally applicable tools to determine the mechanical properties of very diverse polymers. To date, determining these properties has been the privilege of a limited circle of biophysicists and engineers with the appropriate technical skills.
Findings: Easyworm is a user-friendly software suite coded in MATLAB that simplifies the image analysis of individual polymeric chains and the extraction of the mechanical properties of these chains. Easyworm contains a comprehensive set of tools that, amongst others, allow the persistence length of single chains and the Young's modulus of elasticity to be calculated in multiple ways from images of polymers obtained by a variety of techniques (e.g. atomic force microscopy, electron, contrast-phase, or epifluorescence microscopy).
Conclusions: Easyworm thus provides a simple and efficient tool for specialists and non-specialists alike to solve a common problem in (bio)polymer science. Stand-alone executables and shell scripts are provided along with source code for further development.
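One standard way to extract a persistence length from traced chains — fitting the decay of tangent-tangent correlations, one of the estimators a tool like Easyworm supports — can be sketched as follows. This is a Python illustration on a simulated 2D chain, not Easyworm's MATLAB code:

```python
import numpy as np

def simulate_chain(n_steps, step, lp, rng):
    """2D worm-like chain: the tangent angle performs a random walk whose
    variance grows by step/lp per segment (valid when step << lp)."""
    dtheta = rng.normal(0.0, np.sqrt(step / lp), n_steps)
    theta = np.cumsum(dtheta)
    return np.cumsum(np.column_stack([np.cos(theta), np.sin(theta)]) * step, axis=0)

def persistence_length(xy, max_lag=200):
    """Estimate Lp from the decay of tangent-tangent correlations:
    in 2D, <cos(theta(s) - theta(0))> = exp(-s / (2 * Lp))."""
    d = np.diff(xy, axis=0)
    theta = np.arctan2(d[:, 1], d[:, 0])             # tangent angles along the chain
    step = float(np.mean(np.linalg.norm(d, axis=1)))
    lags = np.arange(1, min(max_lag, len(theta) // 4))
    corr = np.array([np.mean(np.cos(theta[k:] - theta[:-k])) for k in lags])
    mask = corr > 0                                   # keep lags where log is defined
    slope = np.polyfit(lags[mask] * step, np.log(corr[mask]), 1)[0]
    return -1.0 / (2.0 * slope)

rng = np.random.default_rng(0)
xy = simulate_chain(20000, 1.0, 50.0, rng)  # chain with true Lp = 50 length units
lp_est = persistence_length(xy)
```

In practice the (x, y) coordinates would come from tracing chains in AFM or microscopy images rather than from simulation; the estimator itself is unchanged.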
Easyworm: an open-source software tool to determine the mechanical properties of worm-like chains. Source Code for Biology and Medicine 9:16 (2014).