Perhaps the most surprising discovery of the genome era has been the extent to which prokaryotic and many eukaryotic genomes incorporate genetic material from sources other than their parent(s). Lateral genetic transfer (LGT) among bacteria was first observed about 100 years ago, and is now accepted to underlie important phenomena including the spread of antibiotic resistance and the ability to degrade xenobiotics. LGT is invoked, perhaps too readily, to explain a breadth of awkward data, including compositional heterogeneity of genomes, disagreement among gene-sequence trees, and mismatches between physiology and systematics. At the same time, many details of LGT remain unknown or controversial, and some key questions have scarcely been asked. Here I critically review what we think we know about the existence, extent, mechanism and impact of LGT; identify important open questions; and point to research directions that hold particular promise for elucidating the role of LGT in genome evolution. Evidence for LGT in nature is not only inferential but also direct, and potential vectors are ubiquitous. Genetic material can pass between diverse habitats and be significantly altered during residency in viruses, complicating the inference of donors. In prokaryotes, about twice as many genes are interrupted by LGT as are transferred intact. Short protein domains can be privileged units of transfer. Unresolved phylogenetic issues include the correct null hypothesis and genes as units of analysis. Themes are beginning to emerge regarding the effect of LGT on cellular networks, but I show why generalization is premature. LGT can be associated with radical changes in physiology and ecological niche. Better quantitative models of genome evolution are needed, and theoretical frameworks remain to be developed for some observations, including chromosome assembly by LGT.
In scientific fields such as systems biology, relationships between network members (vertices) are evaluated using a network structure. In a co-expression network, comprising genes (vertices) and gene-to-gene links (edges) representing co-expression relationships, local modular structures with tight intra-modular connections contain genes that are co-expressed with each other. To detect such modules within the whole network, it is useful to evaluate the network topology between modules as well as the intra-modular topology. We therefore combined a novel inter-modular index with network density, the representative intra-modular index, rather than using network density alone. We designed an algorithm that optimizes this combined index for a module and applied it to Arabidopsis co-expression analysis. To verify the relationship between the modules obtained by our algorithm and biological knowledge, we compared it with other tools for co-expression network analysis using the KEGG pathways; our algorithm detected network modules showing better associations with the pathways. The algorithm is also applicable to large sets of gene expression profiles, which are difficult to process all at once.
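As a rough sketch of the idea described above, not the authors' actual algorithm, one can grow a module greedily while optimizing a combined index. The `density` function below is the standard intra-modular network density; `separation` is a hypothetical stand-in for the inter-modular index (the fraction of a module's edge endpoints that stay inside it), and the weight `alpha` is an illustrative assumption:

```python
def density(nodes, edges):
    """Intra-modular density: realized edges / possible edges."""
    nodes = set(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    within = sum(1 for u, v in edges if u in nodes and v in nodes)
    return within / (n * (n - 1) / 2)

def separation(nodes, edges):
    """Hypothetical inter-modular index: fraction of the module's
    incident edge mass that stays inside the module."""
    nodes = set(nodes)
    touching = [(u, v) for u, v in edges if u in nodes or v in nodes]
    if not touching:
        return 0.0
    inside = sum(1 for u, v in touching if u in nodes and v in nodes)
    return inside / len(touching)

def grow_module(seed, edges, alpha=0.5):
    """Greedily add the neighbor that most improves the combined index."""
    def score(m):
        return alpha * density(m, edges) + (1 - alpha) * separation(m, edges)
    module = {seed}
    improved = True
    while improved:
        improved = False
        # Neighbors: vertices on edges crossing the module boundary.
        neighbors = {w for u, v in edges for w in (u, v)
                     if (u in module) != (v in module)} - module
        best, best_gain = None, 0.0
        for w in neighbors:
            gain = score(module | {w}) - score(module)
            if gain > best_gain:
                best, best_gain = w, gain
        if best is not None:
            module.add(best)
            improved = True
    return module
```

On a toy graph of two triangles joined by a bridge, growing from a seed in one triangle recovers that triangle and stops before absorbing the bridge, since adding the bridge vertex lowers both density and separation.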
Efficient execution of data-intensive workflows has been playing an important role in bioinformatics as the amount of data has rapidly increased. The execution of such workflows must take into account the volume and pattern of communication. When orchestrating data-centric workflows, a centralized workflow engine can become a performance bottleneck. To cope with this bottleneck, a hybrid approach using choreography for workflow data management has been proposed. However, when a workflow includes many repetitive operations, that approach may not perform well because of the overhead of its additional mechanism. This paper presents and evaluates an improvement of the hybrid approach for managing large amounts of data. The performance of the proposed method is demonstrated by measuring the execution times of example workflows.
We present a method for the classification of cancer based on gene expression profiles using single genes. We select genes with high class-discrimination capability according to the dependency degree of the classes on each gene. We then build classifiers based on the decision rules induced by the selected single genes. We test our single-gene classification method on three publicly available cancer gene expression datasets. In a majority of cases, we obtain relatively accurate classification outcomes by using just one gene. Some genes highly correlated with the pathogenesis of cancer are identified. Our feature selection and classification approaches are both based on rough sets, a machine learning method. In comparison with other methods, ours is simple, effective and robust. We conclude that, if gene selection is implemented reasonably, accurate molecular classification of cancer can be achieved with very simple predictive models based on gene expression profiles.
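A minimal sketch of the rough-set idea, under assumptions not specified in the abstract (equal-width discretization into three expression levels, majority-class rules): the dependency degree of the class attribute on a single gene is the fraction of samples that fall in the positive region, i.e. in discretized expression levels whose samples all share one class label.

```python
from collections import defaultdict, Counter

def _discretize(values, bins):
    """Map each expression value to an equal-width interval index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    return lo, width, [min(int((v - lo) / width), bins - 1) for v in values]

def dependency_degree(values, labels, bins=3):
    """Rough-set dependency degree of the class on one gene: fraction of
    samples in class-pure equivalence classes (the positive region)."""
    _, _, idx = _discretize(values, bins)
    groups = defaultdict(list)
    for b, y in zip(idx, labels):
        groups[b].append(y)
    consistent = sum(len(ys) for ys in groups.values() if len(set(ys)) == 1)
    return consistent / len(labels)

def single_gene_rules(values, labels, bins=3):
    """Induce decision rules (discretized level -> majority class) and
    return a classifier for new expression values of the same gene."""
    lo, width, idx = _discretize(values, bins)
    groups = defaultdict(list)
    for b, y in zip(idx, labels):
        groups[b].append(y)
    rules = {b: Counter(ys).most_common(1)[0][0] for b, ys in groups.items()}
    def classify(v):
        b = min(max(int((v - lo) / width), 0), bins - 1)
        return rules.get(b)
    return classify
```

Gene selection then amounts to ranking genes by `dependency_degree` and keeping the top scorer; a gene whose low and high expression levels separate the classes perfectly scores 1.0.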
We introduce a new data structure, a localized suffix array, in which occurrence information is dynamically represented as a combination of global positional information and local lexicographic order information for text search applications. When searching for a pair of words within a given distance, many candidate positions sharing a coarse-grained global position can be compactly represented in terms of local lexicographic orders, as in the conventional suffix array, and can be examined simultaneously for violation of the distance constraint at the coarse-grained resolution. The trade-off between positional and lexicographical information is progressively shifted toward finer positional resolution, and the distance constraint is re-examined accordingly. The paired search can thus be performed efficiently even when each word has a large number of occurrences. The localized suffix array itself is in fact a reordering of the bits inside the conventional suffix array, and their memory requirements are essentially the same. We demonstrate an application to genome mapping problems for paired-end short reads generated by new-generation DNA sequencers. When paired reads are highly repetitive, it is time-consuming to naïvely calculate, sort, and compare all of the coordinates. For human genome re-sequencing data with reads of 36 base pairs, speedups of more than 10 times over the naïve method were observed in almost half of the cases in which the summed redundancy (number of individual occurrences) of a read pair exceeded 2,000.
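For orientation, here is a sketch of the naïve baseline that the localized suffix array improves on (not the localized structure itself): locate each word's occurrences by binary search on a conventional suffix array, then sort the coordinates and scan them with two pointers to collect pairs within the distance bound. The function names are illustrative, and the suffix array construction is the simple O(n² log n) comparison sort, adequate only for small texts:

```python
import bisect

def suffix_array(text):
    """Conventional suffix array: suffix start positions in lexicographic order."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, word):
    """All start positions of `word`, via binary search over suffix prefixes."""
    keys = [text[i:i + len(word)] for i in sa]  # materialized for brevity
    lo = bisect.bisect_left(keys, word)
    hi = bisect.bisect_right(keys, word)
    return sorted(sa[lo:hi])

def paired_hits(text, w1, w2, max_dist):
    """Naive paired search: enumerate, sort, and two-pointer-scan all
    coordinates, keeping pairs whose starts are within `max_dist`."""
    sa = suffix_array(text)
    p1 = occurrences(text, sa, w1)
    p2 = occurrences(text, sa, w2)
    hits, j = [], 0
    for a in p1:
        while j < len(p2) and p2[j] < a - max_dist:
            j += 1
        k = j
        while k < len(p2) and p2[k] <= a + max_dist:
            hits.append((a, p2[k]))
            k += 1
    return hits
```

The cost of this baseline grows with the product of the two occurrence counts, which is exactly the regime (highly repetitive paired reads) where the coarse-to-fine pruning of the localized suffix array pays off.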