Pages

Wednesday, 12 February 2014

Genomic sequencing in newborns to improve healthcare: pilot projects funded by NIH!


As NGS technology becomes cheaper and more robust over the next few years and our knowledge of genotype-phenotype associations increases, performing whole genome sequencing as a standard test on newborns may become an actual healthcare strategy.


The genomes of infants could be sequenced shortly after birth, letting parents know which diseases or conditions their child may be affected by or predisposed to developing, and giving them the chance to head those conditions off or start early treatment.

To test this approach, the US National Institutes of Health has awarded $5 million in research grants to four pilot programs to study newborn screening. The research programs aim to develop the science and technology as well as to investigate the ethical issues related to such screening.
This pilot project, also covered by The New York Times, is the first effort to assess the impact of whole genome screening on quality of life and on our ability to provide better healthcare.
Genomic sequencing may reveal many problems that could be treated early in a child’s life, avoiding the diagnostic odyssey that parents can endure when medical problems emerge later, said Dr. Cynthia Powell, winner of one of the research grants.

However, the role of each variant and how to interpret its contribution to disease risk are not yet fully understood, and many questions remain about which variants to report and whether and how genomic findings translate into improved therapies or quality of life. This matter will also be addressed in the funded studies.

“Many changes in the DNA sequence aren't disease-causing,” said Dr. Robert Nussbaum, chief of the genomic medicine division at the School of Medicine of the University of California, San Francisco, and leader of one of the pilot grants. “We aren't very good yet at distinguishing which are and which aren’t.”

“You will get dozens of findings per child that you won’t be able to adequately interpret,” said Dr. Jeffrey Botkin, a professor of pediatrics and chief of medical ethics at the University of Utah. The ethical issues of sequencing are sharply different when it is applied to children. Adults can decide which test information they want to receive, but children won’t usually have that option. Their parents will decide, and children may receive information that they would rather have ignored once they become adults.

"We are not ready now to deploy whole genome sequencing on a large scale," said Eric Green, the director of the National Human Genome Research Institute, that promote the research program, "but it would be irresponsible not to study the problem."
"We are doing these pilot studies so that when the cost of genomic sequencing comes down, we can answer the question, 'Should we do it?' " he adds.

Here are the four pilot projects funded by the NIH, as reported in the official news:
  • Brigham and Women's Hospital and Boston Children's Hospital, Boston
    Principal Investigators: Robert Green, M.D., and Alan Beggs, Ph.D.

    This research project will accelerate the use of genomics in pediatric medicine by creating and safely testing new methods for using information obtained from genomic sequencing in the care of newborns. It will test a new approach to newborn screening, in which genomic data are available as a resource for parents and doctors throughout infancy and childhood to inform health care.  A genetic counselor will provide the genomic sequencing information and newborn screening results to the families.  Parents will then be asked about the impact of receiving genomic sequencing results and if the information was useful to them.  Researchers will try to determine if the parents respond to receiving the genomic sequencing results differently if their newborns are sick and if they respond differently to receiving genomic sequencing results as compared to current newborn screening results. Investigators will also develop a process for reporting results of genomic sequencing to the newborns' doctors and investigate how they act on these results.
     
  • Children's Mercy Hospital - Kansas City, Mo.
    Principal Investigator: Stephen Kingsmore, M.D.

    Many newborns require care in a neonatal intensive care unit (NICU), and this group of newborns has a high rate of disability and death. Given the severity of illness, these newborns may have the most to gain from fast genetic diagnosis through the use of genomic sequencing. The researchers will examine the benefits and risks of using rapid genomic sequencing technology in this NICU population. They also aim to reduce the turnaround time for conducting and receiving genomic sequencing results to 50 hours, which is comparable to other newborn screening tests. The researchers will test if their methods increase the number of diagnoses or decrease the time it takes to reach a diagnosis in NICU newborns. They will also study if genomic sequencing changes the clinical care of newborns in the NICU.  Additionally, the investigators are interested in doctor and parent perspectives and will try to determine if parents' perception of the benefits and risks associated with the results of sequencing change over time.
     
  • University of California, San Francisco 
    Principal Investigator: Robert Nussbaum, M.D.

    This pilot project will explore the potential of exome sequencing as a method of newborn screening for disorders currently screened for and others that are not currently screened for, but where newborns may benefit from screening. The researchers will examine the value of additional information that exome sequencing provides to existing newborn screening that may lead to improved care and treatment. Additionally, the researchers will explore parents' interest in receiving information beyond that typically available from newborn screening tests. The research team also intends to develop a participant protection framework for conducting genomic sequencing during infancy and will explore legal issues related to using genome analysis in newborn screening programs. Together, these studies have the potential to provide public health benefit for newborns and research-based information for policy makers.
     
  • University of North Carolina at Chapel Hill 
    Principal Investigators: Cynthia Powell, M.D., M.S., and Jonathan Berg, M.D., Ph.D.

    In this pilot project, researchers will identify, confront and overcome the challenges that must be met in order to implement genomic sequencing technology to a diverse newborn population. The researchers will sequence the exomes of healthy infants and infants with known conditions such as phenylketonuria, cystic fibrosis or other disorders involving metabolism. Their goal is to help identify the best ways to return results to doctors and parents. The investigators will explore the ethical, legal and social issues involved in helping doctors and parents make informed decisions, and develop best practices for returning results to parents after testing. The researchers will also develop a tool to help parents understand what the results mean and examine extra challenges that doctors may face as this new technology is used. This study will place a special emphasis on including multicultural families.

Monday, 10 February 2014

PubMed Highlights: NGS library preparation

Starting with a robust sequencing library is the first and crucial step to obtaining unbiased, high-quality results from your next generation sequencer!
Here are a couple of interesting papers reviewing problems and solutions related to NGS library preparation. They also give a concise overview of current library preparation methods and how they fit different downstream applications.
Take a look!

Library preparation methods for next-generation sequencing: Tone down the bias.
Exp Cell Res. 2014 Jan 15

Abstract
Next-generation sequencing (NGS) has caused a revolution in biology. NGS requires the preparation of libraries in which (fragments of) DNA or RNA molecules are fused with adapters followed by PCR amplification and sequencing. It is evident that robust library preparation methods that produce a representative, non-biased source of nucleic acid material from the genome under investigation are of crucial importance. Nevertheless, it has become clear that NGS libraries for all types of applications contain biases that compromise the quality of NGS datasets and can lead to their erroneous interpretation. A detailed knowledge of the nature of these biases will be essential for a careful interpretation of NGS data on the one hand and will help to find ways to improve library quality or to develop bioinformatics tools to compensate for the bias on the other hand. In this review we discuss the literature on bias in the most common NGS library preparation protocols, both for DNA sequencing (DNA-seq) as well as for RNA sequencing (RNA-seq). Strikingly, almost all steps of the various protocols have been reported to introduce bias, especially in the case of RNA-seq, which is technically more challenging than DNA-seq. For each type of bias we discuss methods for improvement with a view to providing some useful advice to the researcher who wishes to convert any kind of raw nucleic acid into an NGS library.


Abstract
High-throughput sequencing, also known as next-generation sequencing (NGS), has revolutionized genomic research. In recent years, NGS technology has steadily improved, with costs dropping and the number and range of sequencing applications increasing exponentially. Here, we examine the critical role of sequencing library quality and consider important challenges when preparing NGS libraries from DNA and RNA sources. Factors such as the quantity and physical characteristics of the RNA or DNA source material as well as the desired application (i.e., genome sequencing, targeted sequencing, RNA-seq, ChIP-seq, RIP-seq, and methylation) are addressed in the context of preparing high quality sequencing libraries. In addition, the current methods for preparing NGS libraries from single cells are also discussed.

Monday, 3 February 2014

The sequencing frenzy! Not only human genomes!

With NGS cutting sequencing costs, Illumina pushing hard to increase sequencing production and high expectations for the new nanopore technology, the past year saw an explosion of genome sequencing.
Lots of organisms, even some bizarre creatures, have had their genomes sequenced, and many new sequencing programs have been launched, aiming to sequence thousands upon thousands of new genomes in the next few years!
I have a personal interest in evolutionary and comparative genomics, so I always appreciate a new genome, and I'm particularly intrigued by the genomes of exotic organisms... Sometimes they may not be very informative, but it is always fun to tell your friends the story of the genome of the white tiger!
I made a rapid survey of what I've missed in the last couple of months, and here are some cool new genomes:

The Burmese python genome (total of 1.44 Gb) and the King cobra genome (total of 1.66 Gb).

These two snake genomes were published in PNAS in December and give new insights into the evolution of snakes and the peculiar adaptations related to their metabolism and to venom production. Both papers report results from genome sequencing as well as transcriptome characterization, providing a complete picture of several interesting and poorly understood aspects of snake biology.



The first paper focuses on the molecular basis of morphological and physiological adaptations in snakes. Positive selection acted in ancestral snakes on many genes related to metabolism, development, lungs, eyes, heart, kidney, and skeletal structure, all highly modified features in snakes. To better study the genetic basis of the python's extreme phenotypes, the authors also compared the python genome with the king cobra genome and with genomic samples from other snakes. They also performed a detailed transcriptome analysis and found responsive genes associated with metabolism, development, and also mammalian diseases.


The second paper focuses on snake venom, a fascinating cocktail of toxic proteins. The authors investigated the evolution of this complex biological weapon by sequencing the genome of the king cobra and performing transcriptome analysis to assess the composition of venom gland expressed genes, small RNAs, and secreted venom proteins. They found that "toxin genes important for prey capture have massively expanded by gene duplication and evolved under positive selection, resulting in protein neofunctionalization". There is a lot of interest in animal venoms as a source of new bioactive peptides with possible application as human drugs, and this article advances the field by providing a wealth of new information on the origin and evolution of venom proteins.


The Locust genome (total of 6.5 Gb)

The genome of L. migratoria was published in Nature in January.
There is no doubt that locusts are among the world’s most destructive agricultural pests, as attested even by their biblical role as a divine punishment! Locusts are grasshopper species with a remarkable capacity for swarming and long-distance migration. Locust swarms form suddenly from the congregation of billions of insects; they can fly hundreds of kilometres each day, and even cross oceans. They are also quite voracious: a single individual can consume its own body weight in food every day! The authors of this paper combined genome sequencing with a set of transcriptome and methylome data from gregarious and solitarious locusts to gain insights into the adaptations behind the locust machine! They revealed peculiar findings on the neuronal regulatory mechanisms underlying phase change in the locust, together with a significant expansion of gene families associated with energy consumption and detoxification, consistent with long-distance flight capacity and phytophagy. Moreover, they identified hundreds of potential insecticide target genes, such as ion channels, G-protein-coupled receptors and lethal genes. Beware, locusts!

The Elephant shark genome (0.93 Gb)

In this paper, published in Nature in January, the authors report the whole-genome analysis of a cartilaginous fish, the elephant shark (Callorhinchus milii). This genome will provide new insights into the evolution of gnathostomes from jawless vertebrates, a transition accompanied by many morphological and phenotypic innovations: jaws, paired appendages and an adaptive immune system based on immunoglobulins, T-cell receptors and the major histocompatibility complex (MHC). Moreover, the authors found a lack of genes encoding secreted calcium-binding phosphoproteins, suggesting an explanation for the absence of bone in cartilaginous fishes.
They also found that "the C. milii genome is the slowest evolving of all known vertebrates and features extensive synteny conservation with tetrapod genomes, making it a good model for comparative analyses of gnathostome genomes". The paper also analyzes some peculiar aspects of the adaptive immune system of cartilaginous fishes: "it lacks the canonical CD4 co-receptor and most transcription factors, cytokines and cytokine receptors related to the CD4 lineage, despite the presence of polymorphic major histocompatibility complex class II molecules. It thus presents a new model for understanding the origin of adaptive immunity."

The giant Galápagos tortoise (C. nigra) transcriptome

In this study, published in Genome Biology in December, the authors performed transcriptome sequencing on five C. nigra individuals from three distinct subspecies. They also analyzed samples from the congeneric red-footed tortoise C. carbonaria and from the Spanish pond turtle Mauremys leprosa. To get a complete picture of tortoise evolution, transcriptome data from the previously published European pond turtle Emys orbicularis and pond slider Trachemys scripta were also considered.
Based on this dataset, they performed a population genomic study of the giant Galápagos tortoise, a species endemic to the Galápagos archipelago. C. nigra is an interesting turtle species: it is the largest known living species of terrestrial turtle and can live well over 100 years. Based on mtDNA analyses, the authors suggested that "this insular species has been isolated from the South American continent during millions of years". C. nigra is therefore a perfect model for the study of adaptation following island colonization, and the results "point to island endemic species as a promising model for the study of the deleterious effects on genome evolution of a reduced long-term population size". Among other interesting results, the authors found a reduced diversity of immunity genes, supporting the hypothesis of attenuated pathogen diversity in the restricted island habitat, and an increased selective pressure on genes involved in the response to stress, potentially related to climatic instability and to the long lifespan of this species.


After these intriguing examples, here are the major sequencing programs that promise to provide us with more and more genomes in the next few years:

The Genome 10K Project aims to sequence the genomes of 10,000 vertebrate species, covering amphibians, birds, reptiles, mammals and fishes. The declared goal is "To understand how complex animal life evolved through changes in DNA and use this knowledge to become better stewards of the planet."
The project is co-directed by David Haussler (Howard Hughes Medical Institute, University of California, Santa Cruz), Stephen J. O'Brien (Chief Scientific Officer, Theodosius Dobzhansky Center for Genome Bioinformatics, St. Petersburg State University, St. Petersburg, Russia) and Oliver A. Ryder (San Diego Zoo, Institute for Conservation Research, San Diego, CA). Collaborators and promoters also include the BGI.

The i5K Insect and other Arthropod Genome Sequencing Initiative
Started officially in 2011, "the i5k initiative plans to sequence the genomes of 5,000 insect and related arthropod species over the next 5 years. This project will be transformative because it aims to sequence the genomes of all insect species known to be important to worldwide agriculture, food safety, medicine, and energy production; all those used as models in biology; the most abundant in world ecosystems; and representatives in every branch of insect phylogeny so as to achieve a deep understanding of arthropod evolution and phylogeny". The collaborators on the project have already produced more than 60 genomes, including various species of Drosophila, Apis mellifera, Bombyx mori, Aedes aegypti, Anopheles gambiae, Ixodes scapularis and many others.

Supported by BGI and the China National GeneBank (CNGB), the Fish T1K project started at the end of 2013 and aims to "unveil the mysteries of the origin, evolution and diversification of the largest group of vertebrates." Moreover, "all data generated from Fish T1K will be made available publicly through CNGB, ensuring that scientists have access to new developments and trends in fish research and the use of RNA-seq technology."

This is another insect-related initiative. Insects are one of the most species-rich groups of metazoan organisms. They play a pivotal role in most non-marine ecosystems and are of enormous economic and medical importance. With about 20 international partners involved, the 1K Insect Transcriptome Evolution project "aims to study the transcriptomes (that is the entirety of expressed genes) of more than 1,000 insect species encompassing all recognized insect orders. For each species, so-called ESTs (Expressed Sequence Tags) will be produced using next generation sequencing techniques (NGS). [...]. The expected data will allow inferring for the first time a robust phylogenetic backbone tree of insects. Furthermore, the project includes the development of new software for data quality assessment and analysis."

The Global Invertebrate Genomics Alliance (GIGA) is an initiative started in 2013 that groups together diverse scientists "with the intent of growing a collaborative network that can address the major problems associated with genomic sequencing of a large taxonomic spectrum - sample collection and processing, data handling, sequence annotation, alignment and access, as well as intellectual property issues." The entire project focuses on (non-insect, non-nematode) invertebrates, a taxonomic group that "comprise over 70% of all described metazoan species diversity, yet most of their genomes (complete hereditary material, DNA code) remain relatively unknown and understudied".

Saturday, 25 January 2014

Illumina presents two new sequencing platforms: population-scale genomics and the $1,000 genome


A few days ago Illumina announced two new NGS platforms: a huge factory-scale sequencer called HiSeq X Ten and a new benchtop sequencer called NextSeq 500, halfway between a MiSeq and a HiSeq.


Both platforms represent a huge advancement in data production, made possible by several technical innovations and a new chemistry. First of all, Illumina worked hard to increase the accuracy and speed of image acquisition, using an increased number (up to six) of new LED cameras for image snapshots, a new flow cell design with larger random clusters that works with lower-resolution optics, and new surface chemistry to enhance the signal.
The HiSeq X Ten will also integrate a dual-direction image scan system, doubling the scan speed, and a new flow cell containing nanowells that allow precise cluster separation, resulting in denser clustering.

Both instruments will run the new two-color chemistry. This method uses only two fluorescent dyes, red and green: T and C bases are marked with a green or red signal, respectively, A is marked with both signals, and G lacks any marker. Thus only two image acquisitions, one per color channel, are needed in every cycle instead of the classic four, cutting down processing time. The chemistry is well explained on the CoreGenomics blog and in the Illumina tech sheet.
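The decoding logic of the two-color scheme can be sketched as a simple lookup. This is an illustrative toy based on the signal mapping described above, not Illumina's actual base-calling software:

```python
# Toy illustration of two-color chemistry base calling:
# T = green only, C = red only, A = both signals, G = neither.
def call_base(green: bool, red: bool) -> str:
    """Map the presence/absence of the two channel signals to a base."""
    if green and red:
        return "A"
    if green:
        return "T"
    if red:
        return "C"
    return "G"

# Two image acquisitions per cycle (one per channel) suffice to call every base:
signals = [(True, False), (False, True), (True, True), (False, False)]
print("".join(call_base(g, r) for g, r in signals))  # prints "TCAG"
```

Note that this scheme makes G "dark": a cluster emitting no signal is called G, which is one reason the chemistry needs careful quality filtering of low-intensity clusters.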
  
The NextSeq 500 will come with two different flow cells and two different run modes, resembling the fast run mode of the HiSeq 2500.
The mid-output flow cell includes 130 million clusters and will support a 2x75 base kit that will generate 16-19 gigabases of data per 15-hour run, or a 2x150 base kit that will generate 32-39 gigabases of data in a 26-hour run.
The high-output flow cell includes 400 million clusters and will support a 2x150 base kit that will generate 100-120 gigabases of data per 29-hour run, a 2x75 base kit that will generate 50-60 gigabases per 18-hour run, and a 1x75 base kit that will generate 25-30 gigabases per 11-hour run.
These numbers will allow whole genome sequencing at about 30X in little more than a day, making the NextSeq 500 the first benchtop sequencer to hit the goal of the whole human genome.
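A quick back-of-the-envelope check of that 30X claim, assuming a ~3.2 Gb human genome (the exact genome size used in such estimates varies slightly between sources):

```python
# Rough coverage estimate for a NextSeq 500 high-output 2x150 run.
GENOME_SIZE_GB = 3.2          # approximate human genome size, in gigabases
run_yield_gb = (100, 120)     # quoted yield range for the high-output 2x150 run

for yield_gb in run_yield_gb:
    coverage = yield_gb / GENOME_SIZE_GB
    print(f"{yield_gb} Gb -> ~{coverage:.0f}X coverage")
```

Even the lower-bound yield (100 Gb, ~31X) comfortably clears the 30X threshold commonly used for whole genome sequencing.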

The HiSeq X Ten is a huge sequencer: it is actually composed of ten single sequencing units that will cost you a total of $10M. Illumina will only accept minimum orders of ten units, with each supplementary unit costing $1M. One unit will be able to generate 600 gigabases of data in one day, enough to sequence five human genomes, or 1.8 terabases of data in under three days, so that the total data production will be 18 Tb every 3 days, allowing the sequencing of 18,000 genomes every year!
Illumina claims that this juggernaut will respond to the needs of population-scale sequencing programs, often national health programs, such as the UK initiative to sequence 100,000 individuals or the Danish project to sequence the entire population of an isolated island.
The HiSeq X Ten will enable the "first real $1,000 genome," said Flatley, CEO of Illumina. One reagent kit to support 16 genomes per run will cost $12,700, or $800 per genome for reagents. Hardware will add an additional $137 per genome, while sample prep will range between $55 and $65 per genome.
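Illumina's per-genome figure is easy to reconstruct from the quoted numbers. The sketch below just redoes their arithmetic; the run configuration (16 genomes per reagent kit) and the hardware and prep figures are Illumina's claims:

```python
# Reconstructing the "$1,000 genome" arithmetic from Illumina's quoted figures.
kit_cost = 12_700                       # one reagent kit, supporting 16 genomes
genomes_per_kit = 16
reagents = kit_cost / genomes_per_kit   # 793.75, i.e. the quoted "$800 per genome"
hardware = 137                          # amortized instrument cost per genome
prep_low, prep_high = 55, 65            # quoted sample prep range per genome

total_low = reagents + hardware + prep_low
total_high = reagents + hardware + prep_high
print(f"${total_low:.2f}-${total_high:.2f} per genome")  # $985.75-$995.75
```

So the total lands just under $1,000 per genome, but only by excluding the upfront $10M purchase, overheads and downstream analysis, as discussed below.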
However, the new machine will sequence ONLY whole human genomes, with no other applications supported for now, and, given the hard work needed to produce and set up such huge instruments, Illumina will deliver only five of them in the first year.

Despite the $10M price, Illumina has already sold four HiSeq X Ten systems: to the South Korean sequencing service provider Macrogen, the Garvan Institute in Australia, the New York Genome Center, and the Broad Institute, which purchased a 14-unit system.

Detailed information and interesting discussions around the two new platforms and their technical innovations can be found around the web: CoreGenomics (presentation, HiSeq X Ten, NextSeq 500), MassGenomics, Omics!Omics!, Opniomics, GenomeWeb, Nature News

So is the mythical goal of the $1,000 genome finally achieved? Well, it seems almost...
First of all, one has to consider the initial investment and the overhead costs of running the ten machines. Moreover, the cost estimates made by Illumina are based on four years of full activity of the HiSeq X Ten, which means 18,000 genomes per year for four years with the machines running 24 hours a day... This scenario seems unlikely to many experts, since we simply don't have that many samples to sequence.
Finally, data analysis costs, beyond simple sequence alignment and perhaps SNP calling, are as usual not included. For a more detailed evaluation of the real costs, read the interesting post on the allseq blog.

Thursday, 2 January 2014

Human Genome Variation Journal announced

The Nature group has just announced a new open access journal focused on studies and discoveries about variation in the human genome and its relation to human phenotypes, with particular interest in disease-related studies. "The journal was born from a demand by the community for a place to publish important discoveries, observations and analysis about research on the human genome," Nature reports on the home page of the new journal.

The topic is quite interesting, but the most intriguing aspect of this newly born publication is a new kind of article named "Data Reports". Under this category the journal will publish "standardized reports about genomic variation and variability, especially in relation to disease". Even though peer reviewed, Data Reports will follow a short editorial procedure to allow for rapid publication. Moreover, the journal will create an open access database to query these data, providing a powerful new instrument to rapidly find associations between genomic variation and particular health traits.
As defined in the Journal guidelines "Data Reports are short reports about human genome variation and variability, which describe disease-causing variation and/or their frequencies. In addition, Data Reports can describe, document and analyze human multifactorial disease-associated variations and their frequencies."

The journal will start considering submissions from March 2014.

Friday, 27 December 2013

Happy new Human Genome! GRCh38 is here!

This Christmas brought a long-desired gift for anyone involved in genomics: a brand new release of the human genome assembly. The new GRCh38 is a kind gift from the Genome Reference Consortium (GRC), which includes the Wellcome Trust Sanger Institute (WTSI), the Washington University Genome Sciences Center (WUGSC), the European Bioinformatics Institute (EBI) and the National Center for Biotechnology Information (NCBI). This new version is the first major release in four years and provides two major improvements: fewer gaps and centromere model sequences.

 




The first draft of the human genome contained around 150,000 gaps; years of hard work reduced them to 357 in GRCh37. The new GRCh38 takes care of several of these gaps, including one on chromosome 10 associated with the mannose receptor C type 1 (MRC1) locus and one on chromosome 17 associated with the chemokine (C-C motif) ligand 3-like 1 and ligand 4-like 1 (CCL3L1/CCL4L1) genes. In addition, there have been additions from whole-genome shotgun sequencing at nearly 100 of GRCh37's assembly gaps.
Telomeres continue to be represented by default 10-kilobase gaps, while some improvements have been made on the acrocentric chromosomes, and GRCh38 includes new sequences on the short arms of chromosomes 21 and 22.

The other major feature added to the new release is the model sequence representation for centromeres and some heterochromatin. Using a method developed by a research team at the University of California, Santa Cruz (UCSC) and reads generated during the Venter genome assembly, scientists created models for the centromeres. “These models don’t exactly represent the centromere sequences in the Venter assembly, but they are a good approximation of the ‘average’ centromere in this genome,” says Deanna Church, a genomicist formerly at the US National Center for Biotechnology Information. Even if these sequence models are not exact representations of any real centromere, they will likely improve genome analysis and allow the study of variation in centromere sequences.


GRC recently submitted the data for GRCh38 to GenBank, and the assembly is available with accession GCA_000001405.15. These data are also available by FTP at ftp.ncbi.nlm.nih.gov/genbank/genomes/Eukaryotes/vertebrates_mammals/Homo_sapiens/GRCh38.
However, keep in mind that this sequence is provided without any annotation and that it will take at least a couple of weeks for the NCBI annotation pipeline to process the whole dataset and produce a new set of RefSeqs. As the NCBI reports: "The chromosome sequences will continue to have accessions NC_000001-NC_000024, but their versions will update as GRCh38 includes a sequence change for all chromosomes. This process generally takes about 2 weeks, and when that is done we will incorporate these sequences into various analysis and display tools, such as genomic BLAST and genome viewers. Thus, at the end of this process each chromosome will be represented by both an unannotated sequence in GenBank (the original GRC data) and an annotated sequence in the RefSeq collection."

Further details on the properties and development of GRCh38 are reported by the Methagora blog from Nature Methods, the NCBI Insights blog and the GRC consortium blog.

Thursday, 21 November 2013

Fred Sanger, father of sequencing, died at the age of 95


As researchers interested in genomics and NGS, we could not fail to spend a few words in memory of Fred Sanger, the two-time Nobel Prize winner who developed the Sanger sequencing method. He was a dedicated and brilliant scientist who, unusually for someone of his stature, spent most of his career at the bench. Even after receiving his first Nobel for determining the protein structure of insulin, he shifted his interest to DNA and continued to perform many experiments himself.
Thanks to his genius we have been able to read the sequence of the DNA molecule, deciphering the secrets of genetic information. His work opened the door to sequencing automation, becoming the foundation for the entire genomic era and, ultimately, for the assembly of the complete human DNA sequence. To honor his achievements, the Wellcome Trust Sanger Institute at Hinxton, where work on the genome continues, is named after him.
Even if something new has appeared in the field with NGS and its innovative techniques, Fred Sanger is still the real father of sequencing!


"Fred can fairly be called the father of the genomic era: his work laid the foundations of humanity's ability to read and understand the genetic code, which has revolutionized biology and is today contributing to transformative improvements in healthcare," Jeremy Farrar, the director of the Wellcome Trust.

“Fred was one of the outstanding scientists of the last century and it is simply impossible to overestimate the impact he has had on modern genetics and molecular biology. Moreover, by his modest manner and his quiet and determined way of carrying out experiments himself right to the end of his career, he was a superb role model and inspiration for young scientists everywhere," said Venki Ramakrishnan, deputy director of the MRC Laboratory of Molecular Biology.

"Fred was an inspiration to many, for his brilliant work, for his quiet determination and for his modesty. He was an outstanding investigator, with a dogged determination to solve questions that have led to transformations in how we perceive our world" - Prof Sir Mike Stratton, director of the Wellcome Trust Sanger Institute


Thursday, 24 October 2013

Flash Report: Oxford Nanopore announces an early access programme for its MinION nanopore sequencer

The MinION is a USB stick (!!!) DNA sequencer announced by Oxford Nanopore Technologies about a year and a half ago. The company, which is currently exhibiting at the American Society of Human Genetics meeting, yesterday announced the launch of the MinION Access Programme.
More details in this GenomeWeb article as well as on this post on Nick Loman's blog. Interesting insights into the DNA sample preparation have been posted on the Future Continuous blog.

MinION Access Programme
In late November, Oxford Nanopore will open registration for a MinION Access Programme (MAP – product preview). This is a substantial but initially controlled programme designed to give life science researchers access to nanopore sequencing technology at no risk and minimal cost.
MAP participants will be at the forefront of applying a completely novel, long-read, real-time sequencing system to existing and new application areas. MAP participants will gain hands-on understanding of the MinION technology, its capabilities and features. They will also play an active role in assessing and developing the system over time. Oxford Nanopore believes that any life science researcher can and should be able to exploit MinION in their own work. Accordingly, Oxford Nanopore is accepting applications for MAP participation from all [1, 2].
About the programme
A substantial number of selected participants will receive a MinION Access programme package. This will include:
* At least one complete MinION system (device, flowcells and software tools).
* MAP participants will be asked to pay a refundable $1,000 deposit on the MinION USB device, plus shipping.
* Oxford Nanopore will provide a regular baseline supply of flowcells sufficient to allow frequent usage of the system. MAP participants will ONLY pay shipping costs on these flowcells. Any additional flowcells required at the participants’ discretion may be available for purchase at a MAP-only price of $999 each plus shipping and taxes.
* Oxford Nanopore will provide Sequencing Preparation Kits. MAP participants may choose to develop their own sample preparation and analysis methods; however, at this stage on an unsupported basis.
What are the terms of the MAP agreement?
Participation in the MAP product preview program will require participants to sign up to an End User License Agreement (EULA) and simple terms intended to allow Oxford Nanopore to further develop the utility of the products, applications and customer support while also maximising scientific benefits for MAP participants. Further details will be provided when registration opens, however in outline:
* MAP participants will be invited to provide Oxford Nanopore with feedback regarding their experiences through channels provided by the company.
* All used flow cells are to be returned to Oxford Nanopore [3].
* MAP participants will receive training and support through an online participant community and support portal.
* MAP participants will go through an initial restricted ‘burn-in’ period, during which test samples will be run and data shared with Oxford Nanopore. After consistent and satisfactory performance has been achieved under pre-agreed criteria, the MAP participants will be able to conduct experiments with their own samples. Data can be published whilst participants are utilising the baseline supply of flowcells.
* MAP participants or Oxford Nanopore may terminate participation in the programme at any time, for any reason. Deposits will be refunded after all of the MAP hardware is returned.
* MAP participants will be the first to publish data from their own samples. Oxford Nanopore does not intend to restrict use or dissemination of the biological results obtained by participants using MinIONs to analyse their own samples. Oxford Nanopore is interested in the quality and performance of the MinION system itself.
* Oxford Nanopore intends to give preferential status for the GridION Access Programme (GAP) when announced to successful participants in the MinION access programme.
* The MinION software will generate reports on the quality of each experiment and will be provided to Oxford Nanopore only to facilitate support and debugging.
Registration process
Registration will open in late November for a specific and limited time period. Oxford Nanopore will operate a controlled release of spaces on the programme.
MAP participants will be notified upon acceptance to the programme. They will then be able to review and accept the EULA before providing the refundable deposit and joining the programme. MAP participants will then receive a login for the participant support portal and a target delivery date for their MinION(s) and initial flow cells.
The online participant support portal will provide training materials, FAQs, support and other information such as data examples from Oxford Nanopore. It will also include a community forum to allow participants to share experiences.
Who can join?
Anybody who is not affiliated with competitors of Oxford Nanopore. Strong preference will be given to biologists/researchers working within the field of applied NGS where long reads, simple workflow, low costs, and real time analysis can be shown to make a key difference. Preference may also be given to individuals/sites opting for multiple MinIONs. If the programme is oversubscribed, some element of fairly applied random selection may be used to further prioritise participants.
1. If you would like us to keep you informed of the opening of this registration please visit our contact page and select the box marked ‘Keep me informed on the MinION Access programme’.
2. The MinION system is for Research Use Only
3. Flowcells can be easily, quickly and thoroughly washed through with water and dried before return.

Thursday, 17 October 2013

Sad but true: six years after acquisition, Roche shuts down 454 sequencing business

I think October 15, 2013 was a sad day for the NGS community. Using a pyrosequencing-based technology originally developed by Jonathan Rothberg, in 2007 it became possible for the first time to sequence the complete genome of an individual of our species (James Watson) with an NGS-based approach. Now those days are over, and newer technologies are more "commercially sustainable" than some of the ones that made the history of NGS. I'm also wondering what the future of the SOLiD system will be in a world dominated by Illumina and a rapidly growing community of Ion Torrent/Proton users.

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) - Roche is shuttering its 454 Life Sciences sequencing operations and laying off about 100 employees, the company confirmed today. The 454 sequencers will be phased out in mid-2016, and the 454 facility in Branford, Conn., will be closed "accordingly," Roche said in a statement e-mailed to GenomeWeb Daily News.

The 100 layoffs will take place during the next three years, and, "Roche is committed to finding socially responsible solutions for the employees affected by these changes," it added.

Until the business is shut down, Roche will provide service and support for 454 instruments, parts, reagents, and consumables.

"Sequencing is a fast-evolving technology," the firm said. "With the continuous efforts of the sequencing unit in building a diverse pipeline of potentially differentiated sequencing technologies, Roche is committed to introducing differentiated and competitive products to the market and offer[ing] a sequencing product pipeline for both life science and clinical applications."

Roche bought 454 from CuraGen in 2007 for $155 million in cash and stock, saying at the time the deal would solidify its access to future 454 sequencers and enable it to use the tools for in vitro diagnostic applications. Prior to the purchase, Roche had been 454's exclusive distributor starting in 2005.

Roche does not break out revenue figures for the 454 business, but in recent years, with the ascent of other sequencing technologies, the 454 instruments were pushed to the research margins. More recently, lower-throughput sequencers such as Illumina's MiSeq and Life Technologies' Ion Torrent systems have further moved the 454 technology toward irrelevance.

Over the past few years, Roche has made efforts to stay involved with the next wave of sequencing technologies in development. It forged alliances with IBM and DNA Electronics in 2010, and in early 2012 it made a hostile bid for Illumina that eventually reached a price tag of $6.7 billion - a bid that was rejected by Illumina.

After being rebuffed by Illumina, Roche continued trying to resuscitate its sequencing operations, and in December 2012, reports surfaced that Roche was again courting Illumina, though no deal materialized.

Then, earlier this year, Roche announced it had ended its deals with IBM and DNA Electronics, and said simultaneously it would shut down its R&D efforts in semiconductor sequencing and nanopore sequencing, resulting in about 60 layoffs at the Branford site.

The company said at the time that it was consolidating its 454 and NimbleGen products into a new sequencing unit covering both life science and clinical diagnostic applications. Dan Zabrowski, head of Roche Applied Science, who was to head the new sequencing unit, told In Sequence then, "We are fully committed to [our] life science business, and this decision did not have an impact on any of our businesses or customers. We continue to think that sequencing is going to play an important role in life science and in the clinic."

Roche is not abandoning the sequencing space entirely, though. Last month the firm forged a deal for up to $75 million with Pacific Biosciences to develop diagnostic products based on PacBio's SMRT technology.

"Roche sees great potential in this sequencing technology which offers unprecedented read length, accuracy, and speed for the development of future sequencing based applications in clinical diagnostics," Roche said.

Tuesday, 20 August 2013

Ready for Avalanche?

It's been a while (I remember rumors from more than a year ago) since Life Tech started talking about a new chemistry based on isothermal amplification that promises to eliminate emPCR, shorten clonal amplification to less than one hour and provide better uniformity... They called it Avalanche... But nothing more was revealed in the following months, so that Avalanche was becoming something like a mythological creature: fascinating, sure, but does it really exist?

However, keeping new products secret until the time of commercial release seems to be common practice in the competitive field of NGS technology... and now Avalanche is finally ready to hit our benches and provide the long-awaited improvement for the SOLiD and Ion Torrent sequencing platforms.
Indeed, researchers from Life Technologies have recently published in PNAS a paper demonstrating the feasibility, efficiency and speed of the new method. As anticipated, it is based on isothermal amplification of a suitably prepared DNA template using Bst DNA polymerase and substrate-immobilized primers. Citing the abstract, it uses "a template walking mechanism using a pair of low-melting temperature (Tm) solid-surface homopolymer primers and a low-Tm solution phase primer". The authors report results obtained with the new method on a SOLiD 5500 W flowchip, and they are quite exciting: a reaction time of slightly more than 30 minutes, a high percentage of monoclonal colonies, 3- to 4-fold more mapped reads than with the traditional method, and an easy paired-end protocol.




Since the procedure reported in the paper was developed on SOLiD technology, I expect the new chemistry will be commercially available on that platform soon... However, I'm wondering whether Life Technologies is already working to extend this innovation to the Ion Torrent sequencers, and whether they will introduce Avalanche with the PII chip (announced to be out in the coming months) or with the PIII (probably in the first half of 2014). The method demonstrated in the paper requires surface-immobilized primers on a flowcell, so they will either have to adapt it to work on the small beads required for the PGM and Proton chips, or re-think the chips themselves to avoid the use of beads; the latter does not seem easy to apply to the current machines. Another question that bothers most Ion Torrent customers: will Avalanche be compatible with the current library/template preparation equipment, or will we have to buy some new expensive piece of hardware to actually do the upgrade?

Now waiting to be trampled by the Avalanche!

For more technical details read the full paper:
Ma Z, Lee RW, Li B, Kenney P, Wang Y, Erikson J, Goyal S, Lao K. Isothermal amplification method for next-generation sequencing. PNAS, 2013.