Interview: Misha Angrist, Science Writer and Genetics Professor

March 17, 2010 · Posted in Interviews · Leave a Comment 

By Hsien-Hsien Lei, PhD, HUGO Matters Editor

One of the most rewarding experiences I’ve had since I began writing about genetics and health is the opportunity to meet interesting and inspiring characters in the field, whether virtually or in person. Dr. Misha Angrist, aka Genomeboy, is someone I’ve learned a great deal from over the years. His candidly astute (astutely candid?) observations on genetics and life, and his participation as one of the original study subjects of the Personal Genome Project, mark him as someone to watch as the genome revolution unfolds. Once you’re done reading this interview, hop on over to Twitter and follow Dr. Angrist’s stream of consciousness. You’ll be glad you did.

HUGO Matters: On your profile page at the Duke University Institute for Genome Sciences and Policy, it says you’re Assistant Professor of the Practice. What does that mean and how did you end up going from an MFA from the Bennington Writing Seminars to an MS in genetic counseling to a PhD in genetics to Assistant Professor of the Practice?

Dr. Angrist: I actually got the MS in genetic counseling first, had so much fun doing research that I went for a PhD in genetics, and then years later, after burning out as a postdoc and floundering a bit in the real world, decided to get an MFA. After that I took a job at Duke as a science editor and eventually became Assistant Professor of the Practice. “PoPs” are full-time faculty who are non-tenure-track. They generally teach more than they do research, although that’s not always the case. I enjoy both and am fortunate that I get to do both. I teach, I write grants, I do research, I write papers, and I have written what I hope will be the first of several books.

HUGO Matters: You’re currently working on a book, Here Is a Human Being: At the Dawn of Personal Genomics, that’s due for release in November 2010 about personal genomics and the characters involved in the development of the field. Can you tell us more about the process of writing a popular science book?

Dr. Angrist: Writing “HiaHB” was both the most gratifying thing I’ve ever been paid to do and the hardest. Despite having an MFA, I’m not convinced that anything could have prepared me for it. I made many, many mistakes and I continue to make them in the editing process.

I think for me what clinched the decision to go forward with the book was meeting George Church–such a fascinating, charismatic, eccentric, visionary and brilliant guy. And extremely warm and generous, too. He made my job so much easier than it would have been had I chosen to focus on someone else. And indeed, I was blessed to be able to talk to/follow around dozens of other compelling people inside and on the fringes of the personal genomics world. I imagine any writer of a narrative nonfiction/journalistic book relies on the kindness of strangers–I certainly did.

HUGO Matters: As if all the above weren’t enough, you’re also the fourth subject in the Personal Genome Project. Why did you decide to make your personal genome public? Do you think we should all do the same?

Dr. Angrist: I decided to make my genome public because I thought I needed to walk the walk. I’m someone who decries genetic determinism and says we shouldn’t be afraid of this stuff, so I thought I should put my DNA where my mouth was. Does that mean everyone should do it? Absolutely not. I think one of the things about personal genomics that gets lost sometimes, particularly by some of my colleagues in the humanities, is that it’s personal. You should be free to share or hide as much of yourself as you want; it’s not for me to say whether it’s appropriate or not. It’s none of my business. The fact that I chose to do it is a decision I made for me, not for anyone else.

I have heard the objections: “What about your family? Aren’t you exposing them?” Yes and no. I have young daughters and a family history of early-onset breast cancer. So yeah, I wanted to see whether I carried a mutation in BRCA1 or BRCA2 before I went public. (My BRCA genes are clean as far as I can tell.) Not because I didn’t want my daughters to know what might be in their genomes, but because I’m their Dad and if their risks were high I wanted them to learn about those risks from me and their Mom, not from the internet. But still, I would rather know than not know. If their children were at risk for a late-onset disease, some parents might not want to know–I respect that. But I would. As for the public aspect, I still maintain that genomes are probabilistic things and we learn about these same probabilities to some extent every time Uncle Joe can’t remember where he parked his car or Grandpa needs angioplasty.

I suspect that within a few years the power of these arguments will diminish. Anonymous genomes will always be hard to keep anonymous and their usefulness will be limited by their anonymity. Perhaps even more important, my generation (I’m 45) and its qualms about genetic information will have been overtaken by the Facebook generation and its willingness to let it all hang out.

Inaugural Issue of The HUGO Journal

March 16, 2010 · Posted in The HUGO Journal · Leave a Comment 

The time has come! The HUGO Journal’s first articles are now online.

Which one will you be reading first?

Read more

Past HUGO President Professor Leena Peltonen-Palotie (1952-2010)

March 11, 2010 · Posted in General · Leave a Comment 

HUGO Mourns the Loss of a Colleague and Friend

We welcome your thoughts and memories of Prof. Peltonen-Palotie’s life as well as the tremendous contributions she made to science and genetics. Please share your comments with us.

Professor Leena Peltonen-Palotie, past President of HUGO (2005-2007), sadly passed away March 10, 2010. Her premature death has left us with a deep sense of loss for a remarkable scientist, a respected leader, and a wonderful friend. The impact of her work, her imprint on the lives of her many students, and her stewardship of the many institutions that she led will be remembered. The whole HUGO community mourns her loss and sends its prayers to the Palotie family in their grief.

We print below the obituary released by the Academy of Finland.

Obituary: Professor Leena Peltonen-Palotie, Academician of Science

Professor Leena Peltonen-Palotie, Academician of Science, has passed away after a serious illness. Professor Peltonen-Palotie, MD, PhD, was awarded the honorary title of Academician of Science in October 2009 by the President of the Republic of Finland, an honor held by no more than twelve Finnish scientists and scholars at a time. Peltonen-Palotie was one of the world’s foremost and most respected experts in genetic research. Her research serves as an excellent example of how basic molecular biology can be combined with medicine to gain a better understanding of different diseases.

Peltonen-Palotie was the recipient of several international accolades, including the Antoine Marfan Award, the Anders Jahre Prize, the European van Gysel Prize for Biomedical Research and the Eric K. Fernström Prize.

Over a career that spanned 37 years, Peltonen-Palotie ran research groups at the University of Oulu, the University of Helsinki, the National Public Health Institute of Finland, the University of California Los Angeles, the Broad Institute of MIT and Harvard in Boston, and the Sanger Institute in Cambridge, UK.

Her team identified genetic mutations associated with dyslipidemias, lactose intolerance, multiple sclerosis, schizophrenia, obesity and heart disease. The team also established how these mutations mechanistically lead to the actual onset of disease. Their efforts have paved the way to new diagnostic tests and to screenings for disease carriers. She also excelled in training young scientists, having, among other things, supervised more than 70 PhD theses, thus influencing and inspiring several new generations of scientists. Importantly, she was also very determined to pass on this new information about human genetics and disease to the general public, and was always open to explaining these sometimes difficult issues in a clear and personal way.

Read more

Interview: Dr. Keith Grimaldi of Eurogene on Nutrigenomics

March 9, 2010 · Posted in Interviews · 4 Comments 

By Hsien-Hsien Lei, PhD, HUGO Matters Editor

Several years ago, I became acquainted with Dr. Keith Grimaldi who was then Chief Scientist at Sciona, a company offering nutrigenomic tests. Nutrigenomics is the study of the interaction between genetics and diet. Nutrigenomic tests are genetic tests that are used to help people determine the ideal diet for optimizing their health.

Interest in nutrigenomics started around 2003 and peaked in 2005 (around the time I started blogging about genetics). This was before direct-to-consumer personalized genetic testing became the attention-grabbing industry we are familiar with today, which ranges from tests for specific disease-related genetic mutations to SNP analysis to whole-genome sequencing. Nutrigenomics has now claimed a corner of consumer genetic testing, and Dr. Grimaldi, who holds a PhD in Clinical Biochemistry from the University of Cambridge, is spearheading the nutrigenomics movement in Europe as Scientific Director of Eurogene, which we’ll learn more about in this interview.

I hope you’ll find this interview enlightening. Dr. Grimaldi has had an interesting career in the field of genetic testing and has much to share. Don’t miss his comments about science and social networking below the fold! If you have any questions for Dr. Grimaldi, please leave a comment.


HUGO Matters: Eurogene is an interesting endeavor in that it’s personal genomic/nutrigenomic testing supported by a consortium of partners. Can you tell us how Eurogene came about and some of the project’s immediate and long-term plans?

Dr. Grimaldi: At the time the project began I was working at Sciona; we had been involved in a number of EU consortium research grants and had also worked for several years with the Biomedical Engineering Laboratory at the National Technical University of Athens (BEL-NTUA), a group of excellent software and systems developers. A call came out from the EU under the eTEN programme for market validation products. The scope of the call was to use the funds to overcome barriers to market facing new technology products and services that could be useful in society. We put together a small consortium – Sciona, BEL-NTUA, three clinical partners in Italy, Germany and Spain, plus a marketing/business development company from the UK. Fortunately we were one of the chosen few, and our project was to use e-technologies to improve the product – to transform the existing, largely paper-based nutrigenetic test (hardcopy questionnaire and report) into electronic format and develop an interactive website for individuals and practitioners to manage their genetic and personal data and create their own personal reports.

That was the immediate plan, but the project evolved along a slightly but significantly different course. It began in January 2008 and phase 1 EU funding saw it through until October 2009. Due to the economic downturn (when Lehman went bust, etc.), in December 2008 Sciona failed to secure some required funding and had to drastically reduce its operations (and sadly ceased trading a few months later). Sciona left the project and I left Sciona, becoming a member of BEL (it was great – I returned to being an academic and got to be in Athens a lot!). The direction of Eurogene changed then because suddenly it was no longer tied to one product or service – we outsourced genotyping of the remaining patients to a European genetics lab and carried on. One of the interesting findings, in the current debate, was that although Eurogene was set up to deliver both direct-to-consumer (DTC) and through practitioners, it was largely the latter market that was more receptive to this type of product/service in Europe, and we ended up avoiding DTC. Regarding longer-term plans, I think I’ll answer that as part of the reply to the next question.

HUGO Matters: How is Eurogene different from other personal genomic services companies like Navigenics, 23andMe, and Knome?

Dr. Grimaldi: As Eurogene was conceived it could have been seen as a sort of competitor to these companies, but it was really complementary. The Eurogene concept is to deliver highly personalised information based on genetics plus diet & lifestyle and other biomarkers such as traditional blood analyses. The aim is to integrate personal genomics with the rest of the person’s lifestyle and health status. We don’t do whole genome scanning or sequencing and restrict the genetic analysis only to those SNPs (and indels, copy number variants, etc) that are relevant for a particular purpose, e.g. a type 2 diabetes (T2DM) profile which we are working on at the moment. The companies you mention  do the genotyping and provide quite a lot of interesting information in their reports but it’s not highly personal and is of limited use for clinical decision making. On T2DM for example the information is general, does not quantify, for the individual, the effects of diet, lifestyle, biomarkers and family history on the T2DM risk, nor does it enable personalised treatments, based on all those parameters, to be devised – Eurogene will do all that. A loose analogy could be financial information – if you are a small investor maybe you find enough information for free on Yahoo finance, or with a small subscription to get a bit extra. If you are a serious investor concentrating on a particular sector you will pay a subscription to someone like Reuters or Bloomberg to receive highly specific, detailed information targeted to your sector of interest which will be an important factor in your investment decisions – Eurogene is more like the latter in a healthcare setting (but rather smaller at the moment!).

As the project developed though, the situation has changed subtly, and here come the longer term plans. We are no longer tied to a single commercial genetic test provider and when Sciona left we had to decide whether to continue along the same lines and develop our own tests. We decided not to – there are plenty of genotyping companies out there already and we decided that with the infrastructure and systems that we had developed that our core competency was information interpretation and secure, confidential delivery.

The software tools that we have developed, the Eurogene “Rules Toolset”, include several components, the kernel of which is the Modeler – this takes genetic data plus ANY other kind of data (diet, exercise levels, blood analysis markers such as lipids, insulin, glucose, etc.) and creates a personal report. The rest of the Toolset includes modules for safe data encryption, transmission and storage; real-time generation of updated personal reports, managed by the practitioner or customer through web services; and continual quality control of the algorithms to make sure that the data going in is interpreted correctly in the personal report / advice that comes out. The QC module is a very important piece, as it also creates a log of all the advice statements created for each report – we think that this sort of exhaustive QC data collection, where we can go back and check what advice / results were given in any particular report from any date, will be a key requirement in future regulations (and mistakes can be made – see Daniel MacArthur on deCODEme). The system handles information, and the end report can be as broad or as specific as needed. It is also interactive, so that when you change your diet or get new blood test results you can create updated personal reports – e.g. it would quantify the risk change for T2DM based on the parameters that change. Of course, being an EU project, it is also multilingual (the system will also handle Chinese, Japanese, Hebrew, etc.); one cool thing about that is that if you happen to require a medical visit while travelling in a different country you can access your account and, with a few clicks, create your personal reports in the local language.

I think you could describe the Eurogene Toolset as a sort of operating system for the application of personal genomics in healthcare, either direct to the consumer or via a practitioner. So now we are even more complementary to 23andMe etc. I mentioned above a T2DM model – we are developing this more as a demonstration than as a product. It will allow anyone who already has their results to register with the website (anonymously), input their genetic data plus other personal information (diet, biomarkers, etc.) and produce a personal report based on all the elements, not just the genetics. The system can handle all sorts of complexity. I have always been closely involved with NuGO (in fact we held our 2nd workshop at NuGOweek in Italy last year) and we have made sure that the Rules Toolset will be compatible with all the “omics” data once that begins to have clinical applicability.

So, longer term…we are a small consortium and the phase 1 funding is over. We are now deciding whether to pursue private funding (we have created a business plan) or whether to look for more public funding; both have their pros and cons. But we are very clear about what we are: we expect to work with other companies who have, or want to develop, personal healthcare services, for which we would provide all of the infrastructure and systems for interpreting data and delivering services. It allows each partner to concentrate on its expertise. We have built a complex set-up that would cost several hundred thousand euros and a couple of years to reproduce, and we will make it available to companies who want to deliver their expertise through personal healthcare services.

HUGO Matters: What are the major challenges facing the field of nutrigenomics?

Dr. Grimaldi: Although in Eurogene we are moving away from a strictly nutrigenomics field to more broadly cover personal genomics the challenges are similar, and familiar:

Regulations – we need some framework and we need it sooner rather than later. At Eurogene we researched this quite a lot (a review will be submitted for publication soon) and there is basically no real regulation of any sort anywhere, except in some isolated cases like the recent German legislation. Even with the strict (and widely criticised) German legislation, it’s hard to see how it will stop DTC sales into the country over the internet. Our position is that we strongly support self-regulation, as I describe in more detail on my blog, and the main reasons are time and flexibility – we would be happy to work with any government regulation, but that will take too long and will probably be obsolete by the time it comes into force. Look at the reports from the recent #AGBT conference: sequencing cost is tumbling – what will it be like in 3-5 years’ time? Our longer-term view at Eurogene is that soon there will be so many people with their own DNA results that it will no longer be necessary to offer genotyping to start up a personal genomics company. All you will need is a website and an internet connection to start selling interpretation services – can you imagine the free-for-all that will happen? Even now, with the significant start-up costs, there have still been some very dubious companies appearing over the last few years; imagine what it will be like when the start-up costs are low. Actually, they will be low to start up a poor service; the costs will still be high to provide a real one. Providing personal healthcare information and interpreting results is not cheap or easy if it is done properly, but it is if it is done badly. We urgently need some movement on self-regulation so that the public (and professionals) will be able to identify the credible companies; we need it for the protection of ALL the stakeholders (except the scam companies).

Credibility – this is essential, of course, but we have all taken hits over the years, and anything involving nutrition is vulnerable. We have to be open and transparent; the companies mentioned above are very transparent, but there are several unmentioned ones who are not. We all have to be careful about our marketing and claims. Episodes like the recent press release of a conference presentation of a trial to support a weight-management genetic test do not help. The test may or may not be valid, but weight loss is such an exploited area that much more care is needed, and I don’t agree with press-releasing a conference presentation to claim scientific validity to help sell a test – any test, let alone a genetic test. Until the data are available for scrutiny, either online or written up and published, there is no basis for using them as support for sales. I know it goes on all the time in the supplement industry, but maybe that’s precisely the point: personal genomics, and especially nutrigenomics, has to be a long way from the level of the supplement industry. The bar is much higher – maybe artificially higher because of the G-word – but that’s the reality we have to live with.

These are the two main barriers, I think. There are many others – e.g. reaching levels of undisputed clinical utility and demonstrating it, or educating healthcare professionals and providing them with the tools to integrate genomics, to name a couple – but without overcoming the main barriers any long-term growth will be painful.

HUGO Matters: Once whole genome sequencing becomes affordable and efficient, how do you think the field of nutrigenomics will change?

Read more

3-D Genome Sequencing

March 6, 2010 · Posted in Tools of the Genome Trade · Leave a Comment 

By Hsien-Hsien Lei, PhD, HUGO Matters Editor

Congratulations to Erez Lieberman-Aiden, graduate student at the Harvard-MIT Division of Health Sciences and Technology, on winning the Lemelson-MIT Student Prize. One of his several impressive innovations is the Hi-C method for three-dimensional genome sequencing, which Lieberman-Aiden likens to “MRI for genomes.”

From Medical News Today:

Mapping the Human Genome in 3-D
Lieberman-Aiden’s most recent invention is the "Hi-C" method for three-dimensional genome sequencing. It has been hailed as a revolutionary technology that will enable an entirely new understanding of cell state, genetic regulation and disease. Developed together with postdoctoral student Nynke van Berkum of UMass Medical School, and their advisors Eric Lander and Job Dekker, Hi-C makes it possible to create global, three-dimensional portraits of whole genomes as they fold. Three dimensional genome sequencing is a major advance in solving the mystery of how the human genome – which is two meters and three billion chemical letters long – fits into the tiny nucleus of a cell.

Applied to the human genome, the technology enabled Lieberman-Aiden, van Berkum and their team to make two significant discoveries. First, they found that the genome is organized into separate active and inactive compartments; chromosomes weave in and out of these compartments, turning the individual genes along their length on and off. When they examined this process more closely, they found evidence that the genome adapts into a never-before-seen state called a fractal globule. This allows cells to pack DNA extraordinarily tightly without knotting, and to easily access genes when the information they contain is needed.

Paper: Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome, Lieberman-Aiden E, et al., Science, 326:5950, 289-293, 9 October 2009

Image: Spirals of DNA molecules, Annie Cavanagh, Wellcome Images

Whole Genome Sequencing for Cancer

March 5, 2010 · Posted in Genetics of Disease, Tools of the Genome Trade · 1 Comment 

By Hsien-Hsien Lei, PhD, HUGO Matters Editor

Last month, researchers at the Johns Hopkins Kimmel Cancer Center announced that they had successfully sequenced the complete genomes of cancer patients. The sequences were analyzed using a technique called “personalized analysis of rearranged ends,” or PARE. PARE can detect genome rearrangements that can then be used as cancer biomarkers indicating tumor growth. Cancer genomes can be used to identify:

  • Driver mutations that cause cancerous growth through mechanisms such as the alteration of gene expression
  • Mutations that are the same in different tumors of the same type
  • New drug targets based on the mutations identified in the cancer genome
  • Diagnostic tools based on a complete list of driver mutations in each cancer type
  • Effective drugs or a cocktail of drugs tailored to each individual based on their tumor profile of driver mutations

    (Source: The Scientist)

The results from whole genome sequencing of cancer patients can be used to “monitor the growth of tumors, determine appropriate levels of therapy, and show instances of recurrence.” (BioTechniques)

“Eventually, we believe this type of approach could be used to detect recurrent cancers before they are found by conventional imaging methods, like CT scans,” Luis Diaz, assistant professor of oncology at Johns Hopkins, said in a press release.

The Cancer Genome Atlas (TCGA), part of the National Human Genome Research Institute, is also working on using large-scale genome sequencing to study cancer. In June 2009, their Genome Sequencing Centers began including whole exome and whole genome data. And in July 2009, the Genome Sequencing Centers completed the first of 24 whole genome sequence analyses of glioblastoma multiforme and ovarian tumor samples. Here’s Dr. Raju Kucherlapati, Principal Investigator, Genome Characterization Center, The Cancer Genome Atlas, speaking about cancer genetics and genomics.


Recently, Amy Harmon of the New York Times explored targeted cancer therapies in a three-part series. She profiled Dr. Keith Flaherty who was in charge of clinical trials testing PLX4032 in melanoma patients.

Healthy cells turned cancerous, biologists knew, when certain genes that control their growth were mutated, either by random accidents or exposure to toxins like tobacco smoke and ultraviolet light. Once altered, like an accelerator stuck to the floor, they constantly signaled cells to grow.

What mattered in terms of treatment was therefore not only where a tumor originated, like the lungs or the colon, but also which set of these “driver” genes was fueling its growth. Drugs that blocked the proteins that carried the genes’ signals, some believed, could defuse a cancer without serious side effects.

Targeted cancer therapies also include gene therapy, although this approach has thus far been unsuccessful. More information on targeted cancer therapies is available from the National Cancer Institute.

Systems Biomedicine: Concepts and Perspectives

March 2, 2010 · Posted in Books about Genetics and Genomics · 14 Comments 

Edited to add: Leave a comment to enter a drawing for a free copy of Systems Biomedicine. Contest is open until Friday, 5 March, 11:59 pm PST (GMT –8).

8 March 10:
Congratulations to Shameer Khader and Ciprian Gheorghe! They’ve each won a copy of Systems Biomedicine by HUGO President Edison Liu.

We’ll be holding another contest in April. Keep your eyes on HUGO!


Have you read HUGO President Prof. Edison Liu’s latest book? Co-authored with Douglas Lauffenburger of MIT, Systems Biomedicine: Concepts and Perspectives examines systems biology and its application to biomedical research.

You can read more about the book at A*STAR Research.

Systems biology, as we now conceive of it, differs in scale and formalism from these earlier quantitative traditions. As any new field, there are many opinions as to the scope of systems biology. In essence, it can be described as a discipline that seeks to quantify and annotate the complexity of biological systems in order to construct algorithmic models capable of predicting outcomes from component inputs. Systems biomedicine is an extension of these strategies to the study of biomedical problems. This demarcation is relevant given the challenges of the complexity of the human organism and the human impact of the results of these investigations.

What other books about genetics or genomics would you recommend?

Petascale Computing and Genomics

March 1, 2010 · Posted in Research, Tools of the Genome Trade · Leave a Comment 

By Hsien-Hsien Lei, PhD, HUGO Matters Editor

Last week, I mentioned the use of petascale supercomputers to manage and analyze the overwhelming amount of genomic data being generated currently and into the foreseeable future. Last week was also the first time I’d ever heard the terms “petascale” and “petaflop.” I assume that I’m not the only one who hasn’t given much thought to the specifics of supercomputing so I’m sharing here what I’ve learned so far.

First, a couple of definitions:

  • “peta” is one quadrillion (10^15)
  • FLOPS stands for FLoating point Operations Per Second, a measure of a computer’s performance
  • 1 petaflop is equal to 1,000 teraflops, or 1 quadrillion floating point operations per second


  • According to Wikipedia, a simple calculator functions at about 10 FLOPS.
  • Most personal computers process a few hundred thousand calculations per second.
  • One petabyte of data is equivalent to six billion digital photos. (Blue Waters)
  • Google processes 20 petabytes of data per day (GenomeWeb)
  • 1 petabyte = 1,024 terabytes
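The arithmetic behind these prefixes is easy to check yourself. Here is a small Python sketch (the variable names are my own, chosen for illustration) that works through the conversions listed above:

```python
# Decimal (SI) prefixes are used for compute speed (FLOPS).
PETA = 10 ** 15          # "peta" = one quadrillion
TERA = 10 ** 12

# 1 petaflop/s = 1,000 teraflop/s
petaflops_in_teraflops = PETA / TERA
print(petaflops_in_teraflops)            # 1000.0

# Storage is conventionally counted in binary multiples:
# 1 petabyte = 1,024 terabytes.
TERABYTE = 1024 ** 4     # bytes in a terabyte
PETABYTE = 1024 ** 5     # bytes in a petabyte
print(PETABYTE // TERABYTE)              # 1024

# At 20 petaflop/s (the planned Sequoia machine), one quadrillion
# floating point operations take only a twentieth of a second:
seconds_per_quadrillion_ops = PETA / (20 * PETA)
print(seconds_per_quadrillion_ops)       # 0.05
```

The mismatch between the two conventions (powers of 10 for speed, powers of 1,024 for storage) is a common source of confusion when comparing machine specifications.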

Image: I See Your Petaflop and Raise You 19 More, Wired Science, February 2, 2009. Sequoia is the supercomputer planned by the Department of Energy and IBM that will be able to perform at the 20-petaflop level.

David A. Bader, author of Petascale Computing: Algorithms and Applications, explained in an interview:

Computational science enables us to investigate phenomena where economics or constraints preclude experimentation, evaluate complex models and manage massive data volumes, model processes across interdisciplinary boundaries, and transform business and engineering practices.

Petascale computing is run off clusters of computers. An article in Cloud Computing Journal explains why:

The main benefits of clusters are affordability, flexibility, availability, high-performance and scalability. A cluster uses the aggregated power of compute server nodes to form a high-performance solution for parallel applications. When more compute power is needed, it can be simply achieved by adding more server nodes to the cluster.

In November 2009, it was announced that a four-year $1 million project, supported by the National Science Foundation’s PetaApps program, was awarded to study genomic evolution using petascale computers. Researchers will first use GRAPPA, an open-source algorithm, to study genome rearrangements in Drosophila. From this analysis, new algorithms will be developed which have the potential to make sense of genome rearrangements leading to better identification of microorganisms, the development of new vaccines, and a greater understanding of how microbial communities evolve along with biochemical pathways.

In 2011, the world’s most powerful supercomputer, Blue Waters, will come online. According to GenomeWeb, Blue Waters will contain more than 200,000 processing cores and will be able to perform at multi-petaflop levels. A partnership between the University of Illinois at Urbana-Champaign, its National Center for Supercomputing Applications (NCSA), IBM, and the Great Lakes Consortium for Petascale Computation, Blue Waters is supported by the National Science Foundation and the University of Illinois. Researchers can apply to the National Science Foundation for time on Blue Waters.

"I think petascale computing comes at a very good time for biology, especially genomics, which has to deal with … increasingly large data sets trying to do a lot of correlation between the data that’s held in several massive datasets," says Thomas Dunning, director of the NCSA at University of Illinois, Urbana-Champaign. "This is the time that biology is now going to need this kind of computing capability — and the good thing is that it’s going to be here."

– "Petascale Coming Down the Pike," GenomeWeb, June 2009

Here’s a video of Saurabh Sinha, a University of Illinois assistant professor of computer science, talking about his research using NCSA’s supercomputers.

Genome-wide search for regulatory sequences in a newly sequenced genome: comparative genomics in the large divergence regime

Next topic for thought: cloud computing. More to come.

NB: HUGO President Prof. Edison T. Liu is currently attending the Bioinformatics of Genome Validation and Supercomputer Applications workshop at NCSA in Urbana, Illinois. I’m looking forward to hearing more about their discussions!

Do you have any knowledge to share with regards to petascale computing and genomics?

Movie – Naturally Obsessed, the making of a scientist

February 27, 2010 · Posted in Research · Leave a Comment 


Naturally Obsessed: The Making of a Scientist is a documentary by Richard Rifkind and Carole Rifkind.

Mixing humor with heartbreak, the film tells a profoundly real yet intensely dramatic story about life in a molecular biology lab. “I want the viewer to stand in the shoes of a scientist at work in a lab, glimpse the world of research as it really is, and understand what it takes to fill an ample pipeline of future scientists,” says scientist turned filmmaker, Sloan-Kettering Institute Chairman Emeritus, Richard Rifkind.

For another behind-the-scenes look at the high pressure environment of a life sciences lab, I recommend Intuition by Allegra Goodman. (New York Times review)

(via Misha Angrist)

Drinking from the Fire Hose of Genomic Data

February 26, 2010 · Posted in Tools of the Genome Trade · Leave a Comment 

By Hsien-Hsien Lei, PhD, HUGO Matters Editor

At a meeting with HUGO President Prof. Edison Liu on Monday, we talked about the tremendous opportunities for utilizing personal genome data if we weren’t limited by the lack of computing power. Just last month, the DOE Joint Genome Institute held an invitation-only workshop to discuss the use of high performance computing for analyzing and managing data from genome sequencing. As sequencing becomes more efficient and cost-effective, we are reaching the next hurdle of data management, data analysis, and translational research.

One laboratory interested in using high-performance computing (HPC) for genomics is the Zhulin lab at the National Institute for Computational Sciences, a joint institute of the University of Tennessee and Oak Ridge National Laboratory.

The exponential growth of genomic datasets leads to a situation where computing power becomes a critical issue. We invest in adapting powerful bioinformatics tools for use with HPC architectures. The UT-ORNL Kraken and ORNL Jaguar petascale supercomputers offer tens of thousands of processors that enable researchers to computationally analyze data on an unforeseen scale. Our first successful implementation of the HMMER software (now scalable to thousands of processors) permitted us to match every sequence in the NCBI non-redundant database (roughly 5 million protein sequences) to all Pfam domain models in less than a day.
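
Taking the lab's numbers at face value, a rough scaling estimate shows why tens of thousands of cores matter. The per-comparison cost and Pfam model count below are my own illustrative assumptions, not the Zhulin lab's figures; the point is the shape of the arithmetic, not the exact values.

```python
# Back-of-envelope for an all-vs-all HMM scan (assumed figures).
sequences = 5_000_000    # ~NCBI nr protein sequences, per the quote
models = 12_000          # rough order of magnitude of Pfam-A families
sec_per_cmp = 0.05       # assumed single-core cost per sequence-vs-model test

comparisons = sequences * models                # 6.0e10 pairings
core_hours = comparisons * sec_per_cmp / 3600   # ~833,000 core-hours

# On tens of thousands of cores, the job collapses to under a day:
cores = 60_000
wall_hours = core_hours / cores
print(f"{wall_hours:.1f} hours on {cores} cores")  # 13.9 hours
```

A serial run at these assumed rates would take roughly a century of single-core time; spreading an embarrassingly parallel scan across a petascale machine is what turns it into an overnight job.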

Supercomputers used for petascale computing can perform a quadrillion (10^15) calculations per second. Singapore is set to have its own cluster of supercomputers through a joint R&D partnership between Fujitsu and the Agency for Science, Technology and Research (A*STAR).
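
To put "a quadrillion calculations per second" in perspective, here is a quick comparison against a single desktop core. The desktop figure (a 3 GHz core doing one operation per cycle) is a deliberately crude assumption, purely for scale.

```python
# "Petascale" made concrete: 10**15 operations per second, sustained for a day.
petaflop = 10**15
seconds_per_day = 86_400
ops_per_day = petaflop * seconds_per_day        # 8.64e19 operations

# A 3 GHz desktop core at one operation per cycle, running nonstop:
desktop_ops_per_year = 3e9 * seconds_per_day * 365
years = ops_per_day / desktop_ops_per_year
print(f"{years:,.0f} desktop-core years")       # roughly 913
```

One day on a one-petaflop machine is on the order of nine centuries of single-core desktop computing, which is why workloads like whole-genome comparison are being rethought for these architectures.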

Personal genomics company, Knome, announced the launch of KnomeDISCOVERY in November 2009. Targeting research groups, KnomeDISCOVERY will help researchers sequence DNA, manage data, and perform preliminary analyses. The service will also help clinical researchers identify novel alleles that are associated with specific diseases of interest.

Researchers with expertise in medical genomics who want to streamline data management and preliminary analysis in forthcoming mass sequencing projects: Leveraging high-volume access to sequencing platforms, Knome handles the logistical hurdles of rapid-turnaround sequencing, and carries out the important but computationally intensive process of "background" genome analysis, freeing researchers to focus on specific question-driven hypothesis testing that can yield novel discoveries in genetic medicine.

Clinically trained researchers with extensive expertise in specific diseases, for whom mass sequencing approaches are novel and unfamiliar tools: Knome’s expertise in analyzing whole genome data can directly help these researchers pinpoint novel alleles that contribute to a disease of interest. Knome takes a "fine-toothed" approach to genomic data analysis, grounded in a thorough understanding of genome structure and function; protein biochemistry; population/evolutionary genetics; statistical analysis; and basic disease etiology, as refined by close consultation with the researcher. This approach can quickly identify potentially disease-relevant candidate alleles for researchers to consider for follow-up empirical assessment.

Now that we’re getting a handle on the technicalities of sequencing, it’s time to grapple with the challenge posed by the massive volumes of data being produced. Translational genomics research will enable us to better understand the biology of living organisms and holds the key to better diagnosis, treatment, and cures for the diseases that ail us.
