Thursday, April 24, 2008

SciFoo and BioBarCamp

(Via Attila) The invitations for the 3rd SciFoo have apparently been sent. It will be held from the 8th to the 10th of August at the Googleplex. There is also an idea floating around to organize a BarCamp at the same time as SciFoo. A BarCamp is an open, participant-driven unconference. Check out the BioBarCamp wiki and discussion group. There are already several suggestions for venues and several people interested in attending.

On a side note, it's fun to see something like this get thought up and organized through Twitter/FriendFeed conversations. I have been trying out FriendFeed for a while now and, although I am not a big fan of microblogging (yet?), I really like the conversations around the feed streams.

Wednesday, April 16, 2008

The shuffle project


Most of my work in the last few years was computational, either looking at the evolution of protein-protein interactions or at the prediction of domain-peptide interactions. The nice thing about working in a lab where a lot of people were doing wet-lab experiments was that I had the opportunity, once in a while, to grab some pipettes and participate in some of the work that was going on. One project that worked out well was published today (not open access, sorry). My contribution to this project was small, but it was a lot of fun and I am very interested in the topic we worked on. In the lab we called it the shuffle project.

The main objective of this work was to study how the addition of gene regulatory interactions affects a cell's fitness. We introduced different combinations of existing E. coli promoters and transcription/sigma factors, either on plasmids or integrated into the genome. In effect, each construct mimics a duplication of one of E. coli's sigma factors or transcription factors with a change in its promoter. We then tested the impact on fitness by measuring growth curves under different conditions or by performing competition assays.

There were a couple of interesting findings, but the two that I found most interesting were:
- The vast majority of the constructs had no measurable impact on growth, even when tested under different experimental conditions.
- A few constructs could out-compete the control in competition assays (stationary phase survival or passaging experiments in rich medium).
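For the competition assays mentioned above, relative fitness is commonly estimated as the ratio of the two strains' Malthusian parameters (the log fold-change of each strain over the competition period). A minimal sketch of that calculation; the function name and counts are illustrative, not taken from the paper:

```python
import math

def relative_fitness(mut_initial, mut_final, wt_initial, wt_final):
    """Relative fitness as the ratio of Malthusian parameters:
    ln(fold-change of the construct) / ln(fold-change of the control)."""
    return math.log(mut_final / mut_initial) / math.log(wt_final / wt_initial)

# Illustrative cell counts (e.g. CFU/ml) before and after competition:
w = relative_fitness(1e5, 8e7, 1e5, 5e7)
print(round(w, 3))  # 1.076 -> values above 1 mean the construct out-competes the control
```

A construct with w indistinguishable from 1 would count as having no measurable fitness effect, which is what most constructs in the study showed.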

Both of these suggest that the gene regulatory network of E. coli is very tolerant to the addition of novel regulatory interactions. This is important because it tells us that regulatory networks are free to explore new interactions, given that there is a limited impact on fitness. From this we could also argue that if there are many equivalent (nearly neutral) ways of regulating gene expression, we can't expect to see individual gene regulatory interactions conserved across different species. There are several recent studies, particularly in eukaryotic species, showing that there is in fact a fast divergence of transcription factor binding sites (see the recent review by Brian B. Tuch and colleagues), and many other examples showing that although the selectable phenotype is conserved, the underlying interactions or regulation have diverged in different species (see Tsong et al. and Lars Juhl Jensen et al.).

There are a couple of questions that come from these and other related works. What is the fraction of cellular interactions that are simply biologically irrelevant? Is it possible to predict to what degree purifying selection restricts changes at different levels of cellular organization? What is the extent of change in protein-protein interactions?

Having previously worked on the evolution of protein-protein interactions, this is the direction that most interests me. It is why I am currently looking at the evolution of phospho-regulation and signaling in eukaryotic species.

Monday, April 14, 2008

Life Sciences Virtual Conference and Expo

IBM Deep Computing will hold a two-day virtual conference on Innovations in Drug Discovery and Development (16th and 17th of April 2008). The talks will be recorded and available for playback to those who register. The focus of the talks will be on the impact of High Performance Computing on life science research. The current list of talks:
  • Dr. Paul Matsudaira, Director, Whitehead Institute; Professor of Biology and Bioengineering, MIT : Advanced Imaging and Informatics Methods for Complex Life Sciences Problems
  • Professor Jan-Eric Litton, Director of Informatics, Karolinska Institute - Biobanking : The Challenge of Infrastructure for Large Scale Population Studies
  • Dr. Joel Saltz, Professor and Chair, Department of BioMedical Informatics, Ohio State University : The Cancer Biomedical Informatics Grid (caBIG™)
  • Professor Peter J. Hunter, University of Auckland, Bioengineering Institute : Innovation in biological system simulations
  • Dr. Ajay Royyuru, IBM Research, Computational Biology at IBM : Update on the IBM Genealogy Project co-sponsored with National Geographic
  • Dr. Michael Hehenberger, Solutions Executive, Global Life Sciences : IT Architectures and Solutions for Imaging Biomarkers

Tuesday, April 08, 2008

Structure based prediction of SH2 targets

One of the last few things I worked on during my PhD is now available in PLoS Comp Bio. It is about the structure-based prediction of the binding of SH2 domains to phospho-peptide targets.

The SH2 domain (Src homology 2 domain) is a small domain of around 100 amino acids with a strong preference for binding peptides that contain phosphorylated tyrosines. The selectivity of each domain is typically further restricted by variable surfaces near the phospho-tyrosine binding pocket. See figure below:

The binding preference of each domain can be determined experimentally using, for example, peptide library screening, phage display or protein arrays. Alternatively, we should be able to analyze the increasing amount of structural information and predict the binding specificity of peptide binding domains.
We tried to show here that, given a structure of an SH2 domain in complex with a peptide, it is possible to predict the binding specificity of that domain. It is also possible, to some extent, to predict how mutations in these domains might affect their binding preferences. Finally, combining predictions of specificity with known human phosphosites allows for very reasonable predictions of in vivo SH2-target interactions.
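Predicted specificities of this kind are often summarized as a position-specific scoring matrix over the residues flanking the phosphotyrosine, which can then be used to scan known phosphosites and rank candidate targets. A toy sketch of that scanning step; the log-odds values and peptides below are invented for illustration and are not the matrices from the paper:

```python
# Toy position-specific scoring matrix (PSSM) for an SH2 domain's
# preferences at positions pY+1..pY+3 (log-odds; values invented).
pssm = {
    1: {'E': 1.5, 'D': 1.0, 'N': 0.5},   # pY+1
    2: {'E': 1.2, 'N': 0.8},             # pY+2
    3: {'I': 2.0, 'L': 1.5, 'V': 1.0},   # pY+3
}

def score_peptide(peptide, py_index, pssm):
    """Sum log-odds scores for the residues downstream of the pY.
    Residues absent from the matrix contribute 0 (no preference)."""
    total = 0.0
    for offset, prefs in pssm.items():
        pos = py_index + offset
        if pos < len(peptide):
            total += prefs.get(peptide[pos], 0.0)
    return total

# Rank candidate phosphosites (pY at index 3 of each 7-mer):
candidates = ['ADGYEEI', 'ADGYAAA', 'ADGYNNL']
ranked = sorted(candidates, key=lambda p: score_peptide(p, 3, pssm), reverse=True)
print(ranked)  # ['ADGYEEI', 'ADGYNNL', 'ADGYAAA']
```

The real pipeline derives the matrix from structure-based energy calculations rather than writing it by hand, but the scanning and ranking of phosphosites works along these lines.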

The obvious limitation here is that we need to start with a structure of the domain. We know from some unpublished work that, for families with good structural coverage, homology models can produce specificity predictions that are as accurate as those from X-ray structures. The other limitation is that, given the lack of dynamics, only a single conformation of the interaction is modeled, although flexibility should in part help determine the binding specificity. One possible solution to this problem, which we have used with some success, is to model different peptide conformations for each binding domain.

I should make clear that, although I think this is an improvement over previous work, there is already a considerable amount of research on this topic, which we tried to cite in the introduction and discussion. I would say that some of the best previous work on structure-based prediction of domain-peptide interactions has come from Wei Wang's lab (see for example McLaughlin et al. or Hou et al.).

This manuscript was the first (and so far only) one I collaborated on using Google Docs. It worked well and I recommend it to anyone who needs to co-write a manuscript with other people. It saves a lot of emails and annotations on top of annotations.

Bio::Blogs#20 - the very late edition

I said I would organize the 20th edition of Bio::Blogs here on the 1st of April, but April Fools' and my current workload did not allow me to get Bio::Blogs up on time.

There were a couple of interesting discussions and blog posts in March worth noting. For example, Neil mentioned a post by Jennifer Rohn that initiated what could be one of the longest threads on Nature Network: "In which I utterly fail to conceptualize". It started off as a small anti-Excel rant, but turned in the comments into, first, a discussion of which bioinformatics tools to use and, second, a discussion of wet versus dry mindsets and how much time one should devote to learning the other. Finally, it ended up as an exchange about collaborations and how a social networking site like Nature Network could or should help scientists find collaborators. There was even a group started by Bob O'Hara to discuss this last issue further.

I commented on the thread already but can try to expand a bit on it here. Nature Network is positioned as a social networking site for scientists. So far the best it has to offer has been the blog posts and forum discussions, which is not very different from a "typical" forum. It facilitates the exchange of ideas around scientific topics, but NN could try to look at all the typical needs of scientists (lab books, grant managing, lab managing, collaborations, protocols, paper recommendations, etc.) and pick a couple that they could work into the social networking site. Ways to search for collaborators, and maybe paper recommendation engines that take advantage of your network (network + Connotea), are the most obvious and easiest to implement. Thinking long term, tools to help manage the lab could be an interesting addition.

Another interesting discussion started from a post by Cameron Neylon on a data model for electronic lab notebooks (part I, II, III). Read also Neil's post, and Gibson's reply to Cameron on FuGE.
How much of the day-to-day activities and results needs to be structured? How heavy should this structure be to capture enough useful computer-readable information? Although I find these questions and discussions interesting, I would guess that we are far from having this applied to any great extent. If most people are reluctant to try out new applications, they will be even less willing to convey their day-to-day practices via a structured data model. I mentioned recently the experiment under way at the FEBS Letters journal to create structured abstracts during the publishing process. As part of the announcement the editors commissioned reviews on the topic. It is worth reading the review by Florian Leitner and Alfonso Valencia on computational annotation methods. They argue for the creation of semi-automated tools that take advantage of both automatic methods and curators (authors or others). The problems and solutions for the annotation of scientific papers are shared with digital lab notebooks. I hope that more interest in this problem will lead to easy-to-use tools that suggest annotations to users under some controlled vocabularies.

Several people blogged about the 15-year-old bug found in the BLOSUM matrices and the uncertainty in multiple sequence alignments. See posts by Neil, Kay, Lars and Mailund.
Both cases remind us of the importance of using tools critically. The flip side is that it is impossible to constantly question every single tool we use, since this would slow our work down to a crawl.

On the topic of Open Science: in March, the Open Science proposal drafted by Shirley Wu and Cameron Neylon for the Pacific Symposium on Biocomputing was accepted, as a 3-hour workshop consisting of invited talks, demos and discussions. The call for participation is here, along with the important deadlines for submissions (talk proposals due June 1st and poster abstracts due the 12th of September).

On a related note, Michael Barton has set up a research stream (explained here). He is collecting updates on his work, tagged papers and graphs posted to Flickr into one feed that gives an immediate impression of what he is working on at present. This is a really great setup. Even for private use within a lab, or across labs for a collaboration, this would give everyone involved the ability to tap into the interesting feeds. I would probably not want everyone's feeds, and maybe a supervisor should have access to some filtered set of feeds or tags to get only the important updates, but this looks like a step in the right direction. In the same way, machines could also have research feeds that I could subscribe to in order to get updates on some data source.

Also in March, Deepak suggested we need more LEAP (Lightly Engineered Application Products) in science. He suggests that it is better to have one tool that does a job very well than one that does many jobs somewhat well. We have a few examples of this in science: some of the most cited papers of all time describe a tool that does one job well (e.g. BLAST).


Finally, some meta-news on Bio::Blogs. I am currently way behind on many work commitments and I don't think I can keep up the (light) editorial work required for Bio::Blogs, so I am considering stopping it altogether. It has been almost two years, and it has been fun and hopefully useful. The initial goal of trying to knit together the bioinformatics-related blogs and offering some form of highlighting service is still needed, but I am not sure this is the best way going forward.
Still, if anyone wants to take over from here, let me know by email (bioblogs at gmail.com).

Tuesday, April 01, 2008

(April fools update) Leveling the playing field – NIH to ban brain enhancing practices

Update - This post was part of an April 1st joke, but I am sure everyone got it :). Still, the pressure in science is real and worth thinking about.

There has been quite a buildup of discussion surrounding the idea of brain-enhancing drugs in the last couple of days. It started in early March with a New York Times piece, "Brain Enhancement Is Wrong, Right?", and it has culminated with the recent announcement of the World Anti Brain Doping Authority (WABDA), a joint effort from the NIH and EU to initiate studies on the reach of brain-enhancing practices in science today.
There are many points of view already expressed on the web, see for example:
  • Chris Patil
  • Bora
  • Anna Kushnir
  • Genome Technology
  • Egghead
  • Eye on DNA
  • Bob O'Hara
  • Martin Fenner
  • Jennomics

My first reaction was pure skepticism; this must be some kind of joke, I thought. So I tried to probe a little around the UCSF campus to see if anyone had ever heard of this. One of my supervisors mentioned that about a year ago he had to fill out an NIH survey addressing the current problem of very high rejection rates for NIH grants. It seems that within this survey there was a section regarding the problems of competition in science, and some of the questions brushed against the topic of brain-enhancing practices. It could be that at the time the NIH was trying to measure how far people would go in an extremely competitive environment.
This really got me thinking about how we operate in an environment that is not far removed from highly competitive sports. How many stories have we heard about data forgery and scandalous retractions in the last couple of years? To what lengths will people go to secure their place in science? To be recognized?
So maybe the NIH is right to be proactive. Even if the issue is not as serious in science as it is in sports, unless there is an amazing influx of money or a considerable decrease in the number of working scientists, this might become an important problem. If nothing else, we will get to know the current extent of these practices, and it highlights yet again how far we have deviated from course. The money society puts into scientific research is being wasted on overlapping competitive projects. Research agendas should be open and free for anyone to participate in. Maybe the NIH should regulate that as well.