
Rescuing Science


Last week, the same editorial was published in the two top scientific journals, Nature and Science1,2, describing the scientific community's intent to address the reproducibility and transparency of scientific results.

Last June, a group of journal editors, funding-agency representatives, and scientific leaders met to discuss guidelines and principles for future publication policy to guarantee reliable scientific methods. The meeting resulted in the publication of a list of guidelines that journals should follow when reporting preclinical studies (http://www.nih.gov/about/reporting-preclinical-research.htm). According to the guidelines, journals should carefully check the accuracy of statistical analyses and use a checklist to ensure complete reporting of the methodology (standards, replicates, statistics, sample size, blinding, inclusion and exclusion criteria); they should also encourage the sharing of datasets and software in public repositories, take responsibility for refutations, guarantee the accurate description of all sources, and check for image manipulation.

Sometimes it is difficult to include all the methodological information in a paper, especially because of word-count limits. To guarantee a careful and accurate description of all methodologies, journals should revise the word-count limits that push authors to cut the methods section or move a large part of it to the supplementary material. To encourage accuracy, some journals are already requesting a minimum word count for the methods section. To assure the quality of research published in both high- and low-impact journals, the new guidelines should be followed by all journals and not only by a handful of top ones; otherwise the reliability of research as a whole will suffer.

These guidelines should be adopted not only by journals but also by all researchers. Honestly, I thought that the principles outlined in the guidelines (statistics, blinding, sample size, etc.) were obvious steps when designing an experiment. Evidently they are not, since the scientific community had to meet and set them down in an official document. The increasing number of retractions, the most clamorous recent case being Obokata's papers published in Nature and retracted earlier this year, highlights the need for such a guide. To avoid future problems with the reliability of science, training should be provided not only to the new generation of scientists but also to the older generation, which is largely responsible for the recent scientific scandals. Unfortunately, scientists are not the only ones responsible for what is happening in science: everyone in the scientific enterprise shares the blame, from the journals to the funding agencies.

I am still shocked that the scientific community had to meet and draft these guidelines. Now it is time to slow down and to promote transparency and reliability rather than sensationalism.

1 Journals unite for reproducibility. Nature. 2014 Nov 6;515(7525):7. doi:10.1038/515007a.

2 McNutt M. Journals unite for reproducibility. Science. 2014 Nov 7;346(6210):679.

New information on mammalian expression patterns

April 2014. An extended atlas of transcription start sites (TSSs) has been described by the Functional Annotation of the Mammalian Genome 5 (FANTOM5) consortium in a recent article in Nature1. It is a comprehensive overview of expression profiles in mouse and human, complementing and extending the information already present in other datasets, such as ENCODE2.

The large international FANTOM consortium, led by RIKEN PMI, performed cap analysis of gene expression (CAGE)3, a technique that sequences the 5′ ends of messenger RNAs, on a large cohort of mouse and human samples: 573 human primary cell samples, 128 mouse primary cell samples, 250 cancer cell lines, 152 human post-mortem tissues, and 271 mouse tissue samples. They found more than three million peaks in human and more than two million in mouse, including sequences in internal exons. To reduce background, they applied a tag-evidence threshold. The TSSs found were confirmed to belong to known promoters on the basis of established sequence features: expressed sequence tags, histone H3 lysine 4 trimethylation marks, and DNase hypersensitive sites. The peaks were classified as non-ubiquitous/cell-type specific (e.g. cell-adhesion and signal-transduction genes), ubiquitous-uniform/housekeeping (e.g. ribonucleoprotein complex and RNA processing), and ubiquitous non-uniform (e.g. cell-cycle genes). Most of the peaks belong to cell-type specific genes, highlighting the abundance of tissue-specific gene regulation; moreover, the housekeeping genes are the most conserved between human and mouse, confirming the importance of their function. Despite this level of conservation, only 43% of human TSSs could be aligned to the mouse genome and only 39% of mouse TSSs to the human genome, indicating a remodeling of transcription initiation during evolution.
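The three expression classes above can be sketched with a toy classifier based on expression breadth and variability across samples. The detection-fraction and coefficient-of-variation thresholds below are illustrative assumptions, not the criteria actually used by the FANTOM5 consortium:

```python
import numpy as np

def classify_peak(expr, detect_frac=0.9, cv_max=0.5):
    """Toy classification of a TSS peak's expression profile across samples.

    expr: 1-D array of expression levels (e.g. CAGE tags per million),
    one value per sample. Thresholds are illustrative only.
    """
    expressed = expr > 0
    if expressed.mean() < detect_frac:
        # detected in few samples -> non-ubiquitous
        return "cell-type specific"
    cv = expr.std() / expr.mean()  # coefficient of variation
    if cv < cv_max:
        # expressed everywhere, at a steady level -> housekeeping-like
        return "ubiquitous-uniform"
    # expressed everywhere but at fluctuating levels (e.g. cell cycle)
    return "ubiquitous non-uniform"

# hypothetical profiles over five samples
housekeeping = np.array([9.8, 10.1, 10.0, 9.9, 10.2])
cell_cycle   = np.array([2.0, 30.0, 1.0, 25.0, 3.0])
tissue_only  = np.array([0.0, 0.0, 42.0, 0.0, 0.0])

print(classify_peak(housekeeping))  # ubiquitous-uniform
print(classify_peak(cell_cycle))    # ubiquitous non-uniform
print(classify_peak(tissue_only))   # cell-type specific
```

The real analysis works on millions of peaks over hundreds of samples, but the underlying idea is the same: breadth of detection separates ubiquitous from cell-type specific peaks, and variability separates uniform from non-uniform ones.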

This research offers more than a collection of information. For instance, the consortium also analyzed TSSs in cancer cell lines as opposed to primary cells, finding that transcription factors are more highly expressed in transformed than in primary cells because of chromosomal rearrangements and mutations; this highlights an underestimated problem of using cancer cell lines for transcription studies.

The FANTOM5 atlas provides an extensive collection of material on TSSs, available online (http://fantom.gsc.riken.jp/5) together with specialized tools (ZENBU4 or SSTAR) to analyze the data and to integrate the information from the ENCODE and FANTOM datasets, giving scientists access to a huge amount of information on mammalian gene regulation.


1 FANTOM Consortium and the RIKEN PMI and CLST (DGT). (2014) "A promoter-level mammalian expression atlas". Nature 507(7493):462-70. doi:10.1038/nature13182.

2 ENCODE Project Consortium, Birney E, Stamatoyannopoulos JA, Dutta A, Guigó R, Gingeras TR, Margulies EH, Weng Z, Snyder M, Dermitzakis ET, et al. (2007) "Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project". Nature 447(7146):799-816. doi:10.1038/nature05874.

3 Shiraki T, Kondo S, Katayama S, Waki K, Kasukawa T, Kawaji H, Kodzius R, Watahiki A, Nakamura M, Arakawa T, Fukuda S, Sasaki D, Podhajska A, Harbers M, Kawai J, Carninci P, Hayashizaki Y. (2003) "Cap analysis gene expression for high-throughput analysis of transcriptional starting point and identification of promoter usage". Proc Natl Acad Sci U S A 100(26):15776-81. doi:10.1073/pnas.2136655100.

4 Severin J, Lizio M, Harshbarger J, Kawaji H, Daub CO, Hayashizaki Y, the FANTOM consortium, Bertin N, Forrest ARR. (2014) "Interactive visualization and analysis of large-scale NGS data-sets using ZENBU". Nature Biotechnology.

On the road of science: reproducibility, fraud, and authorship for sale


Happy 2014!

This is my first post of the year, and I would like to start the New Year reflecting on some concerning habits in science. I will focus on three recent articles that undermine the trustworthiness of science, but there are many others out there in the jungle of scientific journals.

In a Comment published in Nature last November, Mina Bissell justifies the non-reproducibility of some in vitro experiments1. Nowadays, "the techniques and reagents are sophisticated, time-consuming and difficult to master", challenging the ability to reproduce complicated experiments in different laboratories. The solution, for Dr Bissell, "is to consult the original authors thoughtfully […] ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours." This may be true for in vitro assays, which are already an artifact, but I hope it is not for the in vivo work used in preclinical studies. The article has opened an intense discussion on scientific reproducibility, as you can read in the comments published this month by Nature2. You can agree or disagree with her statements, but we are spending words and time on a topic, experimental reproducibility, that should not be an issue at all.

Last December, Nature reported on duplicated images appearing in different journals from the same group3. Professor Fusco of the University of Naples (Italy) and an associate professor of the Accademia dei Lincei (Italy) are now under investigation by the police and the university. The misconduct was revealed by Enrico Bucci, a molecular biologist and founder of a small startup (BioDigitalValley) that aims to create a database of all images from Italian papers published since 2000. Running all the images through his gel-checking software, he found that, out of 300 papers published by Fusco, 53 contained duplicated or cut-and-pasted images, including one from 1985. Some of the papers have already been retracted; one of them was published in the Journal of Clinical Investigation in 2007. It is quite possible that Fusco is only the first target of an operation that will reveal other cases of misconduct. This is concerning not only because the practice is wrong in itself, but also because it comes from a country, Italy, where scientific research lags behind and funding is scarce; this episode is not going to help Italian science. Fortunately, however, the fraud has been unmasked.
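To illustrate the general idea behind this kind of image screening (this is not Bucci's actual software, whose internals are not described), a common approach is a perceptual "difference hash": each image is reduced to a small fingerprint, and near-duplicate images are flagged when the Hamming distance between fingerprints is small. A minimal sketch on synthetic grayscale data:

```python
import numpy as np

def dhash(img, size=8):
    """Difference hash: block-average the image down to size x (size+1),
    then record whether each pixel is brighter than its right neighbour."""
    h, w = img.shape
    ys = np.linspace(0, h, size + 1, dtype=int)
    xs = np.linspace(0, w, size + 2, dtype=int)
    small = np.array([[img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(size + 1)] for i in range(size)])
    return (small[:, :-1] > small[:, 1:]).flatten()  # 64 bits for size=8

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(a != b))

# synthetic "gel images": the duplicate is the original plus faint noise,
# as might survive re-scanning or re-compression
rng = np.random.default_rng(0)
original = rng.random((100, 60))
duplicate = original + rng.normal(0, 0.01, original.shape)
unrelated = rng.random((100, 60))

near = hamming(dhash(original), dhash(duplicate))
far = hamming(dhash(original), dhash(unrelated))
print(near, far)  # near should be much smaller than far
```

The appeal of a hash-based approach at database scale is that each image is fingerprinted once, and candidate duplicates are then found by comparing short bit strings rather than full images.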

An article published last November in Science describes a concerning practice in the Chinese scientific community4. Some intermediary agencies sell authorships on papers that have already been accepted for publication, sometimes without the consent or even the knowledge of the actual authors. The price of the service varies with the position in the author list. Thus someone can publish a paper not only without doing any of the work, but also without even knowing the authors. These practices worry the Chinese scientific community itself, because they "hinder China's growth in original science, damage the reputation of Chinese academics, and dampen the impact of science developed in China", as asserted by Wei Yang, president of the National Natural Science Foundation of China, in an Editorial published in the same issue of Science5.

These are just three examples from the current scientific world. Every month there is at least one retraction; at every meeting there is someone who mistrusts others' experiments. Unfortunately, we are in a system that rewards and funds sexy science and high-Impact-Factor publications at the expense of the truth: real science, made of simple, reproducible experiments rather than artifactual, sexy assays.

Where is science going?

1 Bissell M. Reproducibility: the risks of the replication drive. Nature. 2013 Nov 21;503(7476):333-4.

2 Nature's readers comment online. Nature. 2014 Jan 2;505:27.

3 Abbott A. Image search triggers Italian police probe. Nature. 2013 Dec 5;504(7478):18.

4 Hvistendahl M. China's publication bazaar. Science. 2013 Nov 29;342(6162):1035-9.

5 Yang W. Research integrity in China. Science. 2013 Nov 29;342(6162):