Q&A: Navigating academia and industry in India
Arjun R shares advice on choosing the career path that is right for you
What do you do? Do you enjoy it? Why?
I am a doctoral candidate at the National Institute of Technology Karnataka. My dissertation focuses on intelligent systems for predictive modelling in financial applications. Beyond my PhD research topic, I enjoy working on problems that have the potential to make an impact through the social, business and technological aspects of research. Through my PhD programme, I have honed research skills that allow me to work on funded projects or serve as an independent consultant.
What were your early career ambitions?
After not performing well at school, my parents suggested that I take up a course at an ITI (Industrial Training Institute), which would have qualified me as a skilled technician in an organisation after graduating. However, I decided to study for a Diploma in Electronics and Communication Engineering, which is technically a higher qualification, as I did not want to limit my career prospects. The initial semesters were tough, but I later caught up with the help of peer study groups.
How did you make the decision between pursuing a career in academia or industry?
Once I completed my diploma, thoughts of applying for jobs surfaced. However, at that time in India, a new scheme was introduced whereby diploma holders could apply to join the second year of an undergraduate programme. I stayed at the same institute where I had studied for my diploma and was awarded a BTech in Information Technology in 2007. Around this time, an economic recession was taking hold, the effects of the dot-com crash had mostly subsided and industry in India was changing rapidly. I felt that my place was not in industry, as I did not believe I had the excellent coding skills it required. I took my first job as a lecturer on a contractual basis at Cochin University Engineering College at Kuttanad, which is a state-funded public institution. After that first stint as a lecturer, I felt that academia offered a better comfort zone and more space for professional growth.
What were some of the challenges on your journey?
Most of the colleges where I worked had already started requiring more highly qualified (postgraduate or PhD) teaching staff. I was interviewed and received a job offer from Amrita University in Quilon in early 2008. But that year, a breakthrough occurred while I was preparing for GATE (Graduate Aptitude Test in Engineering), a national exam that opens the way to Master’s and PhD programmes. More recently, this exam has also become a selection criterion for some positions in public-sector companies. I received a strong grade and rank, which I don’t think I would have got without the exposure and subject knowledge I’d acquired during my undergraduate training.
What did you do next?
After unsuccessful applications to graduate schools for Master’s programmes, I worked for a short period as technical staff at an institute, on a government-funded project to digitise a library with nationwide reach. Later, in 2009, I applied for an MTech (Master of Technology) programme at a state public university and was selected for a teaching assistantship. As part of my dissertation project, I applied for an internship at ISRO (Indian Space Research Organisation). Even without support from the university or a scholarship from ISRO, I worked on a two-semester project to develop a prototype software tool for space-research applications, which resulted in an IEEE publication.
How does being based in India affect the way you work?
From an academic point of view, there have not been many drastic changes in India in recent years. There are constant checks and performance reviews, in both government posts and private institutions. Although private institutions offer higher salaries, they also demand a heavier workload as part of accreditation requirements, which may actually work in your favour in the long run.
What advice would you have for others trying to work in a similar sort of environment?
There can be a sense of lethargy and inertia in certain positions. The best policy is to keep searching for grants for funded projects and to extend your professional skills, for example by reviewing research and speaking at conferences and workshops. Undertake student-support programmes such as mentoring and community initiatives that spread knowledge.
What do you love about living and working in India?
In my case, the government funded my research and hence I feel a sense of moral duty to give back to my nation. India has potential for growth both scientifically and economically; at least historically that has been evident.
What’s your top career tip to younger colleagues?
Stay focused and keep your eyes open for higher education and research opportunities. Reach out to your seniors, teachers and peers for advice.
What else would you say to others trying to build a scientific career in India?
From my experience, joining the best-ranked institute does not necessarily mean you will receive top training or skills, unless you have a true passion for your research. Smart work and motivation can instil students with the confidence to perform well and be recognised in academia. Make use of generous government scholarships as well as privately funded schemes.
Indian initiatives aim to break science’s language barrier
Drive for accessibility sees research relayed in regional tongues instead of English

Scientists and policymakers across India are aiming to bring science to the nation’s citizens and residents whose main language is not English. They’re producing content such as articles and podcasts, and giving talks about discoveries and studies in health science, biology, biotechnology and astronomy in some of the nation’s 22 official languages, including Hindi, Marathi, Kannada and Tamil.

As one of the languages used officially by the Indian government, English is largely considered to be the country’s language for science — but just 12% of the nation’s 1.3 billion citizens can speak and write it. Those who are trying to broaden the mix note that many more people will be able to access scientific content if it is available in other languages. “Speaking and writing in regional languages makes science more inclusive,” says Maggie Inbamuthiah, founder of Mandram (which means ‘platform’ in Tamil), an organization based in Chennai that seeks to create a platform on which ideas in science and technology are communicated in regional languages, including Tamil and Kannada.

Initiatives to write about science and produce science-related content in languages other than English have been under way for several decades, as many urban schools and most higher-education institutions moved to an English-based curriculum. Those multi-language efforts started to expand with the advent of the Internet, which has provided easy access to content, new media, platforms for distribution, and the ability to find collaborators and new audiences. Digital spaces and media have brought new players to the undertaking, and have re-energized those who have been involved in these efforts for years.
Language evolution

Although digital platforms and social media help researchers and others to communicate scientific findings and discoveries to the public, any such endeavour is pointless if readers, viewers or listeners cannot understand the language. Few of India’s languages have an up-to-date lexicon of scientific terms, and many researchers in the country have long been accustomed to thinking and writing about science in English, says Inbamuthiah. Still, she notes, language is fluid and adaptable. “We enrich a language by adding new words,” she says. “With time, we become more comfortable using them.”

Today, the effort to communicate science in multiple languages has a number of participants. Kollegala Sharma, a zoologist and senior principal scientist at the Central Food Technological Research Institute (CSIR) in Mysuru, India, has been producing Janasuddi (jana means both smart and knowledge, and suddi means news, in Kannada), a weekly science podcast, since September 2017. The 20-minute episodes, which comprise science research, news and interactive sessions that might include audience questions or comments, are in Kannada and are circulated through the WhatsApp platform. Listeners mainly include public high-school teachers — about 1,000 of them, up from the 20 or so when Sharma first launched the programme. The podcast is also available on public radio.

The Indian government is supporting the endeavour. K. VijayRaghavan, a molecular biologist and principal scientific adviser to the government, is a vocal proponent of making science accessible to people in their first language. He is working to provide increased funding and support for such efforts, and engages with many science communicators on social media, including Twitter.

Spreading influence

Other initiatives are emerging. Research Matters, a website that curates science news and articles in multiple languages and has more than 700,000 visitors, launched in November 2016.
TED Talks India was launched in December 2017, and features prominent scientists who discuss topics such as neuroscience and astronomy in Hindi, the most widely spoken language in India. In January, the Indian Department of Science and Technology teamed up with Doordarshan, a public-service broadcaster, to launch two science-communication initiatives: DD Science, shown on Doordarshan, and the Internet-based India Science. Both feature science-based programming in Hindi and English.

Others are also exploring the podcast realm. Last July, IndSciComm, an online science-communication collective, started Sea of Science, a podcast series in Hindi, Kannada, Marathi, Assamese and Tamil that talks about model organisms used in biological research. The series’ producers say that it is challenging to translate scientific terms and concepts into regional languages. “But working on them is just as much about love of language and wanting to reach out to people as it is an exercise of scientific understanding and language experience,” says Shruti Muralidhar, a neuroscience postdoc at the Massachusetts Institute of Technology in Cambridge and one of the producers. She says that they had to turn to online dictionaries, scientific lexicons and Google for support. The producers also worked out a system of ‘romanization’ that helped them to keep some terms intact while maintaining the cadences and sentence structure specific to the language.

While writing the script for the Tamil podcast, producer Abhishek Chari, a freelance science communicator in Cambridge, Massachusetts, had to translate the English word ‘metabolism’ into Tamil. Etymologically, it comes from the Greek word metabolē (from metaballein, ‘to change’) plus the suffix ‘-ism’. But directly translating this word into Tamil (possibly with the term maatram, meaning ‘change’) wouldn’t capture the biological meaning that ‘metabolism’ is used to convey, says Muralidhar.
“Looking it up on Google Translate, valarchithai came up as a Tamil equivalent of metabolism. This worked perfectly because valarchithai is a compound word consisting of ‘grow’ (valar) and ‘disperse or shatter into pieces’ (chithai), so the term would mean ‘grow plus disperse’,” she says. “The agglutinative (putting multiple words together) nature of the Tamil language came to our rescue.”

Last June, Inbamuthiah partnered with the Bangalore Life Science Cluster (BLiSC) to organize The Jigyasa Project, a one-night presentation in Bengaluru of science talks and audience-interactive sessions in Kannada, Hindi and Tamil. A second presentation was held in December 2018. Each event included six scientist-presenters, had 100–150 attendees, and covered topics from genetics to intellectual property. The organization plans to continue to hold events every June and December.

The 12 scientists who took part agreed that their presentations were challenging because they required them not just to translate talks into another language, but also to translate the underlying scientific concepts. “I was quite nervous giving a talk in Hindi, and it was a big challenge for me,” says Uma Ramakrishnan, a molecular ecologist at the National Centre for Biological Sciences in Bengaluru. “I thought about what I was going to say, and rehearsed it with some of my Hindi-speaking students, just to make sure I was communicating my thoughts correctly.”

Local benefits

Ramakrishnan thinks that the effort to communicate science in languages other than English is very important, especially for field researchers like herself, whose work is local and regional, such as investigating tigers in Rajasthan or biodiversity in the Western Ghats, a biodiversity hotspot along India’s west coast. “Doing fieldwork across India, my team and I have often informally communicated our research in Hindi or Malayalam to local people,” she says.
“For the people who live in these places, this is one way in which science can feel tangible and local. Platforms like Jigyasa provide an opportunity to make this more accessible to a larger audience.”

Mahinn Ali Khan, spokesperson for BLiSC, says that she observed a real sense of camaraderie between the audience and the scientists at Jigyasa. “Speaking in your own language helps you immediately drop the formality and reserve,” she says. Khan thinks that, although researchers are eager to engage with non-scientists, the shift to accepting science as a subject that can be discussed in a language other than English still faces some resistance from the public.

“At this point, these are passion-driven projects for most of us,” agrees Inbamuthiah.
Why ‘hike fellowship’ is a recurrent war cry for India’s researchers
Microbiologist Yogesh Chawla was part of the team that led the protests demanding a hike in research fellowships in India during 2014–15. He rues in this guest post that not much seems to have changed in the country’s treatment of its research scholars since.

Following months of agitation by young scientists across India, the Indian government announced a hike in fellowships for research scholars earlier this month (February 2019). The monthly stipends for junior research fellows (JRFs) were raised from Rs 25,000 to Rs 31,000, and those for senior research fellows (SRFs) from Rs 28,000 to Rs 35,000. Research scholars have been protesting every few years to draw attention to abysmal pay disparity, delayed and irregular disbursal of stipends, semester fee charges, and the scarcity of funds allocated to science. The protests typically last a few months, reaching a crescendo on social media, and finally end with the science administration promising and then delivering a hike. India’s current government has enhanced the fellowship twice, almost doubling it from Rs 16,000 in 2014 to Rs 31,000. It is a step, albeit a small one, in the right direction towards bridging the pay disparity faced by researchers. However, the challenges facing India’s research scholars are far from over.

History of protests

During the fellowship hike movement of 2014–15, five of us scholars represented the protesting researchers in negotiations with the institutional authorities and government representatives. Several issues were discussed at length then, and still remain unresolved. Policy changes that were mooted then to streamline the system are still pending. A hike alone will not fulfil the vision of better scientific rigour or improve the quality of Indian science. One of the objectives of such fellowship hikes is to attract talent to science disciplines by providing parity in economic emoluments, along with laurels, awards and recognition.
The need of the hour is a multi-pronged approach: to bring Indian science on a par with world standards, to make Indian research relevant to the country’s needs, and to transform India into a torch-bearer of scientific excellence, technological advancement and innovation. These are important but imposing challenges for India, and the country’s science policy is a key tool for overcoming them.

Rewarding merit

How do we bring rigour into India’s science? Can we have measures to reward scholars – the backbone of our scientific quest – who work tirelessly beyond stipulated office hours? Will rewarding the first author for publishing quality research be a game changer? Publishing in high-impact journals may not be the ultimate or most accurate parameter for judging the quality of science, but it is a practical one. A thorough scientific study in a reputed journal does suggest work of excellence. Impact factors, citations or the impact of research on problems specific to India can be taken as criteria for judging merit. The overarching idea is to reward hard work, judged and scrutinised for scientific quality and rigour by independent peers. This way, we would be able to bring equity to hard and diligent work. Any scientific misconduct or falsification of data should be made punishable.

Currently, Indian authors publish around 100,000 articles every year, but their average citation impact is around 0.8, which is nearly half the citation impact of articles published from the USA or UK (~1.6). Rewards for, and equity to, good-quality work would boost overall scientific rigour. It would not cost the government exchequer much, but would certainly have a favourable impact on the morale and enthusiasm of researchers. It could be a robust way to kick-start ideas, innovation and excellence. Likewise, universities, departments and institutions should be rewarded for their scientific excellence.
However, when impact factors of publications become the criteria for reward, they potentially exclude scholars and scientists working on grassroots problems: areas that may not be popular research topics or produce high-impact journal publications, but that carry high social benefit. Scholars in such fields should be recognised through other laurels and awards.

Another policy change that may ensure a respectable life for senior researchers wanting to continue research in India is to enhance the step-up in fellowships from JRF to SRF and from SRF to the postdoctoral level (to, say, around 1.4 to 1.5 times the previous level). SRFs and postdoctoral researchers are generally in their late 20s or early 30s, a time when they typically start or support a family. Scholars who earn their PhDs in Indian institutions should be rewarded, since many JRFs leave Indian PhD programmes to pursue PhDs in foreign labs or institutes. A JRF fellowship shouldn’t be a stop-gap arrangement for aspiring graduates of foreign universities. A JRF scholar who continues research in India and is promoted to SRF should be rewarded with a healthy raise in stipend to pursue research in India. The same logic applies to postdoctoral fellows.

The long-debated issue of brain drain could have a solution in a good postdoctoral fellowship with independent grants. The Chinese initiatives “Thousand Talents Plan” and “Thousand Youth Talents Plan” are great examples of how to attract scholars to postdoctoral positions through government grants and fellowships, and to persuade them to return and serve home institutions. This way, trained and qualified PhD scientists could fuel the nation’s economic and scientific growth, and Prime Minister Narendra Modi’s cry of “Jai Jawan, Jai Kisaan, Jai Vigyaan and Jai Anusandhaan” would ring true.
(Yogesh Chawla has a PhD from the National Institute of Immunology, New Delhi, and is currently a postdoctoral fellow at the Weill Cornell Medical College, New York. He can be contacted at firstname.lastname@example.org.)
Is Science In Trouble?
A Conversation With Colin Camerer About the Replication Crisis

News Writer: Emily Velasco
Credit: Caltech

If there's a central tenet that unites all of the sciences, it's probably that scientists should approach discovery without bias and with a healthy dose of skepticism. The idea is that the best way to reach the truth is to allow the facts to lead where they will, even if it's not where you intended to go. But that can be easier said than done. Humans have unconscious biases that are hard to shake, and most people don't like to be wrong.

In the past several years, scientists have discovered troubling evidence that those biases may be affecting the integrity of the research process in many fields. The evidence also suggests that even when scientists operate with the best intentions, serious errors are more common than expected, because even subtle differences in the way an experimental procedure is conducted can throw off the findings. When biases and errors leak into research, other scientists attempting the same experiment may find that they can't replicate the findings of the original researcher. This has given the broader issue its name: the replication crisis.

Colin Camerer, Caltech's Robert Kirby Professor of Behavioral Economics and the T&C Chen Center for Social and Decision Neuroscience Leadership Chair, executive officer for the Social Sciences and director of the T&C Chen Center for Social and Decision Neuroscience, has been at the forefront of research into the replication crisis. He has penned a number of studies on the topic and is an ardent advocate for reform. We talked with Camerer about how bad the problem is and what can be done to correct it, and about the "open science" movement, which encourages the sharing of data, information, and materials among researchers.

What exactly is the replication crisis?
What instigated all of this is the discovery that many findings—originally in medicine but later in areas of psychology, in economics, and probably in every field—just don't replicate or reproduce as well as we would hope. By reproduce, I mean taking data someone collected for a study and doing the same analysis just to see if you get the same results. People can get substantial differences, for example, if they use newer statistics than were available to the original researchers. The earliest studies into reproducibility also found that sometimes it's hard to even get people to share their data in a timely and clear way. There was a norm that data sharing is sort of a bonus, but isn't absolutely a necessary part of the job of being a scientist.

How big of a problem is this?

I would say it's big enough to be very concerning. I'll give an example from social psychology, which has been one of the most problematic areas. In social psychology, there's an idea called priming, which means if I make you think about one thing subconsciously, those thoughts may activate related associations and change your behavior in some surprising way. Many studies on priming were done by John Bargh, who is a well-known psychologist at Yale. Bargh and his colleagues got young people to think about being old and then had them sit at a table and do a test. But the test was just a filler, because the researchers weren't interested in its results; they were interested in how thinking about being old affected the behavior of the young people. When the young people were done with the filler test, the research team timed how long it took them to get up from the table and walk to an elevator. They found that the people who were primed to think about being old walked slower than the control group that had not received that priming. They were trying to get a dramatic result showing that mental associations about old people affect physical behavior.
The problem was that when others tried to replicate the study, the original findings didn't replicate very well. In one replication, something even worse happened. Some of the assistants in that experiment were told the priming would make the young subjects walk slower, and others were told the priming would make them walk more quickly—this is what we call a reactance or boomerang effect. And what the assistants were told to expect influenced their measurements of how fast the subjects walked, even though they were timing with stopwatches. The assistants' stopwatch measures were biased compared to an automated timer. I mention this example because it's the kind of study we think of as too cute to be true. When the failure to replicate came out, there was a big uproar about how much skill an experimenter needs to do a proper replication.

You recently explored this issue in a pair of papers. What did you find?

In our first paper, we looked at experimental economics, which is something that was pioneered here at Caltech. We took 18 papers from multiple institutions that were published in two of the leading economics journals. These are the papers you would hope would replicate the best. What we found was that 14 out of 18 replicated fairly well, but four of them didn't. It's important to note that in two of those four cases, we made slight deviations in how the experiment was done. That's a reminder that small changes can make a big difference in replication. For example, if you're studying political psychology and partisanship and you replicate a paper from 2010, the results today might be very different because the political climate has changed. It's not that the authors of the original paper made a mistake; it's that the phenomenon in their study changed.

In our second paper, we looked at social science papers published between 2010 and 2015 in Science and Nature, which are the flagship general science journals.
We were interested in them because these were highly cited papers and were seen as very influential. We picked out the ones that wouldn't be overly laborious to replicate, and we ended up with 21 papers. What we found was that only about 60 percent replicated, and the ones that didn't replicate tended to focus on things like priming, which I mentioned before. Priming has turned out to be the least replicable phenomenon. It's a shame, because the underlying concept—that thinking about one thing elevates associations to related things—is undoubtedly true.

How does something like that happen?

One cause of findings not replicating is what we call "p-hacking." A p-value measures how likely it is that you would see an effect at least as large as the one you observed purely by chance, if there were no real effect. If the p-value is low, the effect is unlikely to be a fluke due to chance. In social science and medicine, for example, you are usually testing whether changing the conditions of the experiment changes behavior. You really want to get a low p-value because it suggests that the condition you changed did have an effect. P-hacking is when you keep trying different analyses with your data until you get the p-value to be low.

A good example of p-hacking is deleting data points that don't fit your hypothesis—outliers—from your data set. There are statistical methods to deal with outliers, but sometimes people expect to see a correlation and don't find much of one. So then they think of a plausible reason to discard a few outlier points, because by doing that they can get the correlation to be bigger. That practice can be abused, but at the same time, there sometimes are outliers that should be discarded. For example, if subjects blink too much when you are trying to measure visual perception, it is reasonable to edit out the blinks or not use some subjects. Another explanation is that sometimes scientists are simply helped along by luck.
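[Editor's illustration, not part of the interview: one common form of p-hacking is "optional stopping" — checking the p-value repeatedly as data come in and stopping as soon as it dips below 0.05. The minimal simulation below, using an invented one-sample z-test on data with no real effect, shows how this inflates the false-positive rate well above the nominal 5 percent.]

```python
import math
import random

def p_value(sample):
    """Two-sided one-sample z-test against mean 0 (known sd = 1)."""
    n = len(sample)
    z = abs(sum(sample) / n) * math.sqrt(n)
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

def run_trials(n_trials, peek=False, seed=0):
    """Simulate null experiments (true effect = 0).

    With peek=True, the 'researcher' tests after every batch of 20
    observations and stops at the first p < 0.05 -- optional stopping,
    a simple form of p-hacking. With peek=False, a single planned test
    is run on the full sample of 100 observations.
    """
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_trials):
        data = []
        significant = False
        for _ in range(5):  # up to 5 batches of 20 observations
            data += [rng.gauss(0, 1) for _ in range(20)]
            if peek and p_value(data) < 0.05:
                significant = True
                break
        if not peek:
            significant = p_value(data) < 0.05
        if significant:
            false_positives += 1
    return false_positives / n_trials

honest = run_trials(2000, peek=False)  # should hover near the nominal 0.05
hacked = run_trials(2000, peek=True)   # noticeably inflated by peeking
print(f"planned analysis: {honest:.3f}, optional stopping: {hacked:.3f}")
```

Even though every simulated experiment has no true effect, peeking five times roughly doubles or triples the chance of declaring a "significant" finding, which is exactly why preregistered analyses (discussed below in the interview) matter.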
When someone else tries to replicate that original experiment but doesn't get the same good luck, they won't get the same results.

In the sciences, you're supposed to be impartial and say, "Here's my hypothesis, and I'm going to prove it right or wrong." So, why do people tweak the results to get an answer they want?

At the top of the pyramid is outright fraud and, happily, that's pretty rare. Typically, if you do a postmortem or a confessional in the case of fraud, you find a scientist who feels tremendous pressure. Sometimes it's personal—"I just wanted to be respected"—and sometimes it's grant money or being too ashamed to come clean. In the fraudulent cases, scientists get away with a small amount of deception, and they get very dug in because they're really betting their careers on it. The finding they faked might be what gets them invited to conferences and gets them lots of funding. Then it's too embarrassing to stop and confess what they've been doing all along.

There are also faulty scientific practices less egregious than outright fraud, right?

Sure. It is the scientist who thinks, "I know I'm right, and even though these data didn't prove it, I'm sure I could run a lot more experiments and prove it. So I'm just going to help the process along by creating the best version of the data." It's like cosmetic surgery for data. And again, there are incentives driving this. Often in Big Science and Big Medicine, you're supporting a lot of people on your grant. If something really goes wrong with your big theory or your pathbreaking method, those people get laid off and their careers are harmed.

Another force that contributes to weak replicability is that, in science, we rely to a very large extent on norms of honor and the idea that people care about the process and want to get to the truth. There's a tremendous amount of trust involved.
If I get a paper to review from a leading journal, I'm not necessarily thinking like a police detective about whether it's fabricated. A lot of the frauds were only uncovered because there was a pattern across many different papers. One paper was too good to be true, and the next one was too good to be true, and so on. Nobody's good enough to get 10 too-good-to-be-trues in a row. So, often, it's kind of a fluke. Somebody slips or a person notices and then asks for the data and digs a little further.

What best practices should scientists follow to avoid falling into these traps?

There are many things we can do—I call it the reproducibility upgrade. One is preregistration, which means before you collect your data, you publicly explain and post online exactly what data you're going to collect, why you picked your sample size, and exactly what analysis you are going to run. Then, if you do a very different analysis and get a good result, people can question why you departed from what you preregistered and whether the unplanned analyses were p-hacked. The more general rubric is called open science, in which you act like basically everything you do should be available to other people except for certain things like patient privacy. That includes original data, code, instructions, and experimental materials like video recordings—everything.

Meta-analysis is another method I think we're going to see more and more of. That's where you combine the results of studies that are all trying to measure the same general effect. You can use that information to find evidence of things like publication bias, which is a sort of groupthink. For example, there's strong experimental evidence that giving people smaller plates causes them to eat less. So maybe you're studying small and large plates, and you don't find any effect on portion size. You might think to yourself, "I probably made a mistake. I'm not going to try to publish that." Or you might say, "Wow! That's really interesting.
I didn't get a small-plate effect. I'm going to send it to a journal." And the editors or referees say, "You probably made a mistake. We're not going to publish it." Those are publication biases. They can be caused by scientists withholding results or by journals not publishing them because they get an unconventional result. If a group of scientists comes to believe something is true and the contrary evidence gets ignored or swept under the rug, that means a lot of people are trying to come to some collective conclusion about something that's not true. The big damage is that it's a colossal waste of time, and it can harm public perceptions of how solid science is in general.

Are people receptive to the changes you suggest?

I would say 90 percent of people have been very supportive. One piece of very good news is that the Open Science Framework has been supported by the Laura and John Arnold Foundation, which is a big private foundation, and by other donors. The private foundations are in a unique position to spend a lot of money on things like this. Our first grant to do replications in experimental economics came when I met the program officer from the Alfred P. Sloan Foundation. I told him we were piloting a big project replicating economics experiments. He got excited, and it was figuratively like he took a bag of cash out of his briefcase right there. My collaborators in Sweden and Austria later got a particularly big grant, for $1.5 million, to work on replication. Now that there's some momentum, funding agencies have been reasonably generous, which is great.

Another thing that's been interesting is that while journals are not keen on publishing a replication of one paper, they really like what we've done, which is a batch of replications. A few months into working on the first replication paper in experimental economics funded by Sloan, I got an email from an editor at Science who said, "I heard you're working on this replication thing.
Have you thought about where to publish it?" That's a wink-wink, coy way of saying "Please send it to us" without any promise being made. They did eventually publish it.

What challenges do you see going forward?

I think the main challenge is determining where the responsibility lies. Until about 2000, the conventional wisdom was, "Nobody will pay for your replication and nobody will publish your replication. And if it doesn't come out right, you'll just make an enemy. Don't bother to replicate." Students were often told not to do replication because it would be bad for their careers. I think that's false, but it is true that nobody is going to win a big prize for replicating somebody else's work. The best career path in science comes from showing that you can do something original, important, and creative. Replication is exactly the opposite: it is important for somebody to do it, but it's not creative. It's something that most scientists want someone else to do.

What is needed are institutions that generate steady, ongoing replications, rather than relying on scientists who are trying to be creative and make breakthroughs to do it. It could be a few centers that are just dedicated to replicating. They could pick every fifth paper published in a given journal, replicate it, and post their results online. It would be like auditing, or a kind of Consumer Reports for science. I think some institutions like that will emerge. Or perhaps granting agencies, like the National Institutes of Health or the National Science Foundation, should be responsible for building in safeguards. They could have an audit process that sets aside grant money to do a replication and check your work. For me, this is like a hobby. Now I hope that some other group of careful people who are very passionate and smart will take up the baton and start to do replications very routinely.