Sunday, 27 July 2014

Big Bang breakthrough team allows they may be wrong

Washington, Jun 20 (AFP) American astrophysicists who announced just months ago what they deemed a breakthrough in confirming how the universe was born now admit they may have got it wrong.

The team said it had identified gravitational waves that apparently rippled through space right after the Big Bang.

If correctly identified, these waves -- predicted in Albert Einstein's theory of relativity -- would confirm the rapid and violent growth spurt of the universe in the first fraction of a second of its existence, 13.8 billion years ago.

The apparent first direct evidence of this so-called cosmic inflation -- a theory that the universe expanded by 100 trillion trillion times in barely the blink of an eye -- was announced in March by experts at the Harvard-Smithsonian Center for Astrophysics.

The detection was made with the help of a telescope called BICEP2, stationed at the South Pole.

After weeks in which they avoided the media, the team published its work yesterday in the US journal Physical Review Letters.

In a summary, the team said their models "are not sufficiently constrained by external public data to exclude the possibility of dust emission bright enough to explain the entire excess signal" -- the very possibility raised by other scientists who had questioned their conclusion.

The team was led by astrophysicist John Kovac of Harvard.

BICEP2 stands for Background Imaging of Cosmic Extragalactic Polarization.

"Detecting this signal is one of the most important goals in cosmology today," Kovac, leader of the BICEP2 collaboration at the Harvard-Smithsonian Center for Astrophysics, said back in March.

By observing the cosmic microwave background -- the faint glow left over from the Big Bang -- the scientists said small fluctuations gave them new clues about conditions in the early universe.

The imprint of those gravitational waves was captured by the telescope in images of this light, which was emitted 380,000 years after the Big Bang, they claimed.

For weeks, some scientists have expressed doubts about the findings of the BICEP2 team. (AFP)

Less than 10 per cent of human DNA useful: scientists

July 26, 2014

Ian Sample

More than 90 per cent of human DNA is doing nothing very useful, and large stretches may be no more than biological baggage that has built up over years of evolution, Oxford researchers claim.

The scientists arrived at the figure after comparing the human genome with the genetic makeup of other mammals, from dogs and mice to rhinos and horses.

The researchers looked for sections of DNA that humans shared with the other animals, which split from our lineage at different points in history. When DNA is shared and conserved across species, it suggests that it does something valuable.
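
To picture what "shared and conserved" means, imagine aligning corresponding stretches of DNA from two species and counting the fraction of positions that agree. A toy Python sketch of that idea follows; the fragments are invented for illustration, and the actual study relied on genome-wide alignments and models of how fast unconstrained DNA drifts apart, not a simple identity count.

    # Toy illustration: identity fraction between two pre-aligned DNA fragments.
    # A high score between distantly related species hints at conserved function.
    def identity_fraction(seq_a: str, seq_b: str) -> float:
        assert len(seq_a) == len(seq_b), "fragments must be pre-aligned"
        matches = sum(a == b for a, b in zip(seq_a, seq_b))
        return matches / len(seq_a)

    human_fragment = "ATGGCGTACGTTAGC"  # hypothetical aligned fragments
    mouse_fragment = "ATGGCGTTCGTTAGC"
    print(f"identity: {identity_fraction(human_fragment, mouse_fragment):.0%}")  # 93%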

Gerton Lunter, a senior scientist on the team, said that, based on the comparisons, 8.2 per cent of human DNA was “functional,” meaning that it played an important enough role to be conserved by evolution.

“Scientifically speaking, we have no evidence that 92 per cent of our genome is contributing to our biology at all,” Lunter said.

Researchers have known for some time that only 1 per cent of human DNA is held in genes that are used to make crucial proteins to keep cells — and bodies — alive and healthy. The latest study, reported in the journal PLOS Genetics, suggests that a further 7 per cent of human DNA is equally vital, regulating where, when, and how genes are expressed.

But if much of our DNA is so worthless, why do we still carry it around? “It’s not true that nature is parsimonious in terms of needing a small genome. Wheat has a much larger genome than we do,” Lunter said. “We haven’t been designed. We’ve evolved, and that’s a messy process. This other DNA really is just filler. It’s not garbage. It might come in useful one day. But it’s not a burden.” — © Guardian Newspapers Limited, 2014

A disappointment for scientific community

July 26, 2014

V.V. Krishna

The quality of science production and excellence in science is falling rapidly due to poor systems of peer evaluation and merit-based incentives

The budget not only failed to commit substantial funding to Science and Technology, it also failed to signal the strengthening of research and innovation

After a decade-long policy paralysis in science, technology and higher education research during UPA I and II, the Indian science community had awaited this budget with optimism. To its disappointment, the budget did not commit the substantial funding that the field of Science and Technology (S&T) deserves. It also failed to give any signal of strengthening the research and innovation base of our ailing 700 universities and 30,000-plus colleges.

Investing in R&D

Benchmarking itself against OECD and other front-ranking economies, the Manmohan Singh government from 2004 reiterated its commitment to increase the Gross Expenditure on Research and Development (GERD) to 2 per cent of GDP. In the last decade, India's GERD/GDP ratio either stagnated at a little less than 0.9 per cent or even declined in real terms when adjusted for inflation. During the same period, the Chinese figure jumped from 1 per cent to 1.9 per cent of GDP. China's economy is three to four times larger than the Indian economy, and it is investing at least five times more money in Research and Development (R&D) than we are. Retreating from earlier commitments, the Science, Technology and Innovation Policy 2013 promised to increase the magic figure to 2 per cent only “if the private sector raises its R&D investment.” The budget statement has a somewhat similar tone.
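
The "at least five times" figure follows from the article's own numbers. With GERD at roughly 0.9 per cent of GDP for India and 1.9 per cent for China, and the Chinese economy three to four times larger, a back-of-envelope calculation (illustrative only) runs:

    # Back-of-envelope check of the "at least five times" claim,
    # using only the figures quoted in this article.
    india_gerd_share = 0.009  # ~0.9% of GDP
    china_gerd_share = 0.019  # ~1.9% of GDP
    for gdp_multiple in (3, 4):  # China's economy relative to India's
        ratio = (china_gerd_share * gdp_multiple) / india_gerd_share
        print(f"GDP x{gdp_multiple}: China's R&D spend is about {ratio:.1f}x India's")
    # Prints roughly 6.3x and 8.4x, consistent with "at least five times".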

Without a clear-cut commitment and a road map from the government to increase the public share of GERD to 2 per cent of GDP, the laudable goals desired in the budget, such as “positioning India among the top five global scientific powers by 2020” (STIP 2013); building a new science and technology base in nanotechnology, material science, and bio-medical research and devices; a range of front-ranking biotechnology research clusters in Faridabad and Bengaluru; the science component of ‘smart cities’; and high-speed and bullet trains, will remain a distant dream. Without sufficient funds for research, we may even become dependent on foreign countries for critical technologies. The issue of higher government commitment to GERD also assumes significance because more than 55 per cent of GERD in the last few decades has been consumed by the strategic sectors of defence, atomic energy and space; what is left for civilian R&D is less than 45 per cent. With a series of missions planned to Mars and the moon, ‘big science’ international projects and the emerging geo-political scenario, the dominance of the strategic sectors in GERD is likely to continue in the coming decade. Finally, the progress of science agencies and R&D is intimately connected with human potential in higher education.

Research in higher education

Research in higher education has been a victim of policy paralysis in S&T in the last few decades. Two crucial features of research in higher education are its share in national GERD and research intensity. In the last decade, universities accounted for over 52 per cent of total cumulative national research publications measured by international databases such as SCI or SCOPUS. But they were allocated just 5 per cent of GERD. Second, just about 10-12 per cent of our universities and colleges can be described as research intensive. The rest — nearly 88 per cent of our universities — are only teaching institutions. The bulk of our higher education sector is yet to attain what is known as the Humboldtian goal of teaching and research excellence. These facts have been repeatedly pointed out to both the HRD Ministry and the Department of Science and Technology. But so far very little has been done to create a level playing field between higher education and science agencies.

Universities in the OECD region (25 countries) accounted for 20 per cent, and Japanese universities for 17 per cent, of GERD in the last decade. Even Chinese universities increased their share of GERD from around 5 per cent in the 1990s to over 15 per cent at present. The policy measures China took to increase research intensity in universities and nurture them to world-class standards were part of its national innovation strategies. Project 211, in the mid-1990s, allocated U.S. $7.98 billion to 100 universities. Project 985 further shortlisted 39 universities from the late 1990s, with a budget of U.S. $4.87 billion, to develop them into a ‘Chinese Ivy League’. We have not only fallen behind our global competitors, but have failed to adequately address the question of research intensity and gross enrolment ratios in the higher education sector.

So far, national leadership has failed to forge close collaborative links and channels of mobility between public science agencies, universities and user industries. For instance, there is very little collaboration between the best of our software firms and universities. The Nano mission funded 150 projects but did not lead to any fruitful joint collaboration between institutions and industry. All three operate in relative isolation from one another. Such inter-institutional collaborations are important, as they not only arrest the fossilisation of research areas but also enhance the mobility of research personnel. Acquiring sophisticated equipment and instrumentation is a capital-intensive affair, and inter-institutional mobility will optimise the usage of scarce S&T resources. In this context we can learn from the French experience.

In the last two decades, 80 per cent of CNRS (French National Research Council) laboratories were reorganised to establish joint R&D units with universities in close proximity to them. They follow a system of joint appointments to enhance mobility between different institutions, and establish joint incubation and innovation centres to commercialise technologies.

With Public-Private Partnership (PPP) and FDI looming large in S&T institutions, we need to formulate a series of S&T laws to govern and regulate knowledge production, incentives and research innovation schemes. At the moment there is no national intellectual property law governing all public research and educational institutions. Without one, PPP in research is going to be problematic. This assumes significance as the new policy intends to open up research and innovation schemes to private and corporate sector firms in future.

The other area where our science leadership has failed us is in the organisation of science. The quality of science production and excellence in science is falling rapidly due to poor systems of peer evaluation and merit-based incentives. A large number of Indian S&T journals continue to lag far behind international benchmarks. They are not the first or even second priority for authors; they are a fallback if papers are rejected by foreign journals. Science agencies do not attract the best talent. Young faculty in universities are frustrated to discover, after joining, that it takes 12 long years to get their first promotion. A large-scale internal brain drain is taking place from science and engineering to commerce, management and marketing.

Over the years, one can witness the loss of research and academic autonomy in science agencies and academia. The scientific elite who lead these large institutions no longer see themselves as representatives of the scientific-academic community. It seems this elite draws its legitimacy not from its membership of that community but from its access and proximity to political power. It is a dangerous trend that the scientific elite is getting embedded with political power, sacrificing its research and academic autonomy in the process. Only time will tell where we are heading.

(V.V. Krishna is professor in science policy, Centre for Studies in Science Policy, JNU and Editor-in-Chief, Science, Technology and Society, Sage.)

 

http://www.thehindu.com/todays-paper/tp-opinion/a-disappointment-for-scientific-community/article6250930.ece

Wednesday, 23 July 2014

EU may restrict genome editing of crops: scientists

July 23, 2014

Updated: July 23, 2014 00:58 IST

Fiona Harvey

A fledgling technology to manipulate the genes of crops in order to make them less susceptible to disease and more productive is at risk of falling foul of the European Union’s genetic modification rules, scientists warned on Monday.

Genome editing is different to genetic modification, because it does not usually involve transplanting genes from one plant or species to another, but rather pinpointing the genetic mutations that would occur naturally through selective breeding. This means that, in most cases, it mimics natural processes and does not require the wholesale transfer of genes with which GM is often associated.

Genome editing typically involves finding the part of a plant genome that could be changed to render the plant less vulnerable to disease, make it resistant to certain herbicides, increase yields, or introduce other desirable traits. Researchers use “molecular scissors” to break apart the genome and repair it, a process that occurs naturally when plants are under attack from diseases and can throw up new mutations that enable the plant to survive future attacks. This evolutionary process can effectively be speeded up now that it is possible to examine plant genomes in detail in laboratories, and to alter the relevant genes very precisely, without the need to import DNA from other organisms, which is one of the key criticisms of GM foods.

“Using these methods to introduce new variations, our ability to create new genes is nearly limitless,” said Sophien Kamoun, of the Sainsbury Laboratory at the John Innes Research Centre in Norwich, east England. “We can be much more precise [than with conventional plant breeding].” As the processes mimic those of nature, but speeded up, the end result is the same as if the sort of selection routinely practised by farmers for centuries had been used, scientists said. Huw Jones, of Rothamsted Research, south-east England, said: “These plants are indistinguishable from those that would occur through selective breeding.” Ottoline Leyser, director of the Sainsbury Laboratory at the University of Cambridge, said gene editing could offer an alternative to GM that could be much more palatable to consumers.

But green campaigners are far from convinced. The European Parliament’s Green party said: “While the biotech sector has sought to trumpet the benefits and precision of gene editing, compared to existing GM technology, there are many uncertainties as regards the impact of gene-edited organisms on the environment and health.” The technology is very new, as the first commercial application of it in a plant for human consumption was approved this spring, when the U.S.-based Cibus announced an edited version of canola. Scientists believe there is huge potential for the technology because it avoids the slower, scattergun approach of selective breeding.

It has only become possible to edit plant genes in the past few years following decades of work on mapping genomes and inventing ways in which they can be precisely altered.

Under EU laws, however, it is unclear whether gene editing should be treated in the same way as genetic modification. GM crops are effectively banned in Europe, and licences to experiment in GM are rare and very expensive. In some other parts of the world, most importantly the U.S., the regulations are much lighter and GM food faces few barriers to animal and human consumption.

The European Commission is expected to offer guidance on the technology soon, perhaps next year, but it is not clear whether that could involve a ruling on whether and how the current regulations should apply, or a commitment to further study with the possibility of new regulations. — © Guardian Newspapers Limited, 2014

One of math’s great unsolved problems

June 17, 2014

Updated: June 17, 2014 12:21 IST

The twin prime conjecture

Shubashree Desikan

While prime numbers occur frequently in a series when they have small values, they appear to get spaced out further and further as we go to very large numbers

People have been trying to prove the twin prime conjecture throughout history, and it looks like we're getting closer

We all know what prime numbers are – they are numbers greater than one that are divisible only by themselves and by one. Here are a few primes: 2, 3, 5, 7, 11, 13, 17, 19 and so on. For small numbers, it is easy to check whether they are prime and to get a handle on their properties. But it is not so easy when the numbers are really large. From experience, people develop beliefs or “conjectures” about the properties of large primes.
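
For small numbers, the check really is simple: trial division by every candidate divisor up to the square root. A minimal Python sketch (illustrative only; serious prime-hunting uses far faster tests):

    # Trial division: n > 1 is prime if no d with 2 <= d*d <= n divides it.
    def is_prime(n: int) -> bool:
        if n < 2:  # 0 and 1 are not prime by definition
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print([n for n in range(2, 30) if is_prime(n)])
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]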

One thing that people observe is that while prime numbers occur frequently in a series when they have small values, they appear to get spaced out further and further as we go to very large numbers. So do they ultimately disappear at some point?

This is where a famous conjecture called the twin prime conjecture comes in.

Twin primes

This conjecture was first documented by Alphonse de Polignac in 1849 – that’s 165 years ago. The belief is that there are pairs of primes separated by only 2 (i.e. n, n+2: for example 3 and 5, or 17 and 19) and that there are infinitely many of them. Now, this is a conjecture – a statement that has not been strictly proved using existing mathematical theorems. People have been trying to solve it on and off throughout history, but it has eluded proof for years.
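
To make the statement concrete, here is a short Python sketch, reusing the trial-division idea from above, that lists the twin prime pairs below 100. The conjecture says this list never runs dry, however far the range is extended.

    # Trial division again, in compact form.
    def is_prime(n: int) -> bool:
        return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

    # Twin primes: pairs (n, n + 2) with both numbers prime.
    twins = [(n, n + 2) for n in range(2, 100) if is_prime(n) and is_prime(n + 2)]
    print(twins)
    # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]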

Then, in April 2013, along came the Chinese mathematician Yitang Zhang, who proved a result very close to the twin prime conjecture. He proved that there are infinitely many pairs of primes of the form (n, n+N) whose separation N is bounded: according to him, N cannot be more than 70 million.
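
Written in standard notation, with p_n denoting the n-th prime, Zhang's theorem says that some gap below 70 million keeps recurring forever:

    % Zhang (2013): the smallest prime gap occurring infinitely often is below 70 million
    \liminf_{n \to \infty} \left( p_{n+1} - p_n \right) < 7 \times 10^{7}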

Shrinking gap

Do you think 70 million is too huge a number compared to 2? The important thing, though, is that Yitang Zhang proved there are infinitely many pairs of prime numbers that differ by no more than a fixed finite number.

Since his discovery, there has been rapid progress on Prof. Zhang’s result.

Independently, James Maynard, a post-doctoral fellow, and Terence Tao, a Fields medallist, came up with proofs that the bound is not as high as 70 million, but only 600.

Tao also put his work on a blog – the Polymath8 project – and crowdsourced a solution. Many mathematicians got into it, and their joint effort brought the gap down to a mere 252.

They believe that, if you assume a further conjecture (which hasn’t been proved yet), you can bring the bound down to 6 or so.

What about the twin prime conjecture?

But to bring the gap down to two and prove the twin prime conjecture — is that feasible? Mathematicians believe, however, that this will require one more piece of the puzzle to be solved — one more discovery or breakthrough is needed to make that final leap.

Still, these are exciting times for mathematicians in number theory, for what was only a belief for centuries is now emerging as proven fact!

http://www.thehindu.com/in-school/sh-science/one-of-maths-great-unsolved-problems/article6122643.ece

A blot on Indian science

July 23, 2014

Updated: July 23, 2014 00:41 IST

 

That getting papers published in scientific journals, reputable ones included, using manufactured data is virtually child’s play has been made painfully evident by a team of scientists at the Chandigarh-based Institute of Microbial Technology (IMTECH), which functions under the Council of Scientific and Industrial Research.

Three papers that the team published last year in the PLoS ONE journal were retracted on July 9 after an internal investigation by the institute found unequivocal evidence of data fabrication.

Four more papers are in the process of being retracted. All the seven papers have the same research associate as the first author and the senior scientist of the institute as the corresponding author.

Such was the expertise involved in data fabrication and “presentation” that the reviewers and editors of all the papers failed to spot it.

That even with “hindsight” one of the editors was unable to figure out which of the three “similar” but non-identical graphs in the three papers had been fabricated is proof enough.

The same must hold true for the four other papers as well. There is no difference whatsoever in terms of scale or implication between the current case and those involving the South Korean stem cell researcher Hwang Woo-suk and the Japanese stem cell researcher Haruko Obokata.

As much as the stem cell researchers shamed their respective countries, the Indian researchers’ unethical practice is a blot on Indian science.

The only consolation in the face of this serious affront to science is that, unlike many institutions in India that do not investigate such frauds committed by their scientists, IMTECH has shown the spine and alacrity to get to the bottom of the issue. While the first author has already paid the price for his actions, the senior scientist's role is now under scrutiny.

By virtue of being the senior author on all seven papers, he has much explaining to do to prove his non-involvement in the scandal, as some data in the PLoS ONE papers are “not supported by raw data in the lab.”

More than the setback it has already suffered, the institute would face a barrage of criticism and ridicule, and lose all the goodwill its actions have earned, if the probe does not remain unbiased and fails to bring out the truth.

There is an important lesson to be learnt from the way South Korea acted without any bias to prove Hwang’s guilt and thereafter withdraw his licence and suspend him.

Meanwhile, steps need to be taken immediately to teach research students the ethics of doing and reporting science.

For instance, journals have found numerous instances of unacceptable manipulation of images, often arising from researchers’ eagerness to produce perfect pictures.

http://www.thehindu.com/opinion/editorial/a-blot-on-indian-science/article6238663.ece?homepage=true

Sunday, 13 July 2014

Mangalyaan needs 75 more days to enter Mars orbit

 

Vanita Srivastava, Hindustan Times  New Delhi, July 12, 2014

First Published: 20:17 IST(12/7/2014) | Last Updated: 20:23 IST(12/7/2014)

Mangalyaan, India's maiden spacecraft to Mars, will enter orbit around the Red Planet in 75 days, the Indian Space Research Organisation (Isro) said.

“Mars Orbiter Spacecraft has travelled 525 million km on its heliocentric arc. Radio signals from earth now take 15 minutes to reach MOM and return. MOM’s Mars Orbit Insertion is planned exactly 75 days from today (Saturday),” Isro said in a post on its Mars Orbit Mission Facebook page.
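
As a rough check on that figure (reading the quoted 15 minutes as the round-trip signal time, which the post's wording suggests), the Earth-spacecraft separation follows from the speed of light, and it is much smaller than the 525 million km MOM has covered along its curved heliocentric arc:

    # Back-of-envelope: assume the quoted 15 minutes is the round-trip time.
    C_KM_PER_S = 299_792.458  # speed of light in km/s
    round_trip_s = 15 * 60    # 15 minutes, in seconds
    one_way_km = C_KM_PER_S * round_trip_s / 2
    print(f"Earth-MOM separation: about {one_way_km / 1e6:.0f} million km")
    # ~135 million km: the straight-line distance, not the 525 million km arc length.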

A tricky path correction was performed on Mangalyaan last month. Another correction may be done in August.

India's space programme reached a major milestone on November 5 last year, when it launched the Mars Orbiter Mission, commonly known as Mangalyaan, from Sriharikota on an 11-month journey to find evidence of life on the Red Planet.

It is expected to enter orbit around Mars on September 24 this year.

Isro had initially planned four corrections during the journey to Mars. The manoeuvres are needed to keep the spacecraft on the required path and to maintain the required velocity.

Mangalyaan is on its 680-million-km voyage to Mars. If it makes it, India will join an elite club comprising the US, Russia and Europe. Once in the Mars orbit, the orbiter's five payloads will start performing experiments for the next six months.

Only the US, Europe, and Russia have sent probes that have orbited or landed on Mars. Probes to Mars have a high failure rate and a success will be a boost for national pride, especially after a similar mission by China failed to leave Earth’s orbit in 2011.

The Mangalyaan probe, India’s first interplanetary mission, has a Rs. 450-crore price tag, which is less than a sixth of the amount earmarked for a Mars probe to be launched by the National Aeronautics and Space Administration (Nasa).

Sunday, 6 July 2014

MOM begins last leg of journey to Mars

 

DDN Correspondent | Posted on 05 Jul, 2014 at 04:32 PM IST

 

Mars Orbiter Mission (MOM), India's maiden mission to Mars, on Friday completed 75% of its journey toward the Red Planet.

The spacecraft is moving at a speed of 25 km per second and has covered 510 million km of the total 680 million km distance on its heliocentric arc towards Mars.

"With this, three-fourth of the journey has been completed. MOM and her payloads are in good health," Isro said on its Facebook page.

MOM is expected to reach Mars on September 24.

Isro had earlier announced that the third trajectory correction manoeuvre for the spacecraft will take place in August, and the fourth and final one just before the Mars orbit insertion in September.

It is learned that before the Mars capture, Isro is planning to carry out a simulation of the orbit insertion.

An Isro official said that, except for uploading the commands to the spacecraft, everything else will be done during the rehearsal, to ensure that the execution of the whole plan does not face any flaw.

NASA's flying observatory is a jet with a 17-tonne telescope

 

PTI | Washington | Published: Jul 06 2014, 18:40 IST

NASA has fitted a heavily modified Boeing jet with a 17-tonne telescope and will use it as a flying observatory to study how stars are formed.

Officially known as the Stratospheric Observatory for Infrared Astronomy, SOFIA is a heavily modified Boeing 747 Special Performance jetliner, with a 17-tonne, 8-foot telescope mounted behind a 16-by-23-foot sliding door that reveals the infrared telescope to the skies.

The plane's ability to fly near the edges of the atmosphere gives it better visibility than ground-based observatories, 'wired.com' reported.

The 747SP was designed by Boeing in the 1970s to fly faster, higher, and farther than other versions of the 747. The company's engineers shortened the fuselage by 55 feet to cut weight, but left the power plants intact.

The plane can stay airborne for over 12 hours and its range is 6,625 nautical miles.

It can fly above the troposphere and 99.8 per cent of the water vapour held in our atmosphere, which obscures infrared light.

That gives its on-board infrared telescope a clear view into outer space.

According to NASA, the data provided by SOFIA "cannot be obtained by any other astronomical facility on the ground or in space."

Unlike ground-based telescopes and satellites fixed in orbit, SOFIA is mobile, so it can better spot transient space events like supernovae and comets.

Currently SOFIA is in Germany for extensive maintenance and refitting. This is the last step in NASA's rollout of SOFIA and scientists plan to run 100 observation flights with her in 2015.