Category Archives: Useful Information

Useful scientific information

Lunar dynamo’s lifetime extended by at least 1 billion years

New evidence from ancient lunar rocks suggests that an active dynamo once churned within the molten metallic core of the moon, generating a magnetic field that lasted at least 1 billion years longer than previously thought. Dynamos are natural generators of magnetic fields around terrestrial bodies, and are powered by the churning of conducting fluids within many stars and planets. In a paper published today in Science Advances, researchers from MIT and Rutgers University report that a lunar rock collected by NASA’s Apollo 15 mission exhibits signs that it formed 1 to 2.5 billion years ago in the presence of a relatively weak magnetic field of about 5 microtesla. That’s around 10 times weaker than Earth’s current magnetic field but still 1,000 times larger than fields in interplanetary space today.

Full moon as seen from Earth’s Northern Hemisphere, by Gregory H. Revera (Own work) [CC BY-SA 3.0 or GFDL], via Wikimedia Commons
Several years ago, the same researchers identified 4-billion-year-old lunar rocks that formed under a much stronger field of about 100 microtesla, and they determined that the strength of this field dropped off precipitously around 3 billion years ago. At the time, the researchers were unsure whether the moon’s dynamo – and the magnetic field it generated – died out shortly thereafter or lingered in a weakened state before dissipating completely.

The results reported today support the latter scenario: After the moon’s magnetic field dwindled, it nonetheless persisted for at least another billion years, existing for a total of at least 2 billion years.

Study co-author Benjamin Weiss, professor of planetary sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), says this new extended lifetime helps to pinpoint the phenomena that powered the moon’s dynamo. Specifically, the results raise the possibility of two different mechanisms – one that may have driven an earlier, much stronger dynamo, and a second that kept the moon’s core simmering at a much slower boil toward the end of its lifetime.

“The concept of a planetary magnetic field produced by moving liquid metal is an idea that is really only a few decades old,” Weiss says. “What powers this motion on Earth and other bodies, particularly on the moon, is not well-understood. We can figure this out by knowing the lifetime of the lunar dynamo.”

Weiss’ co-authors are lead author Sonia Tikoo, a former MIT graduate student who is now an assistant professor at Rutgers; David Shuster of the University of California at Berkeley; Clément Suavet and Huapei Wang of EAPS; and Timothy Grove, the R.R. Schrock Professor of Geology and associate head of EAPS.

Since NASA’s Apollo astronauts brought back samples from the lunar surface, scientists have found some of these rocks to be accurate “recorders” of the moon’s ancient magnetic field. Such rocks contain thousands of tiny grains that, like compass needles, aligned in the direction of ancient fields when the rocks crystallized eons ago. Such grains can give scientists a measure of the moon’s ancient field strength.

Until recently, Weiss and others had been unable to find samples much younger than 3.2 billion years old that could accurately record magnetic fields. As a result, they had only been able to gauge the strength of the moon’s magnetic field between 3.2 and 4.2 billion years ago.

“The problem is, there are very few lunar rocks that are younger than about 3 billion years old, because right around then, the moon cooled off, volcanism largely ceased and, along with it, formation of new igneous rocks on the lunar surface,” Weiss explains. “So there were no young samples we could measure to see if there was a field after 3 billion years.”

There is, however, a small class of rocks brought back from the Apollo missions that formed not from ancient lunar eruptions but from asteroid impacts later in the moon’s history. These rocks melted from the heat of such impacts and recrystallized in orientations determined by the moon’s magnetic field.

Weiss and his colleagues analyzed one such rock, known as Apollo 15 sample 15498, which was originally collected on Aug. 1, 1971, from the southern rim of the moon’s Dune Crater. The sample is a mix of minerals and rock fragments, welded together by a glassy matrix, the grains of which preserve records of the moon’s magnetic field at the time the rock was assembled.

“We found that this glassy material that welds things together has excellent magnetic recording properties,” Weiss says.

The team determined that the rock sample was about 1 to 2.5 billion years old – much younger than the samples they previously analyzed. They developed a technique to decipher the ancient magnetic field recorded in the rock’s glassy matrix by first measuring the rock’s natural magnetic properties using a very sensitive magnetometer.

They then exposed the rock to a known magnetic field in the lab, and heated the rock to close to the extreme temperatures in which it originally formed. They measured how the rock’s magnetization changed as they increased the surrounding temperature.

“You see how magnetized it gets from getting heated in that known magnetic field, then you compare that field to the natural magnetic field you measured beforehand, and from that you can figure out what the ancient field strength was,” Weiss explains.
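The comparison Weiss describes can be written as a one-line calculation: the ancient field scales the known lab field by the ratio of the rock’s natural magnetization to the magnetization it acquires in the lab. This is only a sketch of that ratio method; the numbers below are hypothetical placeholders, not measurements from the study.

```python
def ancient_field_strength(nrm, lab_trm, lab_field_ut):
    """Estimate the ancient field (microtesla) from the ratio of the rock's
    natural remanent magnetization (NRM) to the magnetization it acquires
    when reheated and cooled in a known laboratory field."""
    return lab_field_ut * (nrm / lab_trm)

# Hypothetical readings: the rock's natural magnetization is one tenth of
# what it acquires in a 50-microtesla lab field, implying a ~5 microtesla
# ancient field -- the value reported for sample 15498.
print(ancient_field_strength(nrm=0.1, lab_trm=1.0, lab_field_ut=50.0))  # 5.0
```

The units of magnetization cancel in the ratio, which is why only the lab field needs to be expressed in microtesla.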

The researchers did have to make one significant adjustment to the experiment to better simulate the original lunar environment, and in particular, its atmosphere. While the Earth’s atmosphere contains around 20 percent oxygen, the moon has only imperceptible traces of the gas. In collaboration with Grove, Suavet built a customized, oxygen-deprived oven in which to heat the rocks, preventing them from rusting while at the same time simulating the oxygen-free environment in which the rocks were originally magnetized.

“In this way, we finally have gotten an accurate measurement of the lunar field,” Weiss says.

From their experiments, the researchers determined that, around 1 to 2.5 billion years ago, the moon harbored a relatively weak magnetic field, with a strength of about 5 microtesla – roughly 20 times weaker than the ~100 microtesla field the moon sustained 3 to 4 billion years ago. Such a dramatic dip suggests to Weiss and his colleagues that the moon’s dynamo may have been driven by two distinct mechanisms.

Scientists have proposed that the moon’s dynamo may have been powered by the Earth’s gravitational pull. Early in its history, the moon orbited much closer to the Earth, and the Earth’s gravity, in such close proximity, may have been strong enough to pull on and rotate the rocky exterior of the moon. The moon’s liquid center may have been dragged along with the moon’s outer shell, generating a very strong magnetic field in the process.

It’s thought that the moon may have moved sufficiently far away from the Earth by about 3 billion years ago, such that the power available for the dynamo by this mechanism became insufficient. This happens to be right around the time the moon’s magnetic field strength dropped. A different mechanism may have then kicked in to sustain this weakened field. As the moon moved away from the Earth, its core likely sustained a low boil via a slow process of cooling over at least 1 billion years.

“As the moon cools, its core acts like a lava lamp – low-density stuff rises because it’s hot or because its composition is different from that of the surrounding fluid,” Weiss says. “That’s how we think the Earth’s dynamo works, and that’s what we suggest the late lunar dynamo was doing as well.”

The researchers are planning to analyze even younger lunar rocks to determine when the dynamo died off completely.

“Today the moon’s field is essentially zero,” Weiss says. “And we now know it turned off somewhere between the formation of this rock and today.”

This research was supported, in part, by NASA.


Protein-rich diet may help soothe inflamed gut

Immune cells patrol the gut to ensure that harmful microbes hidden in the food we eat don’t sneak into the body. Cells that are capable of triggering inflammation are balanced by cells that promote tolerance, protecting the body without damaging sensitive tissues. When the balance tilts too far toward inflammation, inflammatory bowel disease can result.

Now, researchers at Washington University School of Medicine in St. Louis have found that a kind of tolerance-promoting immune cell appears in mice that carry a specific bacterium in their guts. Further, the bacterium needs tryptophan – one of the building blocks of proteins – to trigger the cells’ appearance.

“We established a link between one bacterial species – Lactobacillus reuteri – that is a normal part of the gut microbiome, and the development of a population of cells that promote tolerance,” said Marco Colonna, MD, the Robert Rock Belliveau MD Professor of Pathology and the study’s senior author. “The more tryptophan the mice had in their diet, the more of these immune cells they had.”

If such findings hold true for people, it would suggest that the combination of L. reuteri and a tryptophan-rich diet may foster a more tolerant, less inflammatory gut environment, which could mean relief for the million or more Americans living with the abdominal pain and diarrhea of inflammatory bowel disease.

A representation of the 3D structure of the protein myoglobin showing turquoise α-helices. By AzaToth (self made based on PDB entry) [Public domain], via Wikimedia Commons
Postdoctoral researcher Luisa Cervantes-Barragan, PhD, was studying a kind of immune cell that promotes tolerance when she discovered that one group of study mice had such cells, while a second group of the same strain, housed far apart from the first, did not.

The mice were genetically identical but had been born and raised separately, indicating that an environmental factor influenced whether the immune cells developed.

She suspected the difference had to do with the mice’s gut microbiomes – the community of bacteria, viruses and fungi that normally live within the gastrointestinal tract.

Cervantes-Barragan collaborated with Chyi-Song Hsieh, MD, PhD, the Alan A. and Edith L. Wolff Distinguished Professor of Medicine, to sequence DNA from the intestines of the two groups of mice. They found six bacterial species present in the mice with the immune cells but absent from the mice without them.

With the help of Jeffrey I. Gordon, MD, the Dr. Robert J. Glaser Distinguished University Professor, the researchers turned to mice that had lived under sterile conditions since birth to identify which of the six species was involved in inducing the immune cells. Such mice lack a gut microbiome and do not develop this kind of immune cell. When L. reuteri was introduced to the germ-free mice, the immune cells arose.

To understand how the bacteria affected the immune system, the researchers grew L. reuteri in liquid and then transferred small amounts of the liquid – without bacteria – to immature immune cells isolated from mice. The immune cells developed into the tolerance-promoting cells. When the active component was purified from the liquid, it turned out to be a byproduct of tryptophan metabolism known as indole-3-lactic acid.

Tryptophan – commonly associated with turkey – is a normal part of the mouse and the human diet. Protein-rich foods contain appreciable amounts: nuts, eggs, seeds, beans, poultry, yogurt, cheese, even chocolate.

When the researchers doubled the amount of tryptophan in the mice’s feed, the number of such cells rose by about 50 percent. When tryptophan levels were halved, the number of cells dropped by half.

People have the same tolerance-promoting cells as mice, and most of us shelter L. reuteri in our gastrointestinal tracts. It is not known whether tryptophan byproducts from L. reuteri induce the cells to develop in people as they do in mice, but defects in genes related to tryptophan have been found in people with inflammatory bowel disease.

“The development of these cells is probably something we want to encourage since these cells control inflammation on the inner surface of the intestines,” Cervantes-Barragan said. “Potentially, high levels of tryptophan in the presence of L. reuteri may induce expansion of this population.”


On this day in science history: oxygen was identified

In 1774, Joseph Priestley, British Presbyterian minister and chemist, identified a gas which he called “dephlogisticated air” – later known as oxygen. Priestley found that mercury heated in air became coated with “red rust of mercury,” which, when heated separately, was converted back to mercury with “air” given off. Studying this gas, he observed that candles burned very brightly in it. Also, a mouse sealed in a vessel with it could breathe much longer than in ordinary air. A strong believer in the phlogiston theory, Priestley considered it to be “air from which the phlogiston had been removed.” Further experiments convinced him that ordinary air is one fifth dephlogisticated air, the rest considered by him to be phlogiston.

Joseph Priestley, by Charles Turner [Public domain], via Wikimedia Commons
However, oxygen was in fact first discovered earlier, by Swedish pharmacist Carl Wilhelm Scheele. He had produced oxygen gas by heating mercuric oxide and various nitrates in 1771–2. Scheele called the gas “fire air” because it was the only known supporter of combustion, and wrote an account of this discovery in a manuscript he titled Treatise on Air and Fire, which he sent to his publisher in 1775. That document was published in 1777. 

Because Priestley published his findings first, he is usually given priority in the discovery.

The French chemist Antoine Laurent Lavoisier later claimed to have discovered the new substance independently. Priestley visited Lavoisier in October 1774 and told him about his experiment and how he liberated the new gas. Scheele also posted a letter to Lavoisier on September 30, 1774 that described his discovery of the previously unknown substance, but Lavoisier never acknowledged receiving it (a copy of the letter was found in Scheele’s belongings after his death).

Long before this, one of the first known experiments on the relationship between combustion and air was conducted by the 2nd century BCE Greek writer on mechanics, Philo of Byzantium. In his work Pneumatica, Philo observed that inverting a vessel over a burning candle and surrounding the vessel’s neck with water resulted in some water rising into the neck. Philo incorrectly surmised that parts of the air in the vessel were converted into the classical element fire and thus were able to escape through pores in the glass. Many centuries later, Leonardo da Vinci built on Philo’s work by observing that a portion of air is consumed during combustion and respiration.

In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John Mayow (1641–1679) refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus. In one experiment, he found that placing either a mouse or a lit candle in a closed container over water caused the water to rise and replace one-fourteenth of the air’s volume before extinguishing the subjects. From this he surmised that nitroaereus is consumed in both respiration and combustion.

Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must have combined with it. He also thought that the lungs separate nitroaereus from air and pass it into the blood and that animal heat and muscle movement result from the reaction of nitroaereus with certain substances in the body. Accounts of these and other experiments and ideas were published in 1668 in his work Tractatus duo in the tract “De respiratione”.

Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in the 17th and 18th centuries, but none of them recognized it as a chemical element. This may have been in part due to the prevalence of the philosophy of combustion and corrosion called the phlogiston theory, which was then the favored explanation of those processes.

Established in 1667 by the German alchemist J. J. Becher, and modified by the chemist Georg Ernst Stahl by 1731, phlogiston theory stated that all combustible materials were made of two parts. One part, called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated part was thought to be its true form, or calx.

Highly combustible materials that leave little residue, such as wood or coal, were thought to be made mostly of phlogiston; non-combustible substances that corrode, such as iron, contained very little. Air did not play a role in phlogiston theory, nor were any initial quantitative experiments conducted to test the idea; instead, the theory was based on observations of what happens when something burns: most common objects appear to become lighter and seem to lose something in the process. The fact that a substance like wood actually gains overall weight in burning was hidden by the buoyancy of the gaseous combustion products.

This theory, while it was on the right track, was unfortunately set up backwards. Rather than combustion or corrosion occurring as a result of the decomposition of phlogiston compounds into their base elements with the phlogiston being lost to the air, it is in fact the result of oxygen from the air combining with the base elements to produce oxides. Indeed, one of the first clues that the phlogiston theory was incorrect was that metals gain weight in rusting (when they were supposedly losing phlogiston).


Moon has a water-rich interior

A new study of satellite data finds that numerous volcanic deposits distributed across the surface of the Moon contain unusually high amounts of trapped water compared with surrounding terrains. The finding of water in these ancient deposits, which are believed to consist of glass beads formed by the explosive eruption of magma coming from the deep lunar interior, bolsters the idea that the lunar mantle is surprisingly water-rich.

Scientists had assumed for years that the interior of the Moon had been largely depleted of water and other volatile compounds. That began to change in 2008, when a research team including Brown University geologist Alberto Saal detected trace amounts of water in some of the volcanic glass beads brought back to Earth from the Apollo 15 and 17 missions to the Moon. In 2011, further study of tiny crystalline formations within those beads revealed that they actually contain similar amounts of water to some basalts on Earth. That suggests that the Moon’s mantle – parts of it, at least – contains as much water as Earth’s.

“The key question is whether those Apollo samples represent the bulk conditions of the lunar interior or instead represent unusual or perhaps anomalous water-rich regions within an otherwise ‘dry’ mantle,” said Ralph Milliken, lead author of the new research and an associate professor in Brown’s Department of Earth, Environmental and Planetary Sciences. “By looking at the orbital data, we can examine the large pyroclastic deposits on the Moon that were never sampled by the Apollo or Luna missions. The fact that nearly all of them exhibit signatures of water suggests that the Apollo samples are not anomalous, so it may be that the bulk interior of the Moon is wet.”

Full Moon photograph taken 10-22-2010 from Madison, Alabama, USA. By Gregory H. Revera (Own work) [CC BY-SA 3.0 or GFDL], via Wikimedia Commons
The research, which Milliken co-authored with Shuai Li, a postdoctoral researcher at the University of Hawaii and a recent Brown Ph.D. graduate, is published in Nature Geoscience.

Detecting the water content of lunar volcanic deposits using orbital instruments is no easy task. Scientists use orbital spectrometers to measure the light that bounces off a planetary surface. By looking at which wavelengths of light are absorbed or reflected by the surface, scientists can get an idea of which minerals and other compounds are present.

The problem is that the lunar surface heats up over the course of a day, especially at the latitudes where these pyroclastic deposits are located. That means that in addition to the light reflected from the surface, the spectrometer also ends up measuring heat.

“That thermally emitted radiation happens at the same wavelengths that we need to use to look for water,” Milliken said. “So in order to say with any confidence that water is present, we first need to account for and remove the thermally emitted component.”

To do that, Li and Milliken used laboratory-based measurements of samples returned from the Apollo missions, combined with a detailed temperature profile of the areas of interest on the Moon’s surface. Using the new thermal correction, the researchers looked at data from the Moon Mineralogy Mapper, an imaging spectrometer that flew aboard India’s Chandrayaan-1 lunar orbiter.

The researchers found evidence of water in nearly all of the large pyroclastic deposits that had been previously mapped across the Moon’s surface, including deposits near the Apollo 15 and 17 landing sites where the water-bearing glass bead samples were collected.

“The distribution of these water-rich deposits is the key thing,” Milliken said. “They’re spread across the surface, which tells us that the water found in the Apollo samples isn’t a one-off. Lunar pyroclastics seem to be universally water-rich, which suggests the same may be true of the mantle.”

The idea that the interior of the Moon is water-rich raises interesting questions about the Moon’s formation. Scientists think the Moon formed from debris left behind after an object about the size of Mars slammed into the Earth very early in solar system history. One of the reasons scientists had assumed the Moon’s interior should be dry is that it seems unlikely that any of the hydrogen needed to form water could have survived the heat of that impact.

“The growing evidence for water inside the Moon suggest that water did somehow survive, or that it was brought in shortly after the impact by asteroids or comets before the Moon had completely solidified,” Li said. “The exact origin of water in the lunar interior is still a big question.”

In addition to shedding light on the water story in the early solar system, the research could also have implications for future lunar exploration. The volcanic beads don’t contain a lot of water – about 0.05 percent by weight, the researchers say – but the deposits are large, and the water could potentially be extracted.
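As a rough worked example of what that concentration means in practice, the arithmetic below converts the 0.05 percent-by-weight figure into water yield per tonne of deposit. The one-tonne batch size is an assumption chosen purely for illustration, not a number from the study.

```python
# Back-of-the-envelope yield from the 0.05 percent-by-weight figure above.
water_fraction = 0.0005      # 0.05 percent by weight
regolith_mass_kg = 1000.0    # one metric tonne of pyroclastic deposit (assumed)

water_kg = regolith_mass_kg * water_fraction
print(water_kg)  # 0.5 -- about half a kilogram of water per tonne processed
```

At that grade, extracting useful quantities would mean processing large volumes of material, which is why the size of the deposits matters as much as their concentration.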

“Other studies have suggested the presence of water ice in shadowed regions at the lunar poles, but the pyroclastic deposits are at locations that may be easier to access,” Li said. “Anything that helps save future lunar explorers from having to bring lots of water from home is a big step forward, and our results suggest a new alternative.”

The research was funded by the NASA Lunar Advanced Science and Exploration Research Program (NNX12AO63G).

On this day in science history: Mars 5 launched

In 1973, the USSR launched Mars 5 on a Proton SL-12/D-1-e booster. It was one of several Soviet Mars probes – Mars 4, 5, 6, and 7 – launched in Jul–Aug 1973. The Mars 5 mission was to orbit Mars, which it achieved on 12 Feb 1974. Each orbit took about 25 hours. The probe was designed to return information on the composition, structure, and properties of the martian atmosphere and surface. However, after only 22 orbits, the mission ended prematurely due to loss of pressurization in the transmitter housing. Before the failure, the probe captured data for a small portion of the martian southern hemisphere, returning about 60 images over a nine-day period along with measurements from its other instruments.

Mars in natural colour in 2007. By ESA – European Space Agency & Max-Planck Institute for Solar System Research for OSIRIS Team ESA/MPS/UPD/LAM/IAA/RSSD/INTA/UPM/DASP/IDA [CC BY-SA 3.0 IGO], via Wikimedia Commons
Mars is the fourth planet from the Sun and the second-smallest planet in the Solar System, after Mercury. Named after the Roman god of war, it is often referred to as the “Red Planet” because the iron oxide prevalent on its surface gives it a reddish appearance. Mars is a terrestrial planet with a thin atmosphere, having surface features reminiscent both of the impact craters of the Moon and the valleys, deserts, and polar ice caps of Earth.

The rotational period and seasonal cycles of Mars are likewise similar to those of Earth, as is the tilt that produces the seasons. Mars is the site of Olympus Mons, the largest volcano and second-highest known mountain in the Solar System, and of Valles Marineris, one of the largest canyons in the Solar System. The smooth Borealis basin in the northern hemisphere covers 40% of the planet and may be a giant impact feature. Mars has two moons, Phobos and Deimos, which are small and irregularly shaped. These may be captured asteroids, similar to 5261 Eureka, a Mars trojan.

There are ongoing investigations assessing the past habitability potential of Mars, as well as the possibility of extant life. Except at the lowest elevations for short periods, liquid water cannot exist on the surface of Mars because the atmospheric pressure is less than 1% of Earth’s. The two polar ice caps appear to be made largely of water. The volume of water ice in the south polar ice cap, if melted, would be sufficient to cover the entire planetary surface to a depth of 11 meters (36 ft). In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region of Mars; the volume of water detected has been estimated to be equivalent to that of Lake Superior.
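The two volume figures can be cross-checked with simple geometry: a global layer 11 m deep implies a total volume set by Mars’s surface area. This is a rough sketch; the mean radius used is a standard reference value, assumed here rather than taken from the article.

```python
import math

mars_radius_m = 3_389_500.0    # Mars mean radius (standard value, assumed)
layer_depth_m = 11.0           # global layer depth quoted above
lake_superior_km3 = 12_100.0   # Lake Superior volume (standard value, assumed)

surface_area_m2 = 4 * math.pi * mars_radius_m ** 2
layer_volume_km3 = surface_area_m2 * layer_depth_m / 1e9  # m^3 -> km^3

print(layer_volume_km3)                      # ~1.6 million km^3 in the cap
print(layer_volume_km3 / lake_superior_km3)  # cap holds ~130 Lake Superiors
```

So on these figures the south polar cap dwarfs the Utopia Planitia deposit, even though both are very large reservoirs by terrestrial standards.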

Mars can easily be seen from Earth with the naked eye, as can its reddish coloring. Its apparent magnitude reaches −2.91, which is surpassed only by Jupiter, Venus, the Moon, and the Sun. Optical ground-based telescopes are typically limited to resolving features about 300 kilometers (190 mi) across when Earth and Mars are closest because of Earth’s atmosphere.
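Because the magnitude scale is logarithmic, a difference of Δm magnitudes corresponds to a brightness factor of 10^(0.4·Δm). A quick check of Mars at its brightest against a zero-magnitude star (Vega is the traditional reference, assumed here for illustration):

```python
def brightness_ratio(m1, m2):
    """How many times brighter an object of magnitude m1 is than one of
    magnitude m2 (smaller/more negative magnitudes are brighter)."""
    return 10 ** (0.4 * (m2 - m1))

# Mars at apparent magnitude -2.91 versus a magnitude-0 star.
print(round(brightness_ratio(-2.91, 0.0), 1))  # 14.6
```

That factor of roughly 15 over a zero-magnitude star is why Mars at opposition is unmistakable to the naked eye.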


Drinking coffee could lead to a longer life, scientist says

Here’s another reason to start the day with a cup of joe: Scientists have found that people who drink coffee appear to live longer.

Drinking coffee was associated with a lower risk of death due to heart disease, cancer, stroke, diabetes, and respiratory and kidney disease for African-Americans, Japanese-Americans, Latinos and whites.

People who consumed a cup of coffee a day were 12 percent less likely to die than those who didn’t drink coffee. The association was even stronger for those who drank two to three cups a day, who had an 18 percent reduced chance of death.

Lower mortality was present regardless of whether people drank regular or decaffeinated coffee, suggesting the association is not tied to caffeine, said Veronica W. Setiawan, lead author of the study and an associate professor of preventive medicine at the Keck School of Medicine of USC.

A small cup of coffee. By Julius Schorzman (Own work) [CC BY-SA 2.0], via Wikimedia Commons
“We cannot say drinking coffee will prolong your life, but we see an association,” Setiawan said. “If you like to drink coffee, drink up! If you’re not a coffee drinker, then you need to consider if you should start.”

The study, which will be published in the July 11 issue of Annals of Internal Medicine, used data from the Multiethnic Cohort Study, a collaborative effort between the University of Hawaii Cancer Center and the Keck School of Medicine.

The ongoing Multiethnic Cohort Study has more than 215,000 participants and bills itself as the most ethnically diverse study examining lifestyle risk factors that may lead to cancer.

“Until now, few data have been available on the association between coffee consumption and mortality in non-whites in the United States and elsewhere,” the study stated. “Such investigations are important because lifestyle patterns and disease risks can vary substantially across racial and ethnic backgrounds, and findings in one group may not necessarily apply to others.”

Since the association was seen in four different ethnicities, Setiawan said it is safe to say the results apply to other groups.

“This study is the largest of its kind and includes minorities who have very different lifestyles,” Setiawan said. “Seeing a similar pattern across different populations gives stronger biological backing to the argument that coffee is good for you whether you are white, African-American, Latino or Asian.”

Previous research by USC and others has indicated that drinking coffee is associated with reduced risk of several types of cancer, liver disease, Parkinson’s disease, Type 2 diabetes and other chronic diseases.

Setiawan, who drinks one to two cups of coffee daily, said any positive effects from drinking coffee are far-reaching because of the number of people who enjoy or rely on the beverage every day.

“Coffee contains a lot of antioxidants and phenolic compounds that play an important role in cancer prevention,” Setiawan said. “Although this study does not show causation or point to what chemicals in coffee may have this ‘elixir effect,’ it is clear that coffee can be incorporated into a healthy diet and lifestyle.”

About 62 percent of Americans drink coffee daily, a 5 percent increase from 2016 numbers, reported the National Coffee Association.

As a research institution, USC has scientists from across disciplines working to find a cure for cancer and better ways for people to manage the disease.

The Keck School of Medicine and USC Norris Comprehensive Cancer Center manage a state-mandated database called the Los Angeles Cancer Surveillance Program, which provides scientists with essential statistics on cancer for a diverse population.

Researchers from the USC Norris Comprehensive Cancer Center have found that drinking coffee lowers the risk of colorectal cancer.

But drinking piping hot coffee or beverages probably causes cancer in the esophagus, according to a World Health Organization panel of scientists that included Mariana Stern from the Keck School of Medicine.

In some respects, coffee is regaining its honor for wellness benefits. After 25 years of labelling coffee a carcinogen linked to bladder cancer, the World Health Organization last year announced that drinking coffee reduces the risk for liver and uterine cancer.

“Some people worry drinking coffee can be bad for you because it might increase the risk of heart disease, stunt growth or lead to stomach ulcers and heartburn,” Setiawan said. “But research on coffee has mostly shown no harm to people’s health.”

Setiawan and her colleagues examined the data of 185,855 participants – African-Americans (17 percent), Native Hawaiians (7 percent), Japanese-Americans (29 percent), Latinos (22 percent) and whites (25 percent) – ages 45 to 75 at recruitment. Participants answered questionnaires about diet, lifestyle, and family and personal medical history.

They reported their coffee drinking habits when they entered the study and updated them about every five years, checking one of nine boxes that ranged from “never or hardly ever” to “4 or more cups daily.” They also reported whether they drank caffeinated or decaffeinated coffee. The average follow-up period was 16 years.

Sixteen percent of participants reported that they did not drink coffee, 31 percent drank one cup per day, 25 percent drank two to three cups per day and 7 percent drank four or more cups per day. The remaining 21 percent had irregular coffee consumption habits.

Over the course of the study, 58,397 participants – about 31 percent – died. Cardiovascular disease (36 percent) and cancer (31 percent) were the leading killers.
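
As a quick sanity check, the figures reported above are internally consistent (a minimal sketch; the numbers are taken directly from the article):

```python
# Sanity-check the cohort figures quoted from the Multiethnic Cohort Study.
total = 185_855   # participants at recruitment
deaths = 58_397   # deaths over the follow-up period

print(f"Deaths: {deaths / total:.1%}")  # "about 31 percent", as stated

# The reported ethnic composition should account for the full cohort.
groups = {"African-American": 17, "Native Hawaiian": 7,
          "Japanese-American": 29, "Latino": 22, "White": 25}
print(sum(groups.values()), "percent")
```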

The data was adjusted for age, sex, ethnicity, smoking habits, education, pre-existing disease, vigorous physical exercise and alcohol consumption.

Setiawan’s previous research found that coffee reduces the risk of liver cancer and chronic liver disease. She is currently examining how coffee is associated with the risk of developing specific cancers.

Researchers from the University of Hawaii Cancer Center and the National Cancer Institute contributed to this study. The study used data from the Multiethnic Cohort Study, which is supported by a $19,008,359 grant from the National Cancer Institute of the National Institutes of Health.


On this day in science history: the earliest recorded confirmed total solar eclipse occurred

In 709 BC, the earliest record of a confirmed total solar eclipse was written in China. From: Ch’un-ch’iu, book I: “Duke Huan, 3rd year, 7th month, day jen-ch’en, the first day (of the month). The Sun was eclipsed and it was total.” This is the earliest direct allusion to a complete obscuration of the Sun in any civilisation. The recorded date, when reduced to the Julian calendar, agrees exactly with that of a computed solar eclipse. Reference to the same eclipse appears in the Han-shu (‘History of the Former Han Dynasty’) (Chinese, 1st century AD): “…the eclipse threaded centrally through the Sun; above and below it was yellow.” Earlier Chinese writings that refer to an eclipse do so without noting totality.

Total Solar Eclipse. Photo by Luc Viatour [GFDL, CC BY-SA 3.0, or CC BY-SA 2.5-2.0-1.0], via Wikimedia Commons
The Sun, which has fascinated humankind since prehistory, is the star at the centre of the Solar System. It is a nearly perfect sphere of hot plasma, with internal convective motion that generates a magnetic field via a dynamo process. It is by far the most important source of energy for life on Earth. Its diameter is about 109 times that of Earth, and its mass is about 330,000 times that of Earth, accounting for about 99.86% of the total mass of the Solar System. About three quarters of the Sun’s mass consists of hydrogen (~73%); the rest is mostly helium (~25%), with much smaller quantities of heavier elements, including oxygen, carbon, neon, and iron.
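
The quoted mass fraction can be checked with rough, well-known planetary masses (a back-of-the-envelope sketch; moons, asteroids, and comets are omitted, so the result lands slightly above the quoted 99.86%):

```python
# Rough check that the Sun holds ~99.86% of the Solar System's mass,
# using approximate masses in kilograms. Smaller bodies are omitted.
SUN = 1.989e30
planets = {
    "Mercury": 3.301e23, "Venus": 4.867e24, "Earth": 5.972e24,
    "Mars": 6.417e23, "Jupiter": 1.898e27, "Saturn": 5.683e26,
    "Uranus": 8.681e25, "Neptune": 1.024e26,
}
fraction = SUN / (SUN + sum(planets.values()))
print(f"{fraction:.2%}")  # slightly above 99.86%, since moons etc. are omitted
```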

The Sun is a G-type main-sequence star (G2V) based on its spectral class. As such, it is informally referred to as a yellow dwarf. It formed approximately 4.6 billion years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the center, whereas the rest flattened into an orbiting disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core. It is thought that almost all stars form by this process.

The Sun is roughly middle-aged; it has not changed dramatically for more than four billion years, and will remain fairly stable for more than another five billion years. After hydrogen fusion in its core has diminished to the point at which it is no longer in hydrostatic equilibrium, the core of the Sun will experience a marked increase in density and temperature while its outer layers expand to eventually become a red giant. It is calculated that the Sun will become sufficiently large to engulf the current orbits of Mercury and Venus, and render Earth uninhabitable.

The enormous effect of the Sun on Earth has been recognized since prehistoric times, and the Sun has been regarded by some cultures as a deity. The synodic rotation of Earth and its orbit around the Sun are the basis of the solar calendar, which is the predominant calendar in use today.

Reconciling predictions of climate change

Harvard University researchers have resolved a conflict in estimates of how much the Earth will warm in response to a doubling of carbon dioxide in the atmosphere.

That conflict – between temperature ranges based on global climate models and paleoclimate records and ranges generated from historical observations – prevented the United Nations’ Intergovernmental Panel on Climate Change (IPCC) from providing a best estimate in its most recent report for how much the Earth will warm as a result of a doubling of atmospheric CO2.

The researchers found that the low range of temperature increase – between 1 and 3 degrees Celsius – offered by the historical observations did not take into account long-term warming patterns. When these patterns are taken into account, the researchers found that not only do temperatures fall within the canonical range of 1.5 to 4.5 degrees Celsius but that even higher ranges, perhaps up to 6 degrees, may also be possible.

The research is published in Science Advances.

CO2 in Earth’s atmosphere if half of global-warming emissions are not absorbed (NASA simulation). By NASA/GSFC [Public domain], via Wikimedia Commons
It’s well documented that different parts of the planet warm at different speeds. The land over the northern hemisphere, for example, warms significantly faster than water in the Southern Ocean.

“The historical pattern of warming is that most of the warming has occurred over land, in particular over the northern hemisphere,” said Cristian Proistosescu, PhD ’17, first author of the paper. “This pattern of warming is known as the fast mode – you put CO2 in the atmosphere and very quickly after that, the land in the northern hemisphere is going to warm.”

But there is also a slow mode of warming, which can take centuries to realize. That warming, which is most associated with the Southern Ocean and the Eastern Equatorial Pacific, comes with positive feedback loops that amplify the process. For example, as the oceans warm, cloud cover decreases and a white reflecting surface is replaced with a dark absorbent surface.

The researchers developed a mathematical model to parse the two different modes within different climate models.
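
The fast/slow decomposition can be illustrated with a toy model (the amplitudes and timescales below are illustrative assumptions, not the paper’s fitted values): the response to an abrupt CO2 doubling is written as the sum of a fast and a slow exponential mode.

```python
import math

def warming(t_years, a_fast=2.0, tau_fast=5.0, a_slow=2.0, tau_slow=300.0):
    """Toy two-mode temperature response (deg C) to a CO2 step.

    Amplitudes and timescales are illustrative, not fitted values."""
    fast = a_fast * (1 - math.exp(-t_years / tau_fast))
    slow = a_slow * (1 - math.exp(-t_years / tau_slow))
    return fast + slow

# After ~50 years the fast mode has saturated but the slow mode has
# barely begun, so extrapolating from the observed period alone would
# underestimate the equilibrium warming (a_fast + a_slow = 4 deg C here).
print(warming(50))    # dominated by the fast mode
print(warming(2000))  # approaching equilibrium
```

Fitting only the early decades of such a curve recovers roughly a_fast alone, which is the sense in which estimates based on historical observations can sit below the equilibrium value.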

“The models simulate a warming pattern like today’s, but indicate that strong feedbacks kick in when the Southern Ocean and Eastern Equatorial Pacific eventually warm, leading to higher overall temperatures than would simply be extrapolated from the warming seen to date,” said Peter Huybers, Professor of Earth and Planetary Sciences and of Environmental Science and Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and co-author of the paper.

Huybers and Proistosescu found that while the slow mode of warming contributes a great deal to the ultimate amount of global warming, it is barely present in present-day warming patterns. “Historical observations give us a lot of insight into how climate changes and are an important test of our climate models,” said Huybers, “but there is no perfect analogue for the changes that are coming.”


On this day in science history: foam rubber was developed

In 1929, foam rubber was developed at the Dunlop Latex Development Laboratories in Birmingham. British scientist E.A. Murphy whipped up the first batch using an ordinary kitchen mixer to froth natural latex rubber. His colleagues were unimpressed – until they sat on it. Within five years it was everywhere: on motorcycle seats, London bus seats, and Shakespeare Memorial Theatre seats, and eventually in mattresses.

In 1937, isocyanate-based materials were first used to form foam rubbers, and after World War II styrene-butadiene rubber replaced many natural foams. Foam rubbers have been used commercially for a wide range of applications since around the 1940s. Two types of foam are in use today: flexible and rigid. Flexible foam is used in furniture, car seats, wall insulation, and even in the shoes we wear. Rigid foam is used to insulate buildings and appliances such as freezers and refrigeration trucks.

Foam rubber mattress [Public domain], via Wikimedia Commons
So, how is foam rubber manufactured? Rates of polymerization range from many minutes to just a few seconds. Fast-reacting polymers have short cycle times and require machinery to thoroughly mix the reacting agents; slow-reacting polymers may be mixed by hand, but require long periods of mixing. As a result, industrial production tends to rely on machinery. Processing techniques include, but are not limited to, spraying, open pouring, and molding.
  • Material preparation – Liquid and solid materials generally arrive on site by rail or truck; once unloaded, liquid materials are stored in heated tanks. When producing slabstock, typically two or more polymer streams are used.
  • Mixing – Open pouring, better known as continuous dispensing, is used primarily to form rigid, low-density foams. Specific amounts of chemicals are combined in a mixing head, much like an industrial blender. The foam is poured onto a conveyor belt, where it cures before cutting.
  • Curing and cutting – After curing on the conveyor belt, the foam is forced through a horizontal band saw, which cuts it to a set size for the application. General contracting typically uses 4’ x 12’ x 2’’ pieces.
  • Further processing – Once cut and cured, the slabstock can either be sold as-is or laminated. Lamination turns the slabstock into a rigid foam board known as boardstock, which is used for metal roof insulation, oven insulation, and many other durable goods.
Unfortunately, because of the variety of polyurethane chemistries, it is difficult to recycle foam materials with a single method. Most recycling reuses slabstock foam as carpet backing: scrap is shredded and the small flakes are bonded together to form sheets. Other methods break the foam down into granules and disperse them into a polyol blend to be molded into the same part as the original. Recycling processes for foam rubber are still developing, and the future will hopefully bring new and easier methods.


Tipping points are real: Gradual changes in CO2 levels can induce abrupt climate changes

During the last glacial period, the influence of atmospheric CO2 on the North Atlantic circulation produced temperature increases of up to 10 degrees Celsius in Greenland within only a few decades – as indicated by new climate calculations from researchers at the Alfred Wegener Institute and the University of Cardiff. Their study is the first to confirm that there have been situations in our planet’s history in which gradually rising CO2 concentrations have set off abrupt changes in ocean circulation and climate at “tipping points.” These sudden changes, referred to as Dansgaard-Oeschger events, have been observed in ice cores collected in Greenland. The results of the study have just been released in the journal Nature Geoscience.

Ice core sample taken from drill. Photo by Lonnie Thompson, Byrd Polar Research Center, Ohio State University. [Public domain], via Wikimedia Commons
Previous glacial periods were characterised by several abrupt climate changes in the high latitudes of the Northern Hemisphere. However, the cause of these past phenomena remains unclear. In an attempt to better grasp the role of CO2 in this context, scientists from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) recently conducted a series of experiments using a coupled atmosphere-ocean-sea ice model.

First author Xu Zhang explains: “With this study, we’ve managed to show for the first time how gradual increases of CO2 triggered rapid warming.” This temperature rise is the result of interactions between ocean currents and the atmosphere, which the scientists explored using the climate model. According to their findings, the increased CO2 intensifies the trade winds over Central America, because the eastern Pacific warms more than the western Atlantic. This in turn produces increased moisture transport out of the Atlantic, and with it an increase in the salinity and density of the surface water. Finally, these changes lead to an abrupt amplification of the large-scale overturning circulation in the Atlantic. “Our simulations indicate that even small changes in the CO2 concentration suffice to change the circulation pattern, which can end in sudden temperature increases,” says Zhang.

Further, the study’s authors reveal that rising CO2 levels are the dominant cause of changed ocean currents during the transitions between glacial and interglacial periods. As climate researcher Gerrit Lohmann explains, “We can’t say for certain whether rising CO2 levels will produce similar effects in the future, because the framework conditions today differ from those in a glacial period. That being said, we’ve now confirmed that there have definitely been abrupt climate changes in the Earth’s past that were the result of continually rising CO2 concentrations.”
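
The idea of a tipping point (a smoothly ramped forcing producing an abrupt jump) can be sketched with a classic bistable toy system; this is purely illustrative and is not the coupled AWI model. The state x tracks the lower stable branch of dx/dt = x - x^3 + F until that branch vanishes at the fold near F ≈ 0.385, then jumps abruptly to the upper branch:

```python
# Toy bistable system dx/dt = x - x**3 + F, integrated with Euler steps
# while the forcing F is ramped slowly. Purely illustrative of a
# "tipping point"; not the coupled atmosphere-ocean-sea ice model.
def ramp_response(f_start=-0.6, f_end=0.6, steps=60_000, dt=0.01):
    x = -1.0  # start near the lower stable branch
    trajectory = []
    for i in range(steps):
        F = f_start + (f_end - f_start) * i / steps
        x += (x - x**3 + F) * dt
        trajectory.append((F, x))
    return trajectory

traj = ramp_response()
# The state stays near the lower branch until the fold bifurcation
# (F = 2/(3*sqrt(3)) ~ 0.385), then rapidly crosses to the upper branch.
jump_F = next(F for F, x in traj if x > 0)
print(jump_F)
```

Ramping F back down would not immediately undo the jump: in this system the state stays on the upper branch until the opposite fold near F ≈ -0.385, the hysteresis that makes such transitions abrupt relative to the gradual forcing.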
