Saturday, 29 June 2013

Mystery solved: meteorite caused Tunguska devastation

Vast areas were flattened by a meteorite in Tunguska in 1908.

On the morning of June 30, 1908, a gigantic fireball devastated hundreds of square kilometers of uninhabited Siberian forest around the Tunguska River. The first scientists to investigate the impact site expected to find a meteorite, but they found nothing. With no fragments to point to, many scientists concluded that the culprit was a comet. Comets, which are essentially muddy ice balls, could cause such devastation and leave no trace.

But now, 105 years later, scientists have revealed that the Tunguska devastation was indeed caused by a meteorite. A group of Ukrainian, German, and American scientists have identified its microscopic remains. Why it took them so many years makes for a fascinating tale about the limits of science and how we are pushing them.

Eyewitness reports of the Tunguska event help paint a partial picture. As the fireball streaked across the sky, a blast of heat scorched everything in its wake, to be followed by a shock wave that threw people off their feet and stripped leaves and branches from trees, laying a large forest flat. Photos reveal the extent and force of the impact, showing trees that look like bare telegraph poles, all pointing away from the impact site.

The inability to find any meteorite, however, led to a century of speculation on the origins of the blast. The Tunguska event has spawned a wealth of science fiction that has fed outrageous theories. But the main question has remained: what was it?

An icy comet would evaporate on impact, which could explain the lack of any observable evidence. But a study in the journal Planetary and Space Science provides, for the first time, evidence that the impact was not caused by a comet. Researchers examined microscopic fragments recovered from a layer of partially decayed vegetation (peat) that dates from that extraordinary summer.

Victor Kvasnytsya from the National Academy of Sciences of Ukraine and his colleagues used the latest imaging and spectroscopy techniques to identify aggregates of carbon minerals—diamond, lonsdaleite, and graphite. Lonsdaleite in particular is known to form when carbon-rich material is suddenly exposed to a shock wave created by an explosion, such as that of a meteorite hitting Earth. The lonsdaleite fragments contain even smaller inclusions of iron sulphides and iron-nickel alloys, troilite and taenite, which are characteristic minerals found in space-based objects such as meteorites. The precise combination of minerals in these fragments points to a meteorite source; it is near-identical to the mineral assemblages found at an impact site in Arizona.

The samples point to one thing: the Tunguska impact is the largest meteorite impact in recorded history. US researchers have estimated that the Tunguska blast may have been equivalent to as much as five megatons of TNT—hundreds of times more powerful than the Hiroshima blast. The meteorite tore apart as it entered the atmosphere at an angle, so that little of it reached the ground intact. That is why all that remains are the small specks that have been fossilised in the Siberian peat.

We can compare the Tunguska event with the fireball seen during the impact of the Chelyabinsk meteor earlier this year. Although much less powerful than Tunguska, the Chelyabinsk event was similar. A low-angle approach broke up the body, leaving fragments scattered over a wide area. More than 1,000 people were injured, some drawn to windows by the flash of the fireball and then hit by the shock wave that followed.

The Tunguska devastation was not investigated for 19 years, partly because of a lack of resources. In contrast, the Chelyabinsk meteorite attracted immediate attention. Dashboard cameras captured the trajectory and brightness of the fireball, while CCTV networks provided fixed reference points. The US space agency NASA has now been able to identify the origins of the meteorite.

The low-frequency rumble of the Chelyabinsk event travelled twice around the globe. The data demonstrate that the energy of the impact was equivalent to a 460 kiloton (TNT) bomb, which is about 40 times the Hiroshima blast.

Planetary and Space Science, June 2013. DOI: 10.1016/j.pss.2013.05.003 (About DOIs)

Simon Redfern is professor of mineral physics at the University of Cambridge.

This article was first published at The Conversation.



Brain training app brings big data to human cognition

One of the problems with cognitive and behavioral research is getting a good cross-section of the general population. Although they're convenient to work with, a couple hundred college students rarely represent the full diversity of human capability and behavior, yet that's exactly what many studies rely on. But a brain-training game may now provide access to data on scales that behavioral scientists probably never dreamed of. With a user base of over 35 million, the game generates data that could help us tease out very subtle effects. But as a start, a team of researchers has focused on some simpler questions: how sleep, alcohol, and aging affect our performance and ability to learn.

The software is less a single game than a combined game and survey platform. Developed by a company called Lumosity, it's available on mobile platforms and through a Web interface. The platform can run a variety of games (a typical one asks users to answer math questions that appear in raindrops before they hit the ground), all with an emphasis on brain training. A few games are available for free, and users can pay to get access to more advanced ones.

The scientific literature on brain training games is a bit mixed, and there's some controversy about whether the games improve mental function in general, or only those specific areas of cognition that the game focuses on. Lumosity clearly argues for the former and one of its employees pointed Ars to a number of studies that he felt validate the company's approach. What's not in doubt, however, is that it has a huge user base with over 35 million registered users. And because the Lumosity platform is flexible, it has been able to get basic demographic information from many of those users; they and others have also filled out personality profiles and other assessments.

Finally, people often play the same game multiple times, potentially over several years. This allows researchers to track both how the game affects the players' abilities to process certain information (namely, do they get better at it?) and may eventually allow some longitudinal studies of a population over time.

For now, however, the company (along with a couple of researchers at outside institutions) is simply trying to validate that its dataset gives it the sort of results that have been hinted at by other work. And two of the things Lumosity started with were sleep and booze.

A lot of the Lumosity client base has answered survey questions about how much sleep they get and how often they have a drink, so the authors correlated that with their scores on a number of tests, targeting working and spatial memory as well as simple arithmetic skills. The key thing in the work was the number of subjects: over 160,000 in two cases and 125,000 in the third.
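
Conceptually, the analysis is just a very large cross-tabulation: bin users by their self-reported habit, then compare average scores across bins. A minimal sketch of that aggregation, assuming a pandas DataFrame with invented column names (the real Lumosity data and its fields are not public), might look like this:

    import pandas as pd

    # Hypothetical table: one row per user, with self-reported sleep and a game score.
    df = pd.read_csv("survey_and_scores.csv")   # assumed file and column names

    summary = (
        df.groupby("hours_of_sleep")["working_memory_score"]
          .agg(["mean", "sem", "count"])        # mean score, its standard error, and N per bin
          .reset_index()
    )
    print(summary)   # with more than 160,000 users, even small differences between bins stand out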

In all tests, performance sharply increased as people got more sleep, peaking at seven hours of sleep a day. From there, things showed a more gradual decline, with people who got 10 hours of sleep scoring about the same as those who got five. A similar pattern, with a much lower threshold, was apparent when it came to alcohol intake. People who had one or two drinks a day outperformed the teetotalers, but anything above the two-drink threshold saw a drop. In order to see scores similar to those who abstained, people had to have four or five drinks a day.

From there, the researchers turned to learning by tracking people's performance on a single game over 25 iterations. In general, baseline performance dropped with increasing age. But there were distinct differences between the different types of tasks. For example, spatial and working memory seemed to decline steadily from the 20s on. Math skills decayed more slowly and didn't start declining until the 30s, while verbal fluency stayed stable into the 50s. This pattern seems to be driven largely by changes in people's ability to learn. Improvements in verbal and arithmetic skills stayed largely flat across the range from 20 to 70 years, while the two memory tests showed declines in improvement over roughly the same age range.

Overall, the authors admit that the results don't tell them much about the mechanisms behind the effects they're seeing (for example, they note "the apparent cognitive advantage for those who report moderate alcohol intake may be in part due to increased social and cognitive engagement compared to those who report little or no alcohol consumption"). But it is clear that the data available via Lumosity should let people test various mechanisms on a scale that would be impossible without dragging the entire student population of several universities into a behavior lab.

To aid in this, the company has set up a group, Lumos Labs, that evaluates outside research proposals and provides access to the data to any that meet its criteria. Once access to the data is granted, Lumosity takes no further part, allowing researchers to evaluate the data independently. The authors of the current paper were largely Lumosity employees, but it's possible that this will eventually be a relative rarity.

Frontiers in Human Neuroscience, 2013. DOI: 10.3389/fnhum.2013.00292 (About DOIs).



More evidence for (and against) groundwater contamination by shale gas

Shale gas well pad in Pennsylvania.

Few topics in the realm of energy are as publicly charged today as fracking, the process by which natural-gas-bearing layers of shale are hydraulically fractured to create pathways for the gas to be extracted. To some, it’s simply a major boon to the US economy and energy independence. To others, it’s a short-sighted resource grab that will leave drinking water resources poisoned for decades. As research in areas where fracking is already prevalent has slowly plodded ahead, what we’ve learned hasn’t fit neatly into either of the black-and-white narratives of the political debate. The realities have, unsurprisingly, been complicated.

A pair of recent papers provides an example, finding different results in areas of Pennsylvania and Arkansas where shale gas is currently being produced.

The first provides an update to an earlier study in northeastern Pennsylvania, where the Marcellus Shale has been a hot target for the natural gas industry. That study analyzed water samples from 68 private wells; methane gas was found to be more prevalent in wells close to shale gas extraction sites. The work implied that the natural gas wells were somehow enabling the migration of natural gas upwards into the drinking water aquifer.

The study was not without its critics, and the researchers returned to the area to collect more data in the hopes of answering the question more clearly. While methane is the primary component of natural gas, it can also be generated by bacteria. To help differentiate such “biogenic” methane from natural gas, they also tested for ethane and propane, longer chain hydrocarbons not produced by bacteria, and carbon isotopes in the gases. They also sampled 81 new water wells for additional compounds and isotopes.
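
The logic behind that differentiation can be reduced to a pair of rules of thumb: thermogenic (natural) gas carries isotopically heavier carbon and comes with ethane and propane, while microbial methane is isotopically lighter and essentially "dry." A toy classifier along those lines, with illustrative thresholds rather than the values the researchers actually used, might look like this:

    def classify_gas(d13c_methane, methane, ethane_plus_propane):
        """Rough thermogenic vs. biogenic call; thresholds are illustrative only."""
        wetness = methane / max(ethane_plus_propane, 1e-9)   # "dryness" of the gas
        if d13c_methane > -50 and wetness < 1000:
            return "thermogenic (natural gas-like)"
        if d13c_methane < -60 and wetness > 1000:
            return "biogenic (bacteria-generated)"
        return "ambiguous - needs more evidence (e.g., helium-4, other isotopes)"

    # A sample with heavy carbon and measurable ethane/propane looks like natural gas.
    print(classify_gas(d13c_methane=-42, methane=5.0, ethane_plus_propane=0.05))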

Pulling all their data together, the correlation between methane concentration and proximity to a shale gas well held. Though ethane and propane were less commonly detected (concentrations are typically much lower than methane), the same correlation appeared there. The carbon isotopes in all the high methane samples also indicated a natural gas source, though a few of the low methane samples appeared to be biogenic.

By looking at the carbon isotopes in methane, ethane, and propane, as well as helium-4 in the samples, they were even able to characterize whether it looked like natural gas from the deeper Marcellus Shale or from slightly shallower layers (which also contain some natural gas). They found some of both, which could indicate that both shallower and deeper natural gas is making its way toward the surface due to imperfections in the seals around the gas wells.

It’s not impossible that some of that gas could be naturally present. Natural gas has long been bubbling up with the briny water at a nearby salt spring (an interesting feature at the center of a separate study we covered). But this can't explain all the findings. The gas at the spring fell in with the group resembling shallower sources. And the correlation with proximity to natural gas wells would not be expected for naturally occurring gas in this region.

While those results strengthen the case for natural gas migrating into groundwater as a result of the shale gas activities, a similar study in northern Arkansas tells a different story. The same group of researchers, working together with the United States Geological Survey, looked at 127 private wells in an area where the Fayetteville Shale is being tapped for natural gas.

There, methane was not a significant issue, with only one sample coming in high (of the 51 tested for methane). What little methane was detected showed no correlation with proximity to shale gas wells. The carbon-isotope signature of that methane also looked more like bacteria-generated methane than natural gas. Nothing else about the sampled wells indicated any sort of contamination by deep fluids. They were even able to look at data from 43 wells sampled long before fracking started, finding no differences in the things that were tested for.

So why the difference? Were the shale gas wells in Arkansas constructed more carefully? It’s possible, but it’s more likely that the geology is responsible.

There’s a relatively thick barrier separating the drinking water aquifer there from the Fayetteville Shale below. In Pennsylvania, the barrier rocks have been fractured somewhat by tectonic stresses, but the Arkansas rocks have not experienced that level of deformation. That leaves fewer potential pathways for natural gas to flow through, decreasing the chances that any will migrate far enough to intercept imperfections in the well seals and hitch a ride up the borehole.

Pennsylvania also has a history of traditional oil and gas extraction that northern Arkansas does not, which means that many deep wells have been drilled in the past. Any wells that were not completely filled after they were shut down could create additional shortcuts for fluids moving through the rock.

The question “Does fracking contaminate groundwater?” does not have a yes or no answer. It depends on the specifics of the geology in the area and the specifics of the shale gas operation—which involves both the work underground as well as the handling of fracking fluid at the surface. Though some wells in Pennsylvania appear to be impacted by “stray gas”, none of the potentially harmful chemicals used in the actual fracking fluid have shown up (though they have been seen in deeper groundwater elsewhere). And in Arkansas, no groundwater impacts of any kind have been found. Each situation is a little different.

PNAS, 2013. DOI: 10.1073/pnas.1221635110
Applied Geochemistry, 2013. DOI: 10.1016/j.apgeochem.2013.04.013   (About DOIs).



Cancer gene sequencing effort struggles through waves of false IDs

With the development of DNA sequencing centers that are capable of churning out multiple genomes in a week, many scientists saw a resource that they could turn against cancer. By sequencing a person's healthy cells and comparing those results to the sequence of their cancer cells, it would be possible to map all the genetic changes that drive cancers. Within the list of genes, there might also be hints for future therapies.

As the cancer genomes have rolled in, however, reality hasn't kept pace with the promise. As the number of cancer genomes sequenced has risen, the number of genes identified has continued to grow. And as noted by the authors of a paper released by Nature over the weekend, some of the genes are overwhelmingly unlikely to have anything to do with cancer. So a huge team of researchers set out to find out why and to fix the problem.

Although some cancers are caused by viruses, the majority of cases are caused by mutations that alter or disable the genes that normally control a cell's growth. Many of these have been identified over the years: some that are common to many cancers, others that are specific to just a few. Until recently, there was no way to be sure we had a complete catalog of the genes involved, or knew which ones were important in which cancers. Genome sequencing gave us the chance to develop a complete catalog.

The challenge of this approach is that cancer cells carry a lot of mutations. They are constantly adapting to the body's (and doctors') attempts to kill them and mutations are the raw materials for that. As part of their transformations, they also tend to disable the genes that stop cells from dividing if they carry DNA damage. Both of these factors tend to mean that cancer cells have an increased rate of mutations. But these mutations are indiscriminate; they hit irrelevant genes with the same frequency as they hit genes important for cancer's origin and spread.

So, the people doing cancer genomics faced a challenge in trying to weed out the irrelevant mutations and focus on the significant ones. For the most part, they were failing.

To illustrate the problem, the authors of the new paper took their own set of normal and cancerous samples from 178 patients with lung cancer. The standard computer analysis used to identify mutations pulled out 450 genes that were mutated at a higher frequency in the cancers, even after accounting for the size of the gene. That's a lot. And some of them were clearly irrelevant to cancer. Nearly a quarter of the 450 genes encoded odorant receptors, which underlie your sense of smell but aren't expressed much beyond the nerves of the nasal lining. Other nerve-specific genes were on the list, as were a few that play a structural role in muscles.

Other types of cancer had similarly large lists filled with genes that were probably irrelevant. And a scan of the published literature revealed that many of these had already been reported as associated with cancer.

Why are so many labs being led astray? To sort things out, the authors obtained a large collection of genomes from 27 different types of cancer, and started doing comparisons among them. The first thing they noticed is that different cancer types varied in the frequency of mutations by factors of up to 1,000. Lung cancers and melanomas were at the high end, with rates up to and exceeding one mutation every 10,000 bases. That's likely because these cancers are largely caused by known mutagens—cigarette smoke and UV light, respectively.

Those mutagens are also fairly specific about how they damage the DNA (for example, UV light tends to damage DNA when two Ts are next to each other), so they tended to favor a specific spectrum of mutations. The same was true in some other types of cancer, which suggests they might have a common environmental cause.

In addition to the type and frequency of mutations, there were other variables. Mutation rates could vary greatly among individuals with cancer, so that lung cancers from two different patients might show very different rates. And different areas of the genome were more or less prone to mutation. Active genes seem to be resistant to mutation, possibly because they reside on a section of the chromosome that's accessible to the DNA repair machinery. Areas that were the last to be copied when a cell divides, in contrast, were more likely to pick up mutations.

Overall, the authors conclude that earlier studies were going wrong because they compared mutations in a gene to the average mutation rate in the genome. Instead, all these other factors—type of cancer, type of mutation, the patient's overall mutation rate, and the region of the genome—need to be taken into account as well. Being the helpful sorts, they even wrote a program called MutSig that did so. (And made it freely available for noncommercial use.)
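
The core statistical move, judging each gene against its own expected background rather than a single genome-wide average, can be illustrated with a toy binomial test. This is a drastic simplification of what MutSig actually does, and every number below is invented:

    from scipy.stats import binomtest

    def gene_significance(mutations_seen, gene_length_bp, n_tumors, per_base_rate):
        """P-value that a gene is mutated more often than its own background predicts."""
        opportunities = gene_length_bp * n_tumors   # base-pair chances for a mutation to land
        return binomtest(mutations_seen, opportunities, per_base_rate,
                         alternative="greater").pvalue

    # The same tally of 30 mutations across 178 tumors, judged against two different backgrounds:
    print(gene_significance(30, 1500, 178, per_base_rate=2e-5))  # quiet region: highly significant
    print(gene_significance(30, 1500, 178, per_base_rate=1e-4))  # mutation-prone region: unremarkable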

When MutSig was turned loose on the original data, the list of interesting genes dropped from 450 to just 11. In all probability, the same thing would happen to other data sets if they were subjected to the same analysis; not everything (or not every gene) causes cancer.

This is a great success story, but the silver lining comes with a dark cloud. Ten of the 11 genes that were identified were already known to be involved in cancer, and the 11th is involved in the immune response, which helps keep cancer in check. So it's not clear that we're getting much in the way of new answers out of a large and expensive project.

Nature, 2013. DOI: 10.1038/nature12213  (About DOIs).



Friday, 28 June 2013

Some of what the Sun spits out violently returns to its surface

Material getting blasted off the surface of the Sun. Some of it will make a violent return trip.

The Sun may look like a calm, steady presence from the safe distance of our vantage point on Earth, but just a slight bit of magnification shows that its surface is seething with violent activity. And every now and then some of that violence gets sent towards Earth in the form of a coronal mass ejection, causing auroras and general worries about the safety of the people and hardware we have in orbit.

In focusing on the material that gets sent toward Earth, however, it's easy to overlook the fact that not everything that gets shot out of the Sun is energetic enough to escape its gravity. A lot of material obeys the dictum "what goes up must come down" and ends up crashing back to the surface of the Sun. Now, scientists have imaged these events as the ejected material returns and strikes the surface of the Sun. Using that, they built a model that shows what happens below the Sun's surface during these impacts, a model that may have applications to the processes that build stars in the first place.

Some video of the solar eruptions, showing the material that returns to the Sun's surface at high speed.

The basic process at issue is fairly simple: the eruptions that power coronal mass ejections send a lot of mass away from the Sun, but not all of it has sufficient momentum to escape the Sun's gravity. So at a certain point, it comes to a halt and then reverses, heading back toward the surface of the Sun. A lot of it reaches free-fall speeds before it impacts—which, given the environment, is somewhere around 300-450 kilometers a second.
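
Those speeds are roughly what simple energy conservation predicts for material falling back from a fraction of a solar radius above the surface. A back-of-the-envelope check, with illustrative starting heights and ignoring drag and rotation:

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg
    R_SUN = 6.96e8       # solar radius, m

    def fall_speed(height_m):
        """Speed of material falling from rest at R_sun + height back to the surface."""
        return math.sqrt(2 * G * M_SUN * (1 / R_SUN - 1 / (R_SUN + height_m)))

    for h in (2e8, 5e8, 8e8):   # 200,000 to 800,000 km above the surface
        print(f"fall from {h/1e3:,.0f} km up -> {fall_speed(h)/1e3:.0f} km/s")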

If the sheer speed isn't enough to impress you, the size of the blobs and strings of material that return to the Sun just might. The authors estimate that, along their short axis, many of these are above 2,000 km across, and some up to 4,000 km. On their long axis, they can be much larger than 10,000 km. The upper figures would place the area they impact as roughly the size of the US (including Alaska). The observations show that the impact can be anything from a single droplet to a train of droplets and longer strings of material.

Different views of the material crashing back into the Sun, taken at different wavelengths. Arrows highlight some of the points of impact.

The data suggests that the Sun's magnetic fields are weak where the impacts take place, so the authors built a model of the events using gravity and fluid dynamics. The model shows that the material will plow straight through the Sun's chromosphere and proceed downward until it strikes a layer of equal density. At that point, the material creates a thin disk of very hot material, after which the center region sees a bit of a bounce-back effect, creating an upward surge of material heated to around 10^5 K. For larger streams of material, the bounce-back structures could be as large as 10,000 km.

The authors' model, showing some of the bounce-back effect. Courtesy of Fabio Reale.

Although these events produce radiation across the visible spectrum and into the UV, a lot of the UV light seems to get absorbed by the surrounding layers of material and never escapes the Sun. And that, the authors say, can help explain a bit of a conundrum regarding the formation of stars.

At the later stages of their growth, young stars have already ignited fusion even as more material from the surrounding disk is being funneled into the surface of the star by its magnetic fields. Observations of young stars have allowed us to estimate the amount of material reaching the growing star but have given different estimates depending on the wavelengths being used for the imaging. The new work suggests that the UV observations may lead to a systematic underestimate of how much material is impacting the star, helping bring different estimates in line.

Science, 2013. DOI: 10.1126/science.1235692  (About DOIs).



Your vegetables’ nutritional content could be affected by jet lag

When you buy vegetables at the grocery store, they are usually still alive. When you lock your cabbage and carrots in the dark recess of the refrigerator vegetable drawer, they are still alive. They continue to metabolize while we wait to cook them.

Why should we care? Well, plants that are alive adjust to the conditions surrounding them. Researchers at Rice University have shown that some plants have circadian rhythms, adjusting their production of certain chemicals based on their exposure to light and dark cycles. Understanding and exploiting these rhythms could help us maximize the nutritional value of the vegetables we eat.

According to Janet Braam, a professor of biochemistry at Rice, her team’s initial research looked at how Arabidopsis, a common plant model for scientists, responded to light cycles. “It adjusts its defense hormones before the time of day when insects attack,” Braam said. Arabidopsis is in the same plant family as the cruciferous vegetables—broccoli, cabbage, and kale—so Braam and her colleagues decided to look for a similar light response in our foods.

They bought some grocery store cabbage and brought it back to the lab so they could subject the cabbage to the same tests they gave their model plant, which involved offering up living, leafy vegetables to a horde of hungry caterpillars. First, half the cabbages were exposed to a normal light and dark cycle, the same schedule as the caterpillars, while the other half were exposed to the opposite light cycle.

The caterpillars tend to feed in the late afternoon, according to Braam, so the light signals the plants to increase production of glucosinolates, chemicals that the insects don’t like, ahead of that time. The study found that cabbages that adjusted to the normal light cycle had far less insect damage than the jet-lagged cabbages.

While it’s cool to know that cabbages are still metabolizing away and responding to light stimulus days after harvest, Braam said that this process could affect the nutritional value of the cabbage. “We eat cabbage, in part, because these glucosinolates are anti-cancer compounds,” Braam said.

Glucosinolates are found only in the cruciferous vegetable family, but the Rice team wanted to see if other vegetables demonstrated similar circadian rhythms. They tested spinach, lettuce, zucchini, blueberries, carrots, and sweet potatoes. “Luckily, our caterpillar isn’t picky,” Braam said. “It’ll eat just about anything.”

Just like with the cabbage, the caterpillars ate far less of the vegetables trained on the normal light schedule. Even the fruits and roots increased production of some kind of anti-insect compound in response to light stimulus.

Metabolites affected by circadian rhythms could include vitamins and antioxidants. The Rice team is planning follow-up research to explore how the cycling phenomenon affects known nutrients and whether the magnitude of the shifts is large enough to have an impact on our diets. “We’ve uncovered some very basic stimuli, but we haven’t yet figured out how to amplify that for human nutrition,” Braam said.

As more research explores how combinations of light cycle, temperature, and other variables could increase the health benefits of eating our vegetables, the refrigerators of the future might have light and dark cycles you could set so your broccoli would be at its nutritional maximum just when you are ready to cook dinner.

Current Biology, 2013 DOI: 10.1016/j.cub.2013.05.034 (About DOIs)



Protein from sushi snack may help detect liver diseases


Researchers have discovered a fluorescent protein in a Japanese eel consumed as a popular sushi snack. Amongst other applications, the discovery could help provide a simpler and more sensitive test to detect jaundice and other diseases.

The idiom “seeing is believing” is what drives many biologists to use fluorescence microscopy, where specifically tagged proteins glow when a laser is shone on them. This glow allows researchers to observe phenomena inside cells at very minute scales (at some billionths of a meter).

The importance of one class of proteins that are used as tags, called green fluorescent proteins (GFPs), was recognised by the 2008 Nobel Prize in Chemistry. But so far, all the GFPs have been derived from invertebrate animals—those that lack a backbone—such as jellyfish and corals.

The discovery of the new fluorescent protein, named UnaG after the Japanese eel unagi, is important not just because it comes from a vertebrate, but also because it differs from any of the fluorescent proteins currently available.

The first GFP was discovered in a jellyfish called Aequorea victoria almost 50 years ago. Since then, tiny tweaks to this GFP and a few others that were discovered later on have given researchers a set of reliable tools to help them probe how a cell works (many of the tweaks have gotten the protein to glow in a rainbow of colors). By attaching GFP to the proteins they’re interested in, scientists can pinpoint and study individual proteins among thousands that are at work in a cell.

The discovery of a new fluorescent protein was recently published in the journal Cell. A team led by Atsushi Miyawaki at the RIKEN Institute in Japan found UnaG when they were studying the muscle fibers of freshwater eels. The protein is unique not just because it is the first found in a vertebrate, but also because of the way it fluoresces. In most GFPs, the chromophore (the part of the molecule that can absorb and emit light) is part of the protein itself. UnaG glows by integrating a molecule from outside the protein.

This molecule turns out to be bilirubin, which is present in eel muscles but is also formed when haemoglobin breaks down in human blood. Levels of bilirubin have been used for decades to assess liver health and diagnose diseases such as jaundice. So this feature of UnaG gives it the potential to detect bilirubin and act as an indicator for liver malfunction.

Binding to bilirubin gives UnaG some additional properties that no other GFP currently has. First, since its chromophore isn’t part of the protein itself, it is only half the size of current GFPs, which makes it easier to use it to tag proteins without interfering with their function. Second, most GFPs must react with oxygen to produce their chromophore and thus become fluorescent. UnaG does not. This means it could enable the illumination of cells in tissues where oxygen is scarce, such as some cancerous tumours.

Biology has always been driven by the ability to “see” nature. Modern tools have allowed scientists to see beyond what the naked eye could offer. And the race is on to see phenomena happening on ever smaller scales. UnaG is a promising step in that direction.

Cell, June 2013. DOI: 10.1016/j.cell.2013.05.038 (About DOIs)

Luc Henry is a postdoctoral fellow at the Swiss Federal Institute of Technology in Lausanne.

This article was first published at The Conversation.



Electrons accelerated on the wildest roller coaster on Earth

The vacuum chamber that's the business end of this accelerator. Note graduate student placed in the backdrop for scale.

A lot of good science is driven by the availability of technology. The laser and nuclear magnetic resonance spectroscopy (the technique underlying MRI) have had an incredible impact on science, medicine, and Western society in general. One key stage in many of these technological developments has been the transition from something like a national facility (through an institute-level facility) to an in-house instrument that individual research groups have and use on a routine basis.

Accelerator facilities, which provide beams of high energy electrons and/or X-ray photons, are still at the national facility stage. A new accelerator technology, however, is promising to change all that. In the not-so-distant future, every science department may have ready access to high-energy electrons and X-ray lasers right in their basement.

Currently, accelerators that can provide beams of electrons in the giga-electronvolt (GeV) range are massive devices. For instance, the electrons that power the X-ray laser at Stanford are accelerated to around 17 GeV over a distance of about 1 km. Not only is this simply not feasible to build at every university, but many Western countries can't even afford one of these, so the few that exist end up serving scientists from around the world.

You'll notice that the example I used is an X-ray laser source. The only way to generate intense laser pulses in the X-ray region is by deliberately accelerating electrons in such a way that they generate coherent light. These devices, called free electron lasers, have been around for many years. But they are big devices that require their own staff to operate and maintain.

To get to do an experiment at a free electron laser facility, you have to put in a proposal. Both you and the scientific staff at the facility have to agree that the experiment will generate something useful and interesting. That makes applicants think very hard about what sort of experiments they want to do. Unfortunately, this means they tend to choose relatively safe experiments. There are no Friday night, do-it-to-see-what-happens experiments at free electron laser facilities.

This would change if it were possible to generate beams of high-energy electrons in the confines of your own basement. This is not an easy task, and the usual approach is to gently accelerate electrons up to the required energy over rather long distances. An alternative, however, is laser wakefield acceleration, which achieves very high energy in a very short distance.
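
How short is short? The figure of merit is the accelerating gradient: energy gained per metre of machine. A rough comparison using the Stanford numbers quoted above and the roughly 1 GeV per centimetre achieved by the wakefield results discussed below:

    # Average accelerating gradient = energy gained / acceleration length.
    linac_energy_gev, linac_length_m = 17.0, 1000.0   # conventional machine, as quoted above
    lwfa_energy_gev, lwfa_length_m = 1.0, 0.01        # ~1 GeV over ~1 cm (wakefield result below)

    print(f"RF linac:  {linac_energy_gev / linac_length_m * 1e3:9.1f} MeV per metre")
    print(f"wakefield: {lwfa_energy_gev / lwfa_length_m * 1e3:9.1f} MeV per metre")
    # Roughly 17 MeV/m versus 100,000 MeV/m: thousands of times steeper, which is why a
    # kilometre-long tunnel could in principle shrink to a room-sized device.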

Laser wakefield acceleration is one of the coolest things in physics. And the best way to see it is to take a ride on one of the electrons. So there you are, meandering around your home atomic nucleus with your friends. The community is a bit spread out because you are part of a dilute gas in a vacuum chamber. But all is calm and happy.

In the distance, you observe a big, angry-looking light pulse that is rapidly approaching. Having experienced the disruption of a passing light field before, you brace yourself for a bit of a bouncy ride. However, this ride exceeds even your worst expectations. The light field grabs you and all your friends and shakes you so violently that you are thrown out of your comfy atomic home and into the vacuum. Then, without so much as an "oops, sorry," the light pulse moves on.

All those lovely bare nuclei just seem so attractive now—they should, being positively charged and you negatively charged—and there are so many of them, so close, just waiting to be neutralized. This sets up the equivalent of a massive downhill slope, with you and your electron-friends perched precariously at the top in a soapbox derby car that has no brakes. You accelerate down the potential hill toward the waiting nuclei.

But by the time you get there, the light pulse has created even more attractive-looking nuclei in the distance—you know the sort, with a jacuzzi and a bit of backyard to call your own. The hill steepens and seems without end, and there is no other direction but down. You race on down the potential hill, still accelerating.

You keep chasing the light pulse, driven by the cloud of electrons racing behind you and all those positive charges right in front of you. All of a sudden, the light pulse accelerates away and vanishes. Shortly afterward, you blow through the last of the charged nuclei and find yourself drifting in the vacuum at a speed that is very close to the speed of light, and a kinetic energy of around a GeV.

All of this happens over a distance of just a centimeter or so, and over a few picoseconds. It is truly the electron's equivalent of being struck by a tornado and finding out that the promised basement is actually a third-floor balcony.

The latest development is a good news/bad news joke, accompanied by a terribly overblown conclusion. The good news is that scientists at the University of Texas in Austin have demonstrated that laser wakefield acceleration can generate electrons with energies over 1GeV. This was not possible previously.

To achieve this, though, required that the density of the gas used to provide the electrons be reduced by an order of magnitude. Unfortunately, as the gas density drops, the required laser energy increases. So instead of using a Big Laser™, the researchers built a Really Big Laser™. To put it in perspective, a typical laser wakefield acceleration experiment uses a laser that produces something like ten laser pulses per second, with each pulse having about 1 J of energy (average power flow during the pulse is about 10 TW). And that is a complicated beast, involving one laser and around four amplifier stages, occupying two to three large tables.

The laser in this experiment emits a pulse once per hour, but each pulse has 120 J of energy. The power flow in this system is around 1 PW. I suspect that this system requires considerably more than three tabletops for the optical components, never mind all the support services.
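
The terawatt and petawatt figures follow directly from dividing pulse energy by pulse duration. A quick sanity check, with the femtosecond-scale pulse lengths assumed for illustration (the paper's exact values may differ):

    def peak_power_watts(pulse_energy_j, pulse_duration_s):
        """Approximate peak power, treating the pulse as roughly flat in time."""
        return pulse_energy_j / pulse_duration_s

    typical = peak_power_watts(1.0, 100e-15)     # ~1 J in ~100 fs: the "Big Laser"
    petawatt = peak_power_watts(120.0, 150e-15)  # ~120 J in an assumed ~150 fs: this experiment's class
    print(f"typical wakefield driver: {typical / 1e12:.0f} TW")
    print(f"petawatt-class laser:     {petawatt / 1e15:.1f} PW")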

In any case, the increased pulse energy provides electric fields sufficiently large to allow laser wakefield acceleration in low-density plasmas. Even better, their system hasn't reached its limit. Currently, the laser focus isn't that good, so even higher intensities—and therefore larger accelerating fields—can be reached with some optimization. Under optimum conditions, they should be able to reach 10GeV. At that energy, they are less than a factor of two short of the most energetic free electron lasers.

Unfortunately, at one shot per hour, that laser isn't going to be very useful. Keeping experimental conditions under control and repeatable for hours on end while you wait for the laser to cycle so you can get just a few measurement points sounds like a nightmare. To really make this thing useful, I would say that they need to get to something like a shot per second, with ten shots per second as a better target.

In the conclusions, though, lurked the kind of statement that gives scientists a bad name: "LPAs [laser-plasma accelerators] at these densities are required for tabletop X-ray free electron lasers and to achieve the ~10GeV stage envisioned for a future laser-driven collider." Calling this, or any like system, "tabletop" is like calling me a "Nobel Prize-winning physicist"—the sort of exaggeration that causes the world to wobble on its axis. And of course, this was precisely the language that the press release (and many ensuing press reports) picked up and used.

Even if we accept that the laser system can be halved in size—something that seems challenging—this device will occupy at least two, and more likely three, heavily shielded rooms. Yes, the chamber where the electron bunch is accelerated is small; in that respect it is already a tabletop system. But really, even that small part requires a good vacuum system, gas injectors, supply, and diagnostics. All of which adds up to a considerable amount of space.

The point is that this doesn't have to be a tabletop device. Hell, most research laser systems are only tabletop if you ignore the large box sitting beside the table, so I don't get the obsession with the label. Even if this tech requires the accelerator to be in a separate building (or in the basement of a building), it would be a considerable improvement over a 1 km beam line. The exaggeration was not just unnecessary, but considering that most users are not experts in the field, it raises false expectations that could end up damaging the credibility of accelerator physicists.

Nature Communications, DOI: 10.1038/ncomms2988



The story behind the island that wasn’t there

Sunset over the Coral Sea as seen from the deck of the RV Southern Surveyor.

Last November, an oddball news story caught attention around the world. A research vessel in the Coral Sea, northeast of Australia, discovered that a small island on the map didn’t actually, well, exist. It wasn’t a victim of sea level rise or a David Copperfield illusion—there just wasn’t anything there. A lot of the news coverage at the time was (quite appropriately) of the “Gee—whaddya know?” variety, but the researchers who “undiscovered” (de-scovered?) Sandy Island recently published a paper in Eos detailing answers to deeper questions—how did the island get on the map in the first place and what can we learn from its undiscovery?

The vessel, the RV Southern Surveyor, wasn’t out checking on islands to make sure they were still there. As University of Sydney PhD student Sabin Zahirovic told Ars, the research group on board was collecting measurements of the seafloor to aid in piecing together the plate tectonic history of the Coral Sea region, which is poorly understood. That included things like seafloor depth, magnetism (magnetic signatures are locked into the rock as it cools from magma at mid-ocean ridges), gravity measurements (which detect variations in rock density), and good old-fashioned rock samples.

It was with that work on their mind that they noticed something odd. “Close to New Caledonia, we noticed this blob that represented Sandy Island on our scientific charts,” Zahirovic said. “At that time we were unaware of the island's name because it did not appear in the main navigational chart, but only in our scientific charts and cached version of Google Earth.”

Curious, they wanted to go and have a look. The ship’s captain played it cautiously, charting a course that would skirt the edge of the potentially non-existent island. “It is such a remote region that it would not be a surprise if the navigational charts were incorrect—since many of the data points in that region were from Captain Cook's 18th century manual depth measurements using an immensely long rope and a lead weight. In addition, the ocean floor is an evolving environment with submarine volcanoes (called seamounts) that can pop up in a single day, meaning that even the most up-to-date maps would be of no help,” Zahirovic said.

But the instruments showed no change in depth as they passed by—the seafloor remained 1,500 meters or so below the surface. The navigational chart was correct, and their scientific charts (and Google Earth) were in error.

When they got back on land, word got out, and scientists started scouring all kinds of maps and data sets. Some thought the researchers must have made a navigational error of some kind—after all, you could see the island in the raw satellite measurements of sea surface temperature or ocean chlorophyll content. But those data sets were not, in fact, raw. A map of “known” land areas is applied to the data to eliminate measurements that don’t belong. Even digital global ocean depth charts fell prey, forcing mapped land area to be above sea level as an a priori assumption.
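
That masking step is mundane but consequential: any pixel flagged as land in the shoreline database is simply blanked out of the ocean products, so a phantom island propagates into every downstream data set and then looks like independent confirmation. Schematically, with invented values:

    import numpy as np

    # Toy 4x4 grid of sea-surface temperature retrievals (degrees C).
    sst = np.array([[26.1, 26.3, 26.2, 26.0],
                    [26.4, 26.5, 26.3, 26.1],
                    [26.2, 26.4, 26.2, 26.0],
                    [26.0, 26.1, 26.1, 25.9]])

    # "Known land" mask from a shoreline database; True marks a charted island.
    land = np.zeros(sst.shape, dtype=bool)
    land[1, 2] = True                        # the phantom Sandy Island pixel

    masked = np.where(land, np.nan, sst)     # ocean values dropped wherever the chart says land
    print(masked)                            # the resulting hole now looks like evidence of an island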

In Google Earth, the area showed up as a black patch with no satellite imagery. The area was supposed to be an island, but no land appeared in the satellite images, which were consequently automatically excluded.

Although the troublesome non-island found its way into public data sets, critically including the widely used World Vector Shoreline Database, it wasn’t present on many navigational charts. “It was only once we returned to shore that the Australian Hydrographic Service confirmed that they had in fact surveyed the entire region and removed the island in 1985, while the French Hydrographic Service had already removed it in 1974,” Zahirovic told Ars.

If some groups already knew the island didn’t exist, why didn’t that become common knowledge? Zahirovic explained that hydrographic services are usually part of national navies, which may not release the data (or may release it behind a paywall). So while many of the hydrographic services got word, the information didn’t make it into public data sets. “Another reason is the cautionary approach, since nobody wants to wrongly remove an island from charts and [be responsible for] a ship running aground,” Zahirovic said. “From our discussions at the American Geophysical Meeting in December 2012, we had information that it requires Congressional approval to remove these phantom islands from charts due to the possible hazard of incorrectly removing an island.”

Those sorts of things help explain how the non-island could have persisted for so long, but where did it come from in the first place? Did some cartographer in the 1800s mischievously draw it in, knowing that a day would come when those minor pen (or quill) strokes would thoroughly trick some robots orbiting high above the Earth? Entertaining as that idea may be (imagine this person trying to explain the prank to a friend), that probably wasn’t the intent, even if it was the outcome.

The earliest major map on which Sandy Island appeared seems to have been a 1908 British admiralty chart. The map shows an island the same size and shape in Sandy Island’s location. The chart notes that the island was reported in 1876 by a whaling ship named Velocity.

There are several possibilities that could explain that report, as the Eos paper lays out. Was there a shallow island there then that has since eroded just below the waves? Probably not, as that should have been apparent in the RV Southern Surveyor’s depth measurements. Of course, it’s possible that the Velocity made a navigational error and had actually spied land somewhere else. And it’s also possible that a mistake was made somewhere along the line of hand-copied maps.

But giving the able crew of the Velocity and the cartographers who worked on those maps the benefit of the doubt, the researchers realized there was another plausible explanation. Volcanic eruptions below the ocean surface are not uncommon in the region and can result in huge “rafts” of floating pumice that drift with the currents.

In fact, the researchers note, a pumice raft from a 2001-2002 eruption near Tonga drifted within 20 kilometers of Sandy Island’s location. That wasn’t just a one-off coincidence—the currents frequently push pumice rafts from the volcanically active Tonga region westward through that general area. Perhaps the Velocity spotted one such raft and mistook it for an island, accidentally leading over a century of maps astray.

It’s a fun story, but it’s also a lesson for scientists. Global data sets like the handy World Vector Shoreline Database have an “official” feel to them, but that doesn’t make them infallible. It can be critical, as well, to understand how each data set is created, as the processing of land areas in ocean satellite data showed.

The Sandy Island episode highlights the need for the curation of global data sets. Sabin Zahirovic told Ars, “There is very little money for establishing and maintaining scientific databases, considering it is often seen as not ‘doing’ science but just storing and managing data.” Paywalls certainly don’t make the compilation of data any easier, either.

Finally, the researchers emphasize the value of funding data-gathering voyages, whether of the traditional variety or utilizing autonomous vehicles. “It does highlight that the surface of the Moon and Mars are better explored than our oceans,” Zahirovic said, “and future efforts should be aimed at revealing the secrets of the ocean floor and conducting basic marine research. The ocean floor helps us understand how our planet's surface formed but also supports immensely unique and sensitive ecosystems that need to be catalogued and studied.”

“The media coverage of James Cameron descending into the Marianas Trench (the deepest point on Earth's surface) in a submersible in 2012 helped raise this issue, and we hope that further development of marine technologies will help us better understand our blue planet.”

Eos, 2013. DOI: 10.1002/2013EO150001  (About DOIs).



Packed star system may have three habitable super-Earths

The number of potentially habitable planets continues to grow. This week, a team of astronomers provided an update on GJ 667C, a star known to host two super-Earths, based on past observations. Further observations, along with some refined statistical methods, now indicate that there are likely to be at least six planets in the system (and possibly a seventh), all packed in a region that's about half the distance from the Earth to the Sun. Although they're all much closer to the host star, the star is quite a bit dimmer, which also shifts the habitable zone such that two of the planets fall squarely within it.

GJ 667C is part of a three-star system in the direction of the constellation Scorpius. The stars orbit each other at a sufficient distance, however, that GJ 667C's companions don't interfere with the planetary orbits. Initial observations of the star were made with a spectrograph (the HARPS instrument), which detects subtle shifts in the wavelengths of the light emitted by the star. Some of these shifts are caused by changes in the star's activity, but others are caused by its motion toward or away from Earth, which shifts the light to higher or lower frequencies, respectively. One of the things that can cause this motion is the gravitational pull of orbiting planets, which tugs the star ever so slightly closer to or farther from Earth.

GJ 667C is a type of star called an M-dwarf that is smaller than the Sun. Because of its small size, it's possible to detect even relatively light planets due to their pull on the host star. The ease of detecting planets was one of the reasons that the star was originally targeted for observations, and that paid off with the discovery of the exoplanets GJ 667Cb, a super-Earth close to the star (at 0.05 Astronomical Units), and GJ 667Cc, at about 0.12 Astronomical Units.
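
The size of that pull, expressed as the star's back-and-forth velocity, scales inversely with the stellar mass, which is why a small M-dwarf is such a forgiving target. Plugging round numbers into the standard radial-velocity formula (circular orbit, edge-on view, and illustrative masses, all assumptions) gives a signal of a couple of metres per second, within reach of HARPS but far smaller around a Sun-like star:

    import math

    G = 6.674e-11
    M_SUN, M_EARTH, AU = 1.989e30, 5.972e24, 1.496e11

    def rv_semi_amplitude(m_star_kg, m_planet_kg, a_m):
        """Stellar reflex velocity (m/s) for a circular, edge-on orbit (e = 0, sin i = 1)."""
        period = 2 * math.pi * math.sqrt(a_m**3 / (G * (m_star_kg + m_planet_kg)))
        return ((2 * math.pi * G / period) ** (1 / 3)
                * m_planet_kg / (m_star_kg + m_planet_kg) ** (2 / 3))

    # A ~5 Earth-mass super-Earth at 0.12 AU around a ~0.33 solar-mass star (illustrative values).
    print(f"{rv_semi_amplitude(0.33 * M_SUN, 5 * M_EARTH, 0.12 * AU):.1f} m/s")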

But a team of researchers has gone back and re-evaluated the system, combining some new observations from HARPS and other instruments with new methods of both modeling the star and identifying planets. The new model of the star allowed the authors to determine how variable it normally is; the answer turned out to be "not very." (In technical terms, the data "puts the star among the most inactive objects in the HARPS M-dwarf sample.")

With the background activity of the star controlled for, the authors looked at the remaining signals for indications of a planetary orbit. The method they used was fairly standard: search for a clear signal and, once it was identified, remove it and look for a signal in what remained. As each planet was identified, the authors checked whether including it in their model of the exosolar system fit the actual data better. The two previously identified planets quickly popped out with a very significant signal by this test, as did a third (each of them improved the model by a factor of anywhere from 10^12 up to 10^37). And signals continued to pop out of the analysis, indicating three additional planets (GJ 667Ce-g), each of which improved the model by about a thousand-fold. There was also a hint of a planet close to the host star at just under 0.1 Astronomical Units, though it failed the authors' statistical tests.

The authors tested whether a system with planets in these apparent orbits would be stable, and it appears that it would be, provided none of the planets were much above their potential minimum masses. That would make most of them super-Earths, with the lone exception being an Earth-sized body.

Based on the brightness of GJ 667C, it's possible to calculate where the potential habitable zone would reside around the star. On the inner edge of the zone, enough water enters the atmosphere that it reaches altitudes where the incoming stellar radiation can dissociate it, allowing the hydrogen to escape into space. GJ 667Cc is right at this boundary, but its high mass means that it might be able to retain water in the atmosphere despite the heat. GJ 667Cf is squarely within the habitable zone, while GJ 667Ce is farther out, but still close enough that a healthy dose of greenhouse gases like methane and carbon dioxide would warm it enough to keep water liquid.
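
The location of that zone follows from a simple inverse-square scaling of the Earth-Sun case: the distance at which a planet receives Earth-like levels of starlight grows as the square root of the star's luminosity. A sketch, taking GJ 667C's luminosity to be a little over one percent of the Sun's (an assumed round number for illustration):

    import math

    def earth_equivalent_distance_au(luminosity_solar_units):
        """Orbital distance (AU) receiving the same stellar flux Earth gets from the Sun."""
        return math.sqrt(luminosity_solar_units)   # from flux ~ L / d^2

    l_gj667c = 0.014   # assumed luminosity in solar units
    print(f"Earth-like insolation at ~{earth_equivalent_distance_au(l_gj667c):.2f} AU")
    # ~0.12 AU, which is why GJ 667Cc, orbiting at about that distance, sits near the inner edge.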

All of these assumptions are based on the presence of an atmosphere and a reflectivity similar to Earth's. If those aren't present, then either of the two planets that are closer to the edge of the habitable zone could easily shift out of it. (Read this for more details of these complications.) Another consideration is that these planets are close enough to the host star that they could be tidally locked like our Moon, constantly showing a single face to the body they orbit. This can create a hot spot directly facing the host star, with progressively cooler zones farther from that, possibly allowing liquid water in a ring around one region of the planet. This configuration has been nicknamed an "eyeball Earth."

The authors expect that further observations should be able to tell us definitively whether there's a GJ 667Ch. But the planets are so tightly packed that trying to fit any additional planets inside the orbit of GJ 667Cg would destabilize the system. That doesn't rule out anything farther out, but those are getting far enough from the host star that, unless they're massive, they'll be difficult to detect. In general, the system continues what seems to be a trend with dwarf stars: it hosts a tightly packed system of rocky bodies a very short distance from the star. Given that these dwarfs emit far less intense light, that places the planets in and near the habitable zone with great frequency. Since planets near dwarfs are relatively easy to discover, habitable zone discoveries may become relatively commonplace in the future.

One minor negative in all of this is that the decision to name planets in order of their discovery is starting to create some very confusing exosolar systems. Assuming planet seven is confirmed at GJ 667C, then the order of planets will end up being b, h, c, f, e, d, and g, with g being farthest from the star. All of which makes keeping track of which planet is where rather challenging.

The arXiv. Abstract number: 1306.6074  (About the arXiv). To be published in Astronomy & Astrophysics.



Cassini captures gigantic hurricane on Saturn in exquisite detail

Jupiter's Great Red Spot may get most of the attention, but it's hardly the only big weather event in the Solar System. Saturn, for example, has an odd hexagonal pattern in the clouds at its north pole, and when the planet's seasons brought the pole into sunlight, the light revealed a giant hurricane embedded in the center of the hexagon. Scientists think the immense storm may have been there for years.

But Saturn is also home to transient storms that show up sporadically. The most notable of these are the Great White Spots, which can persist for months and alter the weather on a planetary scale. Great White Spots are rare, with only six having been observed since 1876. When one formed in 2010, we were lucky enough to have the Cassini orbiter in place to watch it from close up. Even though the head of the storm was roughly 7,000 km across, Cassini's cameras were able to image it at resolutions where each pixel was only 14 km across, allowing an unprecedented view into the storm's dynamics.

The storm turned out to be very violent, with convective features as big as 3,000 km across that could form and dissipate in as little as 10 hours. Winds of over 400 km/hour were detected, and the pressure gradient between the storm and the unaffected areas nearby was twice that of the one observed in the Great Red Spot of Jupiter. By carefully mapping the direction of the winds, the authors were able to conclude that the head of the White Spot was an anti-cyclone, with winds orbiting around a central feature.

Convection that brings warm material up from the depths of Saturn's atmosphere appears to be key to driving these storms. The authors built an atmospheric model that could reproduce the White Spot and found that shutting down the energy injection from the lower atmosphere was enough to kill the storm. In addition, observations suggest that many areas of the storm contain freshly condensed particles, which may represent material that was brought up from the lower atmosphere and then condensed when it reached the cooler upper layers.

The Great White Spot was an anticyclone, and the authors' model suggests that there's only a very narrow band of winds on Saturn that enables the formation of one. The convective activity won't trigger a White Spot anywhere outside the range of 31.5° to 32.4°N, which probably goes a long way toward explaining why the storms are so rare.

Although the 2010 storm was also monitored using Earth-based instruments, both it and the polar hurricane show the value of having an observatory in residence at Saturn (and, by extension, at other planets). The local observations allow a much more detailed view of what's happening within the Great White Spot; without them, it would have been impossible to build such a detailed model of the event. In the case of the polar hurricane, the feature was largely invisible except in the infrared for most of Cassini's time in orbit, as no light reached it during the northern hemisphere's winter. The storm could only be imaged because Cassini has been in place long enough, and on a carefully planned orbit, to make the observations.

Nature Geoscience, 2013. DOI: 10.1038/NGEO1860  (About DOIs).



Two accelerators find signs of a particle that nobody can explain

The Belle detector at Japan's KEK facility.

Two different accelerators have found evidence for a particle that appears to contain four quarks, according to papers published in Physical Review Letters. Although particles with two and three quarks are common, this would be the first time that something containing four quarks has been spotted. Depending on the precise nature of the interactions among the quarks, this could be a discovery that keeps the theoreticians very busy.

With the discovery of the Higgs boson, the predicted collection of fundamental particles was complete. But one family of these fundamental particles—the quarks—combines with gluons to make more complex particles: baryons and mesons. Baryons, which include the proton and neutron, are formed from combinations of three quarks. Mesons, which are unstable, are composed of quark-antiquark pairs.

Having only two quarks would seem to make mesons fairly simple, when actually they're anything but. There are three generations of quarks, each containing two flavors, and every quark has a corresponding antiquark; a meson can be built from nearly any quark-antiquark pairing. Mesons also account for some of the more amusing nomenclature in physics, with those built from a strange quark and its antiquark referred to as strangeonium, and those built from a bottom quark and its antiquark as bottomonium.

Earlier collider experiments had suggested that the presence of a meson containing a bottom quark and its antiquark (bottomonium) might be associated with a heavier particle of unknown properties. So two different teams, one working at a collider in Japan, the other at one in Beijing, decided to look at whether the same was true of charmonium. (Both colliders are electron-positron machines, which have many advantages despite their relatively low energies.)

The teams looked at events that included a J/ψ meson, a single particle (with two names, since two groups announced its discovery simultaneously) composed of a charm quark and a charm antiquark. To do this, they scanned the data sets generated by two different detectors: BES III in Beijing and Belle at the KEK facility in Japan. The researchers pulled out those events that included a J/ψ and a pair of π particles (another type of meson), with the J/ψ being spotted through its decay into either an electron and positron or a muon and antimuon. With those in hand, they searched for indications that the J/ψ was the product of the decay of a heavier particle. (They do this by looking at what's called the "structure of the mass spectrum.")

Both teams found something at 3.9 GeV, which they're terming Zc(3900) due to its apparent mass of 3,900 MeV. But its presence is coupled to the appearance of charged π particles, which suggests that the new particle itself is charged. Since a charm quark paired with a charm antiquark is electrically neutral, a charged particle that decays into charmonium must contain at least two additional quarks, meaning it is probably composed of four quarks. And, as mentioned above, particles with four quarks have not been previously detected.

The results have a statistical significance well above that required to count as a discovery in particle physics, and the fact that there seems to be a similar heavy particle for bottom quarks suggests that this may be a common feature for all the quarks. There's also nothing in particular that rules out four-quark particles, but we've gotten pretty deep into the era of particle physics without ever detecting one, so the results are surprising.
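For context on that threshold: particle physicists conventionally require a five-sigma excess before claiming a discovery. The sigma levels below are conventions of the field, not figures quoted in these papers; this is just a minimal sketch of what they mean in terms of probability.

# Converting a sigma level into the one-sided probability that a Gaussian
# background fluctuation alone would produce an excess at least that large.
# The 3-sigma ("evidence") and 5-sigma ("discovery") levels are field
# conventions, not values taken from the papers discussed here.
import math

def one_sided_p_value(n_sigma):
    """Chance of a background fluctuation of at least n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (3, 5):
    print(f"{n} sigma -> p = {one_sided_p_value(n):.1e}")
# 3 sigma -> p = 1.3e-03
# 5 sigma -> p = 2.9e-07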

Most of the debate, however, seems to focus on how exactly four quarks could combine. One option is that they bind directly, in the same way that two quarks combine with gluons to make mesons and three combine with gluons to form baryons. The alternative would be what some reports are calling a "meson molecule," where a pair of two-quark mesons is held together by an attractive force. The problem with the latter option, as noted by Nature News, is that the molecule should be less stable than its constituent mesons. But the detectors see no sign of it splitting apart before it decays.

Given the clear method for spotting the Zc(3900) laid out by these papers, it should be easy for other groups to comb through their data and look for similar events, which may shed some more light on the particle's properties. In the meantime, it's a safe bet that theorists will be looking carefully at various forms of four-quark particles (molecular and otherwise) to see what sort of predictions they can make about the particle's behavior.

Physical Review Letters, 2013. DOI: 10.1103/PhysRevLett.110.252002, 10.1103/PhysRevLett.110.252001  (About DOIs).



Obama to reveal “plan to prep for the impacts of climate change” on Tuesday

At the top of WhiteHouse.gov today, "Take Action on Climate Change."

On Tuesday at Georgetown University, President Barack Obama plans to deliver on his biggest promise from the 2013 inaugural address. "In my inaugural address, I pledged that America would respond to the growing threat of climate change for our children and future generations," the president states in a new video on WhiteHouse.gov. So this week, he'll lay out his vision of where the US needs to go with "a national plan to reduce carbon pollution, prep our country for the impacts of climate change, and lead global efforts to fight it."

Obama made headlines back in January for, among other things, making a bold, public promise on climate change policy. He devoted eight sentences of his inauguration speech to climate change—more than any other topic, the New York Times noted—and did not sugarcoat it at the time. “Some may still deny the overwhelming judgment of science,” Obama said. “But none can avoid the devastating impact of raging fires, and crippling drought, and more powerful storms.”

In this new teaser video, Obama recognizes there is "no single step that can reverse the effects of climate change." He makes reference to the need for collaborative work from scientists and farmers, engineers and businesses, and all US citizens and employees alike. However, the video doesn't reveal any specifics. It leads to the WhiteHouse.gov/climate-change page, which does the same.

Many following the situation—USA Today, Reuters, and Ars Science Editor John Timmer—anticipate that the president's plan will include bringing existing power plants under the Clean Air Act's carbon rules (previously, these applied only to new construction). That type of action would be in line with recent statements from Heather Zichal, a top energy and climate adviser for the White House. During The New Republic's energy and environmental forum this week, Zichal told reporters, "In the near term, we are very much focused on the power plant piece of the equation."

On Sunday, the Organizing for Action team (a community organization group associated with BarackObama.com and a successor of Obama for America) sent out its e-mail notice about the announcement. The group warned that this climate change initiative will be met by strong opposition:

The powerful, well-financed forces who still deny the science behind climate change aren't going to like this—and they'll be fighting this progress every step of the way. In fact, before he's even seen the plan, House Speaker John Boehner is calling it "absolutely crazy."

Boehner's comments came during a press conference on Thursday that was largely focused on immigration, but a reporter eventually asked about the upcoming climate change announcement (around the 6:57:00 mark).



Population boom might not have set off “human revolution”

Food from the Stone Age has raised doubts about the causes of the human revolution.

About 50,000 years ago, modern humans left Africa and began occupying the rest of the world. The common thought is that a sudden growth in population caused the so-called “human revolution,” which gave birth to language, art, and culture as we know it today. Now, based on something that’s not obviously related to human culture—the size of shellfish fossils—researchers have challenged that model.

Artifacts from two sites in South Africa, Still Bay and Howieson’s Poort, have convinced archaeologists that the period between 85,000 and 65,000 years ago was when the “human revolution” began. Humans from that time made jewelry from perforated shells and used objects as symbols. They made better tools than they ever had before. Some of these tools, made from ostrich eggshells, were even capable of slicing fruit.

It has been thought that this period also saw a sudden explosion in population growth. Now, Richard Klein from Stanford University and Teresa Steele from the University of California at Davis argue that archaeologists and anthropologists have got it wrong. In a study published in the Proceedings of the National Academy of Sciences, they say there is evidence to suggest that population did not actually explode during this period.

Klein and Steele’s evidence comes from the size of shellfish fossils from that period. Heavy human predation tends to leave shellfish populations with smaller shells, and there is no evidence of shellfish shrinkage during this time.

According to Robert Foley, an expert on the origins of modern humans at Cambridge University, the idea is not as far-fetched as it sounds. “Other things remaining equal, there is a strong relationship between shell size and human predation,” he said. This correlation can be seen in shellfish around the world today. Coasts where shellfish are collected en masse see animals with smaller shells than those where shellfish aren’t collected.

The evidence for Klein and Steele’s claim comes from South Africa, which seems to be the only place with well-preserved shellfish fossils from that period. They find that shell sizes during the Middle Stone Age (MSA, about 200,000 years ago) and the Later Stone Age (LSA, about 50,000 years ago) were not much different. Thus, they say, this period must not have seen a sudden population explosion, as many argue.

But Martin Zeigler, a climate geologist at the Swiss Federal Institute of Technology in Zurich, is not entirely convinced. “Environmental factors could affect shellfish size on the timescales that are studied here,” he said. And he is right: many factors are involved in determining shell size, such as water temperature, salinity, nutrient availability, and species population.

Klein and Steele argue that we will never have enough information on how much these factors played a role. But with studies on contemporary shellfish showing the same trend, it's hard to argue against the hypothesis without some evidence showing that it’s wrong.

Foley has respect for Klein’s work. He said, “They do tackle limitations in their study by pointing out that rising sea-level has removed the relevant sites from the recent LSA period of about 10,000 years ago. While one has to be cautious about the overall implications of their analysis for the evolution of modern humans, it is not very likely that rising population could ever be a complete explanation.”

Klein and Steele believe that the innovations that happened in Still Bay and Howieson’s Poort at the time cannot be explained by an enhancement in human ability to survive and reproduce. Instead, they argue that other reasons, such as climate fluctuations and genetic changes, may explain what caused the human revolution.

PNAS, June 2013. DOI: 10.1073/pnas.1304750110 (About DOIs)

This article was first published at The Conversation.



The International Linear Collider will be a Higgs factory

A simulation of an electron/positron collision producing a Higgs and Z boson.

The Large Hadron Collider (LHC) is currently undergoing upgrades that will allow it to finally reach its intended top energy of 14 TeV. When it comes back online, researchers will use it to probe the properties of the Higgs boson it discovered and to continue the search for particles beyond those described by the Standard Model. But no matter how many Higgs particles pop out of the machine, there's a limit to how much we can discover there.

That's because the hadrons it uses create messy collisions that are hard to characterize. The solution is to switch to leptons, a class of particles that includes the familiar electron. Leptons present their own challenges, but they allow for clean collisions at precise energies, so the machine produces little beyond the intended particles. So now the international physics community is putting agreements in place that will see a new lepton collider start construction before the decade is out, most likely in Japan.

Hadrons like the proton are composed of a mixture of quarks, gluons, and virtual particles. Their large mass is what makes them well suited to a circular accelerator: when fast-moving particles are bent around curves, some of their energy is lost as radiation, and the fraction lost drops steeply as the particle's mass goes up. So when lightweight electrons were sent through the curved tunnels that now house the LHC, the maximum energy they could reach was just over 100 GeV. Protons, in contrast, can go through those same curves at 7 TeV, meaning the collisions have significantly more energy.
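A rough sketch of the scaling involved, assuming the standard synchrotron-radiation formula and a bending radius of roughly 2,800 m for the LEP/LHC tunnel; the radius and the resulting losses are illustrative assumptions, not figures from the ILC report.

# Energy radiated per turn in a circular machine scales as E^4 / (m^4 * rho).
# The constant 8.85e-5 is the standard electron value in GeV and meters; the
# (m_e / m)^4 factor rescales it for heavier particles.

ELECTRON_MASS_GEV = 0.000511
PROTON_MASS_GEV = 0.938
BENDING_RADIUS_M = 2800.0      # assumed for the LEP/LHC tunnel

def loss_per_turn_gev(energy_gev, mass_gev, radius_m=BENDING_RADIUS_M):
    """Approximate synchrotron radiation loss per turn, in GeV."""
    electron_loss = 8.85e-5 * energy_gev ** 4 / radius_m
    return electron_loss * (ELECTRON_MASS_GEV / mass_gev) ** 4

for label, energy, mass in [("electron at 100 GeV", 100.0, ELECTRON_MASS_GEV),
                            ("proton at 7 TeV", 7000.0, PROTON_MASS_GEV)]:
    loss = loss_per_turn_gev(energy, mass)
    print(f"{label}: ~{loss:.2g} GeV lost per turn "
          f"({100 * loss / energy:.2g}% of the beam energy)")

At 100 GeV, an electron sheds a few percent of its energy on every lap, which is why pushing leptons much higher in a ring becomes impractical; a 7 TeV proton loses only a few kilo-electronvolts per turn.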

The problem with hadrons comes from the fact that they're made up of a bunch of individual particles, and there's no way of controlling how the particles are oriented when the collision takes place. Even when two protons smash head-on, the quarks inside them may end up colliding off-center. As a result, only a small fraction of the total energy ends up being available to create new particles. Since there's no way of predicting how much, reconstructing what happened during a collision gets rather complicated, especially since each collision is really multiple smaller collisions. Imagine trying to reconstruct what happened when two trucks used to ship cars collide when you don't know which cars started on what truck.
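A toy illustration of that unpredictability, assuming (unrealistically) that each colliding quark or gluon carries a uniformly random fraction of its proton's momentum; real parton distributions are far more complicated, but the point about the effective collision energy is the same.

# Each parton carries only a fraction (x1, x2) of its proton's momentum, so
# the energy actually available to make new particles is sqrt(x1 * x2) times
# the nominal collision energy, and it varies from event to event.
import random

random.seed(1)
SQRT_S_TEV = 14.0    # nominal proton-proton collision energy at the LHC

for _ in range(5):
    x1, x2 = random.random(), random.random()
    effective_tev = (x1 * x2) ** 0.5 * SQRT_S_TEV
    print(f"x1 = {x1:.2f}, x2 = {x2:.2f} -> about {effective_tev:.1f} TeV available")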

Leptons, in contrast, are fundamental particles; there's nothing inside them, so when they collide, all the energy gets put into a single, simple collision. This means you can see things like the Higgs without accelerating the electrons to anywhere near the energies used at the LHC. It's also relatively easy to collide electrons with their antimatter cousins, the positrons, which eliminates the particles themselves in the process. The end result is a very clean collision and, if it's at the appropriate energy (say, near the mass of a heavy particle), a strong likelihood of producing the particle you're interested in.

Since leptons don't like running in circles, the solution is to build a linear collider. If everything moves in a straight line, it's possible to get electrons up to high energies and keep them there. The problem is that a straight line means long tunnels and infrastructure spread along them. All of which means a large expense.

Even while the LHC was still being built, scientists started planning for a next-generation linear collider under the assumption that the LHC would spot the Higgs and physicists would want to study its properties in detail. Two competing camps were formed, one focused at Fermilab, the other at CERN.

The CERN group called its effort the Compact Linear Collider, or CLiC. To keep costs under control, CLiC would accelerate leptons within a shorter distance by transferring energy from one beam of electrons to the beams of electrons and positrons used for collisions. The result is a shorter (or, as the name implies, more compact) collider. The downside is that the technology remains unproven, so we don't know whether it would actually work in practice.

That dilemma left the focus on the International Linear Collider, which uses extensions of existing technology on a larger scale—a much larger scale. The initial plan for the ILC calls for the hardware to be housed in a tunnel 30 km long; to increase the energy later, another 20 km may be added. Initial plans were to build it at Fermilab, although the size meant that the tunnel would extend out under some of the neighboring towns. But the US Congress cut the funding a few years back, leaving the project in a bit of limbo. Two factors have changed that: the actual discovery of the Higgs, which gives the collider its purpose, and the Great East Japan Earthquake of 2011, which freed up money from the Japanese government as part of the country's recovery efforts.

(It's worth noting that Japan hasn't been designated the official site yet, but that is an offer that will be hard to refuse. Also worth noting is that the ILC and CLiC teams are partly competitors and partly collaborators.)

Last week, the ILC team published its technical design report, which lays out how the accelerator will function and what it might explore.

The planned layout of the ILC.

The design calls for two storage rings, one each for the electrons and positrons. Electrons are relatively easy to come by, but the positrons will be made on-site. Basically, the electron beam will be sent on a curved path, losing energy via the production of high-energy photons. These photons will be converted to electron/positron pairs; the electrons will be discarded, and the positrons will be sent into their storage ring. From the rings, the electrons and positrons will be sent 15 km in opposite directions, taken around a curve, and then fed into two linear accelerators, each 11 km long and pointed roughly at each other.

If future higher-energy extensions go forward, they would basically involve extending the tunnels and shifting the point where the electrons go around a bend farther out. More accelerator hardware would be attached to the back end of the existing linear accelerator, lengthening it so that the electrons spend more time being accelerated and come out at a higher energy.

There's a single beam-collision point at the center of the complex, which means only a single detector can take data from collisions. To provide the reassurance of reproducibility, there will be two detectors on rails, allowing one to be slid out and a second to be slid in to replace it. If there's enough shielding, it should be possible to perform maintenance on one while the second is operating.

Since the amount of acceleration can be tuned, the initial range of the accelerator runs from about 200 GeV to 500 GeV, enough to explore the production of the Higgs with a Z boson and see how the two interact (given that the Z is a relatively massive particle, the answer is expected to be "extensively"). At the high end, a pair of Higgs particles can be produced with a Z, allowing us to determine how the particle interacts with itself. Combinations of the Higgs and two top quarks are also possible (the top is the most massive particle we know about).
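A back-of-the-envelope check of why that energy range is the interesting one, using standard particle masses that aren't quoted in the article; the minimum collision energy for each process is roughly the summed rest mass of the particles produced, ignoring the extra energy needed to set them in motion.

# Rough kinematic thresholds for the processes mentioned, using standard
# (approximate) masses for the Higgs, Z, and top. All three land inside the
# ILC's planned 200-500 GeV range.
HIGGS_GEV = 125.0
Z_GEV = 91.2
TOP_GEV = 173.0

processes = {
    "Z + Higgs": Z_GEV + HIGGS_GEV,
    "Z + two Higgs": Z_GEV + 2 * HIGGS_GEV,
    "Higgs + top + antitop": HIGGS_GEV + 2 * TOP_GEV,
}

for name, threshold_gev in processes.items():
    print(f"{name}: needs at least ~{threshold_gev:.0f} GeV")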

Extending the energy up to 1 TeV would get us more elaborate combinations, but it's hard to see that much money being spent unless the LHC produces some evidence for supersymmetry, additional Higgs particles, or dark matter particles (or some combination of the above).

The LHC has plenty of time to get there, as a final site for the ILC hasn't even been chosen yet. It will take a few years to finalize the design specs and then roughly a decade to construct it, so operation probably won't start until the late 2020s. By that time, the LHC will probably have gone through a second round of upgrades and should have thoroughly explored all the particles accessible to the energies it can reach.



Cancer immunity of strange underground rat revealed

Researchers have discovered how one of the world’s oddest mammals developed resistance to cancer, and there is hope that their work could help fight the disease in humans.

Naked mole rats live underground, where environmental conditions are harsh but predators are few. They can live for more than 30 years, almost 27 years longer than their close cousin the house mouse—which is particularly susceptible to cancer. They breathe slowly due to the limited supply of oxygen, survive on very little food, have poor sight, and are largely indifferent to pain.

Naked mole rats are also the only mammals that do not regulate their body temperature. And because they live in colonies where the queen does the job of producing progeny and only a few males father the litters, their sperm has become sluggish.

For cancer researchers, mice and naked mole rats fall on two extremes of the disease spectrum. Mice are used as animal models of disease because of their short lives and high incidence of cancer, which help researchers study the mechanism of cancer occurrence and test drugs that fight the disease.

Naked mole rats, on the other hand, have never developed cancer in the years that they have been studied. In labs, researchers often don’t wait for their animal models to develop cancer. Instead they induce cancer by blasting the animals with gamma radiation, transplanting tumors, or injecting cancer-causing agents. Do that to a naked mole rat, though, and nothing happens.

Now, Vera Gorbunova and Andrei Seluanov at the University of Rochester think they may have found one mechanism by which naked mole rats defend themselves against cancer. Their results, reported today in the journal Nature, make for a strange tale.

While studying cells taken from the armpits and lungs of naked mole rats, they found an unusually thick chemical surrounding the cells. This turned out to be hyaluronan, a substance that is present in all animals, where its main job is to hold cells together. Beyond providing mechanical strength, it is also involved in controlling when cells grow in number.

Cancer relies on the unregulated growth of cells, so hyaluronan was thought to be involved in the progression of malignant tumors. According to Gorbunova, it may not just be the amount of hyaluronan that helps regulate cell growth, but also the length of its chains. Hyaluronan is a polymer, so the more molecules that are linked into a single chain, the heavier that chain becomes.

When the molecular mass is high, cells are “told” to stop increasing in number. When the molecular mass is low, they are “asked” to proliferate. In the case of the naked mole rat, Gorbunova found that the molecular mass was unusually high, as much as five times that of mice or humans.

To test whether this unusual hyaluronan was responsible for cancer resistance in naked mole rats, Gorbunova increased the amount of the enzyme that degrades the chemical, reducing its molecular weight. Soon after, she observed that the rat’s cells readily started growing in thick clusters, as cancerous mouse cells do.

In a separate experiment, she also tested this hypothesis by reducing the amount of hyaluronan itself, knocking out the genes responsible for its production. When a cancer-causing virus was then injected, the naked mole rat’s cells became cancerous instead of resisting.

Gorbunova thinks that having thick hyaluronan might have helped increase the elasticity of the rat’s skin, allowing it to live in small tunnels underground. This trait might then have accidentally developed a new role of preventing cancer.

Rochelle Buffenstein, a physiologist at the University of Texas Health Science Center, has studied naked mole rats for years and was pleased to see that some light has been shed on this creature’s remarkable resistance to cancer. “As we learn more about these cancer-resistant mechanisms that are effective and can be directly pertinent to humans, we may find new cancer prevention strategies,” she said.

Nature, 2013. DOI: 10.1038/nature12234  (About DOIs).

This article was published at The Conversation.



NASA’s launch of Sun-observing satellite to be carried live today



New tech may let current graphics cards drive a $500 holographic display

A 40-channel version of the new holographic display.

Three-dimensional films and TVs may seem cutting-edge, but existing technologies all require optical tricks to create the illusion of depth (in some cases, very old tricks). The only truly 3D display technology we have, holography, has primarily been limited to displaying static images. That situation has slowly begun to change, but the existing technology is complicated and expensive, and it suffers from a slow refresh rate.

Now, some researchers have come up with a completely different method of creating the light pattern necessary to build a holographic image. The functional units in their device can be manufactured for pennies: the researchers suspect they could build a large holographic display for as little as $500, one that could potentially be driven by a commodity PC sporting a suite of high-end graphics cards.

The key to building a hologram is the ability of photons to interfere with each other, creating patterns where some regions experience constructive interference and become bright, while others experience destructive interference and go dark. A carefully crafted diffractive element can bend and redirect light so that this interference pattern recreates the pattern of light that would have reflected off the surface of a three-dimensional object. Most importantly, this 3D appearance is retained even as the viewer's perspective shifts around the surface.
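A minimal numerical sketch of that interference, reduced to just two coherent point sources; the wavelength, spacing, and screen geometry are arbitrary choices for illustration, not parameters of the display described below.

# Two coherent sources produce bright fringes where their path lengths differ
# by a whole number of wavelengths and dark fringes where they differ by half
# a wavelength. A hologram is the same physics scaled up to a carefully
# engineered pattern of many such emitters.
import numpy as np

wavelength = 633e-9          # red light, in meters (illustrative)
k = 2 * np.pi / wavelength   # wavenumber
separation = 200e-6          # 0.2 mm between the two sources
screen_distance = 0.5        # screen half a meter away

x = np.linspace(-2e-3, 2e-3, 9)                       # points across the screen
r1 = np.hypot(screen_distance, x - separation / 2)    # path from source 1
r2 = np.hypot(screen_distance, x + separation / 2)    # path from source 2
intensity = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2)) ** 2

for xi, value in zip(x, intensity):
    print(f"x = {xi * 1e3:+.1f} mm -> relative intensity {value:.2f}")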

We have very mature technology that allows us to create a static surface that consistently displays a single image. But to get that image to move or to replace it with a different image entirely involves wholesale modification to the hardware that is creating the diffraction pattern. Even assuming that you can calculate what the new configuration of the hardware must be (something that's not especially easy), you would then have to reconfigure the hardware and rescan light over it. The existing examples of hardware that can do this have some serious issues.

Most of them involve liquid crystal, micro-mechanical hardware that physically alters its configuration. The authors of the new paper provide a laundry list of this technology's limitations: "relatively low bandwidth, high cost, low diffraction angle, poor scalability, and the presence of quantization noise, unwanted diffractive orders, and zero-order light." (The latter factors create visual artifacts in the display image.)

The device the researchers have created instead involves an array of components called anisotropic leaky-mode couplers. These act as waveguides for light while allowing the light travelling through them to be manipulated. When exposed to radio-frequency radiation, each coupler forms acoustic waves that alter the light travelling through it, rapidly changing the timing and direction of the light it emits in response to changes in the radio waves. By placing a number of couplers in close proximity, it's possible to get the light they emit to interfere (creating a hologram) and then change that hologram simply by altering the radio frequencies.

Other good features of the couplers are that they can be made to emit light with a single polarization, allowing a simple filter to cut out any imaging artifacts. And they can work with red, green, and blue light simultaneously, allowing a true-color hologram. This can also work with just about any light source.

You don't need many of these couplers to build a significant device; the authors estimate that about 500 of them would be all you'd need to build a single horizontal line of a one-meter-wide display. In the paper, they showed a device with 40 channels, and they're already testing one with 1,250 channels. The devices are also easy and cheap to make. Their 40-channel hardware cost $50 to produce at MIT's custom fabrication facility, but the authors estimate that an equivalent could be made at a commercial fabrication facility for somewhere in the area of $3.

Given the progress that's been made in getting graphics cards to generate holographic information, and since the radio-frequency control signals are compatible with analog video outputs, the authors think that a holographic display could probably be driven by a commodity PC with a bank of high-end graphics cards. The graphics cards would end up being the biggest expense in the hardware; the actual array of couplers would only cost about $500 to make. Any reasonably bright color LED could provide the light source. The end result would be a full-color hologram at standard video resolution with a refresh rate of about 30 fps.

Nature, 2013. DOI: 10.1038/nature12217  (About DOIs).



EPA steps away from fracking investigation in Wyoming



Jonas Mekas: New Works and His First Vine

ANIMAL stopped by the opening of Jonas Mekas’s exhibit “OUTLAW: NEW WORKS” at the Microscope Gallery last night. The 90-year-old Lithuanian filmmaker, “godfather of American avant-garde cinema” and founder of the invaluable Anthology Film Archives, was cheerful and friendly, despite the aggressive theme of his show.

I dedicate this Exhibition to all artists who had to go or are going through long money draining and annoying “dispute” trials with their gallerists. This show marks the occasion of my own “dispute” court case concluding its Fourth year.

- Jonas Mekas

There were also a few gorgeous film-strip-triptych canvases of flowers, grass, Mekas in the grass — a familiar poignant sweetness. All the while, a drunken accordion screech is coming from the corner with Mekas’s accented singing, undulating, gentle demon rasping about wanting to “piss on your grave” in his new video piece Sing. Sing to me, Blackberrybird. Like the “OUTLAW Xerox diaries” collages on the wall, the magnified torn-up bits of email exchanges. It’s hard to tell whether Mekas or his gallerist e-authored “I never lose” “don’t fuck with me” “hahahaha” and “ghetto avant garde.” Mekas survived the Holocaust, Nazi pursuits and political persecution for his writing and “obscene” works. Don’t fuck with Mekas!

Last night, something important happened. ANIMAL friend Rhett Jones of Art F City handed Mekas a phone. He tells ANIMAL that when Mekas saw our modern trinket in action, his face lit up. And then, Mekas made his first Vine and everything came full circle.

Head over to Art F City to see the first ever Jonas Mekas Vine.

That is exciting, validating and awesome. Mekas invented Vine. Sort of. His quick cuts, sped-up documentarian accounts via a 16mm Bolex are legendary. They encompass, communicate and emote so much of New York history and personal intimacy. That very aesthetic is what Vine – at its very best – is doing with its looping 6-second captures in the hands of a pedestrian documentarian. More Mekas Vines please!

Here are some Mekas classics for context.

Here are my Vines of the opening. Not as good as Mekas, obviously.


“OUTLAW: NEW WORKS,” Jonas Mekas, Jun 27 – Jul 29, Microscope Gallery, Brooklyn



Epic Tribute to Graffiti Artist NEKST Stretches for Blocks in Brooklyn

We’ve seen a lot of tributes for graffiti writers who have passed away, but never like this. Over the past few days, numerous artists have descended upon Williamsburg to put up NEKST, a prolific graffiti artist who passed away in December of 2012. For three blocks, the name NEKST appears over and over, each piece done in a different style by a different artist. Most of the dedication pieces were done by members of the mighty MSK crew, some of whom are in town to paint the Bowery Wall. (Photo: Aymann Ismail/ANIMALNewYork)



Parting Shot

Artist Federico Massa aka Cruz painted a breakthrough mural for Williamsburg Cinemas. (Photo: Federico Massa)



Upcoming Nick Cave Documentary Is Not a Documentary

Nick Cave pretends to chauffeur Kylie Minogue and Ray Winstone. Nick Cave watches Scarface with his twelve-year-old sons. He goes to a psychoanalyst, has lunch with Warren Ellis and visits the Nick Cave Archive at the “Melbourne Arts Centre” set. But not really.

For their upcoming “drama-documentary” 20,000 Days on Earth, conceptual artists and music history re-enactors Iain Forsyth & Jane Pollard don’t just respect Nick Cave’s mysterious side — they revel in it, setting up cinematic narratives and expository scenes for Cave to improvise and ad-lib in. “The thing that seems so kind of prevalent in contemporary music docs is that they’re all about getting behind something, revealing something, taking away the mask, taking away the myth,” Forsyth tells The Guardian. “The important thing for us was not breaking the mythology.”

The chauffeured car is the most explicit example of this metaphor-turned-narrative-device technique.

“The car becomes this place of imaginings, I guess, where the thoughts I’m having materialise in the forms of people that have played some part in my story,” Cave says.

The film has been secretly in progress since “Push the Sky Away” and will be released in 2014.

Speaking of artifice, here’s the Forsyth & Pollard-directed “Dig, Lazarus, Dig!!!” Nick Cave and the Bad Seeds video. That mustache though.



A Restaurant for Introverts

Eenmaal is one of Amsterdam’s newest restaurants, and it’s the first to offer only tables for one. I wonder if this idea could ever catch on in the US.

American news outlets barrage readers with articles and guides about “Eating Solo.” A lot of the “tips” focus on making the dining not actually solo, explaining how to talk to your waiter or strike up a conversation with another patron. They discuss neither why one might not want to dine alone, aside from the lack of other options, nor the benefits of dining alone, like the chance for quiet contemplation after a busy day. It’s a move to make solo dining more “tolerable,” but not to destigmatize it in the way this Dutch restaurant seems to.

There’s an obvious cultural divide. I’m reminded of a conversation I once had with a Dutch friend who, upon moving to New York City, was surprised by how freely strangers talk to each other, especially on public transportation. In the Netherlands, he said, people are instilled with the idea that they have a right to be left alone and not be bothered in public.

Marti Olsen Laney recently released a self-help book called The Introvert Advantage: Making the Most of Your Inner Strengths, which promises to show the benefits introverts have in the face of a culture that pressures them into being extroverted, like “their analytical skills, ability to think outside the box, and strong powers of concentration.” Maybe this restaurant could serve as an example for Americans: a small step toward valuing introverts and their unique skills or, at least, toward recognizing introversion as a valid human characteristic rather than a failing.

Too bad we’re a long way from having our own solo-dining restaurants. Then again, pop-up restaurants are usually more about advertising and creating buzz anyway, and this one seems more about promoting Eenmaal’s social designer Marina Van Goor and branding agency Vandejong than about making introversion more socially acceptable.

