FROM: NATIONAL SCIENCE FOUNDATION
The challenge of building a better atomic clock and why it matters
Prior to the mid-18th century, it was tough to be a sailor. If your voyage required east-west travel, you couldn't set out for a specific destination with any real hope of finding it efficiently.
At the time, sailors had no reliable method for measuring longitude, the coordinate that measures a point's east-west position on the globe. To find longitude, you need to know the time in two places: aboard the ship you're on and back at the port you departed from. By calculating the difference between those times, sailors got a rough estimate of their position. The problem: the clocks back then just couldn't keep time that well. They lost their home port's time almost immediately after departing.
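The longitude arithmetic behind that time comparison is simple enough to sketch. A minimal, illustrative Python example (the function name and the noon-sighting scenario are mine, not from the article):

```python
# Longitude from a clock: Earth turns 360 degrees in 24 hours,
# so each hour of difference between local solar time and home-port
# time corresponds to 15 degrees of longitude.
def longitude_from_time_difference(local_hours, home_port_hours):
    """Estimate degrees of longitude east (+) or west (-) of home port.

    Times are hours on a 24-hour clock at the same instant.
    """
    DEGREES_PER_HOUR = 360 / 24  # 15 degrees per hour of rotation
    return (local_hours - home_port_hours) * DEGREES_PER_HOUR

# If the home-port chronometer reads 15:00 when the sun says local
# noon, the ship is 45 degrees west of the port.
print(longitude_from_time_difference(12, 15))  # → -45.0
```

This is exactly why a drifting clock was catastrophic: an error of just four minutes corresponds to a full degree of longitude.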
Today, time is just as important to navigation, only instead of calculating positioning with margins of errors measured in miles and leagues, we have GPS systems that are accurate within meters. And instead of springs and gears, our best timepieces rely on cesium atoms and lasers.
But given the history, it's fitting that Clayton Simien, a National Science Foundation (NSF)-funded physicist at the University of Alabama at Birmingham who works on atomic clocks, was inspired by the story of John Harrison, an English clockmaker who toiled in the 1700s to come up with the first compact marine chronometer. This device marked the beginning of the end for the "longitude problem" that had plagued sailors for centuries.
"If you want to measure distances well, you really need an accurate clock," Simien said.
Despite the massive leaps navigation technology has made since Harrison's time, scientists--many NSF-funded--are looking for new ways to make clocks more accurate, diminishing any variables that might distort precise timekeeping. Some, for example, are looking for ways to better synchronize atomic clocks on earth with GPS satellites in orbit, where atmospheric distortion can limit signal accuracy to degrees that seem minute, but are profound for the precise computer systems that govern modern navigation.
The National Institute of Standards and Technology and the Department of Defense join NSF in the search for even better atomic clocks. But today's research isn't just about building a more accurate timepiece. It's about foundational science with ramifications far beyond timekeeping.
'One Mississippi,' or ~9 billion atom oscillations
Atomic clocks precisely measure the ticks of atoms, essentially tossing cesium atoms upward, much like a fountain. Photons from laser beams "cool down" the atoms to very low temperatures, so that the atoms can be driven back and forth between a ground state and an excited state.
The trick to this process is finding just the right frequency to drive the atoms directly between the two states and overcome Doppler shifts that distort the rhythm. (Doppler shifts are increases or decreases in wave frequency as a source and observer move closer together or farther apart--much like the way a siren's pitch changes with its distance.)
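The size of the problem can be sketched with the first-order, non-relativistic Doppler formula. A short illustrative Python example (the velocity figure is a rough order of magnitude for laser-cooled atoms, not from the article):

```python
# Non-relativistic Doppler shift: an atom moving toward a light
# source "sees" a higher frequency, one moving away sees a lower one.
# f_observed = f_source * (1 + v/c), valid when v is tiny versus c.
C = 299_792_458.0  # speed of light, m/s

def doppler_shifted(f_source_hz, v_toward_ms):
    return f_source_hz * (1 + v_toward_ms / C)

# Even a laser-cooled cesium atom drifting at ~1 cm/s shifts the
# 9.19 GHz clock transition by a fraction of a hertz -- enough to
# matter at the accuracies these clocks chase.
f0 = 9_192_631_770.0
shift = doppler_shifted(f0, 0.01) - f0
print(f"{shift:.3f} Hz")  # ~0.3 Hz from 1 cm/s of motion
```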
Laser improvements have helped scientists control atoms better and address the Doppler issue. In fact, lasers helped to facilitate something known as an optical lattice, which can layer atoms into "egg cartons" to immobilize them, helping to eliminate Doppler shifts altogether.
That shift between ground state and excited state (better known as the atomic transition frequency) yields something equivalent to the official definition of a second: 9,192,631,770 cycles of the radiation that gets a cesium atom to vibrate between those two energy states. Today's atomic clocks mostly still use cesium.
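The arithmetic hiding in that definition is worth a glance. A minimal Python check (the frequency is the article's; the script is illustrative):

```python
# The SI second: 9,192,631,770 cycles of the radiation from the
# cesium-133 transition between the two energy states.
CS_FREQUENCY_HZ = 9_192_631_770  # cycles per second, by definition

# Each individual cycle therefore lasts about a tenth of a nanosecond:
cycle_duration_s = 1 / CS_FREQUENCY_HZ
print(f"{cycle_duration_s * 1e12:.1f} ps")  # → 108.8 ps
```

Counting off those nine-billion-plus cycles is the atomic clock's "tick."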
NSF-funded physicist Kurt Gibble, of Pennsylvania State University, has an international reputation for assessing accuracy and improving atomic clocks, including some of the most accurate ones in the world: the cesium clocks at the United Kingdom's National Physical Laboratory and the Observatory of Paris in France.
But accurate as those are, Gibble says the biggest advance in atomic clocks will be a move from current-generation microwave frequency clocks -- the only kind currently in operation -- to optical frequency clocks.
The difference between the two types of clocks lies in the frequencies they use to measure the signals their atoms' electrons emit when they change energy levels. The microwave technology keeps reliable time, but optical clocks offer significant improvements. According to Gibble, they're so accurate they would lose less than a second over the lifetime of the universe, or 13.8 billion years.
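Gibble's "less than a second over the lifetime of the universe" can be restated as a fractional accuracy. A back-of-the-envelope Python sketch (my arithmetic, not a published specification):

```python
# One second of error accumulated over 13.8 billion years, expressed
# as a fractional frequency error.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
universe_age_s = 13.8e9 * SECONDS_PER_YEAR

fractional_error = 1 / universe_age_s
print(f"{fractional_error:.1e}")  # → 2.3e-18
```

That is roughly the 18th decimal place, which is why optical clocks are discussed as a qualitatively different instrument rather than an incremental upgrade.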
Despite that promise of more accurate performance, optical frequency clocks don't yet keep continuous time.
"So far, optical standards don't run for long enough to keep time," Gibble said. "But they will soon."
Optical frequency clocks operate at significantly higher frequencies than microwave ones, which is why many researchers are exploring their potential with alkaline earth and rare earth elements, such as ytterbium, strontium and gadolinium.
"The higher frequency makes it a lot easier to be more accurate," Gibble said.
Gibble is starting work on another promising elemental candidate: cadmium. Simien, whose research employs gadolinium, has focused on minimizing--or eliminating if possible--key issues that limit accuracy.
"Nowadays, the biggest obstacle, in my opinion, is the black body radiation shift," Simien said. "The black body radiation shift is a symptomatic effect. We live in a thermal environment, meaning its temperature fluctuates. Even back in the day, a mechanical clock had pieces that would heat up and expand or cool down and contract.
"A clock's accuracy varied with its environment. Today's system is no longer mechanical and has better technology, but it is still susceptible to a thermal environment's effects. Gadolinium is predicted to have a significantly reduced black body relationship compared to other elements implemented and being proposed as new frequency standards."
While Simien and Gibble agree that optical frequency research represents the next generation of atomic clocks, they recognize that most people don't really care if the Big Bang happened 13 billion years ago or 13 billion years ago plus one second.
"It's important to understand that one more digit of accuracy is not always just fine tuning something that is probably already good enough," said John Gillaspy, an NSF program director who reviews funding for atomic clock research for the agency's physics division. "Extremely high accuracy can sometimes mean a qualitative breakthrough which provides the first insight into an entirely new realm of understanding--a revolution in science."
Gillaspy cited the example of American physicist Willis Lamb, who in the middle of the last century measured a tiny frequency shift that led theorists to reformulate physics as we know it, and earned him a Nobel Prize. While research to improve atomic clocks is sometimes dismissed as trying to make ultra-precise clocks even more precise, the scientists working in the field know their work could potentially change the world in profound, unexpected ways.
"Who knows when the next breakthrough will come, and whether it will be in the first digit or the 10th?" Gillaspy continued. "Unfortunately, most people cannot appreciate why more accuracy matters."
From Wall Street to 'Interstellar'
Atomic clock researchers point to GPS as the most visible application of the basic science they study, but it's only one of this foundational work's potential benefits.
Many physicists expect it to provide insight that will illuminate our understanding of fundamental physics and general relativity. They say new discoveries will also advance quantum computing, sensor development and other sensitive instrumentation that requires clever design to resist natural forces like gravity, magnetic and electrical fields, temperature and motion.
The research also has implications beyond the scientific world. Financial analysts worry that worldwide markets could lose millions due to ill-synchronized clocks.
On June 30 at 7:59:59 p.m. EDT, the world adds what is known as a "leap second" to keep solar time within 1 second of atomic time. History has shown, however, that this adjustment to clocks around the world is often done incorrectly. Many major financial markets are taking steps ranging from advising firms on how to deal with the adjustment to curtailing after-hours trading that would occur when the change takes place.
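One reason the adjustment so often goes wrong is that common software time representations simply have no slot for a 61st second in a minute. A small Python illustration (my example, not from the article):

```python
from datetime import datetime

# UTC actually contained the instant 2015-06-30 23:59:60, but the
# standard datetime type only allows seconds 0-59, so software that
# naively parses timestamps chokes on a leap second.
try:
    leap = datetime(2015, 6, 30, 23, 59, 60)
except ValueError as e:
    print("unrepresentable:", e)
```

Systems typically work around this by repeating, skipping or "smearing" a second, and inconsistent workarounds are what make synchronized markets nervous.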
Gibble says the goal of moving to ever more accurate clocks isn't to more precisely measure time over a long period.
"It's the importance of being able to measure small time differences."
GPS technology, for example, relies on differences in light propagation times from multiple satellites. To provide location information, several GPS satellites send out signals at the speed of light--about one foot per nanosecond--saying where they are and what time they made their transmissions.
"Your GPS receiver gets the signals and looks at the time differences of the signals--when they arrive compared to when they said they left," Gibble said. "If you want to know where you are to a couple of feet, you need to have timing to a nanosecond--a billionth of a second."
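Gibble's foot-per-nanosecond rule of thumb checks out directly. A minimal Python sketch (illustrative; the function name is mine):

```python
# Light travels about one foot per nanosecond, so every nanosecond
# of clock error turns into roughly a foot of position error.
C = 299_792_458.0  # speed of light, m/s

def position_error_m(timing_error_s):
    return C * timing_error_s

err = position_error_m(1e-9)  # one nanosecond of timing error
print(f"{err:.3f} m")            # → 0.300 m
print(f"{err / 0.3048:.2f} ft")  # → 0.98 ft, just under one foot
```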
In fact, he said, if you want that system to continue to operate accurately for a day, or for weeks, you need timing significantly better than that. For GPS to guide us through deserts, tropical forests, oceans and other areas without roads to serve as markers along the way, the satellites need clocks with nanosecond precision to keep us from getting lost.
And if you're not traveling to those locales, then there's still the future to think about.
"Remember the movie, 'Interstellar,'" Simien said. "There is someone on a spaceship far away, and Matthew McConaughey is on a planet in a strong gravitational field. He experiences reality in terms of hours, but the other individual back on the space craft experiences years. That's general relativity. Atomic clocks can test this kind of fundamental theory and its various applications that make for fascinating science, and as you can see, they also expand our lives."
-- Ivy F. Kupec
Investigators
Kurt Gibble
Clayton Simien
Related Institutions/Organizations
University of Alabama at Birmingham
Pennsylvania State University, University Park
DISCOVERING HOW ROMULAN CLOAKING TECHNOLOGY WORKS THROUGH MATH
FROM: THE NATIONAL SCIENCE FOUNDATION
Hidden from view
Mathematicians formulate equations, bend light and figure out how to hide things
The idea of cloaking and rendering something invisible hit the small screen in 1966, when a Romulan Bird of Prey made an unseen, surprise attack on the Starship Enterprise on Star Trek. Not only did it make for a good storyline, it likely inspired budding scientists, offering a window into technology's potential.
Today, from illusionists who make the Statue of Liberty disappear to Harry Potter's invisibility cloak that not only hides him from view but also protects him from spells, pop culture has embraced the idea of hiding behind force fields and magical materials. And not too surprisingly, National Science Foundation (NSF)-funded mathematicians, scientists and engineers are equally fascinated, looking at how and whether they can transform science fiction into, well, just science.
"Cloaking is about detection and rendering something--and the cloak itself--not detectable or seen," said Michael Weinstein, an NSF-funded mathematician at Columbia University. "An object is seen when waves are bounced off it and observed by a detector."
In recent years, researchers have developed new ways in which light can move around and even through a physical object, making it invisible to parts of the electromagnetic spectrum and undetectable by sensors. Additionally, mathematicians, theoretical physicists and engineers are exploring how and whether it's feasible to cloak against other waves besides light waves. In fact, they are investigating sound waves, sea waves, seismic waves and electromagnetic waves including microwaves, infrared light, radio and television signals.
Successful outcomes could have far-reaching results--like protecting deep-water oil rigs from earthquakes and vulnerable beaches from tsunamis.
Uncloaking cloaking's math and science history
Partial differential equations, coordinate invariance, wave equations--when you start talking to researchers about cloaking, it soon starts sounding a lot like math. And that's because at the very heart of this scientific question lies a mathematical one.
"There are very nice mathematical problems associated with this, and some of the ideas are mathematically very, very simple," said Michael Vogelius, NSF's division director for mathematical sciences and whose own research at Rutgers University has contributed significantly to this field. "But that doesn't mean they are simple to implement. In transformation cloaking the materials with the desired cloaking properties are found by singular or nearly singular change of variables in the energy expression--these material coefficients are sometimes referred to as the push-forward (or pull-back) of the original background. Basically, mathematicians ask, 'what do the equations have to look like to get this effect?' The thing that will be very hard--and is very hard--is to build these materials. They are singular in all kinds of ways."
That is why throughout cloaking research history, mathematicians, theoretical physicists and engineers have looked at the problem together.
According to Graeme Milton, an NSF-funded mathematician at the University of Utah, cloaking's start is rooted in math.
"Mathematicians and theoretical physicists basically had the idea independently for transformation-based cloaking," he said, adding that other mathematicians along the way--including himself--have taken the same wave equations and developed them further.
Milton and his collaborators created superlens cloaking, where cloaking occurs near lenses with capabilities far greater than traditional ones, and active exterior cloaking, where cloaking is created by active devices, and the cloak does not completely surround the object.
While cloaking has made considerable theoretical strides, its triumphs have been fairly limited for those awaiting real-world applications.
"Essentially, all the cloaking that has been done successfully in experiment involves a fixed frequency or small band of frequencies," Weinstein said. "So, it's a bit like--suppose you detect things by shining a light on them, and we all agree you're only allowed to shine blue light. I can construct a cloak that will conceal it under blue light, but if you vary the color--that is the wavelength--of the probing light, it will then be detectable. So far, we are unable to cloak something that is invisible to all colors. And because white light is composed of a broad spectrum of colors, no one has come near to making things truly undetectable."
Even with those limitations, there have been distinct milestones in cloaking research.
One of the best examples is actually widely available, though probably not commonly thought of as cloaking technology, yet it applies the same sort of math. It involves sound waves.
"Noise-cancelling headphones are basically cloaking the sounds from outside so they don't reach your ears," Milton said. "Active cloaking is very much along these same lines."
In 2006, as Milton published a key paper that expanded on the superlens cloaking he developed more than a decade earlier, a group of Duke University physicists created the first-ever microwave invisibility cloak using specially engineered "metamaterials," which can manipulate wavelengths, such as light, in a way that naturally occurring materials cannot do alone. However, it only cloaked microwaves and only in two dimensions.
And in 2014, a group in France worked with a company to drill 5-meter-deep holes in strategic locations, modifying the earth's density, and then measured the effectiveness of the cloaking. The experiments used man-made vibrations at a given frequency, not earthquakes. The researchers were able to deflect the seismic waves, showing some potential to develop this application further.
"Science needs to figure out how to cloak against multiple frequencies before there can be any 'real' cloaking, however," Milton said. "Earthquakes and tsunamis involve a mixture of frequencies, so they are particularly challenging problems."
Passive and active approaches to cloaking research
To understand cloaking, one must first understand where the idea comes from.
When light encounters an object, it is either reflected, refracted or absorbed. Reflection means light waves bounce off an object, like a mirror. Refraction bends light waves, much like looking at a straw in a glass of water seems to break the straw into two pieces. When waves are absorbed, they are stopped, neither bouncing back nor transmitting through the object--although perhaps heating it. Objects which absorb light appear opaque or dark. These interactions between light and objects are what allow us to see those objects.
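The straw-in-water effect is governed by Snell's law, which relates the bend in a ray to the refractive indices on either side of a boundary. A short illustrative Python example (standard physics, not from the article):

```python
import math

# Snell's law: n1 * sin(theta1) == n2 * sin(theta2) at a boundary
# between media with refractive indices n1 and n2.
def refraction_angle_deg(incidence_deg, n1, n2):
    """Angle of the refracted ray, measured from the normal."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# A ray entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
# bends toward the normal -- which is why the straw looks broken.
print(round(refraction_angle_deg(45, 1.00, 1.33), 1))  # → 32.1
```

Cloaking designs push this much further: instead of one boundary, they engineer how the index varies point by point so rays flow around a region entirely.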
For cloaking to occur, light must be tricked into doing unusual things that reduce our ability to "see" or detect the object. Mathematicians look for ways to control the flow of waves, using wave equations to characterize their behavior. Wave equations are an example of a partial differential equation (PDE); PDEs are the language of the fundamental laws of physics. (Just this year, John Nash and Louis Nirenberg received the prestigious Abel Prize for their work in partial differential equations. Their contributions have had a major impact on how mathematicians analyze the PDEs used to understand phenomena such as cloaking.)
"All wave phenomena are predictable from these wave equations--at least in principle," Weinstein said. "That is, light waves, sound waves, elastic waves, quantum waves, gravitational waves. But the problem is that these equations are not so easily solved, so one tries to come up with guiding principles, useful approximations and rules of thumb. Coming to the question of cloaking, there's a mathematical property of wave equations, governing, for example, light, called coordinate invariance. That's basically a way of saying that you can change coordinates and perspectives of viewing the object, and the equations themselves don't change their essential form. By exploiting this idea of coordinate invariance, scientists have come up with prescriptions for optical properties that can cloak arbitrary objects."
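The coordinate invariance Weinstein describes, and the "push-forward" Vogelius mentions, can be sketched in equations. The following is a schematic, textbook-style formulation of transformation-based cloaking, not taken from the article:

```latex
% Time-harmonic (Helmholtz) wave equation for a field u in a medium
% with material coefficient a(x):
\[
  \nabla \cdot \big( a(x) \nabla u \big) + \omega^2 u = 0 .
\]
% Under a change of variables y = F(x), the equation keeps the same
% form; only the coefficient transforms, into the "push-forward"
\[
  \tilde a(y) \;=\; \left. \frac{DF(x)\, a(x)\, DF(x)^{\mathsf T}}
  {\det DF(x)} \right|_{x = F^{-1}(y)} .
\]
% Choosing F to expand a point into a ball prescribes (singular)
% material properties \tilde a that steer waves around the interior:
% the cloaked region.
```

The singularity of that transformed coefficient is exactly the "very hard to build" obstacle Vogelius flags.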
In 2009, Milton and colleagues first introduced exterior active cloaking. Scientists in this field describe their research as involving either active or passive cloaking. Active cloaking uses devices that actively generate fields to cancel or distort incoming waves. Passive cloaking instead employs metamaterials that shield objects from electromagnetic waves without active intervention.
"The term 'metamaterial' is a bit deceptive," Weinstein noted. "Metamaterials are roughly composite materials. You take a bunch of building blocks, made from naturally occurring materials, and put them together in interesting ways to create some emergent property--some collective property of this novel arrangement not in naturally occurring materials. That new collective material is a metamaterial. But it's more like a device that actually interacts actively with waves moving through it."
With new metamaterial designs come new cloaking capabilities. NSF-funded engineer Andrea Alù won NSF's Waterman award in 2015 for creating metamaterials that can cloak a three-dimensional object. He and his team developed two methods--plasmonic cloaking and mantle cloaking--that take advantage of different light-scattering effects to hide an object.
Through his research on the partial differential equations governing light, electromagnetism, sound and other phenomena, Weinstein is exploring different ways of controlling the flow of energy--cloaking being one example--using novel media such as metamaterials. Vogelius is known for bringing credibility to the transformation optics that serve as a backbone of cloaking research broadly.
Where's my invisibility cloak?
But most fans of stealthy spaceships, submarines and cloaks will still wonder: how close are we to really having any of this technology?
"I think that from the perspective of lay people, the most misunderstood thing is thinking this technology is right around the corner," Milton said. "Realistic Harry Potter cloaks are still a long way off."
Unfortunately, addressing multi-frequency cloaking will take time.
"What I do see is a merging of mathematical, physical and engineering principles to more effectively enable isolation of objects from harmful environments--there will be movement in that direction," Weinstein said. "Also, there will be important experimental advances resulting from attempts to achieve what is only theoretically possible at this time."
In the meantime, these mathematicians often look at other issues--sometimes similar ones that offer the potential to rethink their approaches.
"Right now, we're working on the opposite sort of problem--on the limitations to cloaking," Milton said. "Cloaking is just one of the many avenues I work on. Honestly, it's always stimulating to explore the limitations of what's possible and what's impossible."
-- Ivy F. Kupec
Investigators
Andrea Alu
Graeme Milton
Michael Vogelius
Michael Weinstein
Related Institutions/Organizations
Rutgers University
University of Utah
Columbia University
University of Texas at Austin
Friday, July 3, 2015
MRO TAKES CLOSEUP IN AUREUM CHAOS, MARS
FROM: NASA
FED FUNDING OF SCIENCE AND ENGINEERING INSTITUTIONS DROPS 6%
FROM: NATIONAL SCIENCE FOUNDATION
Federal funding for science and engineering at universities down 6 percent
Latest figures show obligations down for R&D and facilities that support science and engineering
Federal agencies obligated $29 billion to 995 science and engineering academic institutions in fiscal year 2013, according to a new report from the National Science Foundation's (NSF) National Center for Science and Engineering Statistics (NCSES). The figure represents a 6 percent decline in current dollars from the previous year, when agencies provided $31 billion to 1,073 institutions.
After adjustment for inflation, federal science and engineering obligations to academic institutions dropped by $1 billion from FY 2011 to FY 2012, and by $2 billion between FY 2012 and FY 2013. The obligations fall into six categories:
Research and development;
R&D plant (facilities and fixed equipment, such as reactors, wind tunnels and particle accelerators);
Facilities and equipment for instruction in science and engineering;
Fellowships, traineeships and training grants;
General support for science and engineering;
Other science and engineering activities.
Of those categories, research and development accounted for 89 percent of total federal obligations during the past three years.
The three largest providers of federal funding in fiscal 2013 were the Department of Health and Human Services (58 percent), NSF (17 percent) and the Department of Defense (12 percent). The Department of Energy, the Department of Agriculture and NASA provided the remainder of funding (11 percent, combined). Of these six agencies, only the Department of Energy showed increased obligations between FY 2012 and FY 2013.
The leading 20 universities, ranked in terms of federal academic S&E obligations, accounted for 37 percent of the FY 2013 federal total. The Johns Hopkins University continued to receive the most federal obligations of any university, at $1.5 billion.
NCSES collects information about federal obligations to independent nonprofit institutions in two categories: research and development, and R&D plant. The $6.6 billion provided to 1,068 institutions in FY 2013 represented a 2 percent decrease from $6.8 billion the previous year. The leading 10 nonprofits accounted for 36 percent of fiscal 2013 funding, with the MITRE Corporation receiving the largest total, at $485 million.
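The percent figures in the release follow from the rounded dollar totals. A quick sketch, using only the numbers reported above, shows the arithmetic (the dictionary and function names are illustrative, not from the NCSES report):

```python
# Federal S&E obligations to academic institutions, current dollars (billions),
# as reported in the release.
obligations = {"FY2012": 31.0, "FY2013": 29.0}

def percent_change(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round(100.0 * (new - old) / old)

change = percent_change(obligations["FY2012"], obligations["FY2013"])
print(change)  # -6, matching the reported 6 percent decline
```

Because the release reports totals rounded to the nearest billion, the computed percentage is approximate; the official figures are derived from unrounded survey data.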
The statistics are from the NCSES Survey of Federal Science and Engineering Support to Universities, Colleges and Nonprofit Institutions.
-NSF-
Media Contacts
Rob Margetta, NSF
REGULATING METHANE EMISSIONS FROM FRESHWATER WETLANDS
FROM: NATIONAL SCIENCE FOUNDATION
Methane-eating microorganisms help regulate emissions from wetlands
Without this process, methane emissions from freshwater wetlands could be 30 to 50 percent higher
Though they occupy a small fraction of Earth's surface, freshwater wetlands are the largest natural source of methane emitted into the atmosphere. New research identifies an unexpected process that acts as a key gatekeeper in regulating methane emissions from these freshwater environments.
The study results are published this week in the journal Nature Communications by biologist Samantha Joye of the University of Georgia and colleagues.
The researchers report that high rates of anaerobic (no oxygen) methane oxidation in freshwater wetlands substantially reduce atmospheric emissions of methane.
New attention
The process of anaerobic methane oxidation was once considered insignificant in freshwater wetlands, but scientists now think very differently about its importance.
"Some microorganisms actually eat methane, and recent decades have seen an explosion in our understanding of the way they do this," says Matt Kane, program director in the National Science Foundation's Division of Environmental Biology, which funded the research. "These researchers demonstrate that if it were not for an unusual group of methane-eating microbes that live in freshwater wetlands, far more methane would be released into the atmosphere."
Although anaerobic methane oxidation in freshwater has been gathering scientific attention, the environmental relevance of this process was unknown until recently, Joye says.
"This paper reports a previously unrecognized sink for methane in freshwater sediments, soils and peats: microbially-mediated anaerobic oxidation of methane," she says. "The fundamental importance of this process in freshwater wetlands underscores the critical role that anaerobic oxidation of methane plays on Earth, even in freshwater habitats."
Without this process, Joye says, methane emissions from freshwater wetlands could be 30 to 50 percent greater.
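One way to read that 30-to-50-percent figure: if emissions without the sink would be that much higher than what is actually observed, the microbes are consuming roughly a quarter to a third of the methane produced before it escapes. A small sketch of that arithmetic (an illustration of the implied fraction, not a calculation from the study itself):

```python
def sink_fraction(relative_increase):
    """Fraction of gross methane consumed by the sink, given that emissions
    without the sink would be `relative_increase` (e.g. 0.30 for 30 percent)
    higher than observed emissions."""
    return relative_increase / (1.0 + relative_increase)

# 30-50 percent higher emissions implies the microbes intercept roughly
# 23-33 percent of the methane produced in these wetlands.
low, high = sink_fraction(0.30), sink_fraction(0.50)
print(f"{low:.0%} to {high:.0%}")  # 23% to 33%
```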
Comparison of wetlands
The researchers investigated the anaerobic oxidation process in freshwater wetlands in three regions: the freshwater peat soils of the Florida Everglades; a coastal organic-rich wetland in Acadia National Park, Maine; and a tidal freshwater wetland in coastal Georgia.
All three sites were sampled over multiple seasons.
The anaerobic oxidation of methane was coupled to some extent with sulfate reduction. Rising sea levels, for example, would result in increased sulfate, which could fuel greater rates of anaerobic oxidation.
Similarly, with saltwater intrusion into coastal freshwater wetlands, increasing sulfate inhibits microbial methane formation, or methanogenesis.
So while freshwater wetlands are known to be significant methane sources, their low sulfate concentrations previously led most researchers to conclude that anaerobic oxidation of methane was not important in these regions.
Crucial process
The new findings show that if not for the anaerobic methane oxidation process, freshwater environments would account for an even greater portion of the global methane budget.
"The process of anaerobic oxidation of methane in freshwater wetlands appears to be different than what we know about this process in marine sediments," Joye says. "There could be unique biochemistry at work."
Adds Katherine Segarra, an oceanographer at the U.S. Department of the Interior's Bureau of Ocean Energy Management and co-author of the paper: "This study furthers the understanding of the global methane budget, and may have ramifications for the development of future greenhouse gas models."
Additional financial support for the research was provided by the Deutsche Forschungsgemeinschaft via the Research Center/Cluster of Excellence at the MARUM Center for Marine Environmental Sciences and department of geosciences at the University of Bremen, Germany.
-- Cheryl Dybas
-- Alan Flurry, University of Georgia
Investigators
Samantha Joye
Christof Meile
Vladimir Samarkin
Related Institutions/Organizations
University of Georgia Research Foundation Inc
Wednesday, June 24, 2015
THE MYSTERIOUS LIGHTS OF PLANET CERES
FROM: NASA
THE WASPS AND THE BRAINS
FROM: NATIONAL SCIENCE FOUNDATION
Tiny brains, but shared smarts
Unlike humans and other vertebrates, the brains of wasps shrink when they're socialized--but they might 'share' brainpower
A solitary wasp--the kind that lives and forages for food alone--has a fairly small brain. Type out a lowercase letter in 10-point text and you'll get an idea of its size.
But tiny as that brain is, its social cousins, living together in honeycombed nests, have even smaller ones. And that size difference might provide some key information about the difference between insect societies and vertebrate societies.
Biologists have studied the societies of vertebrates--from flocks of birds, to schools of fish, to communities of humans--enough to come up with something called the "social brain hypothesis." Generally, it goes something like this: Social interaction presents challenges that require a lot of brain power, as that interaction requires organisms to navigate complicated territory, including avoiding conflict and building alliances.
Therefore, vertebrates that live in societies have bigger brains. The more complex the organism's society, the bigger its brain regions for processing complex information will be. Scientists believe the complexity of human societies may be one of the reasons we have such large, developed brains.
Sean O'Donnell, a biology professor at Drexel, has spent almost the entirety of his more than 20-year career studying wasps. He says these picnic terrors--actually critical members of the insect world that prey on pest species--represent ideal candidates for seeing whether that hypothesis applies to insects, because they have so much variation.
Some wasps are solitary. Some live in small, primitive groups. Others live in larger, more complex societies. "There are lots of intermediate stages," O'Donnell said.
When O'Donnell, with support from the National Science Foundation's Directorate for Biological Sciences, looked at the brains of 29 related species of wasps spanning the social spectrum, he found that living in a society did indeed affect the size of their brains. It just made them smaller, instead of bigger.
His findings are described in the latest issue of Proceedings of the Royal Society B.
"If our data is verified, it suggests that there's something really different about how insect societies formed," he said.
O'Donnell's work focused on the "mushroom bodies" of the wasps' brains, structures that are superficially similar to the regions of vertebrate brains that deal with higher cognitive functions.
His research uncovered another interesting difference from vertebrates: the complexity of the wasps' societies seemed to have no significant effect on the size of their brains. The big dropoff in size occurred between solitary and social wasps. In contrast, the brains of wasps in simple societies showed no significant size differences from those of wasps in complex societies.
"That suggests to me that going from solitary to a small society is the significant transition," O'Donnell said.
'Sharing' brainpower
Part of what makes vertebrate societies so brain-intensive is that they usually involve groups of organisms with different agendas that aren't related to one another--most of the people you know aren't members of your family.
Insect societies, however, are made up of groups of cooperating close relatives with shared objectives. Wasps might not need the type of brainpower required for social interaction because there's much less of it in their nests and colonies. The insects cooperate and rely on each other without the type of negotiation that can be required in vertebrate societies.
But what advantage could a smaller, less complex brain offer a species? As O'Donnell puts it, "Brains are expensive."
Neural tissues require more energy to develop and maintain than almost any other kind, and biologists have found that natural selection will find the optimal balance between the metabolic costs of developing particular areas of the brain and the benefits yielded.
In some ways, the social wasps may "share" brainpower. Individually, their brains might not stack up to their solitary relatives, but the colony as a whole is "smart."
O'Donnell says the next steps for his work will replicate the wasp research with termites and bees, which also offer a variety of social complexity.
"We would expect to see similar patterns," he said.
Learn more in this Drexel University video on Sean O'Donnell's work.
-- Rob Margetta
Investigators
Sean O'Donnell
Related Institutions/Organizations
Drexel University
Tuesday, June 23, 2015
FDA ISSUES RULE ON ADDING SELENIUM TO INFANT FORMULA
FROM: U.S. FOOD AND DRUG ADMINISTRATION
FDA Issues Final Rule to Add Selenium to List of Required Nutrients for Infant Formula
June 22, 2015
The U.S. Food and Drug Administration today announced a final rule to add selenium to the list of required nutrients for infant formula, and to establish both minimum and maximum levels of selenium in infant formula.
U.S. manufacturers began adding selenium to infant formula after the Institute of Medicine recognized selenium to be an essential nutrient for infants in 1989, and currently, all infant formulas on the U.S. market contain selenium. By amending regulations to add selenium to the list of required nutrients for infant formula and establish a safe range for this use, the FDA is able to require manufacturers currently marketing infant formula in the U.S. to add selenium within this safe range, and to require any manufacturer newly entering the U.S. market to adopt this practice as well.
Specifically, the rule requires 2.0 micrograms (μg) selenium/100 kilocalories as the minimum level and 7.0 μg/100 kilocalories as the maximum level of selenium in infant formula. It also amends the labeling requirements for infant formula to require the listing of selenium in micrograms per 100 kilocalories on infant formula labels.
Selenium, found in breast milk, is an essential nutrient for infants. Among its benefits, it helps the body defend against oxidative stress and aids in the regulation of thyroid hormones. Because infant formula often serves as a sole source of nutrition for infants, selenium in infant formula is needed to ensure that formula-fed infants are getting this essential nutrient at appropriate levels. Selenium is the 30th nutrient required by law to be in infant formula.
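The rule's limits are expressed per 100 kilocalories, so checking a formula means normalizing its selenium content to that basis. A minimal sketch of that check (the function name and sample values are hypothetical illustrations; only the 2.0 and 7.0 μg/100 kcal limits come from the rule):

```python
# Limits from the FDA final rule, in micrograms of selenium per 100 kilocalories.
SELENIUM_MIN = 2.0
SELENIUM_MAX = 7.0

def selenium_compliant(micrograms_selenium, kilocalories):
    """True if a formula's selenium content, normalized to a per-100-kcal
    basis, falls within the rule's required range."""
    per_100_kcal = micrograms_selenium / kilocalories * 100.0
    return SELENIUM_MIN <= per_100_kcal <= SELENIUM_MAX

print(selenium_compliant(9.0, 300.0))  # 3.0 ug/100 kcal -> True
print(selenium_compliant(3.0, 300.0))  # 1.0 ug/100 kcal -> False
```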
Sunday, June 21, 2015
RESEARCH TO SAVE HONEYBEES IN THE U.S.
FROM: U.S. NATIONAL SCIENCE FOUNDATION
Protecting the honey-bearers
Ancestors of American honey bees shed light on pollinator health
The honey bearers arrived in the early 17th century, carried into the United States by early European settlers. Apis mellifera--the name literally translates as "bee honey-bearer," though the insects are better known as honey bees.
Over the ensuing centuries, they flourished in the temperate North American climate, so successful they've become an integral part of our agricultural economy, contributing more than $14 billion in pollination services each year. They're trucked to our apple orchards and blueberry farms, our fields of squash and watermelon.
During the last decade, however, the honey bearers have suffered. They've died off in alarming numbers, entire colonies collapsing into ruin. The culprit seems to be a complex quartet of factors--poor nutrition, parasites, pathogens and pesticides--and scientists are still uncovering how these stresses harm bees and how they can be prevented.
Could the answers to some of these questions lie in Apis mellifera's African ancestors?
"If we can understand the genetic and physiological mechanisms that allow African bees to withstand parasites and viruses, we can use this information for breeding programs or management practices in U.S. bee populations," says Christina Grozinger, director of the Center for Pollinator Research at Pennsylvania State University.
Grozinger was part of a National Science Foundation (NSF)-funded project researching East African honey bees, analyzing the health of bee populations at 24 sites across Kenya. The team included scientists from Penn State, the International Centre of Insect Physiology and Ecology (icipe) in Kenya and South Eastern Kenya University.
NSF's Basic Research to Enable Agricultural Development, or BREAD, program funded the award. BREAD supports creative, fundamental research designed to help small-holder farms in the developing world. The program is a collaboration between NSF and the Bill & Melinda Gates Foundation.
"The BREAD partnership has allowed NSF to build and fully support international collaborations, as well as innovative proof-of-concept basic research with broad implications for global agriculture," says Jane Silverthorne, deputy assistant director of NSF's Biological Sciences Directorate, which funded the research. "This study continues to provide unique insights into how environmental conditions affect the health of honey bee colonies in Kenya."
In 2010, the team of researchers from icipe and Penn State first discovered the deadly Varroa mite in Kenyan bees. A tiny red beast that attaches, shield-like, to the back of a bee, Varroa feeds on hemolymph (bee blood). It transmits diseases and wreaks havoc with a bee's immune system. The parasite's full name--Varroa destructor--is apt; it is the culprit for thousands upon thousands of bee deaths in North America and Europe.
That research was the first time Varroa was documented in East Africa.
"Since Varroa is the most deadly parasite of honey bees and has decimated populations of honey bees wherever it has spread in the world, it was vital to track the effects of the introduction of Varroa on East African bee populations," Grozinger said.
So the team applied for the BREAD award and began analyzing honey bees across Kenya, studying how parasites, pathogens and viruses were affecting the African bees.
What they found was that--despite Varroa--the African bees were surviving and tolerating the parasites. The bees did not seem to be actively fighting or removing the mites; instead, they had a higher tolerance for them. Researchers also discovered a link between elevation and Varroa: Bee colonies at higher elevations had higher rates of Varroa infestation. This suggests a bee's environment may make it more or less susceptible to the mites. And since environment is also closely related to nutrition--higher elevations often have fewer flowering plants, which means fewer food options for honey bees--improving bee nutrition could be one way to combat Varroa.
The relationship between elevation, nutrition and pathogens needs to be examined further, but Grozinger calls it a "very intriguing" correlation. Increasing the diversity of flowering plant species in a landscape--one way to boost bee nutrition--could potentially help bees help themselves, by increasing a bee's natural ability to tolerate Varroa.
The research, published last year in PLOS One, is just a "first blush" at analyzing African bee populations, said Maryann Frazier, a senior extension associate at Penn State and another scientist on the project.
But she says it's important to study honey bees in other parts of the world, and not just because pollinators are a global resource (in Kenya, honey bees not only pollinate, but provide crucial income and nutrition for farmers and rural families).
"What we're really interested in are the mechanisms that allow them to be more resistant. And then we can use that knowledge to select for those behaviors and physiological traits."
Much about those mechanisms remains undiscovered. Frazier, Grozinger, their Kenyan counterparts and others from Penn State are sequencing whole genomes of individual bees collected from different parts of Kenya; this should allow them to identify specific genes that have helped the bees adapt to different environments and potentially resist different diseases. They're also analyzing whether different hive types--many Kenyan beekeepers use hollow logs or trees as hives--affect honey bee health and productivity.
Other NSF-funded honey bee projects are studying the role of gut microbes in bee health, how bees develop colony-level social immunity and much more: the foundation supports more than 250 current pollinator-related projects, many highlighted in the recent Pollinator Research Action Plan, a national strategy to better understand pollinator losses and improve pollinator health. And ensure the honey-bearers thrive for many years to come.
-- Jessica Arriens
Investigators
Harland Patch
James Frazier
James Tumlinson
Maryann Frazier
Christina Grozinger
Protecting the honey-bearers
Ancestors of American honey bees shed light on pollinator health
The honey-bearers arrived in the early 17th century, carried to North America by European settlers. Their scientific name, Apis mellifera, literally translates as "honey-bearing bee," though they are better known as honey bees.
Over the ensuing centuries, they flourished in the temperate North American climate, so successful they've become an integral part of our agricultural economy, contributing more than $14 billion in pollination services each year. They're trucked to our apple orchards and blueberry farms, our fields of squash and watermelon.
During the last decade, however, the honey bearers have suffered. They've died off in alarming numbers, entire colonies collapsing into ruin. The culprit seems to be a complex quartet of factors--poor nutrition, parasites, pathogens and pesticides--and scientists are still uncovering how these stresses harm bees and how they can be prevented.
Could the answers to some of these questions lie in Apis mellifera's African ancestors?
"If we can understand the genetic and physiological mechanisms that allow African bees to withstand parasites and viruses, we can use this information for breeding programs or management practices in U.S. bee populations," says Christina Grozinger, director of the Center for Pollinator Research at Pennsylvania State University.
Grozinger was part of a National Science Foundation (NSF)-funded project researching East African honey bees, analyzing the health of bee populations at 24 sites across Kenya. The team included scientists from Penn State, the International Centre of Insect Physiology and Ecology (icipe) in Kenya and South Eastern Kenyan University.
NSF's Basic Research to Enable Agricultural Development, or BREAD, program funded the award. BREAD supports creative, fundamental research designed to help small-holder farms in the developing world. The program is a collaboration between NSF and the Bill & Melinda Gates Foundation.
"The BREAD partnership has allowed NSF to build and fully support international collaborations, as well as innovative proof-of-concept basic research with broad implications for global agriculture," says Jane Silverthorne, deputy assistant director of NSF's Biological Sciences Directorate, which funded the research. "This study continues to provide unique insights into how environmental conditions affect the health of honey bee colonies in Kenya."
In 2010, the team of researchers from icipe and Penn State first discovered the deadly Varroa mite in Kenyan bees. A tiny red beast that attaches, shield-like, to the back of a bee, Varroa feeds on hemolymph (bee blood). It transmits diseases and wreaks havoc with a bee's immune system. The parasite's full name--Varroa destructor--is apt; it is the culprit for thousands upon thousands of bee deaths in North America and Europe.
That research was the first time Varroa was documented in East Africa.
"Since Varroa is the most deadly parasite of honey bees and has decimated populations of honey bees wherever it has spread in the world, it was vital to track the effects of the introduction of Varroa on East African bee populations," Grozinger said.
So the team applied for the BREAD award and began analyzing honey bees across Kenya, studying how parasites, pathogens and viruses were affecting the African bees.
What they found was that, despite Varroa, the African bees were surviving by tolerating the parasites. The bees did not seem to be actively fighting or removing the mites; instead, they had a higher tolerance for them. Researchers also discovered a link between elevation and Varroa: Bee colonies at higher elevations had a higher incidence of Varroa. This suggests a bee's environment may make it more or less susceptible to the mites. And since environment is also closely related to nutrition--higher elevations often have fewer flowering plants, which means fewer food options for honey bees--improving bee nutrition could be one way to combat Varroa.
The relationship between elevation, nutrition and pathogens needs to be examined further, but Grozinger calls it a "very intriguing" correlation. Increasing the diversity of flowering plant species in a landscape--one way to boost bee nutrition--could potentially help bees help themselves, by increasing a bee's natural ability to tolerate Varroa.
The research, published last year in PLOS One, is just a "first blush" at analyzing African bee populations, said Maryann Frazier, a senior extension associate at Penn State and another scientist on the project.
But she says it's important to study honey bees in other parts of the world, and not just because pollinators are a global resource (in Kenya, honey bees not only pollinate, but provide crucial income and nutrition for farmers and rural families).
"What we're really interested in are the mechanisms that allow them to be more resistant. And then we can use that knowledge to select for those behaviors and physiological traits."
Much about those mechanisms remains undiscovered. Frazier, Grozinger, their Kenyan counterparts and others from Penn State are sequencing whole genomes of individual bees collected from different parts of Kenya; this should allow them to identify specific genes that have helped the bees adapt to different environments and potentially resist different diseases. They're also analyzing whether different hive types--many Kenyan beekeepers use hollow logs or trees as hives--affect honey bee health and productivity.
Other NSF-funded honey bee projects are studying the role of gut microbes in bee health, how bees develop colony-level social immunity and much more: the foundation supports more than 250 current pollinator-related projects, many highlighted in the recent Pollinator Research Action Plan, a national strategy to better understand pollinator losses, improve pollinator health and ensure the honey-bearers thrive for many years to come.
-- Jessica Arriens
Investigators
Harland Patch
James Frazier
James Tumlinson
Maryann Frazier
Christina Grozinger
Thursday, June 18, 2015
CO2, BIG DINOSAURS AND THE EQUATOR
FROM: NATIONAL SCIENCE FOUNDATION
Big dinosaurs steered clear of the tropics
Climate swings lasting millions of years too much for dinos
For more than 30 million years after dinosaurs first appeared, they remained inexplicably rare near the equator, where only a few small-bodied meat-eating dinosaurs made a living.
The long absence at low latitudes has been one of the great, unanswered questions about the rise of the dinosaurs.
Now the mystery has a solution, according to scientists who pieced together a detailed picture of the climate and ecology more than 200 million years ago at Ghost Ranch in northern New Mexico, a site rich with fossils.
The findings, reported today in the journal Proceedings of the National Academy of Sciences (PNAS), show that the tropical climate swung wildly with extremes of drought and intense heat.
Wildfires swept the landscape during arid regimes and reshaped the vegetation available for plant-eating animals.
"Our data suggest it was not a fun place," says scientist Randall Irmis of the University of Utah.
"It was a time of climate extremes that went back and forth unpredictably. Large, warm-blooded dinosaurian herbivores weren't able to exist close to the equator--there was not enough dependable plant food."
The study, led by geochemist Jessica Whiteside, now of the University of Southampton, is the first to provide a detailed look at climate and ecology during the emergence of the dinosaurs.
Atmospheric carbon dioxide levels then were four to six times current levels. "If we continue along our present course, similar conditions in a high-CO2 world may develop, and suppress low-latitude ecosystems," Irmis says.
"These scientists have developed a new explanation for the perplexing near-absence of dinosaurs in late Triassic [the Triassic was between 252 million and 201 million years ago] equatorial settings," says Rich Lane, program director in the National Science Foundation's (NSF) Division of Earth Sciences, which funded the research.
"That includes rapid vegetation changes related to climate fluctuations between arid and moist climates and the resulting extensive wildfires of the time."
Reconstructing the deep past
The earliest known dinosaur fossils, found in Argentina, date from around 230 million years ago.
Within 15 million years, species with different diets and body sizes had evolved and were abundant except in tropical latitudes. There the only dinosaurs were small carnivores. The pattern persisted for 30 million years after the first dinosaurs appeared.
The scientists focused on Chinle Formation rocks, which were deposited by rivers and streams between 205 and 215 million years ago at Ghost Ranch (perhaps better known as the place where artist Georgia O'Keeffe lived and painted for much of her career).
The multi-colored rocks of the Chinle Formation are a common sight on the Colorado Plateau at places such as the Painted Desert at Petrified Forest National Park in Arizona.
In ancient times, North America and other land masses were bound together in the supercontinent Pangaea. The Ghost Ranch site stood close to the equator, at roughly the same latitude as present-day southern India.
The researchers reconstructed the deep past by analyzing several kinds of data: from fossils, charcoal left by ancient wildfires, stable isotopes from organic matter, and carbonate nodules that formed in ancient soils.
Fossilized bones, pollen grains and fern spores revealed the types of animals and plants living at different times, marked by layers of sediment.
Dinosaurs remained rare among the fossils, accounting for less than 15 percent of vertebrate animal remains.
They were outnumbered in diversity, abundance and body size by reptiles known as pseudosuchian archosaurs, the lineage that gave rise to crocodiles and alligators.
The sparse dinosaurs consisted mostly of small, carnivorous theropods.
Big, long-necked dinosaurs, or sauropodomorphs--already the dominant plant-eaters at higher latitudes--did not exist at the study site or at any other low-latitude site in the Pangaea of that time, as far as the fossil record shows.
Abrupt changes in climate left a record in the abundance of different types of pollen and fern spores between sediment layers.
Fossilized organic matter from decaying plants provided another window on climate shifts. Changes in the ratio of stable isotopes of carbon in the organic matter bookmarked times when plant productivity declined during extended droughts.
Drought and fire
Wildfire temperatures varied drastically, the researchers found, consistent with a fluctuating environment in which the amount of combustible plant matter rose and fell over time.
The researchers estimated the intensity of wildfires using bits of charcoal recovered in sediment layers.
The overall picture is that of a climate punctuated by extreme shifts in precipitation and in which plant die-offs fueled hotter fires. That in turn killed more plants, damaged soils and increased erosion.
Atmospheric carbon dioxide levels, calculated from stable isotope analyses of soil carbonate and preserved organic matter, rose from about 1,200 parts per million (ppm) at the base of the section, to about 2,400 ppm near the top.
At these high CO2 concentrations, climate models predict more frequent and more extreme weather fluctuations consistent with the fossil and charcoal evidence.
Continuing shifts between extremes of dry and wet likely prevented the establishment of the dinosaur-dominated communities found in the fossil record at higher latitudes across South America, Europe, and southern Africa, where aridity and temperatures were less extreme and humidity was consistently higher.
Resource-limited conditions could not support a diverse community of fast-growing, warm-blooded, large dinosaurs, which require a productive and stable environment to thrive.
"The conditions would have been something similar to the arid western United States today, although there would have been trees and smaller plants near streams and rivers, and forests during humid times," says Whiteside.
"The fluctuating and harsh climate with widespread wildfires meant that only small two-legged carnivorous dinosaurs could survive."
-NSF-
Media Contacts
Cheryl Dybas, NSF
Wednesday, June 17, 2015
SCIENTISTS STUDY CORAL REEFS AND OCEAN ACIDIFICATION
FROM: NATIONAL SCIENCE FOUNDATION
Coral reefs defy ocean acidification odds in Palau
Palau reefs show few of the predicted responses
Will some coral reefs be able to adapt to rapidly changing conditions in Earth's oceans? If so, what will these reefs look like in the future?
As the ocean absorbs atmospheric carbon dioxide (CO2) released by the burning of fossil fuels, its chemistry is changing. The CO2 reacts with water molecules, lowering ocean pH (making it more acidic) in a process known as ocean acidification.
This process also removes carbonate, an essential ingredient needed by corals and other organisms to build their skeletons and shells.
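Because the pH scale is logarithmic, even a seemingly small drop in pH corresponds to a large jump in hydrogen-ion concentration. A minimal back-of-the-envelope sketch (the 8.1 and 7.8 figures below are illustrative textbook values for present-day and projected end-of-century surface seawater, not numbers from this study):

```python
def hplus_ratio(ph_before, ph_after):
    """Relative increase in hydrogen-ion concentration [H+] when pH falls
    from ph_before to ph_after. Since pH = -log10([H+]), the ratio of the
    two concentrations is 10 raised to the pH difference."""
    return 10 ** (ph_before - ph_after)

# A drop from pH 8.1 to 7.8 nearly doubles the hydrogen-ion concentration.
print(round(hplus_ratio(8.1, 7.8), 2))
```

This is why a projected pH decline of a few tenths of a unit, as discussed for the open ocean, represents a substantial chemical shift for organisms that build carbonate skeletons.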
Scientists are studying coral reefs in areas where low pH occurs naturally to answer questions about ocean acidification, which threatens coral reef ecosystems worldwide.
Palau reefs dodge ocean acidification effects
One such place is Palau, an archipelago in the far western Pacific Ocean. The tropical, turquoise waters of Palau's Rock Islands are naturally more acidic due to a combination of biological activity and the long residence time of seawater in their maze of lagoons and inlets.
Seawater pH within the Rock Island lagoons is as low now as the open ocean is projected to reach as a result of ocean acidification near the end of this century.
A new study led by scientists at the Woods Hole Oceanographic Institution (WHOI) found that coral reefs in Palau seem to be defying the odds, showing none of the predicted responses to low pH except for an increase in bio-erosion--the physical breakdown of coral skeletons by boring organisms such as mollusks and worms.
A paper reporting the results is published today in the journal Science Advances.
"This research illustrates the value of comprehensive field studies," says David Garrison, a program director in the National Science Foundation's Division of Ocean Sciences, which funded the research through NSF's Ocean Acidification (OA) Program. NSF OA is supported by the Directorates for Geosciences and for Biological Sciences.
"Contrary to laboratory findings," says Garrison, "it appears that the major effect of ocean acidification on Palau Rock Island corals is increased bio-erosion rather than direct effects on coral species."
Adds lead paper author Hannah Barkley of WHOI, "Based on lab experiments and studies of other naturally low pH reef systems, this is the opposite of what we expected."
Experiments measuring corals' responses to a variety of low pH conditions have shown a range of negative effects, such as fewer varieties of corals, more algae growth, lower rates of calcium carbonate production (growth), and juvenile corals that have difficulty constructing skeletons.
"Surprisingly, in Palau where the pH is lowest, we see a coral community that hosts more species and has greater coral cover than in the sites where pH is normal," says Anne Cohen, co-author of the paper.
"That's not to say the coral community is thriving because of the low pH, rather it is thriving despite the low pH, and we need to understand how."
When the researchers compared the communities found on Palau's reefs with those in other reefs where pH is naturally low, they found increased bio-erosion was the only common feature.
"Our study revealed increased bio-erosion to be the only consistent community response, as other signs of ecosystem health varied at different locations," Barkley says.
The riddle of resilience
How do Palau's low pH reefs thrive despite significantly higher levels of bio-erosion?
The researchers aren't certain yet, but hope to answer that question in future studies.
They also don't completely understand why conditions created by ocean acidification seem to favor bio-eroding organisms.
One theory--that skeletons grown under more acidic conditions are less dense, making them easier for bio-eroding organisms to penetrate--is not the case on Palau, Barkley says, "because we don't see a correlation between skeletal density and pH."
Though coral reefs cover less than one percent of the ocean, these diverse ecosystems are home to at least a quarter of all marine life. In addition to sustaining fisheries that feed hundreds of millions of people around the world, coral reefs protect thousands of acres of coastlines from waves, storms and tsunamis.
"On the one hand, the results of this study are optimistic," Cohen says. "Even though many experiments and other studies of naturally low pH reefs show that ocean acidification negatively affects calcium carbonate production, as well as coral diversity and cover, we are not seeing that on Palau.
"That gives us hope that some coral reefs--even if it is a very small percentage--might be able to withstand future levels of ocean acidification."
Along with Barkley and Cohen, the team included Yimnang Golbuu of the Palau International Coral Reef Center, Thomas DeCarlo and Victoria Starczak of WHOI, and Kathryn Shamberger of Texas A&M University.
The Dalio Foundation, Inc., The Tiffany & Co. Foundation, The Nature Conservancy and the WHOI Access to the Sea Fund provided additional funding for this work.
-NSF-
Saturday, June 13, 2015
HHS REPORTS ON QUICK AND EASY TEST FOR EBOLA VIRUS
FROM: U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES
HHS pursues fast, easy test to detect Ebola virus infections
Promising point-of-care test could improve diagnosis and speed response
To assist doctors in diagnosing Ebola virus disease quickly, the U.S. Department of Health and Human Services’ Office of the Assistant Secretary for Preparedness and Response (ASPR) will pursue development of an Ebola virus diagnostic test for use in a doctor’s office, hospital, clinic, or field setting that will provide results within 20 minutes.
“Fast and inexpensive point-of-care diagnostics will improve our ability to control Ebola virus disease outbreaks,” said Robin Robinson, Ph.D., director of ASPR’s Biomedical Advanced Research and Development Authority (BARDA), which will oversee this development program for HHS. “Faster diagnosis of Ebola virus infections allows for more immediate treatment and an earlier response to protect public health worldwide.”
Diagnosing Ebola virus infections quickly in resource-poor areas would enable health care providers to isolate and provide necessary treatment and supportive care to patients suffering from Ebola. Quickly isolating patients helps limit the spread of the disease. Emerging evidence has shown that early initiation of supportive care improves outcomes for patients suffering from Ebola virus disease.
The development of this simple, low-cost, lateral-flow test, called the OraQuick rapid Ebola antigen test, will take place under a $1.8 million contract with OraSure Technologies Inc., headquartered in Bethlehem, Pennsylvania. Lateral flow tests detect the presence of a virus with a drop of the patient’s blood or saliva on a test strip, similar to the tests used in doctors’ offices to diagnose strep throat.
The agreement supports clinical and non-clinical work necessary to apply for approval of the test by the U.S. Food and Drug Administration. The contract could be extended for up to a total of 39 months and $10.4 million.
In addition, OraSure will evaluate whether the test can be used in the post-mortem analysis of oral fluids. During the current epidemic, people died before Ebola virus infections could be confirmed, yet the bodies of people infected with Ebola virus would have remained highly infectious. A simple, rapid test that could determine disease status quickly from the body’s oral fluids would facilitate infection control efforts and support the appropriate handling of remains infected with the Ebola virus.
The OraQuick rapid Ebola antigen test is the first point-of-care Ebola virus testing device to receive BARDA support. To help the United States prepare for and control Ebola virus disease outbreaks, BARDA also is supporting development of vaccines to prevent Ebola virus infections and therapeutic drugs to treat the disease.
BARDA is seeking additional proposals for advanced development of new drugs and products to diagnose and treat Ebola and related illnesses.
The new test is part of BARDA’s comprehensive integrated portfolio approach to the advanced research and development, innovation, acquisition, and manufacturing of vaccines, drugs, therapeutics, diagnostic tools, and non-pharmaceutical products for public health emergency threats. These threats include chemical, biological, radiological, and nuclear (CBRN) agents, pandemic influenza, and emerging infectious diseases.
ASPR leads HHS in preparing the nation to respond to and recover from adverse health effects of emergencies, supporting communities’ ability to withstand adversity, strengthening health and response systems, and enhancing national health security. HHS is the principal federal agency for protecting the health of all Americans and providing essential human services, especially for those who are least able to help themselves.
Friday, June 12, 2015
RESEARCHERS LOOK AT BIOLUMINESCENT CREATURES
FROM: NATIONAL SCIENCE FOUNDATION
Night lights: The wonders of bioluminescent millipedes
A Virginia Tech researcher discusses bioluminescent millipedes and other glowing creatures
There's something inherently magical, even surreal, about seeing hundreds of glowing millipedes scattered across the ground of a sequoia grove on a moonless night in Sequoia National Park.
Every evening, these creatures--which remain hidden underground during the day--emerge and initiate a chemical reaction to produce a green-blue glow, a process called bioluminescence. The eerie night lights of these millipedes highlight nature’s eccentricities. My observations of this phenomenon are a fringe benefit of my research on the millipedes known as Motyxia.
Seeing the light
Motyxia, which are the only known bioluminescent millipedes, are found solely in a small region of the Sierra Nevada mountain range in California. But various types of bioluminescent creatures live throughout the United States. They include:
railroad worms, beetles that look similar to millipedes but have a string of lights down each side resembling the lit windows of a passenger train at night,
glowworms with bioluminescent lamps on their heads,
fly larvae with the bluest bioluminescence in the insect world,
firefly larvae that have two abdominal lamps on their tails,
and even luminescent earthworms.
If you would like to see bioluminescent creatures, visit a moist area, such as a gully or streamside, in a deep dark forest late at night--preferably in the early summer, right after a rain.
When you arrive at your viewing site, turn off your flashlight and let your eyes adjust to the dark. Within about 15 to 30 minutes, you may begin to discern bioluminescent organisms.
Focus on tiny specks of light, which may be firefly larvae. These organisms may quickly turn off their lights when approached--but then turn them on again. So if you initially see a twinkle, note its position relative to nearby stationary objects so that you may see it light up again.
If you want to light your path as you walk, use red light to maintain your light-adapted vision.
Why the turn on?
When you observe bioluminescence, you may wonder about the purpose of this illuminating phenomenon. My research on Motyxia indicates that "Glow means 'No!'" to predators. That is, Motyxia's glow warns nocturnal predators that these 60-legged creatures are armed and dangerous; any predator that riles a Motyxia risks being squirted by toxins, including hydrogen cyanide, an extremely poisonous gas, which the millipede releases when it feels threatened.
The suggestion that Motyxia's glow wards off marauding nocturnal predators is supported by the fact that Motyxia are blind, so their visual signaling can only be seen by members of other species, such as predators.
My research team and I ran an experiment to test whether Motyxia's coloration warns predators to stay away. Our experiment involved positioning 150 glowing clay millipede models and 150 clay non-glowing millipede models in Motyxia's natural nighttime habitat in California.
The results: Predators attacked a significantly lower percentage of the glowing models than of the non-glowing models (18 percent vs. 49 percent). The relatively greater ability of the glowing millipede models to repel predators supports the "Glow Means No!" idea.
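The significance of that gap can be sketched with a standard two-proportion z-test. The counts below (27 of 150 glowing models attacked, 73 of 150 non-glowing) are back-calculated from the article's rounded percentages, not the study's exact figures.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: is the rate x1/n1 different from x2/n2?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error under H0
    return (p1 - p2) / se

# Approximate counts from the rounded percentages in the article:
# ~18% of 150 glowing models attacked vs. ~49% of 150 dark models.
z = two_proportion_z(27, 150, 73, 150)
print(f"z = {z:.2f}")  # |z| is well above 1.96, i.e. significant at the 5% level
```

Any |z| greater than 1.96 corresponds to p < 0.05 for a two-sided test, so even these rough counts put the difference far beyond chance.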
Motyxia's eastern cousins possess bright and conspicuous reds and yellows, apparently also to ward off daytime predators.
Other animals that are toxic, inedible, or otherwise noxious also advertise their danger via warning signals. For example, a rattlesnake uses its rattle and the yellow jacket brandishes yellow and black stripes to advertise its threats.
By displaying bright, highly conspicuous and sometimes downright garish colors, toxic animals distinguish themselves and help prevent predators from mistaking them for edible prey. Such an error would be costly to both predator and prey.
The conspicuous appearance of toxic animals also helps predators learn to recognize their bright coloration as warnings and remember the unpleasant consequences of ignoring them--e.g. a cyanide-induced fever.
How bioluminescence evolved
How did bioluminescence evolve? This question is another focus of our ongoing research on Motyxia.
By helping to reveal the evolutionary origins of warning colorations--which, by necessity, contribute to some of the most blatant and complex appearances in the living world--we expect to improve our ability to investigate and understand how other complex traits arise in nature.
One possible clue to the origins of bioluminescence is provided by a millipede species known as Motyxia sequoiae, which occupies habitats that are normally off-limits to other closely related millipedes. These habitats include exposed areas of the forest floor, open mountain meadows and the trunks of oak trees.
So perhaps bioluminescence evolved in Motyxia sequoiae to protect these creatures from predators in particularly vulnerable areas, and thereby enable these millipedes to expand their range to these favorable locations.
But why would Motyxia sequoiae evolve bioluminescence instead of any other defense mechanism, such as camouflage or weapons such as claws or sharp spines?
Have you ever heard the saying that "natural selection...works like a tinkerer"? This is a great way to think about the evolution of warning coloration and other complex biological features. Tinkerers use what's already available (e.g., odds and ends lying around) to repair machines, appliances and other apparatuses.
A body of research suggests that many species may have similarly acquired bioluminescence by "making do" with, or repurposing, biological equipment they already possessed.
For example, fireflies need an enzyme called luciferase to light up. But the original role of the firefly's luciferase wasn't to help these insects produce light, but instead to help them synthesize fatty acids needed to create brain cells.
The essence of bioluminescence
Despite our growing knowledge, much about Motyxia remains mysterious. For example, how do these blind creatures find mates? What triggers their nightly emergence? With funding from the National Science Foundation, my team is working to answer these and other questions.
This research is part of our larger effort to describe biodiversity and reconstruct the evolutionary histories of arthropods--a group that includes insects, spiders and crustaceans, and accounts for roughly 80 percent of all described animal species. We contribute our findings to the Tree of Life, which is a worldwide effort to define the evolutionary histories of animals.
Some bright ideas from bioluminescence
In addition to advancing our understanding of the history of life, studies of the bioluminescence of various types of organisms have implications for fields ranging from national defense to medicine.
Here are several examples:
Electrical lighting systems, which can be as little as 10 percent efficient, could be improved by designing them to mimic bioluminescent light, which is about 90 percent efficient.
The underbellies of some marine bioluminescent animals blend with background light from the water's surface, and so are camouflaged. The U.S. Navy is studying these phenomena so that it may build similarly camouflaged ships.
Healthy human cells produce ultra-weak amounts of light through a process similar to animal bioluminescence, but cancer cells produce slightly more light. Techniques may ultimately be developed to help locate cancer cells by detecting the greater amounts of light they produce.
A green fluorescent protein identified in a jellyfish species is now widely used in biomedical research as a fluorescent tag to help researchers track specific biological activities, such as the spread of cancer, insulin production and the movement of HIV proteins.
The key enzyme for beetle bioluminescence is a pivotal component of a fast, inexpensive method for sequencing genomes, which in 2008 was used to sequence the full genome of a Neanderthal.
Learn more about Dr. Marek's work at jointedlegs.org
-- Paul Marek, Virginia Tech
Investigators
Paul Marek
Related Institutions/Organizations
Virginia Polytechnic Institute and State University
Wednesday, June 10, 2015
SECRETARY KERRY'S STATEMENT ON 'CLIMATE CHANGE ADAPTATION AND RESILIENCE'
FROM: U.S. STATE DEPARTMENT
Climate Change Adaptation and Resilience
Press Statement
John Kerry
Secretary of State
Washington, DC
June 9, 2015
Climate change poses a threat to every country on Earth, and we all need to do what we can to take advantage of the small window of opportunity we still have to stave off its worst, most disastrous impacts. But even as we take unprecedented steps to mitigate the climate threat, we also have to ensure our communities are prepared for the impacts we know are headed our way – and the impacts we are already seeing all over the world in the form of heat waves, floods, historic droughts, ocean acidification and more.
Thanks to President Obama’s Climate Action Plan, we’ve taken a number of important steps to increase the resilience of American communities. But as the President has always said, this is a global challenge, and we’re not going to get very far if we keep our efforts contained within our borders. That’s why the United States is deeply committed to helping the rest of the world – especially the poorest and most vulnerable nations – adapt to the changing climate as well.
As part of that commitment, last fall, President Obama announced his intention to create a public-private partnership to provide climate data and information to help promote resilient development worldwide. Today we formally launched the Climate Services for Resilient Development partnership, along with the government of the United Kingdom and our partners at the American Red Cross, the Asian Development Bank, Esri, Google, the Inter-American Development Bank and the Skoll Global Threats Fund. In addition to the $34 million we and our partners are putting toward that new partnership, we also announced a series of individual steps we’re taking to make adapting to climate change easier around the globe – including, for example, the volunteer “climate resilience corps” that the Peace Corps and AmeriCorps will be launching in developing countries, and NASA’s release of the first-ever climate modeling system that breaks data down to the country level, which will enable countries to better target their individual adaptation planning efforts.
In the United States, we’ve developed some of the most advanced technologies and scientific expertise on climate change, and we want to make sure these tools are reaching those who need it the most. Each of the commitments announced today will make it easier for people to take control of their own futures and play an active role in helping to prepare their communities, their countries, and ultimately their planet for the changes ahead.
When it comes to confronting climate change, no country should be forced to go it alone – because no country can possibly address this threat alone. It will require all of us – every country, around the world, doing what it can to contribute to the solution. That understanding is at the core of the initiatives we are unveiling today, it’s what is driving our work toward an ambitious global agreement in Paris later this year, and it’s what will continue to guide our leadership in the fight against climate change in the months and years to come.
CYBER PHYSICAL THERAPY BEING TRIED BY VETERANS
FROM: NATIONAL SCIENCE FOUNDATION
Veterans will be first to try cyber physical therapy
High-speed research networks help scientists develop and deploy future health technologies
The Internet has been transformational, changing how we communicate with friends and family, how we shop, and more recently, how we heal. Physical therapy is the latest treatment to be offered as telemedicine, with an experimental system now connecting specialists to patients to provide help they otherwise couldn't get, aiding recovery from serious ailments, from broken limbs to stroke.
In an effort to connect physical therapy with wounded veterans far from treatment facilities, researchers from the University of Texas (UT) at Dallas have developed a rehabilitation system that uses real-time video, 3-D computer-generated worlds and force-feedback "haptic" devices to re-create a physical therapy session between a patient and a therapist, all at long distance over high-speed networks.
The team demonstrated the system at the Beyond Today's Internet Summit in March 2015. Organized by US Ignite and the Global Environment for Networking Innovations (GENI), two groups dedicated to advancing the frontiers of the Internet, the event showed what new capabilities are possible with ultra-high-speed, "smart", programmable networks.
Powerful Internet brings powerful applications
Though the majority of U.S. citizens still have Internet connection speeds in the tens of megabits per second, through the GENI and US Ignite programs, supported by the U.S. National Science Foundation, researchers, experts and some communities are able to access gigabit networks with speeds 40-100 times faster than standard networks.
For 3-D tele-rehabilitation to be lifelike and effective, the system must have virtually no lag time--or latency, in networking lingo--between action and reaction.
"To transfer all of this data requires a bandwidth greater than 100 megabits per second, which we currently can't do over the Internet," said Karthik Venkataraman, a Ph.D. student working on computer-enabled health technologies in computer scientist Balakrishnan Prabhakaran's Multimedia Systems Lab at UT Dallas. "GENI and US Ignite provide the bandwidth and low latency that is required by these kinds of applications."
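Venkataraman's 100-megabit figure is plausible on a back-of-envelope basis. The mesh size, per-vertex payload, and frame rate below are illustrative assumptions for a Kinect-style stream, not measurements from the UT Dallas system.

```python
# Rough bandwidth estimate for streaming a real-time 3-D body model.
# All numbers are illustrative assumptions, not figures from the
# UT Dallas tele-rehabilitation system.
vertices_per_frame = 50_000   # hypothetical mesh resolution
bytes_per_vertex = 18         # e.g. xyz as 32-bit floats (12 B) + color/normal data
frames_per_second = 30        # typical depth-camera capture rate

bits_per_second = vertices_per_frame * bytes_per_vertex * 8 * frames_per_second
print(f"{bits_per_second / 1e6:.0f} Mbit/s")  # prints 216 Mbit/s
```

Even this modest configuration lands above 100 Mbit/s before any haptic or video channels are added, which is why gigabit research networks like GENI are needed rather than consumer broadband.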
Reach out and touch someone
Every year, physical therapists help millions of people recover from the debilitating impacts of strokes, injuries and a range of other ailments--but not everyone has access to a treatment facility or a physical therapy professional.
"We're trying to virtualize a physical therapy session in which a patient and a therapist cannot be present at the same location," explained Venkataraman.
To bring the tele-rehabilitation to life, the system uses Microsoft Kinect to create 3-D, real-time models of the patient and the doctor. The models then join a shared virtual environment, a computer-generated space customized by the participants.
To simulate the touch aspect of the physical therapy session, the patient responds to a touch-sensitive "haptic" arm controlled by the therapist via a paired haptic device.
At the summit, the team demonstrated a physical therapy session in which two individuals practice sawing a log, a task that mimics the movements used by recovering stroke patients. The participants feel both the resistance of the log and the guiding movements of their partner, just as would occur at an in-person therapy session.
The researchers say this is just one example of what can be achieved with next-generation networks that support high-bandwidth and low-latency communication. The team is also working on extending the tele-rehabilitation system so one therapist or physician can work with multiple patients at the same time.
"This scaled-up version will ensure privacy in the sense that the patients will not be able to see other patients. Only the therapist will be able to view and monitor multiple patients," said Prabhakaran Balakrishnan, the lead researcher on the project. "The therapist will also be able to pick one patient and work with him or her on a one-to-one basis."
In collaboration with Thiru Annaswamy, a physician and assistant professor of medicine, the 3-D tele-rehabilitation system will be deployed at the Dallas Veterans Affairs Medical Center and used to help rehabilitate disabled veterans, with field trials beginning in June.
"If the patient and the therapist cannot be in the same location," Venkataraman said, "we still want to be able to give that virtual experience of him or her being together with the therapist in the same room."
-- Aaron Dubrow, (
Investigators
Balakrishnan Prabhakaran
Ovidiu Daescu
Mark Spong
Xiaohu Guo
Gopal Gupta
Dinesh Bhatia
Roozbeh Jafari
Related Institutions/Organizations
University of Texas at Dallas
Dallas Veterans Affairs Medical Center
Veterans will be first to try cyber physical therapy
High-speed research networks help scientists develop and deploy future health technologies
The Internet has been transformational, changing how we communicate with friends and family, how we shop, and more recently, how we heal. Physical therapy is the latest treatment to be offered as telemedicine, with an experimental system now connecting specialists to patients to provide help they otherwise couldn't get, aiding recovery from serious ailments, from broken limbs to stroke.
In an effort to connect physical therapy with wounded veterans far from treatment facilities, researchers from the University of Texas (UT) at Dallas have developed a rehabilitation system that uses real-time video, 3-D computer-generated worlds and force-feedback "haptic" devices to re-create a physical therapy session between a patient and a therapist, all at long distance over high-speed networks.
The team demonstrated the system at the Beyond Today's Internet Summit in March 2015. Organized by US Ignite and the Global Environment for Networking Innovations (GENI), two groups dedicated to advancing the frontiers of the Internet, the event showed what new capabilities are possible with ultra-high-speed, "smart", programmable networks.
Powerful Internet brings powerful applications
Though the majority of U.S. citizens still have Internet connection speeds in the tens of megabits per second, through the GENI and US Ignite programs, supported by the U.S. National Science Foundation, researchers, experts and some communities are able to access gigabit networks with speeds 40-100 times faster than standard networks.
For 3-D tele-rehabilitation to be lifelike and effective, the system must have virtually no lag time--or latency, in networking lingo--between action and reaction.
"To transfer all of this data requires a bandwidth greater than 100 megabits per second, which we currently can't do over the Internet," said Karthik Venkataraman, a Ph.D. student working on computer-enabled health technologies in computer scientist Balakrishnan Prabhakaran's Multimedia Systems Lab at UT Dallas. "GENI and US Ignite provide the bandwidth and low latency that is required by these kinds of applications."
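The ">100 megabits per second" figure is easy to sanity-check with a back-of-the-envelope estimate for streaming uncompressed Kinect-class RGB-D frames. The resolutions, pixel depths, and frame rate below are common Kinect defaults assumed for illustration, not the project's actual capture settings:

```python
# Rough, illustrative estimate of the raw data rate for streaming Kinect-class
# RGB-D (color + depth) video; all parameters are assumptions for illustration.

def stream_mbps(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) stream rate in megabits per second."""
    bits_per_frame = width * height * bytes_per_pixel * 8
    return bits_per_frame * fps / 1e6

# Assumed Kinect-like streams at 30 frames per second:
# 640x480 depth at 2 bytes/pixel, 640x480 color at 3 bytes/pixel.
depth = stream_mbps(640, 480, 2, 30)
color = stream_mbps(640, 480, 3, 30)
total = depth + color
print(f"depth ~{depth:.0f} Mbps, color ~{color:.0f} Mbps, total ~{total:.0f} Mbps")
```

Even one such sensor, uncompressed, lands in the hundreds of megabits per second, well past typical consumer connections and consistent with the quote above.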
Reach out and touch someone
Every year, physical therapists help millions of people recover from the debilitating impacts of strokes, injuries and a range of other ailments--but not everyone has access to a treatment facility or a physical therapy professional.
"We're trying to virtualize a physical therapy session in which a patient and a therapist cannot be present at the same location," explained Venkataraman.
To bring the tele-rehabilitation to life, the system uses Microsoft Kinect to create 3-D, real-time models of the patient and the doctor. The models then join a shared virtual environment, a computer-generated space customized by the participants.
To simulate the touch aspect of the physical therapy session, the patient responds to a touch-sensitive "haptic" arm controlled by the therapist via a paired haptic device.
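The article does not detail how the two haptic arms are linked, but a standard way to couple paired devices is bilateral "position-position" control: each side feels a virtual spring-damper pulling it toward the other side's position. The sketch below is a minimal illustration of that general technique, with made-up gains; it is not the UT Dallas implementation:

```python
# Minimal sketch of bilateral position-position haptic coupling: each device
# is pulled toward the other's position by a virtual spring-damper.
# Gains k (N/m) and b (N*s/m) are illustrative assumptions.

def coupling_force(x_local, x_remote, v_local, v_remote, k=200.0, b=5.0):
    """Force (N) applied to the local device, pulling it toward the remote one."""
    return k * (x_remote - x_local) + b * (v_remote - v_local)

# Therapist's arm at 0.10 m, patient's at 0.05 m, both at rest:
# the patient is guided toward the therapist's position...
f_patient = coupling_force(0.05, 0.10, 0.0, 0.0)
# ...and the therapist feels the equal-and-opposite resistance.
f_therapist = coupling_force(0.10, 0.05, 0.0, 0.0)
print(f_patient, f_therapist)
```

This symmetry is what lets the patient feel the therapist's guiding motion while the therapist simultaneously feels the patient's resistance.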
At the summit, the team demonstrated a physical therapy session in which two individuals practice sawing a log, a task that mimics the movements used by recovering stroke patients. The participants feel both the resistance of the log and the guiding movements of their partner, just as would occur at an in-person therapy session.
The researchers say this is just one example of what can be achieved with next-generation networks that support high-bandwidth and low-latency communication. The team is also working on extending the tele-rehabilitation system so one therapist or physician can work with multiple patients at the same time.
"This scaled-up version will ensure privacy in the sense that the patients will not be able to see other patients. Only the therapist will be able to view and monitor multiple patients," said Balakrishnan Prabhakaran, the lead researcher on the project. "The therapist will also be able to pick one patient and work with him or her on a one-to-one basis."
In collaboration with Thiru Annaswamy, a physician and assistant professor of medicine, the 3-D tele-rehabilitation system will be deployed at the Dallas Veterans Affairs Medical Center and used to help rehabilitate disabled veterans, with field trials beginning in June.
"If the patient and the therapist cannot be in the same location," Venkataraman said, "we still want to be able to give that virtual experience of him or her being together with the therapist in the same room."
-- Aaron Dubrow
Investigators
Balakrishnan Prabhakaran
Ovidiu Daescu
Mark Spong
Xiaohu Guo
Gopal Gupta
Dinesh Bhatia
Roozbeh Jafari
Related Institutions/Organizations
University of Texas at Dallas
Dallas Veterans Affairs Medical Center
Tuesday, June 9, 2015
SHAPE CHANGING WING FLAPS
FROM: NASA GREEN AVIATION
Green Aviation Project Tests Shape Changing Wing Flaps
A NASA F-15D flies chase for the G-III Adaptive Compliant Trailing Edge (ACTE) project. This photo was taken by an automated Wing Deflection Measurement System (WDMS) camera in the G-III that photographed the ACTE wing every second during the flight. The ACTE experimental flight research project is a joint effort between NASA and the U.S. Air Force Research Laboratory to determine if advanced flexible trailing-edge wing flaps, developed and patented by FlexSys, Inc., can both improve aircraft aerodynamic efficiency and reduce airport-area noise generated during takeoffs and landings.
The experiment is being carried out on a modified Gulfstream III (G-III) business aircraft that has been converted into an aerodynamics research test bed at NASA's Armstrong Flight Research Center. The ACTE project involves replacement of both of the G-III's conventional 19-foot-long aluminum flaps with the shape changing flaps that form continuous bendable surfaces.
S. KOREA ROBOT WINS FIRST PRIZE AT DARPA ROBOT FINALS
FROM: U.S. DEFENSE DEPARTMENT
Right: Team Kaist’s robot DRC-Hubo uses a tool to cut a hole in a wall during the DARPA Robotics Challenge Finals, June 5-6, 2015, in Pomona, Calif. Team Kaist won the top prize at the competition. DARPA photo
Robots from South Korea, U.S. Win DARPA Finals
By Cheryl Pellerin
DoD News, Defense Media Activity
POMONA, Calif., June 7, 2015 – A robot from South Korea took first prize and two American robots took second and third prizes here yesterday in the two-day robotic challenge finals held by the Defense Advanced Research Projects Agency.
Twenty-three human-robot teams participating in the DARPA Robotics Challenge, or DRC, finals competed for $3.5 million in prizes, working to get through eight tasks in an hour, under their own onboard power and with severely degraded communications between robot and operator.
A dozen U.S. teams and 11 from Japan, Germany, Italy, South Korea and Hong Kong competed in the outdoor competition.
DARPA launched the DRC in response to the nuclear disaster at Fukushima, Japan, in 2011 and the need for help to save lives in the toxic environment there.
Progress in Robotics
The DRC’s goal was to accelerate progress in robotics so robots more quickly can gain the dexterity and robustness they need to enter areas too dangerous for people and mitigate disaster impacts.
Robot tasks were relevant to disaster response -- driving alone, walking through rubble, tripping circuit breakers, using a tool to cut a hole in a wall, turning valves and climbing stairs.
Each team had two tries at the course, with its best performance and time used as its official score. All three winners had final scores of eight points, so they were ranked from first to third by the fastest time on the course.
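The ranking rule described above, best of two runs, points first, ties broken by the faster time, can be sketched as a simple sort. The team names and run data below are made up for illustration:

```python
# Sketch of the DRC finals ranking rule: take each team's best run
# (most points, then fastest time), then rank teams by points descending,
# breaking ties with the lower course time. Demo data is illustrative only.

def rank(teams):
    """teams: {name: [(points, seconds), ...]} with one entry per run."""
    # Best run per team: highest points, then fewest seconds.
    best = {name: max(runs, key=lambda r: (r[0], -r[1])) for name, runs in teams.items()}
    # Final order: most points first, faster time wins ties.
    return sorted(best, key=lambda n: (-best[n][0], best[n][1]))

demo = {
    "A": [(8, 44 * 60 + 28), (7, 50 * 60)],
    "B": [(8, 50 * 60 + 26), (6, 55 * 60)],
    "C": [(8, 55 * 60 + 15), (8, 57 * 60)],
}
print(rank(demo))
```

Here all three teams score the maximum eight points on their best run, so, as in the actual finals, the ordering comes down entirely to course time.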
DARPA program manager and DRC organizer Gill Pratt congratulated the 23 participating teams and thanked them for helping open a new era of human-robot partnerships.
Loving Robots
The DRC was open to the public, and more than 10,000 people over two days watched from the Fairplex grandstand as each robot ran its course. The venue was formerly known as the Los Angeles County Fairgrounds.
"These robots are big and made of lots of metal, and you might assume people seeing them would be filled with fear and anxiety," Pratt said during a press briefing at the end of day 2.
"But we heard groans of sympathy when those robots fell, and what did people do every time a robot scored a point? They cheered!” he added.
Pratt said this could be one of the biggest lessons from DRC -- “the potential for robots not only to perform technical tasks for us but to help connect people to one another."
South Korean Winning Team
Team Kaist from Daejeon, South Korea, and its robot DRC-Hubo took first place and the $2 million prize. Hubo comes from the words ‘humanoid robot.’
Team Kaist is from the Korea Advanced Institute of Science and Technology, which professor JunHo Oh of the Mechanical Engineering Department called "the MIT of Korea." Oh led Team Kaist to victory here.
In his remarks at the DARPA press conference, Oh noted that researchers from a university commercial spinoff called Rainbow Co. built the Hubo robot hardware.
The professor said his team’s first-place prize doesn’t make DRC-Hubo the best robot in the world, but he’s happy with the prize, which he said helps demonstrate Korea’s technological capabilities.
Team IHMC Robotics
Coming in second with a $1 million prize is Team IHMC Robotics of Pensacola, Florida -- the Institute of Human and Machine Cognition -- and its robot Running Man.
Jerry Pratt leads a research group at IHMC that works to understand and model human gait and its applications in robotics, human assistive devices and man-machine interfaces.
“Robots are really coming a long way,” Pratt said.
“Are you going to see a lot more of them? It's hard to say when you'll really see humanoid robots in the world,” he added. “But I think this is the century of the humanoid robot. The real question is what decade? And the DRC will make that decade come maybe one decade sooner.”
Team Tartan Rescue
In third place is Team Tartan Rescue of Pittsburgh, winning $500,000. Its robot is CHIMP, which stands for CMU Highly Intelligent Mobile Platform. Team members are from Carnegie Mellon University and the National Robotics Engineering Center.
Tony Stentz, NREC director, led Team Tartan Rescue, and during the press conference called the challenge “quite an experience.”
That experience was best captured, he said, “with our run yesterday when we had trouble all through the course, all kinds of problems, things we never saw before.”
While that was happening, Stentz said, the team operating the robot from another location kept their cool.
Promise for the Technology
“They figured out what was wrong, they tapped their deep experience in practicing with the machine, they tapped the tools available at their fingertips, and they managed to get CHIMP through the entire course, doing all of the tasks in less than an hour,” he added.
“That says a lot about the technology and it says a lot about the people,” Stentz said, “and I think it means that there's great promise for this technology.”
All the winners said they would put most of the prize money into robotics research and share a portion with their team members.
After the day 2 competition, Arati Prabhakar, DARPA director, said this is the end of the 3-year-long DARPA Robotics Challenge but “the beginning of a future in which robots can work alongside people to reduce the toll of disasters."
Sunday, June 7, 2015
GEMINI IV EARTHVIEW
FROM: NASA
June 4, 1965, Earth Observations From Gemini IV
RESEARCHERS SAY OCEAN WARMING WILL LEAD TO MARINE ANIMAL MIGRATION
FROM: NATIONAL SCIENCE FOUNDATION
Warmer, lower-oxygen oceans will shift marine habitats
Changes will result in marine animals moving away from equator
Modern mountain climbers usually carry tanks of oxygen to help them reach the summit. The combination of physical exertion and lack of oxygen at high altitudes creates a major challenge for mountaineers.
Now, just in time for World Oceans Day on Monday, June 8, researchers have found that the same principle applies to marine species during climate change.
Warmer water temperatures will speed up the animals' metabolic need for oxygen, as also happens during exercise, but the warmer water will hold less of the oxygen needed to fuel their bodies, similar to what happens at high altitudes.
Results of the study are published in this week's issue of the journal Science.
"This work is important because it links metabolic constraints to changes in marine temperatures and oxygen supply," said Irwin Forseth, program director in the National Science Foundation's (NSF) Division of Integrative Organismal Systems, which funded the research along with NSF's Division of Ocean Sciences.
"Understanding connections such as this is essential to allow us to predict the effects of environmental changes on the distribution and diversity of marine life.”
Marine animals pushed away from equator
The scientists found that these changes will act to push marine animals away from the equator. About two-thirds of the respiratory stress due to climate change is caused by warmer temperatures; the rest stems from warmer water holding less dissolved gas, such as oxygen.
"If your metabolism goes up, you need more food and you need more oxygen," said lead paper author Curtis Deutsch of the University of Washington.
"Aquatic animals could become oxygen-starved in a warmer future, even if oxygen doesn't change. We know that oxygen levels in the ocean are going down now and will decrease more with climate warming."
Four Atlantic Ocean species studied
The study centered on four Atlantic Ocean species whose temperature and oxygen requirements are well known from lab tests: Atlantic cod in the open ocean; Atlantic rock crab in coastal waters; sharp snout seabream in the sub-tropical Atlantic; and common eelpout, a bottom-dwelling fish in shallow waters in high northern latitudes.
Deutsch and colleagues used climate models to see how temperature and oxygen levels projected for 2100 would affect the four species' ability to meet their future energy needs.
The near-surface ocean is projected to warm by several degrees Celsius by the end of this century. Seawater at that temperature would hold 5-10 percent less oxygen than it does now.
Results show that future rock crab habitat, for example, would be restricted to shallower water, hugging the more oxygenated surface.
Equatorial part of animals' ranges uninhabitable
For all four species, the equatorial part of their ranges would become uninhabitable because peak oxygen demand would be greater than the supply.
Viable habitats would shift away from the equator, displacing from 14 percent to 26 percent of the current ranges.
The authors believe the results are relevant for all marine species that rely on aquatic oxygen as an energy source.
"The Atlantic Ocean is relatively well-oxygenated," Deutsch said. "If there's oxygen restriction in the Atlantic Ocean marine habitat, then it should be everywhere."
Climate models predict that the northern Pacific Ocean's relatively low oxygen levels will decline even more, making it the most vulnerable part of the seas to habitat loss.
"For aquatic animals that are breathing water, warming temperatures create a problem of limited oxygen supply versus higher demand," said co-author Raymond Huey, a University of Washington biologist who has studied metabolism in land animals and in human mountain climbers.
"This simple metabolic index seems to correlate with the current distributions of marine organisms," he said. "That means that it gives us the power to predict how range limits are going to shift with warming."
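The metabolic index Huey describes is, at its core, a ratio of oxygen supply to oxygen demand, where demand rises roughly exponentially with temperature (a Q10-type law) while the water's oxygen content falls. The sketch below illustrates that general idea with assumed coefficients; it is not the study's actual parameterization:

```python
# Illustrative sketch of a metabolic index: oxygen supply divided by
# temperature-dependent oxygen demand. Values below 1 mean the habitat
# cannot meet peak demand. All coefficients are assumptions, not the
# values fitted in the study.

def metabolic_index(o2_supply, temp_c, demand_ref=1.0, q10=2.0, temp_ref_c=15.0):
    """Supply/demand ratio; demand follows a Q10 law around temp_ref_c."""
    demand = demand_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)
    return o2_supply / demand

# Warm the water by 3 C and cut dissolved oxygen by 7 percent (mid-range of
# the 5-10 percent figure above): the index drops on both counts.
now = metabolic_index(o2_supply=1.0, temp_c=15.0)
future = metabolic_index(o2_supply=0.93, temp_c=18.0)
print(now, future)
```

The squeeze comes from both directions at once: warming raises the denominator (demand) while deoxygenation shrinks the numerator (supply), which is why habitat loss outpaces what either effect would cause alone.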
A day-to-day "dead zone"
Previously, marine scientists thought about oxygen more in terms of extreme events that could cause regional die-offs of marine animals, also known as dead zones.
"We found that oxygen is also a day-to-day restriction on where species will live," Deutsch said.
"The effect we're describing will be part of what's pushing species around in the future."
Other co-authors are Hans-Otto Pörtner of the Alfred Wegener Institute in Germany; Aaron Ferrel of the University of California, Los Angeles; and Brad Seibel of the University of Rhode Island.
The Gordon and Betty Moore Foundation and the Alfred Wegener Institute also funded the research.
-NSF-
Media Contacts
Cheryl Dybas, NSF