FROM: NATIONAL SCIENCE FOUNDATION
Federal funding for science and engineering at universities down 6 percent
Latest figures show obligations down for R&D and facilities that support science and engineering
Federal agencies obligated $29 billion to 995 science and engineering academic institutions in fiscal year 2013, according to a new report from the National Science Foundation's (NSF) National Center for Science and Engineering Statistics (NCSES). The figure represents a 6 percent decline in current dollars from the previous year, when agencies provided $31 billion to 1,073 institutions.
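The reported decline can be roughly reproduced from the two totals; a quick sketch, noting that the release's exact 6 percent comes from unrounded survey values, while the $29 billion and $31 billion figures are rounded:

```python
# Year-over-year change in federal S&E obligations (current dollars).
# The $29B/$31B figures are rounded totals from the release; the survey's
# unrounded values yield the reported 6 percent.
fy2012_total = 31e9  # dollars obligated in FY 2012
fy2013_total = 29e9  # dollars obligated in FY 2013

decline = (fy2012_total - fy2013_total) / fy2012_total
print(f"decline of {decline:.1%}")  # about 6.5% from the rounded figures
```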
After adjustment for inflation, federal science and engineering obligations to academic institutions dropped by $1 billion from FY 2011 to FY 2012, and by $2 billion between FY 2012 and FY 2013. The obligations fall into six categories:
Research and development;
R&D plant (facilities and fixed equipment, such as reactors, wind tunnels and particle accelerators);
Facilities and equipment for instruction in science and engineering;
Fellowships, traineeships and training grants;
General support for science and engineering;
Other science and engineering activities.
Of those categories, research and development accounted for 89 percent of total federal obligations during the past three years.
The three largest providers of federal funding in fiscal 2013 were the Department of Health and Human Services (58 percent), NSF (17 percent) and the Department of Defense (12 percent). The Department of Energy, the Department of Agriculture and NASA provided the remainder of funding (11 percent, combined). Of these six agencies, only the Department of Energy showed increased obligations between FY 2012 and FY 2013.
The leading 20 universities, ranked in terms of federal academic S&E obligations, accounted for 37 percent of the FY 2013 federal total. The Johns Hopkins University continued to receive the most federal obligations of any university, at $1.5 billion.
NCSES collects information about federal obligations to independent nonprofit institutions in two categories: research and development, and R&D plant. The $6.6 billion provided to 1,068 institutions in FY 2013 represented a 2 percent decrease from $6.8 billion the previous year. The leading 10 nonprofits accounted for 36 percent of fiscal 2013 funding, with the MITRE Corporation receiving the largest total, at $485 million.
The statistics are from the NCSES Survey of Federal Science and Engineering Support to Universities, Colleges and Nonprofit Institutions.
-NSF-
Media Contacts
Rob Margetta, NSF
A PUBLICATION OF RANDOM U.S. GOVERNMENT PRESS RELEASES AND ARTICLES
Friday, July 3, 2015
Tuesday, June 23, 2015
NSF TOUTS FUNDING OF LASER RESEARCH
FROM: NATIONAL SCIENCE FOUNDATION
On the road to ubiquity
NSF support of laser research
When the National Science Foundation (NSF) was founded in 1950, the laser didn't exist. Some 65 years later, the technology is ubiquitous.
As a tool, the laser has stretched the imaginations of countless scientists and engineers, making possible everything from stunning images of celestial bodies to high-speed communications. Once described as a "solution looking for a problem," the laser powered and pulsed its way into nearly every aspect of modern life.
For its part, NSF funding enabled research that has translated into meaningful applications in manufacturing, communications, life sciences, defense and medicine. NSF also has played a critical role in training the scientists and engineers who perform this research.
"We enable them [young researchers] at the beginning of their academic careers to get funding and take the next big step," said Lawrence Goldberg, senior engineering adviser in NSF's Directorate for Engineering.
Getting started
During the late 1950s and throughout the 1960s, major industrial laboratories run by companies such as Hughes Aircraft Company, AT&T and General Electric supported laser research, as did the Department of Defense. These efforts developed all kinds of lasers--gas, solid-state (based on solid materials), semiconductor (based on electronics), dye and excimer (based on reactive gases).
Like the first computers, early lasers were often room-size, requiring massive tables that held multiple mirrors, tubes and electronic components. Inefficient and power-hungry, these monoliths challenged even the most dedicated researchers. Undaunted, they refined components and techniques required for smooth operation.
As the 1960s ended, funding for industrial labs began to shrink as companies scaled back or eliminated their fundamental research and laser development programs. To fill the void, the federal government and emerging laser industry looked to universities and NSF.
Despite budget cuts in the 1970s, NSF funded a range of projects that helped improve all aspects of laser performance, from beam shaping and pulse rate to energy consumption. Support also contributed to developing new materials essential for continued progress toward new kinds of lasers. As efficiency improved, researchers began considering how to apply the technology.
Charge of the lightwave
One area in particular, data transmission, gained momentum as the 1980s progressed. NSF's Lightwave Technology Program in its engineering directorate was critical not only because the research it funded fueled the Internet, mobile devices and other high-bandwidth communications applications, but also because many of the laser advances in this field drove progress in other disciplines.
An important example of this crossover is optical coherence tomography (OCT). Used in the late 1980s in telecommunications to find faults in miniature optical waveguides and optical fibers, this imaging technique was adapted by biomedical researchers in the early 1990s to noninvasively image microscopic structures in the eye. The imaging modality is now commonly used in ophthalmology to image the retina. NSF continues to fund OCT research.
As laser technology matured through the 1990s, applications became more abundant. Lasers made their way to the factory floor (to cut, weld and drill) and the ocean floor (to boost signals in transatlantic communications). The continued miniaturization of lasers and the advent of optical fibers radically altered medical diagnostics as well as surgery.
Focus on multidisciplinary research
In 1996, NSF released its first solicitation solely targeting multidisciplinary optical science and engineering. The initiative awarded $13.5 million via 18 three-year awards. Grantees were selected from 76 proposals and 627 pre-proposals. Over a dozen NSF program areas participated.
The press release announcing the awards described optical science and engineering as "an 'enabling' technology" and went on to explain that "for such a sweeping field, the broad approach...emphasizing collaboration between disciplines, is particularly effective. By coordinating program efforts, the NSF has encouraged cross-disciplinary linkages that could lead to major findings, sometimes in seemingly unrelated areas that could have solid scientific as well as economic benefits."
"There is an advantage in supporting groups that can bring together the right people," said Denise Caldwell, director of NSF's Division of Physics. In one such case, she says NSF's support of the Center for Ultrafast Optical Science (CUOS) at the University of Michigan led to advances in multiple areas including manufacturing, telecommunications and precision surgery.
During the 1990s, CUOS scientists were developing ultrafast lasers. As they explored femtosecond lasers--ones with pulses lasting one quadrillionth of a second--they discovered that femtosecond lasers drilled cleaner holes than picosecond lasers--ones with pulses lasting one trillionth of a second.
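One way to get a feel for the thousandfold difference in pulse duration is the spatial extent of each pulse, which is simply the speed of light times its duration. A back-of-the-envelope sketch (the 100 fs and 1 ps durations are illustrative, not specific CUOS parameters):

```python
# Spatial length of a light pulse: length = c * duration.
c = 3.0e8  # speed of light in m/s

femtosecond = 1e-15  # one quadrillionth of a second
picosecond = 1e-12   # one trillionth of a second

fs_pulse_m = c * (100 * femtosecond)  # a 100 fs pulse
ps_pulse_m = c * (1 * picosecond)     # a 1 ps pulse

print(f"100 fs pulse: {fs_pulse_m * 1e6:.0f} micrometers")
print(f"1 ps pulse:   {ps_pulse_m * 1e3:.1f} millimeters")
```

A 100-femtosecond pulse spans only tens of micrometers, which helps explain why it deposits energy before heat can spread into surrounding material.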
The center transferred the technology to the Ford Motor Company, but a young physician at the university also heard about the capability and contacted the center. The collaboration between the clinician and CUOS researchers led to the IntraLASIK technology used by ophthalmologists for cornea surgery, as well as a spin-off company, IntraLase (funded with an NSF Small Business Innovation Research grant).
More recently, NSF support of the Engineering Research Center for Extreme Ultraviolet Science and Technology at Colorado State University has given rise to the development of compact ultrafast laser sources in the extreme UV and X-ray spectral regions.
This work is significant because these lasers will now be more widely available to researchers, diminishing the need for access to a large source like a synchrotron. Compact ultrafast sources are opening up entirely new fields of study such as attosecond dynamics, which enables scientists to follow the motion of electrons within molecules and materials.
Identifying new research directions
NSF's ability to foster collaborations within the scientific community has also enabled it to identify new avenues for research. As laser technology matured in the late 1980s, some researchers began to consider the interaction of laser light with biological material. Sensing this movement, NSF began funding research in this area.
"Optics has been a primary force in fostering this interface," Caldwell said.
One researcher who saw NSF taking the lead in pushing the frontiers of light-matter interactions was University of Michigan researcher Duncan Steel.
At the time, Steel continued pursuing quantum electronics research while using lasers to enable imaging and spectroscopy of single molecules in their natural environment. Steel and his colleagues were among the first to optically study how molecular self-assembly of proteins affects the neurotoxicity of Alzheimer's disease.
"New classes of light sources and dramatic changes in detectors and smart imaging opened up new options in biomedical research," Steel said. "NSF took the initiative to establish a highly interdisciplinary direction that then enabled many people to pursue emergent ideas that were rapidly evolving. That's one of NSF's biggest legacies--to create new opportunities that enable scientists and engineers to create and follow new directions for a field."
Roadmaps for the future
In the mid-1990s, the optical science and engineering research community expressed considerable interest in a report that would describe the field's impact on national needs and recommend new areas that might benefit from the community's focused effort.
As a result, in 1998, the National Research Council (NRC) published Harnessing Light: Optical Science and Engineering for the 21st Century. Funded by the Defense Advanced Research Projects Agency, NSF and the National Institute of Standards and Technology (NIST), the 331-page report was the first comprehensive look at the laser juggernaut.
That report was significant because, for the first time, the community focused a lens on its R&D in language that was accessible to the public and to policymakers. It also laid the groundwork for subsequent reports. Fifteen years after Harnessing Light, NSF was the lead funding agency for another NRC report, Optics and Photonics: Essential Technologies for Our Nation.
Widely disseminated through the community's professional societies, the Optics and Photonics report led to a 2014 report by the National Science and Technology Council's Fast Track Action Committee on Optics and Photonics, Building a Brighter Future with Optics and Photonics.
The committee, composed of 14 federal agencies and co-chaired by NSF and NIST, was charged with identifying cross-cutting areas of optics and photonics research that, through interagency collaboration, could benefit the nation. It was also set to prioritize these research areas for possible federal investment and to set long-term goals for federal optics and photonics research.
Developing a long-term, integrated approach
One of the recommendations from the NRC Optics and Photonics report was creation of a national photonics initiative to identify critical technical priorities for long-term federal R&D funding. To develop these priorities, the initiative would draw on researchers from industry, universities and the government as well as policymakers. Their charge: Provide a more integrated approach to industrial and federal optics and photonics R&D spending.
In just a year, the National Photonics Initiative was formed through the cooperative efforts of the Optical Society of America, SPIE--the international society for optics and photonics, the IEEE Photonics Society, the Laser Institute of America and the American Physical Society Division of Laser Science. One of the first fruits of this forward-looking initiative is a technology roadmap for President Obama's BRAIN Initiative.
To assess NSF's own programs and consider future directions, the agency formed an optics and photonics working group in 2013 to develop a roadmap to "lay the groundwork for major advances in scientific understanding and creation of high-impact technologies for the next decade and beyond."
The working group, led by Goldberg and Charles Ying, from NSF's Division of Materials Research, inventoried NSF's annual investment in optics and photonics. Their assessment showed that NSF invests about $300 million each year in the field.
They also identified opportunities for future growth and investment in photonic/electronic integration, biophotonics, the quantum domain, extreme UV and X-ray, manufacturing and crosscutting education and international activities.
As a next step, NSF also formed an optics and photonics advisory subcommittee for its Mathematical and Physical Sciences Directorate Advisory Committee. In its final report released in July 2014, the subcommittee identified seven research areas that could benefit from additional funding, including nanophotonics, new imaging modalities and optics and photonics for astronomy and astrophysics.
That same month, NSF released a "Dear Colleague Letter" to demonstrate the foundation's growing interest in optics and photonics and to stimulate a pool of proposals, cross-disciplinary in nature, that could help define new research directions.
And so the laser, once itself the main focus of research, takes its place as a device that extends our ability to see.
"To see is a fundamental human drive," Caldwell said. "If I want to understand a thing, I want to see it. The laser is a very special source of light with incredible capabilities."
-- Susan Reiss, National Science Foundation
-- Ivy F. Kupec, (703) 292-8796 ikupec@nsf.gov
Investigators
John Nees
Jorge Rocca
Duncan Steel
Tibor Juhasz
Carmen Menoni
David Attwood
Henry Kapteyn
Herbert Winful
Steven Yalisove
Margaret Murnane
Tuesday, June 9, 2015
SHAPE CHANGING WING FLAPS
FROM: NASA GREEN AVIATION
Green Aviation Project Tests Shape Changing Wing Flaps
A NASA F-15D flies chase for the G-III Adaptive Compliant Trailing Edge (ACTE) project. This photo was taken by an automated Wing Deflection Measurement System (WDMS) camera in the G-III that photographed the ACTE wing every second during the flight. The ACTE experimental flight research project is a joint effort between NASA and the U.S. Air Force Research Laboratory to determine if advanced flexible trailing-edge wing flaps, developed and patented by FlexSys, Inc., can both improve aircraft aerodynamic efficiency and reduce airport-area noise generated during takeoffs and landings.
The experiment is being carried out on a modified Gulfstream III (G-III) business aircraft that has been converted into an aerodynamics research test bed at NASA's Armstrong Flight Research Center. The ACTE project involves replacement of both of the G-III's conventional 19-foot-long aluminum flaps with the shape changing flaps that form continuous bendable surfaces.
S. KOREA ROBOT WINS FIRST PRIZE AT DARPA ROBOT FINALS
FROM: U.S. DEFENSE DEPARTMENT
Team Kaist’s robot DRC-Hubo uses a tool to cut a hole in a wall during the DARPA Robotics Challenge Finals, June 5-6, 2015, in Pomona, Calif. Team Kaist won the top prize at the competition. DARPA photo
Robots from South Korea, U.S. Win DARPA Finals
By Cheryl Pellerin
DoD News, Defense Media Activity
POMONA, Calif., June 7, 2015 – A robot from South Korea took first prize and two American robots took second and third prizes here yesterday in the two-day robotic challenge finals held by the Defense Advanced Research Projects Agency.
Twenty-three human-robot teams participating in the DARPA Robotics Challenge, or DRC, finals competed for $3.5 million in prizes, working to get through eight tasks in an hour, under their own onboard power and with severely degraded communications between robot and operator.
A dozen U.S. teams and 11 from Japan, Germany, Italy, South Korea and Hong Kong competed in the outdoor competition.
DARPA launched the DRC in response to the nuclear disaster at Fukushima, Japan, in 2011 and the need for help to save lives in the toxic environment there.
Progress in Robotics
The DRC’s goal was to accelerate progress in robotics so robots more quickly can gain the dexterity and robustness they need to enter areas too dangerous for people and mitigate disaster impacts.
Robot tasks were relevant to disaster response -- driving alone, walking through rubble, tripping circuit breakers, using a tool to cut a hole in a wall, turning valves and climbing stairs.
Each team had two tries at the course, with its best performance and time used as the official score. All three winners had final scores of eight points, so they were ranked from first to third place by least time on the course.
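The tie-breaking rule amounts to a two-key sort: descending points, then ascending time. A minimal sketch of that ordering, where the point totals come from the article but the run times are hypothetical placeholders, since the release does not list them:

```python
# Rank finalists by (points descending, course time ascending).
# Points are from the article; times are illustrative placeholders.
results = [
    ("Tartan Rescue (CHIMP)",       8, 55.0),  # minutes (hypothetical)
    ("Kaist (DRC-Hubo)",            8, 45.0),
    ("IHMC Robotics (Running Man)", 8, 50.0),
]

ranked = sorted(results, key=lambda r: (-r[1], r[2]))
for place, (team, points, minutes) in enumerate(ranked, start=1):
    print(f"{place}. {team}")
```

With all scores tied at eight, the ordering is decided entirely by the time key, matching the finish order reported below.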
DARPA program manager and DRC organizer Gill Pratt congratulated the 23 participating teams and thanked them for helping open a new era of human-robot partnerships.
Loving Robots
The DRC was open to the public, and more than 10,000 people over two days watched from the Fairplex grandstand as each robot ran its course. The venue was formerly known as the Los Angeles County Fairgrounds.
"These robots are big and made of lots of metal, and you might assume people seeing them would be filled with fear and anxiety," Pratt said during a press briefing at the end of day 2.
"But we heard groans of sympathy when those robots fell, and what did people do every time a robot scored a point? They cheered!” he added.
Pratt said this could be one of the biggest lessons from DRC -- “the potential for robots not only to perform technical tasks for us but to help connect people to one another."
South Korean Winning Team
Team Kaist from Daejeon, South Korea, and its robot DRC-Hubo took first place and the $2 million prize. Hubo comes from the words ‘humanoid robot.’
Team Kaist is from the Korea Advanced Institute of Science and Technology, which professor JunHo Oh of the Mechanical Engineering Department called “the MIT of Korea.” Oh led Team Kaist to victory here.
In his remarks at the DARPA press conference, Oh noted that researchers from a university commercial spinoff called Rainbow Co. built the Hubo robot hardware.
The professor said his team’s first-place prize doesn’t make DRC-Hubo the best robot in the world, but he’s happy with the prize, which he said helps demonstrate Korea’s technological capabilities.
Team IHMC Robotics
Coming in second with a $1 million prize is Team IHMC Robotics of Pensacola, Florida -- the Institute of Human and Machine Cognition -- and its robot Running Man.
Jerry Pratt leads a research group at IHMC that works to understand and model human gait and its applications in robotics, human assistive devices and man-machine interfaces.
“Robots are really coming a long way,” Pratt said.
“Are you going to see a lot more of them? It's hard to say when you'll really see humanoid robots in the world,” he added. “But I think this is the century of the humanoid robot. The real question is what decade? And the DRC will make that decade come maybe one decade sooner.”
Team Tartan Rescue
In third place is Team Tartan Rescue of Pittsburgh, winning $500,000. The robot is CHIMP, which stands for CMU highly intelligent mobile platform. Team members are from Carnegie Mellon University and the National Robotics Engineering Center.
Tony Stentz, NREC director, led Team Tartan Rescue, and during the press conference called the challenge “quite an experience.”
That experience was best captured, he said, “with our run yesterday when we had trouble all through the course, all kinds of problems, things we never saw before.”
While that was happening, Stentz said, the team operating the robot from another location kept their cool.
Promise for the Technology
“They figured out what was wrong, they tapped their deep experience in practicing with the machine, they tapped the tools available at their fingertips, and they managed to get CHIMP through the entire course, doing all of the tasks in less than an hour,” he added.
“That says a lot about the technology and it says a lot about the people,” Stentz said, “and I think it means that there's great promise for this technology.”
All the winners said they would put most of the prize money into robotics research and share a portion with their team members.
After the day 2 competition, Arati Prabhakar, DARPA director, said this is the end of the 3-year-long DARPA Robotics Challenge but “the beginning of a future in which robots can work alongside people to reduce the toll of disasters."
Right: Team Kaist’s robot DRC-Hubo uses a tool to cut a hole in a wall during the DARPA Robotics Challenge Finals, June 5-6, 2015, in Pomona, Calif. Team Kaist won the top prize at the competition. DARPA photo
Robots from South Korea, U.S. Win DARPA Finals
By Cheryl Pellerin
DoD News, Defense Media Activity
POMONA, Calif., June 7, 2015 – A robot from South Korea took first prize and two American robots took second and third prizes here yesterday in the two-day robotic challenge finals held by the Defense Advanced Research Projects Agency.
Twenty-three human-robot teams participating in the DARPA Robotics Challenge, or DRC, finals competed for $3.5 million in prizes, working to get through eight tasks in an hour, under their own onboard power and with severely degraded communications between robot and operator.
A dozen U.S. teams and 11 from Japan, Germany, Italy, South Korea and Hong Kong competed in the outdoor competition.
DARPA launched the DRC in response to the nuclear disaster at Fukushima, Japan, in 2011 and the need for help to save lives in the toxic environment there.
Progress in Robotics
The DRC’s goal was to accelerate progress in robotics so robots more quickly can gain the dexterity and robustness they need to enter areas too dangerous for people and mitigate disaster impacts.
Robot tasks were relevant to disaster response -- driving alone, walking through rubble, tripping circuit breakers, using a tool to cut a hole in a wall, turning valves and climbing stairs.
Each team had two tries at the course, with the best performance and time used as the official score. All three winners finished with final scores of eight points, so they were ranked from first to third place by least time on the course.
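The ranking rule described above (most points first, least time as the tiebreaker) amounts to a simple two-key sort. A minimal sketch; the run times below are illustrative placeholders, not the official results:

```python
# Rank DRC finalists: most points first, then least time on course.
# Times are made-up placeholders for illustration only.
teams = [
    ("Tartan Rescue", 8, 55.3),   # (name, points, minutes)
    ("KAIST", 8, 44.5),
    ("IHMC Robotics", 8, 50.4),
]

# Sort descending by points, ascending by elapsed time.
ranking = sorted(teams, key=lambda t: (-t[1], t[2]))

for place, (name, points, minutes) in enumerate(ranking, start=1):
    print(f"{place}. {name}: {points} points, {minutes} min")
```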
DARPA program manager and DRC organizer Gill Pratt congratulated the 23 participating teams and thanked them for helping open a new era of human-robot partnerships.
Loving Robots
The DRC was open to the public, and more than 10,000 people over two days watched from the Fairplex grandstand as each robot ran its course. The venue was formerly known as the Los Angeles County Fairgrounds.
"These robots are big and made of lots of metal, and you might assume people seeing them would be filled with fear and anxiety," Pratt said during a press briefing at the end of day 2.
"But we heard groans of sympathy when those robots fell, and what did people do every time a robot scored a point? They cheered!” he added.
Pratt said this could be one of the biggest lessons from DRC -- “the potential for robots not only to perform technical tasks for us but to help connect people to one another."
South Korean Winning Team
Team Kaist from Daejeon, South Korea, and its robot DRC-Hubo took first place and the $2 million prize. Hubo comes from the words ‘humanoid robot.’
Team Kaist is from the Korea Advanced Institute of Science and Technology, which professor JunHo Oh of the Mechanical Engineering Department called “the MIT of Korea.” Oh led Team Kaist to victory here.
In his remarks at the DARPA press conference, Oh noted that researchers from a university commercial spinoff called Rainbow Co. built the Hubo robot hardware.
The professor said his team’s first-place prize doesn’t make DRC-Hubo the best robot in the world, but he’s happy with the prize, which he said helps demonstrate Korea’s technological capabilities.
Team IHMC Robotics
Coming in second with a $1 million prize is Team IHMC Robotics of Pensacola, Florida -- the Institute of Human and Machine Cognition -- and its robot Running Man.
Jerry Pratt leads a research group at IHMC that works to understand and model human gait and its applications in robotics, human assistive devices and man-machine interfaces.
“Robots are really coming a long way,” Pratt said.
“Are you going to see a lot more of them? It's hard to say when you'll really see humanoid robots in the world,” he added. “But I think this is the century of the humanoid robot. The real question is what decade? And the DRC will make that decade come maybe one decade sooner.”
Team Tartan Rescue
In third place is Team Tartan Rescue of Pittsburgh, winning $500,000. The robot is CHIMP, which stands for CMU Highly Intelligent Mobile Platform. Team members are from Carnegie Mellon University and the National Robotics Engineering Center.
Tony Stentz, NREC director, led Team Tartan Rescue, and during the press conference called the challenge “quite an experience.”
That experience was best captured, he said, “with our run yesterday when we had trouble all through the course, all kinds of problems, things we never saw before.”
While that was happening, Stentz said, the team operating the robot from another location kept their cool.
Promise for the Technology
“They figured out what was wrong, they tapped their deep experience in practicing with the machine, they tapped the tools available at their fingertips, and they managed to get CHIMP through the entire course, doing all of the tasks in less than an hour,” he added.
“That says a lot about the technology and it says a lot about the people,” Stentz said, “and I think it means that there's great promise for this technology.”
All the winners said they would put most of the prize money into robotics research and share a portion with their team members.
After the day 2 competition, Arati Prabhakar, DARPA director, said this is the end of the 3-year-long DARPA Robotics Challenge but “the beginning of a future in which robots can work alongside people to reduce the toll of disasters."
Sunday, April 19, 2015
NSF REPORTS SCIENCE AND ENGINEERING GRAD SCHOOL ENROLLMENT UP
FROM: NATIONAL SCIENCE FOUNDATION
Science and engineering graduate school enrollment increases
Rise largely fueled by influx of foreign students
After remaining essentially flat for the past two years, the number of full-time graduate students enrolled in science and engineering programs rose by 2.4 percent in 2013, to nearly 425,000 students, according to a new InfoBrief from the National Science Foundation's (NSF) National Center for Science and Engineering Statistics (NCSES).
NCSES found the increase was largely due to a 7.9 percent rise in full-time enrollment of foreign graduate students on temporary visas. Foreign enrollment hit an all-time high of 168,297 students in 2013, or 39.6 percent of the full-time science and engineering graduate student population--up from 35.9 percent in 2008.
In contrast, full-time enrollment for U.S. science and engineering graduate students fell for the third year in a row. But while overall enrollment by U.S. citizens and permanent residents declined, the number of U.S. students of Hispanic or Latino descent has climbed steadily since 2008, resulting in 25.8 percent growth.
NCSES found that among U.S. graduate students, enrollment continued to become more diverse. Of the total students enrolled in science and engineering graduate programs:
8.9 percent were Asian and Native Hawaiian or Other Pacific Islanders.
8.6 percent were Hispanic or Latino.
8.1 percent were Black or African American.
2.1 percent reported they were more than one race.
0.6 percent were American Indian or Alaska Native.
Those groups made up 28 percent of total graduate enrollments in science and engineering, including U.S. and foreign students. In 2008, they accounted for less than a quarter of students who were U.S. citizens and permanent residents.
The study also found that a decade-long decline continued in postdocs conducting research in the sciences. Between 2010 and 2013, the number of postdocs in science fields dropped by 2.8 percent, with the largest decreases in the two biggest science fields: biological sciences and physical sciences. Over the same period, the number of postdocs in engineering fields rose by 2 percent, with the largest increases in chemical engineering, biomedical engineering and electrical engineering.
Sunday, March 1, 2015
NSF ON THE 'ENERGY INTERNET'
FROM: NATIONAL SCIENCE FOUNDATION
Creating the energy Internet
How leaders in research, industry and engineering education are working to create the energy network of the future
It only takes a power outage of a few minutes in the middle of a busy workday to drive home the hazards of relying on an energy infrastructure rooted in the Industrial Age. Without the electricity delivered over the nation's power grid, commerce would grind to a halt, communication networks would fail, transportation would stop and cities would go dark.
Simply put, nothing would work.
Plus, blackouts aren't easy to contain. Because the power grid is a vast interconnected network, the failure of one part can have a cascading effect, triggering successive outages down the line.
"The power grid is based on technology from the early 20th century," says Iqbal Husain, a professor of electrical and computer engineering at North Carolina State University. "That needs to change."
Husain is director of the FREEDM Systems Center, a collaboration of leaders in research, industry and engineering education working to envision and then create the energy network of the future. With funding from the National Science Foundation (NSF) leveraged by additional industry support, the Engineering Research Center has sparked the growth of dozens of clean energy businesses in Raleigh's Research Triangle, making the region an epicenter of smart grid development.
"We're trying to create a new electric grid infrastructure that we call the energy Internet," says Alex Huang, an NC State researcher and co-inventor of a newly patented soft-switch single-stage AC-DC converter. "We're looking at the whole distribution system. That's a huge engineering system. It's very, very complex."
According to the U.S. Department of Energy, the smart grid will be more efficient and capable of meeting increased consumer demand without adding infrastructure. It also will be more intelligent, sensing system overloads and rerouting power to prevent or to minimize a potential outage. It will accept energy from virtually any fuel source and--building on NSF-funded research--offer improved security and resiliency in case of a natural disaster or threat. It also will allow real-time communication between the consumer and utility, ushering in a new era of consumer choice.
Energy innovation
From its headquarters on NC State's Centennial Campus, FREEDM (short for Future Renewable Electric Energy Delivery and Management) is coming at the challenge on many fronts, from the creation of new devices that will allow energy to flow in more than one direction to the development of the software architecture that will give the smart grid its brainpower.
The facility boasts a 1-megawatt demonstration hub and real-time digital simulation lab, as well as labs specializing in computer science, power electronics, energy storage and motor drive technology. Under the FREEDM umbrella, researchers and students are tackling more than a dozen research projects in partnership with colleagues at Arizona State University, Florida State University, Florida A&M University and Missouri University of Science and Technology.
That's just this year. In seven years, the center has launched dozens of projects in fields ranging from systems theory to intelligent energy management.
The result is one innovation after another. Researchers have developed a technique that allows a common electronic component to handle voltages almost seven times higher than existing components; created an ultra-fast fault detection, isolation and restoration system; and invented a new solid-state transformer to replace the 100-year-old electromagnetic transformer.
These innovations hold promise for making the power grid more resilient, fostering sustainable energy technologies that play an important role in the nation's energy infrastructure, and driving economic growth.
Startups spawn new technologies
For example, the startup company General Capacitor is focused on developing energy storage products based on the "ultracapacitor" discoveries made by Jim Zheng, a professor at Florida A&M University and Florida State University who serves on FREEDM's leadership team.
Zheng's ultracapacitors open the door to a new generation of energy storage technologies that can be used to help stabilize the flow of energy from renewable sources--such as solar power--into the grid. This would have the effect of making renewable sources more viable, while also making the grid itself more resilient.
For the future power grid, incorporating these new technologies will be like plugging in a lamp. The smart grid will be able to collect and process thousands or even millions of bits of data and intelligently manage the flow of power across the network, ideally doing most of its work at the edge of the grid, close to the customer. This kind of system--called distributed generation--is potentially more efficient and environmentally sustainable than the existing system.
A mile from the NC State campus in Raleigh, a startup company called GridBridge is working to commercialize FREEDM technology in the form of a smart grid router that can integrate renewables and energy storage devices, including electric vehicles, into the grid. GridBridge was funded by the NSF Small Business Innovation Research program.
"We don't expect the utility companies to rip out their existing infrastructure," says CEO Chad Eckhardt. "But they need products that can help the infrastructure operate better and more efficiently."
Another FREEDM partner, energy giant ABB, is working to perfect the technology behind microgrids, which could significantly enhance grid security and reliability.
A microgrid essentially simulates the operations of the larger grid but, as the name suggests, provides power on a smaller scale, serving a town, military base or university, for example. Microgrids can operate independently of the main grid or run parallel to it. ABB's microgrid is designed to seamlessly integrate renewables, with their fluctuating energy profiles, and output reliable power. If the main grid goes down, its microgrid system isolates itself from the larger grid and continues to provide power to its customers. When the larger grid comes back online, the connection is re-established.
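The islanding behavior described above can be sketched as a small state machine: disconnect when the main grid fails, serve local load from on-site generation, and reconnect when the main grid returns. This is a schematic toy model, not ABB's actual control logic:

```python
class Microgrid:
    """Toy model of microgrid islanding: isolate from a failed main
    grid, keep serving customers from local generation, and
    re-establish the connection when the main grid comes back."""

    def __init__(self):
        self.islanded = False

    def on_grid_status(self, main_grid_up: bool) -> str:
        if not main_grid_up and not self.islanded:
            self.islanded = True          # isolate from the failed grid
            return "islanded: serving load from local generation"
        if main_grid_up and self.islanded:
            self.islanded = False         # re-synchronize and reconnect
            return "reconnected to main grid"
        return "no change"


mg = Microgrid()
print(mg.on_grid_status(main_grid_up=False))  # outage -> island
print(mg.on_grid_status(main_grid_up=True))   # grid restored -> reconnect
```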
"Anything that produces power could potentially be a microgrid," says Brad Luyster, vice president and general manager of ABB's Microgrid Regional Execution Center. "If the power goes off from the main grid, the microgrid has its own generation on site."
The global marketplace
GridBridge and ABB aren't the only companies in the region eyeing the opportunities for energy innovation.
A recent study identified 169 firms within the 13-county Triangle region, including 16 Fortune 500 companies, working to develop sustainable solutions to the world's energy needs. The sector, called cleantech by the industry, spans every county in the region.
Lee Anne Nance, executive director of the Research Triangle Regional Partnership, spearheads a collaborative network called the Research Triangle Cleantech Cluster that promotes the region's competitive edge in the global marketplace. Its members include some of the industry's biggest players, including Duke Energy, Siemens Energy, ABB Inc. and Schneider Electric, as well as major high-tech companies such as SAS, Cisco, Power Analytics, Sensus, Power Secure, RTI International and Field2Base.
Combined, they pack a powerful punch, employing thousands of high-skill workers and driving innovation in energy management, water, transportation, data analytics, information technology, renewable energy, electronics and engineering.
"This is a disruptive and transformational time in infrastructure delivery throughout the world, and our region is leading the way," Nance says. "We're right in the middle of the action and that's good for the economy, the people who work here and the people who live here."
-- David Hunt, North Carolina State University
Investigators
Jim Zheng
Alex Huang
Gerald Heydt
Iqbal Husain
Mariesa Crow
Steinar Dale
Chad Eckhardt
Christopher Edrington
Related Institutions/Organizations
GridBridge, Inc
North Carolina State University
Monday, February 9, 2015
AI AND SAFE SELF-DRIVING CARS
FROM: NATIONAL SCIENCE FOUNDATION
Programming safety into self-driving cars
UMass researchers improve artificial intelligence algorithms for semi-autonomous vehicles
February 2, 2015
For decades, researchers in artificial intelligence, or AI, worked on specialized problems, developing theoretical concepts and workable algorithms for various aspects of the field. Computer vision, planning and reasoning experts all struggled independently in areas that many thought would be easy to solve, but which proved incredibly difficult.
However, in recent years, as the individual aspects of artificial intelligence matured, researchers began bringing the pieces together, leading to amazing displays of high-level intelligence: from IBM's Watson to the recent poker-playing champion to the ability of AI to recognize cats on the internet.
These advances were on display this week at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas, where interdisciplinary and applied research were prevalent, according to Shlomo Zilberstein, the conference committee chair and co-author on three papers at the conference.
Zilberstein studies the way artificial agents plan their future actions, particularly when working semi-autonomously--that is to say in conjunction with people or other devices.
Examples of semi-autonomous systems include co-robots working with humans in manufacturing, search-and-rescue robots that can be managed by humans working remotely and "driverless" cars. It is the latter topic that has particularly piqued Zilberstein's interest in recent years.
The marketing campaigns of leading auto manufacturers have presented a vision of the future where the passenger (formerly known as the driver) can check his or her email, chat with friends or even sleep while shuttling between home and the office. Some prototype vehicles included seats that swivel back to create an interior living room, or as in the case of Google's driverless car, a design with no steering wheel or brakes.
Except in rare cases, it's not clear to Zilberstein that this vision for the vehicles of the near future is a realistic one.
"In many areas, there are lots of barriers to full autonomy," Zilberstein said. "These barriers are not only technological, but also relate to legal and ethical issues and economic concerns."
In his talk at the "Blue Sky" session at AAAI, Zilberstein argued that in many areas, including driving, we will go through a long period where humans act as co-pilots or supervisors, passing off responsibility to the vehicle when possible and taking the wheel when the driving gets tricky, before the technology reaches full autonomy (if it ever does).
In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop.
"People are unpredictable. What happens if the person is not doing what they're asked or expected to do, and the car is moving at sixty miles per hour?" Zilberstein asked. "This requires 'fault-tolerant planning.' It's the kind of planning that can handle a certain number of deviations or errors by the person who is asked to execute the plan."
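Fault-tolerant execution of this kind is often built around a monitor-and-recover loop: check the expected outcome after each step, count deviations by the human operator, and switch to a safe fallback (here, pulling over) once a tolerance is exceeded. A hypothetical sketch of the idea, not Zilberstein's actual algorithm:

```python
def execute_plan(steps, observe, max_faults=2, fallback="pull_over_and_stop"):
    """Execute plan steps, tolerating up to max_faults deviations by
    the human before switching to a safe fallback action.

    steps:   list of (action, expected_outcome) pairs
    observe: callable returning the actual outcome of an action
    """
    faults = 0
    executed = []
    for action, expected in steps:
        executed.append(action)
        if observe(action) != expected:
            faults += 1                    # human deviated from the plan
            if faults > max_faults:
                executed.append(fallback)  # safely move aside and stop
                return executed
    return executed


# Hypothetical scenario: the driver ignores two handover requests.
outcomes = {"drive": "ok", "request_handover": "ignored", "warn_driver": "ignored"}
plan = [("drive", "ok"), ("request_handover", "ack"),
        ("warn_driver", "ack"), ("drive", "ok")]
print(execute_plan(plan, observe=lambda a: outcomes.get(a, "ok"), max_faults=1))
```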
With support from the National Science Foundation (NSF), Zilberstein has been exploring these and other practical questions related to the possibility of artificial agents that act among us.
Zilberstein, a professor of computer science at the University of Massachusetts Amherst, works with human studies experts from academia and industry to help uncover the subtle elements of human behavior that one would need to take into account when preparing a robot to work semi-autonomously. He then translates those ideas into computer programs that let a robot or autonomous vehicle plan its actions--and create a plan B in case of an emergency.
There are a lot of subtle cues that go into safe driving. Take for example a four-way stop. Officially, the first car to the crosswalk goes first, but in actuality, people watch each other to see if and when to make their move.
"There is a slight negotiation going on without talking," Zilberstein explained. "It's communicating by your action such as eye contact, the wave of a hand, or the slight revving of an engine."
In trials, autonomous vehicles often sit paralyzed at such stops, unable to safely read the cues of the other drivers on the road. This "undecidedness" is a big problem for robots. A recent paper by Alan Winfield of Bristol Robotics Laboratory in the UK showed how robots, when faced with a difficult decision, will often process for such a long period of time as to miss the opportunity to act. Zilberstein's systems are designed to remedy this problem.
"With some careful separation of objectives, planning algorithms could address one of the key problems of maintaining 'live state', even when goal reachability relies on timely human interventions," he concluded.
The ability to tailor one's trip based on human-centered factors--like how attentive the driver can be or the driver's desire to avoid highways--is another aspect of semi-autonomous driving that Zilberstein is exploring.
In a paper with Kyle Wray from the University of Massachusetts Amherst and Abdel-Illah Mouaddib from the University of Caen in France, Zilberstein introduced a new model and planning algorithm that allows semi-autonomous systems to make sequential decisions in situations that involve multiple objectives--for example, balancing safety and speed.
Their experiment focused on a semi-autonomous driving scenario where the decision to transfer control depended on the driver's level of fatigue. They showed that using their new algorithm a vehicle was able to favor roads where the vehicle can drive autonomously when the driver is fatigued, thus maximizing driver safety.
"In real life, people often try to optimize several competing objectives," Zilberstein said. "This planning algorithm can do that very quickly when the objectives are prioritized. For example, the highest priority may be to minimize driving time and a lower priority objective may be to minimize driving effort. Ultimately, we want to learn how to balance such competing objectives for each driver based on observed driving patterns."
It's an exciting time for artificial intelligence. The fruits of many decades of labor are finally being deployed in real systems and machine learning is being adopted widely and for different purposes than anyone had ever realized.
"We are beginning to see these kinds of remarkable successes that integrate decades-long research efforts in a variety of AI topics," said Héctor Muñoz-Avila, program director in NSF's Robust Intelligence cluster.
Indeed, over many decades, NSF's Robust Intelligence program has supported foundational research in artificial intelligence that, according to Zilberstein, has given rise to the amazing smart systems that are beginning to transform our world. But the agency has also supported researchers like Zilberstein who ask tough questions about emerging technologies.
"When we talk about autonomy, there are legal issues, technological issues and a lot of open questions," he said. "Personally, I think that NSF has been able to identify these as important questions and has been willing to put money into them. And this gives the U.S. a big advantage."
-- Aaron Dubrow, NSF
Programming safety into self-driving cars
UMass researchers improve artificial intelligence algorithms for semi-autonomous vehicles
February 2, 2015
For decades, researchers in artificial intelligence, or AI, worked on specialized problems, developing theoretical concepts and workable algorithms for various aspects of the field. Computer vision, planning and reasoning experts all struggled independently in areas that many thought would be easy to solve, but which proved incredibly difficult.
However, in recent years, as the individual aspects of artificial intelligence matured, researchers began bringing the pieces together, leading to striking displays of high-level intelligence: from IBM's Watson to the recent poker-playing champion to AI systems that can recognize cats on the internet.
These advances were on display this week at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas, where interdisciplinary and applied research were prevalent, according to Shlomo Zilberstein, the conference committee chair and co-author on three papers at the conference.
Zilberstein studies the way artificial agents plan their future actions, particularly when working semi-autonomously--that is to say in conjunction with people or other devices.
Examples of semi-autonomous systems include co-robots working with humans in manufacturing, search-and-rescue robots that can be managed by humans working remotely and "driverless" cars. It is the latter topic that has particularly piqued Zilberstein's interest in recent years.
The marketing campaigns of leading auto manufacturers have presented a vision of the future in which the passenger (formerly known as the driver) can check email, chat with friends or even sleep while shuttling between home and the office. Some prototype vehicles include seats that swivel back to create an interior living room or, as in the case of Google's driverless car, a design with no steering wheel or brakes.
Except in rare cases, it's not clear to Zilberstein that this vision for the vehicles of the near future is a realistic one.
"In many areas, there are lots of barriers to full autonomy," Zilberstein said. "These barriers are not only technological, but also relate to legal and ethical issues and economic concerns."
In his talk at the "Blue Sky" session at AAAI, Zilberstein argued that in many areas, including driving, we will go through a long period where humans act as co-pilots or supervisors, passing off responsibility to the vehicle when possible and taking the wheel when the driving gets tricky, before the technology reaches full autonomy (if it ever does).
In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop.
"People are unpredictable. What happens if the person is not doing what they're asked or expected to do, and the car is moving at sixty miles per hour?" Zilberstein asked. "This requires 'fault-tolerant planning.' It's the kind of planning that can handle a certain number of deviations or errors by the person who is asked to execute the plan."
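The "fault-tolerant planning" Zilberstein describes can be illustrated with a toy check. This is a hypothetical sketch, not his actual algorithm: a plan is accepted only if it stays recoverable even when the driver ignores some bounded number of takeover requests.

```python
def is_fault_tolerant(plan, max_ignored):
    """A plan is a list of steps; each step says whether it asks the
    driver to take over and whether the car could instead pull over and
    stop safely on its own. The plan is fault tolerant if the car stays
    safe even when the driver ignores up to `max_ignored` requests."""
    ignored = 0
    for step in plan:
        if step['needs_human']:
            ignored += 1                  # worst case: driver does not respond
            if ignored > max_ignored:
                return False              # too many deviations to absorb
            if not step['can_pull_over']:
                return False              # no autonomous safe stop here
    return True

plan = [
    {'needs_human': False, 'can_pull_over': True},   # highway, autonomous
    {'needs_human': True,  'can_pull_over': True},   # exit ramp: asks driver
    {'needs_human': True,  'can_pull_over': False},  # busy junction
]
print(is_fault_tolerant(plan, max_ignored=1))
```

Here the plan is rejected: if the driver ignores both requests, the second failure leaves the car somewhere it cannot safely stop on its own.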
With support from the National Science Foundation (NSF), Zilberstein has been exploring these and other practical questions related to the possibility of artificial agents that act among us.
Zilberstein, a professor of computer science at the University of Massachusetts Amherst, works with human studies experts from academia and industry to help uncover the subtle elements of human behavior that one would need to take into account when preparing a robot to work semi-autonomously. He then translates those ideas into computer programs that let a robot or autonomous vehicle plan its actions--and create a plan B in case of an emergency.
There are a lot of subtle cues that go into safe driving. Take for example a four-way stop. Officially, the first car to the crosswalk goes first, but in actuality, people watch each other to see if and when to make their move.
"There is a slight negotiation going on without talking," Zilberstein explained. "It's communicating by your action such as eye contact, the wave of a hand, or the slight revving of an engine."
In trials, autonomous vehicles often sit paralyzed at such stops, unable to safely read the cues of the other drivers on the road. This "undecidedness" is a big problem for robots. A recent paper by Alan Winfield of Bristol Robotics Laboratory in the UK showed how robots, when faced with a difficult decision, often deliberate for so long that they miss the opportunity to act. Zilberstein's systems are designed to remedy this problem.
"With some careful separation of objectives, planning algorithms could address one of the key problems of maintaining 'live state', even when goal reachability relies on timely human interventions," he concluded.
The ability to tailor one's trip based on human-centered factors--like how attentive the driver can be or the driver's desire to avoid highways--is another aspect of semi-autonomous driving that Zilberstein is exploring.
In a paper with Kyle Wray from the University of Massachusetts Amherst and Abdel-Illah Mouaddib from the University of Caen in France, Zilberstein introduced a new model and planning algorithm that allows semi-autonomous systems to make sequential decisions in situations that involve multiple objectives--for example, balancing safety and speed.
Their experiment focused on a semi-autonomous driving scenario in which the decision to transfer control depended on the driver's level of fatigue. They showed that, using their new algorithm, a vehicle was able to favor roads where it can drive autonomously when the driver is fatigued, thus maximizing driver safety.
"In real life, people often try to optimize several competing objectives," Zilberstein said. "This planning algorithm can do that very quickly when the objectives are prioritized. For example, the highest priority may be to minimize driving time and a lower priority objective may be to minimize driving effort. Ultimately, we want to learn how to balance such competing objectives for each driver based on observed driving patterns."
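A minimal sketch of that prioritized ("lexicographic") decision rule, using assumed numbers rather than the paper's actual model: keep only the options within a slack of the best on the top-priority objective, then break ties with the next objective.

```python
def lexicographic_choice(options, slack):
    """options: name -> tuple of costs, ordered by priority (lower is
    better). slack[i] is how much of objective i we may sacrifice to
    improve lower-priority objectives (0 for the last objective)."""
    candidates = list(options)
    for i, eta in enumerate(slack):
        best = min(options[n][i] for n in candidates)
        # keep every option within `eta` of the best on objective i
        candidates = [n for n in candidates if options[n][i] <= best + eta]
    return candidates[0]

# (drive time in minutes, driver effort) -- illustrative numbers
routes = {
    'back_roads_manual':  (28, 8),   # fastest, but hands-on the whole way
    'mixed':              (29, 4),
    'highway_autonomous': (30, 1),   # car mostly drives itself
}
alert    = lexicographic_choice(routes, slack=(0, 0))  # time strictly first
fatigued = lexicographic_choice(routes, slack=(2, 0))  # trade 2 min for rest
```

With a rested driver the time-optimal manual route wins; allowing two minutes of slack lets the planner favor the road where the car can drive itself, mirroring the fatigue result described above.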
It's an exciting time for artificial intelligence. The fruits of many decades of labor are finally being deployed in real systems, and machine learning is being adopted widely, for purposes few had anticipated.
"We are beginning to see these kinds of remarkable successes that integrate decades-long research efforts in a variety of AI topics," said Héctor Muñoz-Avila, program director in NSF's Robust Intelligence cluster.
Indeed, over many decades, NSF's Robust Intelligence program has supported foundational research in artificial intelligence that, according to Zilberstein, has given rise to the amazing smart systems that are beginning to transform our world. But the agency has also supported researchers like Zilberstein who ask tough questions about emerging technologies.
"When we talk about autonomy, there are legal issues, technological issues and a lot of open questions," he said. "Personally, I think that NSF has been able to identify these as important questions and has been willing to put money into them. And this gives the U.S. a big advantage."
-- Aaron Dubrow, NSF
Thursday, December 4, 2014
NASA PLANS MARS MISSION IN THE 2030S
FROM: NASA
NASA is developing the capabilities needed to send humans to an asteroid by 2025 and to Mars in the 2030s, goals outlined in the bipartisan NASA Authorization Act of 2010 and in the U.S. National Space Policy, also issued in 2010.
Mars is a rich destination for scientific discovery and for robotic and human exploration as we expand our presence into the solar system. Its formation and evolution are comparable to Earth's, helping us learn more about our own planet's history and future. Mars had conditions suitable for life in its past, and future exploration could uncover evidence of life, answering one of the fundamental mysteries of the cosmos: Does life exist beyond Earth?
While robotic explorers have studied Mars for more than 40 years, NASA's path for the human exploration of Mars begins in low-Earth orbit aboard the International Space Station. Astronauts on the orbiting laboratory are helping us prove many of the technologies and communications systems needed for human missions to deep space, including Mars. The space station also advances our understanding of how the body changes in space and how to protect astronaut health.
Our next step is deep space, where NASA will send a robotic mission to capture and redirect an asteroid to orbit the moon. Astronauts aboard the Orion spacecraft will explore the asteroid in the 2020s, returning to Earth with samples. This experience in human spaceflight beyond low-Earth orbit will help NASA test new systems and capabilities, such as Solar Electric Propulsion, which we'll need to send cargo as part of human missions to Mars.
Beginning in FY 2018, NASA's powerful Space Launch System rocket will enable these "proving ground" missions to test new capabilities. Human missions to Mars will rely on Orion and an evolved version of SLS that will be the most powerful launch vehicle ever flown.
A fleet of robotic spacecraft and rovers is already on and around Mars, dramatically increasing our knowledge about the Red Planet and paving the way for future human explorers. The Mars Science Laboratory Curiosity rover measured radiation on the way to Mars and is sending back radiation data from the surface. These data will help us plan how to protect the astronauts who will explore Mars. Future missions, like the Mars 2020 rover seeking signs of past life, also will demonstrate new technologies that could help astronauts survive on Mars.
Engineers and scientists around the country are working hard to develop the technologies astronauts will use to one day live and work on Mars, and safely return home from the next giant leap for humanity. NASA also is a leader in the Global Exploration Roadmap, working with international partners and the U.S. commercial space industry on a coordinated expansion of human presence into the solar system, with human missions to the surface of Mars as the driving goal.
[Image: NASA's Orion Flight Test and the Journey to Mars. Credit: NASA]
Tuesday, July 29, 2014
NSF REPORTS ON TELE-ROBOTICS
FROM: NATIONAL SCIENCE FOUNDATION
Tele-robotics puts robot power at your fingertips
University of Washington research enables robot-assisted surgery and underwater spill prevention
The Smart America Expo brought together leaders from academia, industry and government and demonstrated the ways that smarter cyber-physical systems (CPS)--sometimes called the Internet of Things--can lead to improvements in health care, transportation, energy, emergency response and other critical areas.
This week and next, we'll feature examples of National Science Foundation (NSF)-supported research from the Smart America Expo. Today: tele-robotics technology that puts robot power at your fingertips. (See Part 1 of the series.)
In the aftermath of an earthquake, every second counts. The teams behind the Smart Emergency Response System (SERS) are developing technology to locate people quickly and help first responders save more lives. The SERS demonstrations at the Smart America Expo incorporated several NSF-supported research projects.
Howard Chizeck, a professor of electrical engineering at the University of Washington, showed a system he's helped develop where one can log in to a Wi-Fi network in order to tele-operate a robot working in a dangerous environment.
"We're looking to give a sense of touch to tele-robotic operators, so you can actually feel what the robot end-effector is doing," Chizeck said. "Maybe you're in an environment that's too dangerous for people. It's too hot, too radioactive, too toxic, too far away, too small, too big, then a robot can let you extend the reach of a human."
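The "sense of touch" Chizeck describes is, in textbook form, bilateral teleoperation: the remote arm tracks the operator's hand, and the coupling force is reflected back to the haptic device. The following is a generic spring-damper sketch of that idea, not the actual BluHaptics controller; all gains are illustrative.

```python
def teleop_step(x_master, x_slave, v_slave, k=50.0, b=10.0, m=1.0, dt=0.01):
    """One control tick of position-error bilateral teleoperation."""
    f = k * (x_master - x_slave) - b * v_slave  # spring-damper coupling
    v_slave += (f / m) * dt                     # remote arm follows the hand
    x_slave += v_slave * dt
    return x_slave, v_slave, -f                 # reaction force at the stylus

# the operator moves the stylus to 1.0; the remote arm converges on it,
# and anything resisting the arm would show up as force at the stylus
x, v = 0.0, 0.0
for _ in range(300):
    x, v, f_back = teleop_step(1.0, x, v)
```

If the slave arm hits an obstacle and stops, the position error grows and the reflected force grows with it, which is what lets the operator "feel" the remote end-effector.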
The device is being used to allow surgeons to perform remote surgeries from thousands of miles away. And through a start-up called BluHaptics--founded by Chizeck and Fredrik Ryden and supported by a Small Business Innovation Research grant from NSF--researchers are adapting the technology to allow a robot to work underwater and turn off a valve at the base of an off-shore oil rig to prevent a major spill.
"We're trying to develop tele-robotics for a wide range of opportunities," Chizeck said. "This is potentially a new industry, people operating in dangerous environments from a long distance."
-- Aaron Dubrow, NSF
Investigators
Fredrik Ryden
Howard Chizeck
Blake Hannaford
Tadayoshi Kohno
Related Institutions/Organizations
BluHaptics Inc
University of Washington
Sunday, July 27, 2014
BEETLE INSPIRES NEW MATERIALS DEVELOPED TO TRAP AND CHANNEL SMALL AMOUNTS OF FLUIDS
FROM: NATIONAL SCIENCE FOUNDATION
Quenching the world's water and energy crises, one tiny droplet at a time
In pursuit of beetle biomimicry, NSF-funded engineers develop new, textured materials to trap and channel small amounts of liquid
In the Namib Desert of Africa, the fog-filled morning wind carries the drinking water for a beetle called the Stenocara.
Tiny droplets collect on the beetle's bumpy back. The areas between the bumps are covered in a waxy substance that makes them water-repellant, or hydrophobic (water-fearing). Water accumulates on the water-loving, or hydrophilic, bumps, forming droplets that eventually grow too big to stay put, then roll down the waxy surface.
The beetle slakes its thirst by tilting its back end up and sipping from the accumulated droplets that fall into its mouth. Incredibly, the beetle gathers enough water through this method to drink 12 percent of its body weight each day.
More than a decade ago, news of this creature's efficient water collection system inspired engineers to try to reproduce these surfaces in the lab.
Small-scale advances in fluid physics, materials engineering and nanoscience since that time have brought them close to succeeding.
These tiny developments, however, could lead to macro-scale changes. Understanding how liquids interact with different materials can lead to more efficient, inexpensive processes and products--perhaps even airplane wings impervious to ice and self-cleaning windows.
Beetle bumps in the lab
Using various methods to create intricately patterned surfaces, engineers can make materials that closely mimic the beetle's back.
"Ten years ago no one had the ability to pattern surfaces like this at the nanoscale," says Sumanta Acharya, a National Science Foundation (NSF) program director. "We observed naturally hydrophobic surfaces like the lotus leaf for decades. But even if we understood it, what could we do about it?"
What researchers have done is create surfaces that so excel at repelling or attracting water they've added a "super" at the front of their description: superhydrophobic or superhydrophilic.
Many superhydrophobic surfaces created by chemical coatings are already in the marketplace (water-repellant shoes! shirts! iPhones!).
However, many researchers focus on materials with physical elements that make them superhydrophobic.
These materials have micro- or nano-sized pillars, poles or other structures that alter the angles at which water droplets contact their surface. These contact angles determine whether a water droplet beads up like a teeny crystal ball or relaxes a bit and rests on the surface like a spilled milkshake.
By varying the layout of these surfaces, researchers can now trap, direct and repel small amounts of water for a variety of new purposes.
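The contact-angle behavior described above is usually quantified with the classical Wenzel and Cassie-Baxter relations. The equations are standard wetting theory; the roughness and solid-fraction numbers below are only illustrative.

```python
import math

def wenzel(theta_deg, r):
    """Wenzel state: liquid fills the texture. cos(theta*) = r cos(theta),
    where r >= 1 is the ratio of true to projected surface area."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta_deg, f):
    """Cassie-Baxter state: the droplet rests on pillar tops and trapped
    air. cos(theta*) = f (cos(theta) + 1) - 1, f = wetted solid fraction."""
    c = f * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

# a waxy surface at 110 degrees crosses the ~150-degree superhydrophobic
# threshold once pillars leave only 10 percent of the solid in contact
smooth = 110.0
print(round(wenzel(smooth, 2.0)), round(cassie_baxter(smooth, 0.10)))
```

The same texture amplifies whichever tendency the flat material already has, which is why patterning can push a merely hydrophobic surface into "super" territory.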
"We can now do things with fluids we only imagined before," says mechanical engineer Constantine Megaridis at the University of Illinois at Chicago. Megaridis and his team have two NSF grants from the Engineering Directorate's Division of Chemical, Bioengineering, Environmental and Transport Systems.
"The developments have enabled us to create devices -- devices with the potential to help humanity -- that do things much better than have ever been done before," he says.
Megaridis has used his beetle-inspired designs to put precise, textured patterns on inexpensive materials, making microfluidic circuits.
Plastic strips with superhydrophilic centers and superhydrophobic surroundings that combine or separate fluids have the potential to serve as platforms for diagnostic tests (watch "The ride of the water droplets").
"Imagine you want to bring drops of blood or water or any liquid to a certain location," Megaridis explains. "Just like a highway, the road is the strip for the liquid to travel down, and it ends up collecting in a fluid storage tank on the surface." The storage tank could hold a reactive agent. Medical personnel could use the disposable strips to field-test water samples for E. coli, for example.
Devices such as these -- created in engineering labs -- are now working their way to the marketplace.
Water, water in the air
NBD Nanotechnologies, a Boston-based company funded by NSF's Small Business Technology Transfer program, aims to scale up the durability and functionality of surface coatings for industrial use.
One of the most promising applications of hydrophobic and superhydrophobic research is improved condensation efficiency. When water vapor condenses to a liquid, it typically forms a film. That film is a barrier between the vapor and the surface, making it more difficult for other droplets to form. If that film can be prevented by whisking away droplets immediately after they condense--say, with a superhydrophobic surface--the rate of condensation increases.
Condensers are everywhere. They're in your refrigerator, car and air conditioner. More efficient condensation would let all this equipment function with less energy. Better efficiency is especially important in places where large-scale cooling is paramount, such as power plants.
"NBD makes more durable coatings that span large surface areas," says NBD Nanotechnologies senior scientist Sara Beaini. "Durability is an important factor, because when you're working on the micro level you depend on having a pristine surface structure. Any mechanical or chemical abrasion that distorts the surface structures can significantly reduce or eliminate the advantageous surface properties quickly."
NBD, which you might have guessed stands for Namib Beetle Design, has partnered with Megaridis and others to improve durability, the main challenge in commercializing superhydrophobic research. Power plant condensers with durable hydrophobic or superhydrophobic coatings could be more efficient. And with water and energy shortages looming, partnerships such as theirs that help to transfer this breakthrough from the lab to the outside world are increasingly valuable.
Other groups have applied hydrophobic patterning methods in clever ways.
Kripa Varanasi, mechanical engineer at MIT and NSF CAREER awardee, has applied superhydrophobic coatings to metal, ceramics and glass, including the insides of ketchup bottles. Julie Crockett and Daniel Maynes at Brigham Young University developed extreme waterproofing by etching microscopic ridges or posts onto CD-sized wafers.
With all these cross-country efforts, many are optimistic for a future where people in dry areas can harvest fresh water from a morning wind, and lower their energy needs dramatically.
"If someone comes up with a really cheap solution, then applications are waiting," said Rajesh Mehta, NSF Small Business Innovation Research/Small Business Technology Transfer program director.
-- Sarah Bates
Investigators
Constantine Megaridis
Sara Beaini
Julie Crockett
Kripa Varanasi
Brent Webb
R Daniel Maynes
Related Institutions/Organizations
University of Illinois at Chicago
Iowa State University
Brigham Young University
NBD Nanotechnologies, Inc.
Massachusetts Institute of Technology
Thursday, July 3, 2014
NSF ON WALKING FOR ENERGY
FROM: NATIONAL SCIENCE FOUNDATION
Walking can recharge the spirit, but what about our phones?
Device captures energy from walking to recharge wireless gadgets
Smartphones, tablets, e-readers, not to mention wearable health and fitness trackers, smart glasses and navigation devices--today's population is more plugged in than ever before.
But our reliance on devices is not problem-free:
Wireless gadgets require regular recharging. While we may think we've cut the cord, we remain reliant on outlets and charging stations to keep our devices up and running.
According to a 2009 report by the International Energy Agency (IEA), consumer electronics and information and communication technologies currently account for nearly 15 percent of global residential electricity consumption. What's more, the IEA expects energy consumption by these devices to double by 2022 and to triple by 2030--thereby slowly but surely adding to the burden on our power infrastructure.
With support from the National Science Foundation, a team of researchers at the Georgia Institute of Technology may have a solution to both problems: They're developing a new, portable, clean energy source that could change the way we power mobile electronics: human motion.
Led by material scientist Zhong Lin Wang, the team has created a backpack that captures mechanical energy from the natural vibration of human walking and converts it into electrical energy. This technology could revolutionize the way we charge small electronic devices, and thereby reduce the burden of these devices on non-renewable power sources and untether users from fixed charging stations.
Smaller, lighter, more energy efficient
Wearable generators that convert energy from the body's mechanical potential into electricity are not new, but traditional technologies rely on bulky or fragile materials. By contrast, Wang's backpack contains a device made from thin, lightweight plastic sheets, interlocked in a rhombic grid. (Think of the collapsible cardboard containers that separate a six pack of fancy soda bottles.)
As the wearer walks, the rhythmic movement that occurs as his/her weight shifts from side to side causes the inside surfaces of the plastic sheets to touch and then separate, touch and then separate. The periodic contact and separation drives electrons back and forth, producing an alternating electric current. This process, known as the triboelectrification effect, also underlies static electricity, a phenomenon familiar to anyone who has ever pulled a freshly laundered fleece jacket over his or her head in January.
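The contact-and-separation cycle can be sketched with a toy model (purely illustrative, not a physical TENG simulation): the transferred charge rises and falls with the gait, so the current through an external load, i = dQ/dt, reverses direction every half cycle.

```python
import math

STEP_HZ = 2.0    # assumed walking cadence: two contacts per second
Q_MAX = 1e-7     # assumed peak transferred charge, in coulombs
DT = 1e-3

def charge(t):
    """Transferred charge tracks the periodic contact of the sheets."""
    return Q_MAX * 0.5 * (1.0 - math.cos(2.0 * math.pi * STEP_HZ * t))

times = [i * DT for i in range(1000)]                         # 1 s of gait
current = [(charge(t + DT) - charge(t)) / DT for t in times]  # i = dQ/dt
sign_flips = sum(1 for a, b in zip(current, current[1:]) if a * b < 0)
# the current alternates sign -- an AC output, as the article describes
```

Real TENG output depends on geometry, materials and load, but the alternating character comes directly from this periodic contact and separation.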
But the key to Wang's technology is the addition of highly charged nanomaterials that maximize the contact between the two surfaces, pumping up the energy output of what Wang calls the triboelectric nanogenerator (TENG).
"The TENG is as efficient as the best electromagnetic generator, and is lighter and smaller than any other electric generators for mechanical energy conversion," says Wang. "The efficiency will only improve with the invention of new advanced materials."
Charging on the go
In the laboratory, Wang's team showed that natural human walking with a load of 2 kilograms, about the weight of a 2-liter bottle of soda, generated enough power to simultaneously light more than 40 commercial LEDs (which are the most efficient lights available).
Wang says that the maximum power output depends on the density of the surface electrostatic charge, but that the backpack will likely be able to generate between 2 and 5 watts of power as the wearer walks--enough to charge a cell phone or other small electronic device.
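A back-of-envelope check on that 2-to-5-watt range (the ~6 Wh battery capacity is an assumption typical of smartphones of the era, not a figure from the article):

```python
BATTERY_WH = 6.0   # assumed smartphone battery capacity, watt-hours

for watts in (2.0, 5.0):
    hours = BATTERY_WH / watts      # ignores charging/conversion losses
    print(f"at {watts:.0f} W: full charge in about {hours:.1f} h of walking")
```

So a hike of one to three hours would plausibly top up a phone, consistent with the uses suggested below.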
The researchers anticipate that this will be welcome news to outdoor enthusiasts, field engineers, military personnel and emergency responders who work in remote areas.
As far as Wang and his colleagues are concerned, however, human motion is only one potential source of clean and renewable energy. In 2013, the team demonstrated that it was possible to use TENGs to extract energy from ocean waves.
The research report, "Harvesting Energy from the Natural Vibration of Human Walking", was published in the journal ACS Nano on November 1, 2013.
-- Valerie Thompson, AAAS Science and Technology Policy Fellow
Investigators
Zhong Wang
Related Institutions/Organizations
Georgia Tech Research Corporation
Sunday, June 29, 2014
NSF-FUNDED SUPERCOMPUTER DOES WHAT LAB EXPERIMENTS CAN'T
FROM: NATIONAL SCIENCE FOUNDATION
A high-performance first year for Stampede
NSF-funded supercomputer enables discoveries throughout science and engineering
You can't recreate an exploding star, manipulate quarks or forecast the climate in a lab. In cases like these, scientists rely on supercomputing simulations to capture the physical reality of these phenomena--minus the extraordinary cost, dangerous temperatures or millennium-long wait times.
When faced with an unsolvable problem, researchers at universities and labs across the United States set up virtual models, determine the initial conditions for their simulations--the weather in advance of an impending storm, the configurations of a drug molecule binding to an HIV virus, the dynamics of a distant dying star--and press compute.
And then they wait as the Stampede supercomputer in Austin, Texas, crunches the complex mathematics that underlies the problems they are trying to solve.
By harnessing thousands of computer processors, Stampede returns results within minutes, hours or just a few days (compared to the months and years without the use of supercomputers), helping to answer science's--and society's--toughest questions.
Stampede is one of the most powerful supercomputers in the U.S. for open research, and currently ranks as the seventh most powerful in the world, according to the November 2013 TOP500 List. Able to perform nearly 10 quadrillion floating-point operations per second (nearly 10 petaflops), Stampede is the most capable of the high-performance computing, visualization and data analysis resources within the National Science Foundation's (NSF) Extreme Science and Engineering Discovery Environment (XSEDE).
Stampede went into operation at the Texas Advanced Computing Center (TACC) in January 2013. The system is a cornerstone of NSF's investment in an integrated advanced cyberinfrastructure, which allows America's scientists and engineers to access cutting-edge computational resources, data and expertise to further their research across scientific disciplines.
At any given moment, Stampede is running hundreds of separate applications simultaneously. Approximately 3,400 researchers computed on the system in its first year, working on 1,700 distinct projects. The researchers came from 350 different institutions and their work spanned a range of scientific disciplines from chemistry to economics to artificial intelligence.
These researchers apply to use Stampede through the XSEDE project. Their intended use of Stampede is assessed by a peer review committee that allocates time on the system. Once approved, researchers are provided access to Stampede free of charge and tap into an ecosystem of experts, software, storage, visualization and data analysis resources that make Stampede one of the most productive, comprehensive research environments in the world. Training and educational opportunities are also available to help scientists use Stampede effectively.
"It was a fantastic first year for Stampede and we're really proud of what the system has accomplished," said Dan Stanzione, acting director of TACC. "When we put Stampede together, we were looking for a general purpose architecture that would support everyone in the scientific community. With the achievements of its first year, we showed that was possible."
Helping today, preparing for tomorrow
When the National Science Foundation released its solicitation for proposals for a new supercomputer to be deployed in 2013, the agency was looking for a system that could support the day-to-day needs of a growing community of computational scientists, but also one that would push the field forward by incorporating new, emerging technologies.
"The model that TACC used, incorporating an experimental component embedded in a state-of-the-art usable system, is a very innovative choice and just right for the NSF community of researchers who are focused on both today's and tomorrow's scientific discoveries," said Irene Qualters, division director for Advanced Cyberinfrastructure at NSF. "The results that researchers have achieved in Stampede's first year are a testimony to the system design and its appropriateness for the community."
"We wanted to put an innovative twist on our system and look at the next generation of capabilities," said TACC's Dan Stanzione. "What we came up with is a hybrid system that includes traditional Intel Xeon E5 processors and also has an Intel Xeon Phi card on every node of the system, and a few of them with two."
The Intel Xeon Phi [aka the 'many integrated core (MIC) coprocessor'] squeezes 60 or more processing cores onto a single card. In that respect, it is similar to GPUs (graphics processing units), which have been used for several years to aid parallel processing in high-performance computing systems, as well as to speed up graphics and gaming in home computers. The advantage of the Xeon Phi is its ability to perform calculations quickly while consuming less energy.
"The Xeon Phi is Intel's approach to changing these power and performance curves by giving us simpler cores with a simpler architecture but a lot more of them in the same size package," Stanzione said.
As advanced computing systems grow more powerful, they also consume more energy--a situation that can be addressed by simpler, multicore chips. The Xeon Phi and other comparable technologies are believed to be critical to the effort to advance the field and develop future large-scale supercomputers.
"The exciting part is that MIC and GPU foreshadow what will be on the CPU in the future," Stanzione said. "The work that scientists are putting in now to optimize codes for these processors will pay off. It's not whether you should adopt them; it's whether you want to get a jump on the future."
Though Xeon Phi adoption on Stampede started slowly, it now represents 10-20 percent of the usage of the system. Among the projects that have taken advantage of the Xeon Phi co-processor are efforts to develop new flu vaccines, simulations of the nucleus of the atom relevant to particle physics and a growing amount of weather forecasting.
Built to handle big data
The power of Stampede reaches beyond its ability to gain insight into our world through computational modeling and simulation. The system's diverse resources can be used to explore research in fields too complex to describe with equations, such as genomics, neuroscience and the humanities. Stampede's extreme scale and unique technologies enable researchers to process massive quantities of data and use modern techniques to analyze measured data to reach previously unachievable conclusions.
Stampede provides four capabilities that most data problems can take advantage of. Leveraging 14 petabytes of high-speed internal storage, users can process massive amounts of independent data on multiple processors at once, reducing the time needed for data analysis or computation.
Researchers can statistically or visually analyze their results using the many data analysis packages that TACC staff have optimized to run on Stampede. Staff also collaborate with researchers to improve their software and make it run more efficiently in a high-performance environment.
Data is rich and complex. When the individual data computations become so large that Stampede's primary computing resources cannot handle the load, the system provides users with 16 compute nodes with one terabyte of memory each. This enables researchers to perform complex data analyses using Stampede's diverse and highly flexible computing engine.
Once data has been parsed and analyzed, GPUs can be used remotely to explore data interactively without having to move large amounts of information to less-powerful research computers.
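The core pattern behind the first of these capabilities, splitting independent data across many processors so each piece is analyzed concurrently, can be sketched in miniature with Python's standard multiprocessing module. This is an illustration only; real Stampede jobs typically coordinate thousands of nodes with tools such as MPI, and the `analyze` function here is a hypothetical stand-in for a researcher's actual analysis code.

```python
from multiprocessing import Pool

def analyze(chunk):
    """Placeholder per-chunk analysis: here, just sum a chunk of measurements."""
    return sum(chunk)

def parallel_analysis(chunks, workers=4):
    # Each chunk is independent, so a pool of workers can process them
    # concurrently -- the same pattern an HPC system applies across nodes.
    with Pool(workers) as pool:
        partials = pool.map(analyze, chunks)
    return sum(partials)

if __name__ == "__main__":
    data = [list(range(1000)) for _ in range(8)]  # 8 independent chunks
    print(parallel_analysis(data))
```

The speedup comes entirely from the independence of the chunks: no worker needs to see another worker's data, so the work scales out with the number of processors available.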
"The Stampede environment provides data researchers with a single system that can easily overcome most of the technological hurdles they face today, allowing them to focus purely on discovering results from their data-driven research," said Niall Gaffney, TACC director of Data Intensive Computing.
Since it was deployed, Stampede has been in high demand. Ninety percent of the compute time on the system goes to researchers with grants from NSF or other federal agencies; the other 10 percent goes to industry partners and discretionary programs.
"The system is utilized all the time--24/7/365," Stanzione said. "We're getting proposals requesting 500 percent of our time. The demand exceeds time allocated by 5-1. The community is hungry to compute."
Stampede will operate through 2017 and will be upgraded with second-generation Intel Xeon Phi cards in 2015.
With a resource like Stampede in the community's hands, great discoveries await.
"Stampede's performance really helped push our simulations to the limit," said Caltech astrophysicist Christian Ott who used the system to study supernovae. "Our research would have been practically impossible without Stampede."
-- Aaron Dubrow, NSF
Investigators
Daniel Stanzione
William Barth
Tommy Minyard
Niall Gaffney
Fuqing Zhang
Roseanna Zia
Christian Ott
Edward Marcotte
Tuesday, June 17, 2014
NSF ON "NEW CLOCKING TECHNOLOGIES"
FROM: NATIONAL SCIENCE FOUNDATION
Revolutionizing how we keep track of time in cyber-physical systems
New five-year, $4 million Frontier award aims to improve the coordination of time in networked physical systems
Examples of cyber-physical systems (CPS) include autonomous cars, aircraft autopilot systems, tele-robotics devices and energy-efficient buildings, among many others.
The grant brings together expertise from five universities and establishes a center-scale research activity to improve the accuracy, efficiency, robustness and security with which computers maintain knowledge of time and synchronize it with other networked devices in the emerging "Internet of Things."
Time has always been a critical issue in science and technology. From pendulums to atomic clocks, the accurate measurement of time has helped drive scientific discovery and engineering innovation throughout history. For example, advances in distributed clock synchronization technology enabled GPS satellites to precisely measure distances. This, in turn, created new opportunities and even entirely new industries, enabling the development of mobile navigation systems. However, many other areas of clock technology are still ripe for development.
Time synchronization presents a particular fundamental challenge in emerging applications of CPS, which connect computers, communication, sensors and actuator technologies to objects and play a critical role in our physical and network infrastructure. Cyber-physical systems depend on precise knowledge of time to infer location, control communication and accurately coordinate activities. They are critical to real-time situational awareness, security and control in a broad and growing range of applications.
"The National Science Foundation has long supported research to integrate cyber and physical systems and has supported the experimentation and prototyping of these systems in a number of different sectors--from transportation and energy to medical systems," said Farnam Jahanian, head of NSF's Directorate for Computer and Information Science and Engineering. "As the 'Internet of Things' becomes more pervasive in our lives, precise timing will be critical for these systems to be more responsive, reliable and efficient."
The NSF award will support a project called Roseline, which seeks to develop new clocking technologies, synchronization protocols, operating system methods, as well as control and sensing algorithms. The project is led by engineering faculty from the University of California, Los Angeles (UCLA), and includes electrical engineering and computer science faculty from the University of California, San Diego; Carnegie Mellon University; the University of California, Santa Barbara and the University of Utah.
"Through the Roseline project, we will drive cyber-physical systems research with a deeper understanding of time and its trade-offs and advance the state-of-the-art in clocking circuits and platform architectures," said UCLA professor Mani Srivastava, principal investigator on the project.
Today, most computing systems use tiny clocks to manage time in a relatively simplistic and idealized fashion. For example, software in today's computers has little visibility into, and control over, the quality of time information received from its underlying hardware. At the same time, the clocks have little, if any, knowledge of the quality of time needed by the software, nor any ability to adapt to it. This leaves computing systems that are dependent on time vulnerable to complex and catastrophic disruptions.
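Ordinary software does get a small window into clock quality today. Python's standard library, for instance, reports each system clock's resolution and whether it is monotonic or adjustable via `time.get_clock_info()`. The sketch below simply surfaces that limited information; it is an illustration of the status quo Roseline aims to improve on, not part of the Roseline project itself.

```python
import time

# Modern operating systems expose several clocks with different guarantees.
# get_clock_info() reports, for each one: its resolution (in seconds),
# whether it only moves forward (monotonic), and whether NTP or the user
# can adjust it (adjustable).
for name in ("time", "monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution}s, "
          f"monotonic={info.monotonic}, adjustable={info.adjustable}")
```

Note what is missing: the software cannot tell the clock what quality of time it needs, and the clock cannot report how far it has drifted, which is exactly the two-way awareness Roseline proposes.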
The Roseline team will address this problem by rethinking and re-engineering how the knowledge of time is handled across a computing system's hardware and software.
"Roseline will drive accurate timing information deep into the software system," said Rajesh Gupta, University of California, San Diego computer science and engineering chair and a co-principal investigator on the project. "It will enable robust distributed control of smart grids, precise localization of structural faults in bridges and ultra-low-power wireless sensors."
Roseline will have a broad impact across many sectors, including smart electrical grids, aerospace systems, precision manufacturing, autonomous vehicles, safety systems and infrastructure monitoring.
In addition to Srivastava and Gupta, the Roseline team includes Sudhakar Pamarti of UCLA, João Hespanha of UC Santa Barbara, Ragunathan Rajkumar and Anthony Rowe of Carnegie Mellon University and Thomas Schmid of the University of Utah.
Beyond the research and testing of components, project leaders plan to integrate CPS and timing components into graduate and undergraduate course materials and engage pre-college students in outreach efforts, including the Los Angeles Computing Circle, which focuses on teaching real-world applications of computer science to students from local high schools.
"The measurement, distribution and synchronization of time have always been critical in science and technology, and there is a long history of new time-related technologies revolutionizing society," said David Corman, NSF program director for CPS. "As computation becomes embedded in the physical systems around us, it becomes all the more important that computers be able to know time accurately, efficiently and reliably. I am excited to see the Roseline team undertake this challenging and important task."
NSF's long-standing support for CPS research and education spans a range of awards amounting to an investment of nearly $200 million during the last five years.
-NSF-
Media Contacts
Aaron Dubrow
Sunday, June 1, 2014
ENGINEER STUDIES HUMAN BEHAVIOR TO HELP DEVELOP ENERGY-EFFICIENT TECHNOLOGIES
FROM: NATIONAL SCIENCE FOUNDATION
Energy-efficient technologies developed with people in mind
When engineers design environmentally friendly cars, such as all-electric or hybrid vehicles, they often focus primarily on their technological features. Ricardo Daziano believes they also should consider the "human" element.
By this, he means they need to keep in mind the kinds of things consumers actually want from a "green car," and how these preferences will influence their buying decisions. While technology is important, he believes that engineers no longer can focus on it in isolation. It's not enough to create technically sound solutions if society isn't willing to adopt them.
While many consumers support the concept of sustainable energy cars, this doesn't always mean they will buy them. "This technology often is more expensive, so one question becomes whether consumers are willing to spend a lot of money now for cost-saving benefits that will come later," he says.
"It's an energy paradox," he adds. "You do have the savings, but they come later. People like to have money now, rather than in savings. It's human nature. It may be difficult to sell the idea that this vehicle costs more now, but will save money in the future."
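The energy paradox Daziano describes is, at bottom, a discounting problem: future fuel savings are worth less to a buyer than cash today. A minimal sketch makes the tension concrete. The numbers are purely hypothetical (a $3,000 hybrid premium and $600 per year in fuel savings are illustrative, not figures from Daziano's research).

```python
def npv_of_fuel_savings(annual_saving, years, discount_rate):
    """Present value of a stream of future annual fuel savings."""
    return sum(annual_saving / (1 + discount_rate) ** t
               for t in range(1, years + 1))

premium = 3000.0  # hypothetical extra up-front cost of the efficient car
pv = npv_of_fuel_savings(annual_saving=600.0, years=8, discount_rate=0.10)
print(f"PV of savings: ${pv:,.0f}; premium: ${premium:,.0f}")
# Over 8 years the buyer nominally saves $4,800, but at a 10% discount
# rate that stream is worth only about $3,200 today -- barely above the
# premium, which is why "you save money later" is a hard sell.
```

Raise the discount rate (i.e., make the buyer more impatient) and the discounted savings fall below the premium, even though the undiscounted savings still exceed it.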
Daziano, an assistant professor in the School of Civil and Environmental Engineering at Cornell University who teaches economics, has a social science background and believes that technical solutions for society's problems, such as the need for sustainable transportation, must reach beyond the technology into the psyche.
The National Science Foundation (NSF)-funded scientist is studying human behavior and how it relates to consumer decisions about energy efficient, low emission vehicles. His research potentially could provide important insights for policy makers, transportation planners, as well as for automobile manufacturers in advancing future sustainable vehicle designs.
He already has learned, for example, that one of the reasons the Toyota Prius has become so successful is that it is instantly recognizable as a hybrid, unlike those made by other manufacturers, which fail to stand out. "Other car makers have hybrids, but they look like their other models," Daziano says. "People want a car that will tell the world: 'I'm green.'"
In other preliminary results, he has found that women appear willing to spend up to $2,000 more than men for an energy-efficient car.
Daziano is conducting his research under a National Science Foundation Faculty Early Career Development (CAREER) award, which he received earlier this year. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization. He is receiving $410,000 over five years.
The project is studying data already collected by research centers in Germany, Italy, Canada, and California where scientists conducted consumer surveys asking people about their car buying choices, and how they arrived at them. Daziano is using a special computer algorithm to analyze them.
"I'm trying to determine the tradeoffs people make," he says. "How they process this information. Each car has different characteristics, and I want to see how they combine them to decide on the car they want." Ultimately, "we will be able to forecast how people will react if we make changes in the cars," he adds.
For example, "people may like the idea of a 100 percent electric car, but they still may hesitate to buy one because of its limitations," he says. "All-electric cars have a limited driving range, which is the maximum you can drive in the car, and there is this concept we call ‘range anxiety,' when you are concerned that the battery will die and you will not reach your destination. Also, we do not yet have a lot of charging stations available, certainly not like gas stations, which increases the anxiety."
He hopes the information he gathers will influence both auto makers and policy makers. "If I can determine from the point of view of the consumer the optimal driving range that will make them comfortable with an all-electric car, then hopefully the engineers will be able to come up with a battery that offers more," he says.
One research challenge is to find a way to incorporate consumers' wide-ranging and different tastes. "Consumers are heterogeneous," he says. "There are people who prefer luxury cars, others prefer power and space, while others care about color. That's why every car maker has a range of vehicles they offer. They need to address many things in their models.
"If everyone behaved the same, these would be easy problems to solve," he adds.
He already has begun to introduce these ideas into the engineering curriculum, where students "need to understand that we are doing these technologies for people," he says.
"They need to consider people in the design process," he adds. "We are splicing this into discussions in the classroom, for both graduates and undergraduates. They were not aware of this social component of engineering. Modeling consumer preferences is something completely new for engineering students."
-- Marlene Cimons, National Science Foundation
Investigators
Ricardo Daziano
Tuesday, May 27, 2014
RESEARCHERS LOOK AT THE BRAIN
FROM: NATIONAL SCIENCE FOUNDATION
Engineers ask the brain to say, "Cheese!"
How do we take an accurate picture of the world’s most complex biological structure?
Creating new brain imaging techniques is one of today's greatest engineering challenges.
The incentive for a good picture is big: looking at the brain helps us to understand how we move, how we think and how we learn. Recent advances in imaging enable us to see what the brain is doing more precisely across space and time and in more realistic conditions.
The newest advance in optical imaging brings researchers even closer to illuminating the whole brain and nervous system.
Researchers at the Massachusetts Institute of Technology and the University of Vienna achieved simultaneous functional imaging of all the neurons of the transparent roundworm C. elegans. This technique is the first that can generate 3-D movies of entire brains at the millisecond timescale.
The significance of this achievement becomes clear in light of the many engineering complexities associated with brain imaging techniques.
An imaging wish list
When 33 brain researchers put their minds together at a workshop funded by the National Science Foundation in August 2013, they identified three of the biggest challenges in mapping the human brain for better understanding, diagnosis and treatment.
Challenge one: High spatiotemporal resolution neuroimaging. Existing brain imaging technologies offer different advantages and disadvantages with respect to resolution. A method such as functional MRI that offers excellent spatial resolution (to several millimeters) can provide snapshots of brain activity only on the order of seconds. Other methods, such as electroencephalography (EEG), provide precise information about brain activity over time (to the millisecond) but yield fuzzy information about its location.
The ability to conduct functional imaging of the brain, with high resolution in both space and time, would enable researchers to tease out some of the brain's most intricate workings. For example, each half of the thalamus--the brain's go-to structure for relaying sensory and motor information and a potential target for deep brain stimulation--has 13 functional areas in a package the size of a walnut.
With better spatial resolution, researchers would have an easier time determining which areas of the brain are involved in specific activities. This could ultimately help them identify more precise targets for stimulation, maximizing therapeutic benefits while minimizing unnecessary side effects.
In addition, researchers wish to combine data from different imaging techniques to study and model the brain at different levels, from molecules to cellular networks to the whole brain.
Challenge two: Perturbation-based neuroimaging. Much that we know about the brain relies on studies of dysfunction, when a problem such as a tumor or stroke affects a specific part of the brain and a correlating change in brain function can be observed.
But researchers also rely on techniques that temporarily ramp up, or turn off, brain activity in certain regions. What if the effects of such modifications on brain function could then be captured with neuroimaging techniques?
Being able to observe what happens when certain parts of the brain are activated could help researchers determine brain areas' functions and provide critical guidance for brain therapies.
Challenge three: Neuroimaging in naturalistic environments. Researchers aim to create new noninvasive methods for imaging the brain while a person interacts with his or her surroundings. This ability will become more valuable as new technologies that interface with the brain are developed.
For example, a patient undergoing brain therapy at home may choose to send information to his or her physician remotely rather than go to an office for frequent check-ups. The engineering challenges of this scenario include the creation of low-cost, wearable technologies to monitor the brain as well as the technical capability to differentiate between signs of trouble and normal fluctuations in brain activity during daily routines.
Other challenges the brain researchers identified are neuroimaging in patients with implanted brain devices; integrating imaging data from multiple techniques; and developing models, theories and infrastructures for better understanding and analyzing brain data. In addition, the research community must ensure that students are prepared to use and create new imaging techniques and data.
The workshop chair, Bin He of the University of Minnesota-Twin Cities, said, "Noninvasive human brain mapping has been a holy grail in science. Accomplishing the three grand challenges would change the future of brain science and our ability to treat numerous brain disorders that cost the nation over $500 billion each year."
The full workshop report was published in IEEE Transactions on Biomedical Engineering.
An imaging breakthrough
Engineers, in collaboration with neuroscientists, computer scientists and other researchers, are already at work devising creative ways to address these challenges.
The workshop findings place the new technique developed by the MIT and University of Vienna researchers into greater context. Their work had to overcome several of the challenges outlined.
The team captured neural activity in three dimensions at single-cell resolution using a strategy not previously applied to neurons--light-field microscopy--paired with an algorithm that reverses optical distortion, a process known as deconvolution.
Light-field microscopy involves shining light at a 3-D sample and capturing the locations of fluorophores in a still image using a special set of lenses. The fluorophores in this case are modified proteins that attach to neurons and fluoresce when the neurons activate. However, this microscopy method requires a trade-off between sample size and the achievable spatial resolution, and thus it had not previously been used for live biological imaging.
The advantage of light-field microscopy, here used in an optimized form, is that the technique can quickly capture the neuronal activity of whole animals--movies, not simply still images--while providing high enough spatial resolution to make functional biological imaging possible.
"This elegant technique should have a large impact on the use of functional biological imaging for understanding brain cognitive function," said Leon Esterowitz, program director in NSF's Engineering Directorate, which provided partial funding for the research.
The researchers, led by Edward Boyden of MIT and Alipasha Vaziri of the University of Vienna, reported their results in this week's issue of the journal Nature Methods.
"Looking at the activity of just one neuron in the brain doesn't tell you how that information is being computed; for that, you need to know what upstream neurons are doing. And to understand what the activity of a given neuron means, you have to be able to see what downstream neurons are doing," said Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and one of the leaders of the research team.
"In short, if you want to understand how information is being integrated from sensation all the way to action, you have to see the entire brain."
-- Cecile J. Gonzalez
Investigators
Edward Boyden
Bin He
Related Institutions/Organizations
Massachusetts Institute of Technology
University of Minnesota-Twin Cities
Wednesday, April 30, 2014
NSF ON BUILDING A BRAIN-MACHINE INTERFACE
FROM: NATIONAL SCIENCE FOUNDATION
How to build a brain-machine interface
New-generation brain technologies are now in use thanks to new body-compatible materials, smaller electronics and better sensors designed by engineers
Devices that tap directly into the nervous system can restore sensation, movement or cognitive function. These technologies, called brain-machine interfaces or BMIs, are on the rise, increasingly providing assistance to people who need it most.
But what exactly does it take to build a BMI?
To understand how (and why) BMIs are developed, the engineers who created the artificial retina can provide a kind of "how-to" guide for the curious and technically inclined.
While a greater understanding of biology has been essential to BMI development, advances in engineering and materials science have driven improvements in their design and performance. From creating new materials that are more compatible with the human body to designing smaller electronics and better sensors, engineers are playing a major role in the development of existing and future brain technologies.
Like any other engineering challenge, building a BMI involves background research, feasibility testing, prototyping and production.
But building a BMI is unique in that engineers must design these devices to seamlessly interface with another complex system: the human nervous system.
The model: The bionic eye
In 2013, the U.S. Food and Drug Administration approved the Argus II® Retinal Prosthesis System for use in individuals who have lost their vision as a result of severe-to-profound retinitis pigmentosa. Retinitis pigmentosa is a genetic condition affecting one in every 4,000 individuals; early symptoms often include night blindness, followed by gradual but progressive loss of peripheral vision and, ultimately, total blindness.
The system works by bypassing damaged photoreceptors, cells in the retina that normally convert light into electrical signals that the brain interprets as visual information. The Argus II® transmits images from a small camera to an implant in the back of the eye. Like the photoreceptors, the implant produces electrical signals that are transmitted to the brain.
"Seeing my grandmother go blind motivated me to pursue ophthalmology and biomedical engineering to develop a treatment for patients for whom there was no foreseeable cure," says the device's co-inventor, Mark Humayun, associate director of research at the Doheny Retina Institute at the University of Southern California.
Without this motivation, the daunting design challenges and constraints might have been enough to make even the most meticulous researcher think twice about tackling such a project.
"The artificial retina was a great engineering challenge under the interdisciplinary constraints of biology, enabling technology, [and] regulatory compliance," says Humayun's collaborator Wentai Liu, a professor of biomedical engineering at the University of California, Los Angeles.
In addressing such a challenge, before researchers begin to worry about the technological details, they must first determine whether a BMI is the right fit.
Step one: Decide if a condition is a good candidate for a BMI
Like most engineering endeavors, the first step in building a BMI has more to do with understanding the system at hand than with cutting-edge design.
When it came to creating an artificial retina, this meant that researchers needed to determine which parts of the visual pathway were working and which were not.
"We needed to know there were enough neurons left in the eye to stimulate and still transmit nerve impulses and communicate with the vision center of the brain," Humayun says.
With initial funding in the late 1980s and early 1990s from the National Eye Institute, the National Retinitis Pigmentosa Foundation and others, the researchers showed that neurons in the retina were still capable of responding to electrical stimulation--a sign that patients with this disease could potentially benefit from a BMI. If the nerves had been damaged, then signals would not have had a path to the brain, meaning that an artificial eye alone would not have solved the problem.
Step two: Determine if a fix is feasible
Once a condition is identified as a good candidate for a BMI, investigators need to determine whether the basic technologies needed to create such a device are even feasible.
For Humayun and colleagues, this meant tackling some tough engineering challenges, including how to mimic photoreceptor activity with artificial electrical stimulation, how to power the implant and enable real-time data transmission and how to integrate external components with the implant.
With early support from the National Science Foundation and others, the researchers set about answering each of these questions throughout the 1990s, meticulously developing prototypes of the miniature video camera and belt-worn computer that would capture and convert visual information, the integrated computer chip that would wirelessly receive the data and the tiny electrode array that would stand in for the damaged photoreceptors.
Step three: Consider the human factor
When designing a BMI, it's critical to remember that these devices must operate in concert with the human body. In addition to incorporating feedback from potential users throughout the design process, this means that the device must be designed in such a way that it can function effectively in the presence of body fluids and tissues.
Humayun and his collaborators addressed this challenge by creating a hermetically sealed packaging system that would allow the device to work in the gelatinous environment of the eye. They also carefully planned how they would implant the device to minimize the disruption to the body.
"The inside of the eye is a relatively immune-privileged site and the scarring reaction is minimal," Humayun says. "But, having said this, the surgery and the attachment of the device inside the eye has to be performed in the least invasive manner possible."
Step four: Optimize, shrink and integrate
Before a BMI reaches the end user, each component must be optimized, miniaturized and integrated with the rest of the device.
Unlike traditional design practices, which focus on optimizing each component, the artificial retina was developed by tweaking and streamlining the device as a whole, known as systems-level optimization.
The result?
A sleek, small system that packs a punch.
"The engine for the artificial retina is a 'system on a chip' of mixed voltages and mixed analog-digital design, which provides self-contained power and data management," Liu says.
Step five: Scale up and get the go-ahead
One of the final steps in building a BMI is getting it into the hands of those who need it. When the initial technology is developed in an academic research setting, this can often mean handing it off to a company that will facilitate manufacturing and manage clinical trials and commercial distribution to patients.
Founded in 1998 by Humayun's former graduate student Robert Greenberg, Second Sight Medical Products Inc. took the artificial retina from the laboratory bench to the marketplace. Clinical trials for the first-generation device (the Argus I) were conducted in 2002 and were followed by pilot studies and patient trials for the Argus II in 2006. On Feb. 14, 2013, the Argus II became the first visual prosthesis to receive market approval in the United States.
Step six: Rinse and repeat
Perhaps the most important aspect of building a BMI is recognizing that there is always room for improvement.
"While we are still at the earliest stages, people are already benefiting from these implants, through improved mobility," says James Weiland, former deputy director of the Biomimetic Microelectronic Systems (BMES) Engineering Research Center at the University of Southern California.
"Working on advanced technology projects convinces me that it is feasible to create the technology needed for better outcomes."
Led by Humayun, the BMES Engineering Research Center was founded in 2003 to continue to advance the development of this technology. The latest prototype features an ultra-miniature camera that can be implanted directly in the eye. The system also contains more than 15 times the number of electrodes in the Argus II, which the researchers anticipate will greatly improve image resolution.
-- Valerie Thompson, AAAS Science and Technology Policy Fellow, National Science Foundation vthompso@nsf.gov
Investigators
Wentai Liu
Gerald Loeb
Brian Justus
Mark Humayun
James Weiland
Related Institutions/Organizations
Doheny Eye Institute
Johns Hopkins University
North Carolina State University
University of Southern California
University of California-Santa Cruz