FROM: THE NATIONAL SCIENCE FOUNDATION
Uncovering Alzheimer's complex genetic networks
Researchers from the Mayo Clinic use NSF-supported Blue Waters supercomputer to understand gene expression in the brain
February 3, 2015
The release of the film, "Still Alice," in September 2014 shone a much-needed light on Alzheimer's disease, a debilitating neurological disorder that affects a growing number of Americans each year.
More than 5.2 million people in the U.S. are currently living with Alzheimer's. One out of nine Americans over 65 has Alzheimer's, and one out of three over 85 has the disease. For those over 65, it is the fifth leading cause of death.
There are several drugs on the market that can provide relief from Alzheimer's symptoms, but none stops the progression of the disease, in part because the root causes of Alzheimer's are still unclear.
"We re interested in studying the genetics of Alzheimer's disease," said Mariet Allen, a post-doctoral fellow at the Mayo Clinic in Florida. "Can we identify genetic risk factors and improve our understanding of the biological pathways and cellular mechanisms that can play a role in the disease process?"
Allen is part of a team of researchers from the Mayo Clinic who are using Blue Waters, one of the most powerful supercomputers in the world, to decode the complicated language of genetic pathways in the brain. In doing so, they hope to provide insights into what genes and proteins are malfunctioning in the brain, causing amyloid beta plaques, tau protein tangles and brain atrophy due to neuronal cell loss--the telltale signs of the disease--and how these genes can be detected and addressed.
In the case of late onset Alzheimer's disease (LOAD), it is estimated that as much as 80 percent of risk is due to genetic factors. In recent years, researchers discovered 20 common genetic loci, in addition to the well-known APOE gene, that are found to increase or decrease risk for the disease. (Loci are specific locations of a gene, DNA sequence, or position on a chromosome.) These loci do not necessarily have a causal connection to the disease, but they provide useful information about high-risk patients.
Despite all that doctors have learned in recent years about the genetic basis of Alzheimer's, according to Allen, a substantial knowledge gap still exists. It is estimated that less than 40 percent of the genetic risk for LOAD can be explained by known loci. Furthermore, it is not always clear which genes at these known loci are the ones affected.
In other words, scientists have a long way to go to get a full picture of which genes are involved in processes related to the disease and how they interact.
The Mayo team and their colleagues had been very successful in the past in finding genetic risk factors using a method that matched individual differences in the DNA code (single-nucleotide polymorphisms, or SNPs) to phenotypes, the outward appearances of the disease. In particular, the Mayo team focused on identifying SNPs that influence expression of genes in the brain. However, they now hypothesize that the single-SNP method may be too simplistic to find all genetic factors, and is likely not an accurate reflection of the complex biological interactions that take place in an organism.
For that reason, the Mayo researchers have recently turned their attention to investigating the brain using genetic interaction (epistasis) studies. Such studies allow researchers to understand the effects of pairs of gene changes on a given phenotype and can uncover additional genetic variants that influence gene expression and disease.
The process involves the analysis of billions of DNA base pairs (the familiar C, G, A and T) to find statistically significant correlations. Importantly, the search is not to discover simple one-to-one connections, since these have largely been found, but to study the interaction effects of pairs of DNA sequence variations.
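In statistical terms, each test asks whether the product of two genotypes explains a gene's expression beyond what the two SNPs explain on their own. Below is a minimal sketch of what one such test might look like, assuming the common additive genotype coding (0, 1 or 2 copies of the minor allele) and a simple linear interaction model; it is illustrative only, not the Mayo team's actual pipeline, and the data are simulated:

    # One pairwise epistasis test: does the product of two genotypes explain
    # a gene's brain expression beyond the two SNPs' additive effects?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500                                   # hypothetical number of brain samples
    snp1 = rng.integers(0, 3, n)              # genotypes coded 0/1/2
    snp2 = rng.integers(0, 3, n)
    expression = 0.3 * snp1 + 0.2 * snp2 + 0.5 * snp1 * snp2 + rng.normal(0, 1, n)

    # Linear model: expression ~ intercept + snp1 + snp2 + snp1:snp2
    X = np.column_stack([np.ones(n), snp1, snp2, snp1 * snp2])
    beta, _, _, _ = np.linalg.lstsq(X, expression, rcond=None)
    resid = expression - X @ beta
    dof = n - X.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
    t_int = beta[3] / np.sqrt(cov[3, 3])      # t-statistic for the epistatic term
    p = 2 * stats.t.sf(abs(t_int), dof)
    print(f"interaction beta = {beta[3]:.3f}, p = {p:.2e}")
    # Repeated across billions of SNP pairs and ~14,000 genes, tests like
    # this are what make the computation enormous.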
Solving a problem of this size and complexity requires a huge amount of computational processing time, so the researchers turned to the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA).
Supported by the National Science Foundation and the University of Illinois at Urbana-Champaign, Blue Waters allows scientists and engineers across the country to tackle a wide range of challenging problems using massive computing and data processing power. From predicting the behavior of complex biological systems to simulating the evolution of the cosmos, Blue Waters assists researchers whose computing problems are at a scale or complexity that cannot be reasonably approached using any other method.
Allen and her colleagues used Blue Waters to rapidly advance their Alzheimer's epistasis study through NCSA's Private Sector Program, which lets teams outside of academia access the system.
Instead of requiring as much as a year or more of processing on a single workstation or university cluster, the research team was able to do each analysis on Blue Waters in less than two days.
The researchers conducted three sets of analysis to investigate brain gene expression levels in a group of individuals without Alzheimer's, a group of individuals with Alzheimer's and then a combined analysis of both groups together. To date, these analyses have been completed for the almost 14,000 genes expressed in the majority of the brain samples studied.
Through their work with collaborators at NCSA and the University of Illinois at Urbana-Champaign (including Victor Jongeneel and Liudmila Mainzer), the Mayo team overcame many of the challenges that a project of this scope presented.
"The analysis of epistatic effects in large studies, such as ours, requires powerful computational resources and would not be possible without the unique computing capabilities of Blue Waters," wrote project lead Nilufer Ertekin-Taner from the Mayo Clinic.
"The Mayo Clinic project is emblematic of the type of problem that is beginning to emerge in computational medicine," said Irene Qualters, division director of Advanced Computing Infrastructure at NSF. "Through engagement with the Blue Waters project, researchers at Mayo have demonstrated the potential of new analytic approaches in addressing the challenges of a daunting medical frontier."
The team reported on their progress at the Blue Waters Symposium in May 2014. Allen and her colleagues are currently processing and filtering the results so they can be analyzed.
"Recent studies by our collaborators and others have shown that both the risk for late onset Alzheimer's disease and gene expression are likely influenced by epistasis. However little is known about the effect of genetic interactions on brain gene expression specifically and how this might influence risk for neurological diseases such as LOAD," said Allen. "The goal of our study is to address this knowledge gap; something we have been uniquely positioned to do using our existing data and the resources available on Blue Waters."
-- Aaron Dubrow, NSF
Friday, October 31, 2014
COMPUTERS CAN MAKE COMPLICATED WATER RIGHTS TRADING EASIER
FROM: NATIONAL SCIENCE FOUNDATION
Selling and buying water rights
NSF Innovation Corps awardees founded the company Mammoth Trading to provide a neutral, centralized resource
Buying and selling water rights is a complicated business. It would be much easier if a computer could do it. Now, one can.
Scientists at the University of Nebraska and the University of Illinois at Urbana-Champaign have developed an algorithm that can match potential buyers and sellers, sift through the complexity of local physical and regulatory systems, and reach a fair deal designed especially for them. It also allows the negotiating parties to provide information confidentially during the process.
"It's a different way of matching buyers and sellers in places where there aren't established markets," says Nicholas Brozoviæ, director of policy at the Robert B. Daugherty Water for Food Institute and associate professor of agricultural economics at the University of Nebraska. "It's a different way of building a market for potential buyers and sellers of natural resources. It maintains confidentiality and it is structured in a way that is neutral and fair."
Brozović's research, funded by the National Science Foundation (NSF), focuses on using economic analysis to understand natural resource systems, with a special emphasis on water resources. He designs and evaluates management policies that can maintain or improve the condition of natural resources. Much of his work involves collaborations with engineers, urban planners and others.
Mammoth Trading, a new company that grew out of his research, hopes to provide a neutral, centralized place for both buyers and sellers interested in trading water rights and other resource use rights. The goal is to craft each transaction by taking local community needs into account, as well as factors unique to the individuals involved.
There is a transaction fee associated with the market and any benefits from trading are split between buyer and seller, "which is not typically how brokerage works," Brozović says. But "we view ours as a fairer system."
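Mammoth Trading's actual algorithm is proprietary, but the basic mechanics described here can be sketched in a few lines: pair the highest-value buyer with the lowest-cost compatible seller, and split the gains from trade between them. Everything below (names, prices, and the zone-compatibility rule standing in for local hydrologic regulations) is hypothetical:

    # A toy "smart market": match buyers and sellers under a locality rule
    # and split the surplus evenly between the two parties.
    def match_trades(buyers, sellers, compatible):
        """buyers/sellers: lists of (name, price, zone) tuples."""
        buyers = sorted(buyers, key=lambda b: -b[1])    # highest bid first
        sellers = sorted(sellers, key=lambda s: s[1])   # lowest ask first
        trades, used = [], set()
        for bname, bid, bzone in buyers:
            for sname, ask, szone in sellers:
                if sname in used or ask > bid or not compatible(bzone, szone):
                    continue
                trades.append((bname, sname, (bid + ask) / 2))  # split the gains
                used.add(sname)
                break
        return trades

    # Example: suppose hydrology only permits trades within the same zone.
    buyers = [("B1", 120, "A"), ("B2", 90, "B")]
    sellers = [("S1", 80, "A"), ("S2", 100, "B")]
    print(match_trades(buyers, sellers, lambda bz, sz: bz == sz))
    # [('B1', 'S1', 100.0)] -- B2's bid is below S2's ask, so no second trade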
The company currently is developing a certified irrigated acreage market for groundwater rights in the Twin Platte Natural Resources District in Nebraska, and is working on systems for other natural resources, mostly involving water quantity and quality.
"Before we started, it was really difficult to identify those interested in buying or selling their rights," says Richael Young, company president, and an expert in environmental engineering and agricultural economics. "We provide a central hub for people. Right now, they spend a lot of time either calling up people, seeing if someone is interested in trading, or hiring a realtor, which can take months, and still may not tell you whether or not a person is eligible to trade."
Although the company is just getting started, the researchers see a future where the system will expand to other areas and natural resources.
"Part of our thinking is how we can scale this idea and broaden this scope to other environmental markets," Brozoviæ says. "Beyond this relatively narrow market of trading groundwater rights, there are many other natural resources that have the same features where a similar system could work, such as habitat markets or wetland mitigation markets, storm water management in urban areas, water quality in waste water treatment plants.
"There is a broader move within environmental regulations to move to market based systems," he adds. "If done correctly, it may be a cheaper way to achieve a better environmental outcome."
However, there can be considerable complexity in implementing environmental and resource use regulations using markets.
"For carbon, it's a pretty simple process, since the atmosphere is well mixed so it doesn't matter where carbon emissions occur," Young says. "But when you think about ground or surface water, those are highly localized resources. You can't have a trading scheme that allows people to trade wherever they are located, since there are physical laws that govern the movement of groundwater and surface water.
"So those are the kinds of rules we incorporate into the system to allow people to trade more effectively," she adds. "Our goal is to help businesses operate more efficiently within existing regulations. For now, we are focusing on groundwater. In the future, we hope to enter into markets for surface water and air pollutants."
One incentive for expansion to other areas is the fact that the market for trading groundwater permits is highly seasonal. "Once you've planted your seeds and made those decisions, you may not be interested in trading for more water rights," Young says.
NSF has supported their efforts with a $50,000 Innovation Corps (I-Corps) grant, awarded in 2013, which provides a set of activities and programs that prepare scientists and engineers to extend their focus beyond the laboratory into the commercial world.
The goal of the I-Corps program is to help researchers translate their discoveries into technologies with near-term benefits for the economy and society. It is a public-private partnership program that teaches grantees to identify valuable product opportunities that can emerge from academic research, and offers entrepreneurship training to student participants.
Mammoth Trading recently completed its first deal, and is working on others. "We expected it would take time for people to become familiar with our system, and how it works," Young says. "It surprised us how quickly people did hear about us. We got some trades earlier than expected, and we were able to get them through."
Brozović agrees. "We now have the first trades in our system, which is exciting," he says. "Eventually we could do this nationwide, and potentially internationally."
-- Marlene Cimons, National Science Foundation
Investigators
Nicholas Brozovic
Ximing Cai
John Braden
Albert Valocchi
Stephen Gasteyer
Thursday, April 17, 2014
SUPERCOMPUTERS PREDICT SIGNS OF BLACK HOLES CONSUMING STARS
FROM: NATIONAL SCIENCE FOUNDATION
Cosmic slurp
Georgia Tech researchers use supercomputers to understand and predict signs of black holes swallowing stars
April 14, 2014
Right: Black Hole Caught in a Stellar Homicide. This computer-simulated image shows gas from a star that is ripped apart by tidal forces as it falls into a black hole. Some of the gas also is being ejected at high speeds into space. Image Credit: NASA, S. Gezari (The Johns Hopkins University), and J. Guillochon (University of California, Santa Cruz).
Somewhere out in the cosmos an ordinary galaxy spins, seemingly at slumber. Then all of a sudden, WHAM! A flash of light explodes from the galaxy's center. A star orbiting too close to the event horizon of the galaxy's central supermassive black hole has been torn apart by the force of gravity, heating up its gas and sending out a beacon to the far reaches of the universe.
In a universe with tens of billions of galaxies, how would we see it? What would such a beacon look like? And how would we distinguish it from other bright, monumental intergalactic events, such as supernovas?
"Black holes by themselves do not emit light," said Tamara Bogdanovic, an assistant professor of physics at the Georgia Institute of Technology. "Our best chance to discover them in distant galaxies is if they interact with the stars and gas that are around them."
In recent decades, with improved telescopes and observational techniques designed to repeatedly survey the vast numbers of galaxies in the sky, scientists noticed that some galaxies that previously looked inactive would suddenly light up at their very center.
"This flare of light was found to have a characteristic behavior as a function of time. It starts very bright and its luminosity then decreases in time in a particular way," she explained. "Astronomers have identified those as galaxies where a central black hole just disrupted and 'ate' a star. It's like a black hole putting up a sign that says 'Here I am.'"
Using a mix of theoretical and computer-based approaches, Bogdanovic tries to predict the dynamics of events such as the black-hole-devouring-star scenario described above, also known as a "tidal disruption." Such events would have a distinct signature to someone analyzing data from a ground-based or space-based observatory.
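When a disruption happens can be estimated on the back of an envelope. The standard textbook estimate (not taken from Bogdanovic's models) puts the shredding point roughly where the black hole's tidal pull exceeds the star's self-gravity, at r_t ~ R_star * (M_bh / M_star)^(1/3):

    # Tidal radius vs. event horizon for a sun-like star and a hypothetical
    # million-solar-mass black hole (all values in SI units).
    G, c = 6.674e-11, 2.998e8
    M_sun, R_sun = 1.989e30, 6.957e8

    M_bh = 1e6 * M_sun
    r_tidal = R_sun * (M_bh / M_sun) ** (1 / 3)
    r_horizon = 2 * G * M_bh / c**2            # Schwarzschild radius

    print(f"tidal radius  ~ {r_tidal:.1e} m")    # ~7e10 m, about half an AU
    print(f"event horizon ~ {r_horizon:.1e} m")  # ~3e9 m
    # The star is torn apart well outside the horizon, so the debris can
    # heat up and shine -- the flare astronomers search for.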
Using National Science Foundation-funded supercomputers at the Texas Advanced Computing Center (Stampede) and the National Institute for Computational Sciences (Kraken), Bogdanovic and her collaborators recently simulated the dynamics of these super powerful forces and charted their behavior using numerical models.
Tidal disruptions are relatively rare cosmic occurrences. Astrophysicists have calculated that a Milky Way-like galaxy stages the disruption of a star only once in about 10,000 years. The luminous flare of light, on the other hand, can fade away in only a few years. Because it is such a challenge to pinpoint tidal disruptions in the sky, astronomical surveys that monitor vast numbers of galaxies simultaneously are crucial.
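That rarity translates into simple arithmetic (a back-of-the-envelope figure, not from the article): at one event per galaxy per 10,000 years, a survey must monitor on the order of a million galaxies to reach the "hundreds per year" pace Bogdanovic anticipates below:

    # Expected tidal disruptions per year as a function of survey size.
    rate_per_galaxy = 1 / 10_000               # events per galaxy per year
    for n_galaxies in (10_000, 1_000_000, 10_000_000):
        print(f"{n_galaxies:>10,} galaxies -> ~{n_galaxies * rate_per_galaxy:g} events/yr")
    # 10,000 -> ~1/yr; 1,000,000 -> ~100/yr; 10,000,000 -> ~1,000/yr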
Huge difference
So far, only a few dozen of these characteristic flare signatures have been observed and deemed "candidates" for tidal disruptions. But with data from Pan-STARRS, GALEX, the Palomar Transient Factory and other upcoming astronomical surveys becoming available to scientists, Bogdanovic believes this situation will change dramatically.
"As opposed to a few dozen that have been found over the past 10 years, now imagine hundreds per year--that's a huge difference!" she said. "It means that we will be able to build a varied sample of stars of different types being disrupted by supermassive black holes."
With hundreds of such events to explore, astrophysicists' understanding of black holes and the stars around them would advance by leaps and bounds, helping determine some key aspects of galactic physics.
"A diversity in the type of disrupted stars tells us something about the makeup of the star clusters in the centers of galaxies," Bodganovic said. "It may give us an idea about how many main sequence stars, how many red giants, or white dwarf stars are there on average."
Tidal disruptions also tell us something about the population and properties of supermassive black holes that are doing the disrupting.
"We use these observations as a window of opportunity to learn important things about the black holes and their host galaxies," she continued. "Once the tidal disruption flare dims below some threshold luminosity that can be seen in observations, the window closes for that particular galaxy."
Role of supercomputer
In a recent paper submitted to the Astrophysical Journal, Bogdanovic, working with Roseanne Cheng (Center for Relativistic Astrophysics at Georgia Tech) and Pau Amaro-Seoane (Albert Einstein Institute in Potsdam, Germany), considered the tidal disruption of a red giant star by a supermassive black hole using computer modeling.
The paper comes on the heels of the discovery of a tidal disruption event known as PS1-10jh, 2.7 billion light years from Earth, in which a black hole disrupted a helium-rich stellar core thought to be the remnant of a red giant star.
The sequence of events they described aims to explain some unusual aspects of the observational signatures associated with this event, such as the absence of the hydrogen emission lines from the spectrum of PS1-10jh.
As a follow-up to this theoretical study, the team has been running simulations on Kraken and Stampede, as well as Georgia Tech's Keeneland supercomputer. The simulations reconstruct the chain of events by which a stellar core, similar to the remnant of a tidally disrupted red giant star, might evolve under the gravitational tides of a massive black hole.
"Calculating the messy interplay between hydrodynamics and gravity is feasible on a human timescale only with a supercomputer," Cheng said. "Because we have control over this virtual experiment and can repeat it, fast forward, or rewind as needed, we can examine the tidal disruption process from many perspectives. This in turn allows us to determine and quantify the most important physical processes at play."
The research shows how supercomputer simulations complement and constrain theory and observation.
"There are many situations in astrophysics where we cannot get insight into a sequence of events that played out without simulations. We cannot stand next to the black hole and look at how it accretes gas. So we use simulations to learn about these distant and extreme environments," Bogdanovic said.
One of Bogdanovic's goals is to use the knowledge gained from simulations to decode the signatures of observed tidal disruption events.
"The most recent data on tidal disruption events is already outpacing theoretical understanding and calling for the development of a new generation of models," she explained. "The new, better quality data indicates that there is a great diversity among the tidal disruption candidates. This is contrary to our perception, based on earlier epochs of observation, that they are a relatively uniform class of events. We have yet to understand what causes these differences in observational appearance, and computer simulations are guaranteed to be an important part of this journey."
Investigators
Roseanne Cheng
Pau Amaro-Seoane
Tamara Bogdanovic
Thursday, September 26, 2013
ROBOT PERCEPTION
FROM: NATIONAL SCIENCE FOUNDATION
Teaching a computer to perceive the world without human input
Researcher's work could lead to assistive technology for the visually impaired, traffic modeling, and improved navigation and surveillance in robots
Humans can see an object--a chair, for example--and understand what they are seeing, even when something about it changes, such as its position. A computer, on the other hand, can't do that. It can learn to recognize a chair, but can't necessarily identify a different chair, or even the same chair if its angle changes.
"If I show a kid a chair, he will know it's a chair, and if I show him a different chair, he can still figure out that it's a chair," says Ming-Hsuan Yang, an assistant professor of electrical engineering and computer science at the University of California, Merced. "If I change the angle of the chair 45 degrees, the appearance will be different, but the kid will still be able to recognize it. But teaching a computer to see things is very difficult. They are very good at processing numbers, but not good at generalizing things."
Yang's goal is to change this. He is developing computer algorithms that he hopes will give computers, using a single camera, the ability to detect, track and recognize objects, including scenarios where the items drift, disappear, reappear or when other objects obscure them. The goal is to simulate human cognition without human input.
Most humans can effortlessly locate moving objects in a wide range of environments, since they are continually gathering information about the things they see, but this is a challenge for computers. Yang hopes the algorithms he's developing will enable computers to do the same thing, that is, continually amass information about the objects they are tracking.
"While it is not possible to enumerate all possible appearance variation of objects, it is possible to teach computers to interpolate from a wide range of training samples, thereby enabling machines to perceive the world," he says.
Currently, "for a computer, an image is composed of a long string of numbers," Yang says. "If the chair moves, the numbers for those two images will be very different. What we want to do is generalize all the examples from a large amount of data, so the computer will still be able to recognize it, even when it changes. How do we know when we have enough data? We cannot encompass all the possibilities, so we are trying to define ‘chair' in terms of its functionalities."
Potentially, computers that can "see" and track moving objects could improve assistive technology for the visually impaired, and also could have applications in medicine, such as locating and following cells; in tracking insect and animal motion; in traffic modeling for "smart" buildings; and in improved navigation and surveillance in robots.
"For the visually impaired, the most important things are depth and obstacles," Yang says. "This could help them see the world around them. They don't need to see very far away, just to see whether there are obstacles near them, two or three feet away. The computer program, for example, could be in a cane. The camera would be able to create a 3-D world and give them feedback. The computer can tell them that the surface is uneven, so they will know, or sense a human or a car in front of them."
Yang is conducting his research under a National Science Foundation Faculty Early Career Development (CAREER) award, which he received in 2012. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization. He is receiving $473,797 over five years.
Yang's project also includes developing a code library of tracking algorithms and a large data set, which will become publicly available. The grant also provides for an educational component that will involve both undergraduate and graduate students, with an emphasis on encouraging underrepresented minority groups from California's Central Valley to study computer sciences and related fields. The goal is to integrate computer vision material in undergraduate courses so that students will want to continue studying in the field.
Additionally, Yang is helping several undergraduate students design vision applications for mobile phones, and trying to write programs that will enable computers to infer depth and distance, as well as to interpret the images they "see."
"It is not clear exactly how human vision works, but one way to explain visual perception of depth is based on people's two eyes and trigonometry," he says. "By figuring out the geometry of the points, we can figure out depth. We do it all the time, without thinking. But for computers, it's still very difficult to do that.
"The Holy Grail of computer vision is to tell a story using an image or video, and have the computer understand on some level what it is seeing," he adds. "If you give an image to a kid, and ask the kid to tell a story, the kid can do it. But if you ask a computer program to do it, now it can only do a few primitive things. A kid already has the cognitive knowledge to tell a story based on the image, but the computer just sees things as is, but doesn't have any background information. We hope to give the computer some interpretation, but we aren't there yet."
-- Marlene Cimons, National Science Foundation
Investigators
Ming-Hsuan Yang
Related Institutions/Organizations
University of California - Merced