FROM: NATIONAL SCIENCE FOUNDATION
A day in the life of Robotina
What might daily life be like for a research robot that's training to work closely with humans?
On the day of the Lego experiment, I roll out of my room early. I scan the lab with my laser, which sits a foot off the floor, and see a landscape of points and planes. My first scan turns up four dense dots, which I deduce to be a table's legs...
Robotina is a sophisticated research robot. Specifically, it's a Willow Garage PR2, designed to work with people.
But around the MIT Computer Science and Artificial Intelligence Laboratory, it is most often called Robotina.
"We chose a name for every robot in our lab. It's more personal that way," said graduate student Claudia Pérez D'Arpino, who grew up watching the futuristic cartoon The Jetsons. In the Spanish-language version, Rosie, the much-loved household robot, is called Robotina.
Robotina has been in the interactive robotics lab of engineering professor Julie Shah since 2011, where it is one of three main robot platforms Shah's team works with. The name fits: the team aims to give it many of Rosie's capabilities--to interact with humans and perform many types of work.
In her National Science Foundation (NSF)-supported research, Shah and her team study how humans and robots can work together more efficiently. Hers is one of dozens of projects supported by the National Robotics Initiative, a government-wide effort to develop robots that can work alongside humans.
"We focus on how robots can assist people in high-intensity situations, like manufacturing plants, search-and-rescue situations and even space exploration," Shah said.
What Shah and her team are finding in their experiments is that humans often work better and feel more at ease when Robotina is calling the shots--that is, when it's scheduling tasks. In fact, a recent MIT experiment showed that a decision-making robotic helper can make humans significantly more productive.
Part of the reason for this seems to be that people not only trust Robotina's impeccable ability to crunch numbers, they also believe the robot trusts and understands them.
As roboticists develop more sophisticated, human-like robotic assistants, it's easy to anthropomorphize them. Indeed, it's nothing new.
So, what is a day in the life of Robotina like as it struggles to learn social skills?
Give that robot a Coke
I don't just crash into things all the time like some two-year-old human, if that's what you're wondering. My mouth also contains a laser scanner, so I can get a 3-D sense of my surroundings. My eyes are cameras and I can recognize objects...
Robotina has sensors from head to base to help it interact with its environment. With proper programming, its pincer-like hands can do everything from folding towels to fetching Legos (more on that soon).
It could even sip a Coke if it wanted to. Well, not quite. But it could pick up the can without smashing it.
Matthew Gombolay, graduate student and NSF research fellow, once witnessed the act. At the time, he wasn't sure how Robotina would handle the bendable aluminum can.
"I wanted it to pick up a Coke can to see what would happen," Gombolay said. "I thought it'd be really strong and crush the Coke can, but it didn't. It stopped."
That's because Robotina has the ability to gauge how much pressure is just enough to hold or manipulate an object. It can also sense when it is too close to something--or someone--and stop.
Look, I'm 5 feet 4.7 inches tall--even taller if I stretch my metal spine--and weigh a lot more than your average human. If I sense something, I stop...
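For readers curious what that kind of safeguard looks like in software, here is a minimal, self-contained sketch in Python. The sensor readings are simulated and every name and limit is an illustrative assumption; the PR2's real controllers, built on ROS, are far more sophisticated.

```python
# Minimal sketch of the two safeguards described above: force-limited
# grasping and a proximity stop. Readings are simulated; names and limits
# are illustrative assumptions, not the PR2's actual API or specifications.

GRIP_FORCE_LIMIT_N = 4.0   # assumed ceiling: enough to hold a can, not crush it
PROXIMITY_STOP_M = 0.15    # assumed standoff: halt if anything is within 15 cm

def grasp_gently(force_readings):
    """Close the gripper one increment per reading, stopping at the force limit."""
    closed_steps = 0
    for force in force_readings:          # one force reading per closing step
        if force >= GRIP_FORCE_LIMIT_N:   # object is held firmly; stop squeezing
            break
        closed_steps += 1
    return closed_steps

def safe_to_move(obstacle_distance_m):
    """True only when nothing is inside the standoff distance."""
    return obstacle_distance_m >= PROXIMITY_STOP_M

# Simulated grasp: force ramps up as the fingers meet the can.
print(grasp_gently([0.0, 0.5, 1.8, 3.2, 4.6, 6.0]))  # -> 4 steps, then stop
print(safe_to_move(0.10))                 # -> False: someone is too close; halt
```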
Proximity awareness in robots designed to work around people not only prevents dangerous or awkward robot-human collisions, it builds trust.
"I am definitely someone who likes to test things to failure. I want to know if I can trust it," Gombolay said. "So, I know it's not going to crush a Coke can, and I'm strong enough to crush a Coke can, so I feel safer."
Roboticists who aim to integrate robots into human teams are serious about trying to hard-wire robots to follow the spirit of Isaac Asimov's First Law of Robotics: a robot may not injure a human being.
Luckily, when decision-making robots like Robotina move into factories, they don't have to be ballet dancers. They just have to move well enough to do their jobs without hurting anyone. Perhaps as importantly, the people around them must know that the robots won't hurt them.
Robots love Legos, too
The day of the Lego experiment is eight hours of fetching Legos and making decisions about how to assemble them. The calculations are easy enough, but all that labor makes my right arm stop working. So I switch to my left...
In an exercise last fall that mimicked a manufacturing scenario, the researchers set up an experiment that required robot-human teams to build models out of Legos.
In one trial, Robotina created a schedule to complete the tasks; in the other, a human made the decisions. The goal was to determine whether having an autonomous robot on the team might improve efficiency.
The researchers found that when Robotina organized the tasks, they took less time--both for scheduling and assembly. The humans trusted the robot to make impartial decisions and do what was best for the team.
I have to decide what task needs doing next to complete the Lego structure. The humans text me when they are done with a task or ready to start a new one. I schedule the tasks based on the data. I don't play favorites. When I'm not fetching Legos or thinking, I sit quietly...
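The decision logic can be pictured with a toy scheduler: workers report when they are ready, and the robot hands out whichever task is next. Shah's group uses far more capable optimization-based scheduling, so treat the names and the first-come, first-served rule below as illustrative assumptions.

```python
# Toy sketch of the scheduling loop described above; the real system solves
# a much harder optimization problem. Task names and workers are invented.
from collections import deque

def schedule(tasks, ready_events):
    """Assign each worker who reports ready the next pending task."""
    pending = deque(tasks)        # tasks not yet assigned, in fixed order
    assignments = []
    for worker in ready_events:   # e.g., each text message saying "I'm ready"
        if not pending:
            break
        assignments.append((worker, pending.popleft()))
    return assignments

tasks = ["fetch red bricks", "build wall A", "build wall B", "inspect model"]
ready = ["Ann", "Ben", "Ann"]    # Ann finishes early and reports ready again
print(schedule(tasks, ready))
# [('Ann', 'fetch red bricks'), ('Ben', 'build wall A'), ('Ann', 'build wall B')]
```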
"People thought the robot would be unbiased, while a human would be biased based on skills," Gombolay said. "People generally viewed the robot positively as a good teammate."
As it turned out, workers preferred increased productivity over having more control. When it comes to assembling something, "the humans almost always perform better when Robotina makes all the decisions," Shah said.
Predicting the unpredictable
I stand across a table from a human. I sort Legos into cups while the human takes things out of the cups. Humans are incredibly unpredictable, but I do my best to analyze where the human is most likely to move next so that I can accommodate him...
Ideally, in the factories of the future, robots will be able to predict human behavior and movement so well they can easily stay out of the way of their human co-workers.
The goal is to have robots that never even have to use their proximity sensors to avoid collisions. They already know where a human is going and can steer clear.
"Suppose you want a robot to help you out but are uncomfortable when the robot moves in an awkward way. You may be afraid to interact with it, which is highly inefficient," Pérez D'Arpino said. "At the end of the day, you want to make humans comfortable."
To help do so, Pérez D'Arpino is developing a model that will help Robotina guess what a human will do next.
In an experiment where it and a student worked together to sort Lego pieces and build models, Robotina was able to guess in only 400 milliseconds where the human would go next based on the person's body position.
The angle of the arm, elbow, wrist... they all help me determine in what direction the hand will go. I am limited only by the rate at which sensors and processors can collect and analyze data, which means I can predict where a person will move in about the average time a human eye blinks...
Once Robotina knew where the person would reach, it reached for a different spot. The result was a more natural, more fluid collaboration.
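A toy version of that anticipation step appears below: classify a partial arm pose by its nearest match among previously observed reaches, then send the robot somewhere else. Pérez D'Arpino's actual model is statistical and runs on streaming motion data; the joint angles, bin labels and nearest-neighbor rule here are invented for illustration.

```python
# Toy sketch of anticipating a reach target from body posture, then choosing
# a non-conflicting target for the robot. All data here is invented.
import math

# Hypothetical examples: (shoulder, elbow, wrist) angles in degrees, labeled
# with the bin the person ultimately reached for.
observed_reaches = [
    ((20.0, 95.0, 10.0), "left bin"),
    ((45.0, 70.0, 5.0), "middle bin"),
    ((70.0, 50.0, -5.0), "right bin"),
]

def predict_target(pose):
    """Return the label of the stored pose closest to the observed one."""
    return min(observed_reaches,
               key=lambda example: math.dist(example[0], pose))[1]

def pick_robot_bin(human_bin, bins=("left bin", "middle bin", "right bin")):
    """Reach somewhere the human is not about to reach."""
    return next(b for b in bins if b != human_bin)

human = predict_target((42.0, 72.0, 4.0))   # pose sampled early in the motion
print(human, "->", "robot takes", pick_robot_bin(human))
# middle bin -> robot takes left bin
```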
Putting Robotinas to work
I ask myself the same question you do: Am I reaching my full potential?
While Robotina's days now involve seemingly endless cups of Legos, its successes in the MIT lab will eventually enable it to become a more well-rounded robot. The experiments also demonstrate humans' willingness to embrace robots in the right roles.
To make them the superb, cooperative assistants envisioned by the National Robotics Initiative--to give people a better quality of life and benefit society and the economy--could require that some robots be nearly as dynamic and versatile as humans.
"An old-school way of thinking is to make a robot for each task, like the Roomba," Gombolay said. "But unless we make an advanced, general-purpose robot, we won't be able to fully realize their full potential."
To have the ideal Robotina--the Jetsons' Robotina--in our home or workplace means a lot more training days for humans and robots alike. With the help of NSF funding, progress is being made.
"We're at a really exciting time," Gombolay said.
What would I say if I could talk? Probably that I'd really like to watch that Transformers movie.
-- Sarah Bates,
Investigators
Julie Shah
Related Institutions/Organizations
Massachusetts Institute of Technology
Association for the Advancement of Artificial Intelligence
Tuesday, May 27, 2014
RESEARCHERS LOOK AT THE BRAIN
FROM: NATIONAL SCIENCE FOUNDATION
Engineers ask the brain to say, "Cheese!"
How do we take an accurate picture of the world’s most complex biological structure?
Creating new brain imaging techniques is one of today's greatest engineering challenges.
The incentive for a good picture is big: looking at the brain helps us to understand how we move, how we think and how we learn. Recent advances in imaging enable us to see what the brain is doing more precisely across space and time and in more realistic conditions.
The newest advance in optical imaging brings researchers even closer to illuminating the whole brain and nervous system.
Researchers at the Massachusetts Institute of Technology and the University of Vienna achieved simultaneous functional imaging of all the neurons of the transparent roundworm C. elegans. This technique is the first that can generate 3-D movies of entire brains at the millisecond timescale.
The significance of this achievement becomes clear in light of the many engineering complexities associated with brain imaging techniques.
An imaging wish list
When 33 brain researchers put their minds together at a workshop funded by the National Science Foundation in August 2013, they identified three of the biggest challenges in mapping the human brain for better understanding, diagnosis and treatment.
Challenge one: High spatiotemporal resolution neuroimaging. Existing brain imaging technologies offer different advantages and disadvantages with respect to resolution. A method such as functional MRI (fMRI) offers excellent spatial resolution (down to several millimeters) but can provide snapshots of brain activity only on the order of seconds. Other methods, such as electroencephalography (EEG), provide precise information about brain activity over time (to the millisecond) but yield fuzzy information about its location.
The ability to conduct functional imaging of the brain, with high resolution in both space and time, would enable researchers to tease out some of the brain's most intricate workings. For example, each half of the thalamus--the brain's go-to structure for relaying sensory and motor information and a potential target for deep brain stimulation--has 13 functional areas in a package the size of a walnut.
With better spatial resolution, researchers would have an easier time determining which areas of the brain are involved in specific activities. This could ultimately help them identify more precise targets for stimulation, maximizing therapeutic benefits while minimizing unnecessary side effects.
In addition, researchers wish to combine data from different imaging techniques to study and model the brain at different levels, from molecules to cellular networks to the whole brain.
Challenge two: Perturbation-based neuroimaging. Much that we know about the brain relies on studies of dysfunction, when a problem such as a tumor or stroke affects a specific part of the brain and a correlating change in brain function can be observed.
But researchers also rely on techniques that temporarily ramp up, or turn off, brain activity in certain regions. What if the effects of such modifications on brain function could then be captured with neuroimaging techniques?
Being able to observe what happens when certain parts of the brain are activated could help researchers determine brain areas' functions and provide critical guidance for brain therapies.
Challenge three: Neuroimaging in naturalistic environments. Researchers aim to create new noninvasive methods for imaging the brain while a person interacts with his or her surroundings. This ability will become more valuable as new technologies that interface with the brain are developed.
For example, a patient undergoing brain therapy at home may choose to send information to his or her physician remotely rather than go to an office for frequent check-ups. The engineering challenges of this scenario include the creation of low-cost, wearable technologies to monitor the brain as well as the technical capability to differentiate between signs of trouble and normal fluctuations in brain activity during daily routines.
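The "signs of trouble versus normal fluctuations" problem can be made concrete with the simplest possible rule: flag readings that fall far outside the wearer's own baseline. A real device would need much richer models; the data, units and threshold below are invented.

```python
# Toy sketch: flag readings far outside a personal baseline. Real monitors
# would need far richer models; signal values and threshold are invented.
import statistics

def flag_anomalies(baseline, new_readings, z_threshold=4.0):
    """Flag readings more than z_threshold standard deviations from baseline."""
    mean = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return [x for x in new_readings if abs(x - mean) / sd > z_threshold]

baseline = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]  # invented signal
print(flag_anomalies(baseline, [10.4, 9.6, 13.8]))         # -> [13.8]
```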
Other challenges the brain researchers identified are neuroimaging in patients with implanted brain devices; integrating imaging data from multiple techniques; and developing models, theories and infrastructures for better understanding and analyzing brain data. In addition, the research community must ensure that students are prepared to use and create new imaging techniques and data.
The workshop chair, Bin He of the University of Minnesota-Twin Cities, said, "Noninvasive human brain mapping has been a holy grail in science. Accomplishing the three grand challenges would change the future of brain science and our ability to treat numerous brain disorders that cost the nation over $500 billion each year."
The full workshop report was published in IEEE Transactions on Biomedical Engineering.
An imaging breakthrough
Engineers, in collaboration with neuroscientists, computer scientists and other researchers, are already at work devising creative ways to address these challenges.
The workshop findings place the new technique developed by the MIT and University of Vienna researchers in greater context: the work had to overcome several of the challenges outlined above.
The team captured neural activity in three dimensions at single-cell resolution by using a strategy not previously applied to neurons: light-field microscopy, paired with a novel algorithm that reverses the optical distortion, a process known as deconvolution.
Light-field microscopy involves shining light at a 3-D sample and capturing the locations of fluorophores in a still image using a special set of lenses. The fluorophores in this case are modified proteins that attach to neurons and fluoresce when the neurons activate. However, this microscopy method requires a trade-off between sample size and the spatial resolution possible, and thus it had not previously been used for live biological imaging.
The advantage of light-field microscopy, here used in an optimized form, is that it can quickly capture the neuronal activity of whole animals--as movies, not simply still images--while providing high enough spatial resolution to make functional biological imaging possible.
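The team's light-field deconvolution is specialized--three-dimensional and built around a measured optical model--but the core idea of deconvolution fits in a few lines. Below is a minimal one-dimensional Richardson-Lucy sketch; the blur kernel and the two-spike "neuron" signal are invented.

```python
# Minimal 1-D Richardson-Lucy deconvolution: iteratively re-estimate the
# sharp signal that, once blurred by the known kernel, explains the image.
# The kernel and signal are invented; the actual method is 3-D light-field
# deconvolution with a measured optical model.
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    estimate = np.full_like(blurred, 0.5)            # flat initial guess
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

psf = np.array([0.25, 0.5, 0.25])                    # invented blur kernel
sharp = np.zeros(21)
sharp[7] = sharp[13] = 1.0                           # two point-like "neurons"
blurred = np.convolve(sharp, psf, mode="same")
print(np.round(richardson_lucy(blurred, psf), 2))    # mass re-concentrates at 7 and 13
```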
"This elegant technique should have a large impact on the use of functional biological imaging for understanding brain cognitive function," said Leon Esterowitz, program director in NSF's Engineering Directorate, which provided partial funding for the research.
The researchers, led by Edward Boyden of MIT and Alipasha Vaziri of the University of Vienna, reported their results in this week's issue of the journal Nature Methods.
"Looking at the activity of just one neuron in the brain doesn't tell you how that information is being computed; for that, you need to know what upstream neurons are doing. And to understand what the activity of a given neuron means, you have to be able to see what downstream neurons are doing," said Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and one of the leaders of the research team.
"In short, if you want to understand how information is being integrated from sensation all the way to action, you have to see the entire brain."
-- Cecile J. Gonzalez,
Investigators
Edward Boyden
Bin He
Related Institutions/Organizations
Massachusetts Institute of Technology
University of Minnesota-Twin Cities
Monday, March 12, 2012
GRAIL TWIN SPACECRAFT BEGIN MOON STUDY
The following excerpt is from the NASA website:
“WASHINGTON -- NASA's Gravity Recovery And Interior Laboratory (GRAIL) spacecraft orbiting the moon officially have begun their science collection phase. During the next 84 days, scientists will obtain a high-resolution map of the lunar gravitational field to learn about the moon's internal structure and composition in unprecedented detail. The data also will provide a better understanding of how Earth and other rocky planets in the solar system formed and evolved.

"The initiation of science data collection is a time when the team lets out a collective sigh of relief because we are finally doing what we came to do," said Maria Zuber, principal investigator for the GRAIL mission at the Massachusetts Institute of Technology in Cambridge. "But it is also a time where we have to put the coffee pot on, roll up our sleeves and get to work."

The GRAIL mission's twin washing-machine-sized spacecraft, named Ebb and Flow, entered lunar orbit on New Year's Eve and New Year's Day. GRAIL's science phase began yesterday at 8:15 p.m. EST (5:15 p.m. PST). During this mission phase, the spacecraft will transmit radio signals precisely defining the distance between them. As they fly over areas of greater and lesser gravity, caused by visible features such as mountains and craters and by masses hidden beneath the lunar surface, the distance between the two spacecraft will change slightly. Science activities are expected to conclude on May 29, after GRAIL maps the gravity field of the moon three times.

"We are in a near-polar, near-circular orbit with an average altitude of about 34 miles (55 kilometers) right now," said David Lehman, GRAIL project manager from NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif. "During the science phase, our spacecraft will orbit the moon as high as 31 miles (51 kilometers) and as low as 10 miles (16 kilometers). They will get as close to each other as 40 miles (65 kilometers) and as far apart as 140 miles (225 kilometers)."

Previously named GRAIL A and B, the spacecraft received the names Ebb and Flow as the result of a nationwide student contest. The winning entry was submitted by fourth graders from the Emily Dickinson Elementary School in Bozeman, Mont. Nearly 900 classrooms with more than 11,000 students from 45 states, Puerto Rico and the District of Columbia participated in the contest.

JPL manages the GRAIL mission for NASA's Science Mission Directorate in Washington. The GRAIL mission is part of the Discovery Program managed at NASA's Marshall Space Flight Center in Huntsville, Ala. Lockheed Martin Space Systems in Denver built the spacecraft.”
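The measurement principle in the excerpt can be illustrated with a toy calculation: extra mass buried below the surface pulls harder, along the direction of flight, on whichever spacecraft is nearer, so the pair's separation changes slightly as they pass overhead. The mascon mass and flat-ground geometry below are invented for illustration, not mission values.

```python
# Toy illustration of GRAIL's measurement principle: a buried mass
# concentration ("mascon") tugs the two spacecraft unequally along the
# flight direction, changing their separation. All numbers are invented
# except the ~55 km mapping altitude quoted in the release.
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
MASCON_KG = 1e16          # invented excess mass at the surface
ALT_M = 55_000.0          # mapping altitude of about 55 km
SEP_M = 100_000.0         # assumed spacecraft separation of 100 km

def along_track_pull(x):
    """Along-track acceleration from the mascon on a craft offset x meters
    from the point directly above it (negative x = behind)."""
    r_squared = x**2 + ALT_M**2
    return -G * MASCON_KG * x / r_squared**1.5   # points back toward the mascon

lead = along_track_pull(0.0)        # leading craft, directly over the mascon
trail = along_track_pull(-SEP_M)    # trailing craft, 100 km behind
print(f"differential along-track acceleration: {lead - trail:.2e} m/s^2")
# The trailing craft is pulled forward harder, so the gap closes slightly --
# exactly the kind of tiny distance change the radio link measures.
```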