Showing posts with label ARTIFICIAL INTELLIGENCE. Show all posts

Monday, February 9, 2015

AI AND SAFE SELF-DRIVING CARS

FROM:  NATIONAL SCIENCE FOUNDATION
Programming safety into self-driving cars
UMass researchers improve artificial intelligence algorithms for semi-autonomous vehicles
February 2, 2015

For decades, researchers in artificial intelligence, or AI, worked on specialized problems, developing theoretical concepts and workable algorithms for various aspects of the field. Computer vision, planning and reasoning experts all struggled independently in areas that many thought would be easy to solve, but which proved incredibly difficult.

However, in recent years, as the individual aspects of artificial intelligence matured, researchers began bringing the pieces together, leading to amazing displays of high-level intelligence: from IBM's Watson to the recent poker-playing champion to AI systems that can recognize cats on the internet.

These advances were on display this week at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas, where interdisciplinary and applied research were prevalent, according to Shlomo Zilberstein, the conference committee chair and co-author on three papers at the conference.

Zilberstein studies the way artificial agents plan their future actions, particularly when working semi-autonomously--that is to say in conjunction with people or other devices.

Examples of semi-autonomous systems include co-robots working with humans in manufacturing, search-and-rescue robots that can be managed by humans working remotely, and "driverless" cars. It is the latter topic that has particularly piqued Zilberstein's interest in recent years.

The marketing campaigns of leading auto manufacturers have presented a vision of the future where the passenger (formerly known as the driver) can check his or her email, chat with friends or even sleep while shuttling between home and the office. Some prototype vehicles include seats that swivel back to create an interior living room or, as in the case of Google's driverless car, a design with no steering wheel or brakes.

Except in rare cases, it's not clear to Zilberstein that this vision for the vehicles of the near future is a realistic one.

"In many areas, there are lots of barriers to full autonomy," Zilberstein said. "These barriers are not only technological, but also relate to legal and ethical issues and economic concerns."

In his talk at the "Blue Sky" session at AAAI, Zilberstein argued that in many areas, including driving, we will go through a long period where humans act as co-pilots or supervisors, passing off responsibility to the vehicle when possible and taking the wheel when the driving gets tricky, before the technology reaches full autonomy (if it ever does).

In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop.

"People are unpredictable. What happens if the person is not doing what they're asked or expected to do, and the car is moving at sixty miles per hour?" Zilberstein asked. "This requires 'fault-tolerant planning.' It's the kind of planning that can handle a certain number of deviations or errors by the person who is asked to execute the plan."

With support from the National Science Foundation (NSF), Zilberstein has been exploring these and other practical questions related to the possibility of artificial agents that act among us.

Zilberstein, a professor of computer science at the University of Massachusetts Amherst, works with human studies experts from academia and industry to help uncover the subtle elements of human behavior that one would need to take into account when preparing a robot to work semi-autonomously. He then translates those ideas into computer programs that let a robot or autonomous vehicle plan its actions--and create a plan B in case of an emergency.

There are a lot of subtle cues that go into safe driving. Take, for example, a four-way stop. Officially, the first car to reach the stop goes first, but in practice, people watch each other to see if and when to make their move.

"There is a slight negotiation going on without talking," Zilberstein explained. "It's communicating by your action such as eye contact, the wave of a hand, or the slight revving of an engine."

In trials, autonomous vehicles often sit paralyzed at such stops, unable to safely read the cues of the other drivers on the road. This "undecidedness" is a big problem for robots. A recent paper by Alan Winfield of Bristol Robotics Laboratory in the UK showed how robots, when faced with a difficult decision, will often process for such a long period of time as to miss the opportunity to act. Zilberstein's systems are designed to remedy this problem.

"With some careful separation of objectives, planning algorithms could address one of the key problems of maintaining 'live state', even when goal reachability relies on timely human interventions," he concluded.

The ability to tailor one's trip based on human-centered factors--like how attentive the driver can be or the driver's desire to avoid highways--is another aspect of semi-autonomous driving that Zilberstein is exploring.

In a paper with Kyle Wray from the University of Massachusetts Amherst and Abdel-Illah Mouaddib from the University of Caen in France, Zilberstein introduced a new model and planning algorithm that allows semi-autonomous systems to make sequential decisions in situations that involve multiple objectives--for example, balancing safety and speed.

Their experiment focused on a semi-autonomous driving scenario where the decision to transfer control depended on the driver's level of fatigue. They showed that, using their new algorithm, the vehicle favored roads where it could drive autonomously when the driver was fatigued, thus maximizing driver safety.

"In real life, people often try to optimize several competing objectives," Zilberstein said. "This planning algorithm can do that very quickly when the objectives are prioritized. For example, the highest priority may be to minimize driving time and a lower priority objective may be to minimize driving effort. Ultimately, we want to learn how to balance such competing objectives for each driver based on observed driving patterns."

It's an exciting time for artificial intelligence. The fruits of many decades of labor are finally being deployed in real systems, and machine learning is being adopted widely, often for purposes no one had anticipated.

"We are beginning to see these kinds of remarkable successes that integrate decades-long research efforts in a variety of AI topics," said Héctor Muñoz-Avila, program director in NSF's Robust Intelligence cluster.

Indeed, over many decades, NSF's Robust Intelligence program has supported foundational research in artificial intelligence that, according to Zilberstein, has given rise to the amazing smart systems that are beginning to transform our world. But the agency has also supported researchers like Zilberstein who ask tough questions about emerging technologies.

"When we talk about autonomy, there are legal issues, technological issues and a lot of open questions," he said. "Personally, I think that NSF has been able to identify these as important questions and has been willing to put money into them. And this gives the U.S. a big advantage."

-- Aaron Dubrow, NSF

Saturday, November 22, 2014

THE TRAINING OF A RESEARCH ROBOT

FROM:   NATIONAL SCIENCE FOUNDATION 
A day in the life of Robotina
What might daily life be like for a research robot that's training to work closely with humans?

On the day of the Lego experiment, I roll out of my room early. I scan the lab with my laser, which sits a foot off the floor, and see a landscape of points and planes. My first scan turns up four dense dots, which I deduce to be a table's legs...
Robotina is a sophisticated research robot. Specifically, it's a Willow Garage PR2, designed to work with people.

But around the MIT Computer Science and Artificial Intelligence Laboratory, it is most often called Robotina.

"We chose a name for every robot in our lab. It's more personal that way," said graduate student Claudia Pérez D'Arpino, who grew up watching the futuristic cartoon The Jetsons. In the Spanish-language version, Rosie, the much-loved household robot, is called Robotina.

Robotina has been in the interactive robotics lab of engineering professor Julie Shah since 2011, where it is one of three main robot platforms Shah's team works with. Robotina is aptly named: one aim of the research is to give it many of Rosie's capabilities--to interact with humans and perform many types of work.

In her National Science Foundation (NSF)-supported research, Shah and her team study how humans and robots can work together more efficiently. Hers is one of dozens of projects supported by the National Robotics Initiative, a government-wide effort to develop robots that can work alongside humans.

"We focus on how robots can assist people in high-intensity situations, like manufacturing plants, search-and-rescue situations and even space exploration," Shah said.

What Shah and her team are finding in their experiments is that humans often work better and feel more at ease when Robotina is calling the shots--that is, when it's scheduling tasks. In fact, a recent MIT experiment showed that a decision-making robotic helper can make humans significantly more productive.

Part of the reason for this seems to be that people not only trust Robotina's impeccable ability to crunch numbers, they also believe the robot trusts and understands them.

As roboticists develop more sophisticated, human-like robotic assistants, it's easy to anthropomorphize them. Indeed, it's nothing new.

So, what is a day in the life of Robotina like as she struggles to learn social skills?

Give that robot a Coke

I don't just crash into things all the time like some two-year-old human, if that's what you're wondering. My mouth also contains a laser scanner, so I can get a 3-D sense of my surroundings. My eyes are cameras and I can recognize objects...

Robotina has sensors from head to base to help it interact with its environment. With proper programming, its pincer-like hands can do everything from fold towels to fetch Legos (more on that soon).

It could even sip a Coke if it wanted to. Well, not quite. But it could pick up the can without smashing it.

Matthew Gombolay, graduate student and NSF research fellow, once witnessed the act. At the time, he wasn't sure how Robotina would handle the bendable aluminum can.

"I wanted it to pick up a Coke can to see what would happen," Gombolay said. "I thought it'd be really strong and crush the Coke can, but it didn't. It stopped."

That's because Robotina has the ability to gauge how much pressure is just enough to hold or manipulate an object. It can also sense when it is too close to something--or someone--and stop.
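That "just enough pressure" behavior amounts to a simple feedback loop. The sketch below is a bare-bones illustration, not the PR2's actual grasp controller, and the slip-sensor callback and force cap are hypothetical: close the gripper gradually until the object stops slipping, and never exceed a force that would crush it.

```python
def grip(still_slipping, max_force=20.0, step=0.5):
    """Increase grip force until the object stops slipping or the cap
    is reached.

    still_slipping(force) -> True if the object slips at this force
    Returns the force (in arbitrary units) at which the gripper settled.
    """
    force = 0.0
    while force < max_force and still_slipping(force):
        force += step          # squeeze a little harder
    return min(force, max_force)
```

A soft can that stops slipping at a low force is held gently; an object that never stops slipping is simply held at the cap rather than crushed.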

Look, I'm 5-feet-and-4.7-inches tall--even taller if I stretch my metal spine--and weigh a lot more than your average human. If I sense something, I stop...

Proximity awareness in robots designed to work around people not only prevents dangerous or awkward robot-human collisions, it builds trust.

"I am definitely someone who likes to test things to failure. I want to know if I can trust it," Gombolay said. "So, I know it's not going to crush a Coke can, and I'm strong enough to crush a Coke can, so I feel safer."

Roboticists who aim to integrate robots into human teams are serious about trying to hard-wire robots to follow the spirit of Isaac Asimov's First Law of Robotics: a robot may not injure a human being.

Luckily, when decision-making robots like Robotina move into factories, they don't have to be ballet dancers. They just have to move well enough to do their jobs without hurting anyone. Perhaps as importantly, the people around them must know that the robots won't hurt them.

Robots love Legos, too

The day of the Lego experiment is eight hours of fetching Legos and making decisions about how to assemble them. The calculations are easy enough, but all that labor makes my right arm stop working. So I switch to my left...

In an exercise last fall that mimicked a manufacturing scenario, the researchers set up an experiment that required robot-human teams to build models out of Legos.

In one trial, Robotina created a schedule to complete the tasks; in the other, a human made the decisions. The goal was to determine whether having an autonomous robot on the team might improve efficiency.

The researchers found that when Robotina organized the tasks, they took less time--both for scheduling and assembly. The humans trusted the robot to make impartial decisions and do what was best for the team.

I have to decide what task needs doing next to complete the Lego structure. The humans text me when they are done with a task or ready to start a new one. I schedule the tasks based on the data. I don't play favorites. When I'm not fetching Legos or thinking, I sit quietly...
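The scheduler in the MIT experiments handles deadlines and shared workspace constraints; the flavor of robot-as-dispatcher can be conveyed with a classic longest-processing-time heuristic. Everything below (task names, durations, worker names) is invented for illustration and is not the team's algorithm: hand the longest remaining task to whichever worker frees up first.

```python
import heapq

def assign_tasks(tasks, workers):
    """Longest-processing-time list scheduling: give the longest remaining
    task to whichever worker becomes free first. Returns the makespan and
    the per-worker task lists.

    tasks: {task_name: duration}
    """
    loads = [(0.0, w, []) for w in workers]   # (busy-until, name, tasks)
    heapq.heapify(loads)
    for name, dur in sorted(tasks.items(), key=lambda kv: -kv[1]):
        t, w, assigned = heapq.heappop(loads)  # soonest-free worker
        assigned.append(name)
        heapq.heappush(loads, (t + dur, w, assigned))
    makespan = max(t for t, _, _ in loads)
    return makespan, {w: a for _, w, a in loads}
```

On a small Lego-style task set, the heuristic balances the load: four tasks totaling 11 time units across two workers finish in 6, which here matches the optimum.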

"People thought the robot would be unbiased, while a human would be biased based on skills," Gombolay said. "People generally viewed the robot positively as a good teammate."

As it turned out, workers preferred increased productivity over having more control. When it comes to assembling something, "the humans almost always perform better when Robotina makes all the decisions," Shah said.

Predicting the unpredictable

I stand across a table from a human. I sort Legos into cups while the human takes things out of the cups. Humans are incredibly unpredictable, but I do my best to analyze where the human is most likely to move next so that I can accommodate him...

Ideally, in the factories of the future, robots will be able to predict human behavior and movement so well they can easily stay out of the way of their human co-workers.

The goal is to have robots that never even have to use their proximity sensors to avoid collisions. They already know where a human is going and can steer clear.

"Suppose you want a robot to help you out but are uncomfortable when the robot moves in an awkward way. You may be afraid to interact with it, which is highly inefficient," Pérez D'Arpino said. "At the end of the day, you want to make humans comfortable."

To help do so, Pérez D'Arpino is developing a model that will help Robotina guess what a human will do next.

In an experiment where it and a student worked together to sort Lego pieces and build models, Robotina was able to guess in only 400 milliseconds where the human would go next based on the person's body position.

The angle of the arm, elbow, wrist... they all help me determine in what direction the hand will go. I am limited only by the rate at which sensors and processors can collect and analyze data, which means I can predict where a person will move in about the average time a human eye blinks...

Once Robotina knew where the person would reach, it reached for a different spot. The result was a more natural, more fluid collaboration.
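Pérez D'Arpino's model is trained on motion-capture data of real reaches; a bare-bones geometric stand-in for the same guess (the targets, coordinates, and function names here are invented) compares the wrist's early motion direction against the bearing to each candidate target and picks the best match:

```python
import math

def predict_target(trajectory, targets):
    """Guess which target a reaching hand is headed for from its first
    few positions.

    trajectory: list of (x, y) wrist positions, earliest first
    targets:    {name: (x, y)}
    Returns the name of the target best aligned with the motion.
    """
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    vx, vy = x1 - x0, y1 - y0
    speed = math.hypot(vx, vy) or 1.0

    def alignment(pos):
        dx, dy = pos[0] - x0, pos[1] - y0
        dist = math.hypot(dx, dy) or 1.0
        # Cosine of the angle between observed motion and target bearing
        return (vx * dx + vy * dy) / (speed * dist)

    return max(targets, key=lambda name: alignment(targets[name]))
```

With only a short initial segment of the trajectory, the best-aligned target already stands out, which is the intuition behind sub-half-second predictions.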

Putting Robotinas to work

I ask myself the same question you do: Am I reaching my full potential?

While Robotina's days now involve seemingly endless cups of Legos, its successes in the MIT lab will eventually enable it to become a more well-rounded robot. The experiments also demonstrate humans' willingness to embrace robots in the right roles.

To make them the superb, cooperative assistants envisioned by the National Robotics Initiative--to give people a better quality of life and benefit society and the economy--could require that some robots be nearly as dynamic and versatile as humans.

"An old-school way of thinking is to make a robot for each task, like the Roomba," Gombolay said. "But unless we make an advanced, general-purpose robot, we won't be able to fully realize their full potential."

To have the ideal Robotina--the Jetsons' Robotina--in our home or workplace means a lot more training days for humans and robots alike. With the help of NSF funding, progress is being made.

"We're at a really exciting time," Gombolay said.

What would I say if I could talk? Probably that I'd really like to watch that Transformers movie.

-- Sarah Bates
Investigators
Julie Shah
Related Institutions/Organizations
Massachusetts Institute of Technology
Association for the Advancement of Artificial Intelligence

Tuesday, July 1, 2014

DRIVERLESS VEHICLES

FROM:  NATIONAL SCIENCE FOUNDATION 
Demonstrating a driverless future
Carnegie Mellon researchers bring NSF-funded autonomous vehicle to D.C. to show promise of driverless cars

In the coming decades, we will likely commute to work and explore the countryside in autonomous, or driverless, cars capable of communicating with the roads they are traveling on. A convergence of technological innovations in embedded sensors, computer vision, artificial intelligence, control and automation, and computer processing power is making this feat a reality.

This week, researchers from Carnegie Mellon University (CMU) will mark a significant milestone, demonstrating one of the most advanced autonomous vehicles ever designed, capable of navigating on urban roads and highways without human intervention. The car was brought to Washington, D.C., at the request of Congressman Bill Shuster of Pennsylvania, who participated in a 33-mile drive in the autonomous vehicle between a Pittsburgh suburb and the city's airport last September.

Developed with support from the National Science Foundation (NSF), the U.S. Department of Transportation, DARPA and General Motors, the car is the result of more than a decade of research and development by scientists and engineers at CMU and elsewhere. Their work has advanced the underlying technologies--sensors, software, wireless communications and network integration--required to make sure a vehicle on the road is as safe--and ultimately safer--without a driver than with one. (In the case of the Washington, D.C., demonstration, an engineer will be on hand to take the wheel if required.)

"This technology has been enabled by remarkable advances in the seamless blend of computation, networking and control into physical objects--a field known as cyber-physical systems," said Cora Marrett, NSF deputy director. "The National Science Foundation has long supported fundamental research that has built a strong foundation to enable cyber-physical systems to become a reality--like Dr. Raj Rajkumar's autonomous car."

Raj Rajkumar, a professor of electrical and computer engineering and robotics at CMU, is a leader not just in autonomous vehicles, but in the broader field of cyber-physical systems, or CPS. Such systems are already in use in sectors such as agriculture, energy, healthcare and advanced manufacturing, and they are poised to make an impact in transportation as well.

"Federal funding has been critical to our work in dealing with the uncertainties of real-world operating conditions, making efficient real-time usage of on-board computers, enabling vehicular communications and ensuring safe driving behaviors," Rajkumar said.

In 2007, Carnegie Mellon's then state-of-the-art driverless car, BOSS, took home the $2 million grand prize in the DARPA Urban Challenge, which pitted the leading autonomous vehicles in the world against one another in a challenging, urban environment. The new vehicle that Rajkumar is demonstrating in Washington, D.C., is the successor to that vehicle.

Unlike BOSS, which was rigged with visible antennas and large sensors, CMU's new car--a Cadillac SRX--doesn't appear particularly "smart." In fact, it looks much like any other car on the road. However, top-of-the-line radar, cameras, sensors and other technologies are built into the body of the vehicle. The car's computers are tucked away under the floor.

The goal of CMU's researchers is simple but important: To develop a driverless car that can decrease injuries and fatalities on roads. Automotive accidents result in 1.2 million fatalities annually around the world and cost citizens and governments $518 billion. It is estimated that 90 percent of those accidents are caused by human error.

"Because computers don't get distracted, sleepy or angry, they can actually keep us much safer--that is the promise of this technology," Rajkumar said. "Over time, the technology will augment automotive safety significantly."

In addition to controlling the steering, speed and braking, the autonomous systems in the vehicle also detect and avoid obstacles in the road, including pedestrians and bicyclists.

In their demonstration in D.C., cameras in the vehicle will visually detect the status of traffic lights and respond appropriately. In collaboration with the D.C. Department of Transportation, the researchers have even added a technology that allows some of the traffic lights in the Capitol Hill neighborhood of Washington to wirelessly communicate with the car, telling it the status of the lights ahead.

NSF has supported Rajkumar's work on autonomous vehicles since 2005, but it is not the only project of this kind that NSF supports. In addition to CMU's driverless car, NSF supports Sentry, an autonomous underwater vehicle deployed at Woods Hole Oceanographic Institute, and several projects investigating unmanned aerial vehicles (UAVs) including those in use in search and rescue and disaster recovery operations. Moreover, NSF supports numerous projects that advance the fundamental theories and applications that underlie all autonomous vehicles and other cyber-physical systems.

In the last five years, NSF has invested over $200 million in CPS research and education, building a foundation for the smart systems of the future.

-NSF-

Media Contacts
Aaron Dubrow, NSF
Byron Spice, Carnegie Mellon University
Principal Investigators
Raj Rajkumar, Carnegie Mellon University

Monday, May 19, 2014

LEARNING TO ADAPT WHEN YOU'RE AN ARTIFICIAL BRAIN

FROM:  NATIONAL SCIENCE FOUNDATION 
Artificial brains learn to adapt
Neural networks imitate intelligence of biological brains

For every thought or behavior, the brain erupts in a riot of activity, as thousands of cells communicate via electrical and chemical signals. Each nerve cell influences others within an intricate, interconnected neural network. And connections between brain cells change over time in response to our environment.

Despite supercomputer advances, the human brain remains the most flexible, efficient information processing device in the world. Its exceptional performance inspires researchers to study and imitate it as an ideal of computing power.

Artificial neural networks

Computer models built to replicate how the brain processes, memorizes and/or retrieves information are called artificial neural networks. For decades, engineers and computer scientists have used artificial neural networks as an effective tool in many real-world problems involving tasks such as classification, estimation and control.

However, artificial neural networks do not take into consideration some of the basic characteristics of the human brain such as signal transmission delays between neurons, membrane potentials and synaptic currents.

A new generation of neural network models -- called spiking neural networks -- is designed to better model the dynamics of the brain, where neurons initiate signals to other neurons in their networks with a rapid spike in cell voltage. By modeling biological neurons more faithfully, spiking neural networks may be able to mimic brain activity in simulation, enabling researchers to investigate neural networks in a biological context.
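The simplest spiking model, the leaky integrate-and-fire neuron, shows the dynamics these networks add over conventional artificial neurons. The parameters below are arbitrary illustration values, not taken from the Duke work: the membrane voltage leaks toward rest, integrates input current, and emits a spike (then resets) when it crosses a threshold.

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: input drive at each time step
    Returns the list of time steps at which the neuron spiked.
    """
    v, spikes = v_rest, []
    for step, current in enumerate(input_current):
        # Euler step of dv/dt = (v_rest - v) / tau + I
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_threshold:
            spikes.append(step)    # spike, then reset the membrane
            v = v_reset
    return spikes
```

A constant supra-threshold current makes the neuron fire at a regular rate, while zero input produces no spikes at all -- the timing of those spikes, rather than a single continuous output, is what carries information in these models.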

With funding from the National Science Foundation, Silvia Ferrari of the Laboratory for Intelligent Systems and Controls at Duke University uses a new variation of spiking neural networks to better replicate the behavioral learning processes of mammalian brains.

Behavioral learning involves the use of sensory feedback, such as vision, touch and sound, to improve motor performance and enable people to respond and quickly adapt to their changing environment.

"Although existing engineering systems are very effective at controlling dynamics, they are not yet capable of handling unpredicted damages and failures handled by biological brains," Ferrari said.

How to teach an artificial brain

Ferrari's team is applying the spiking neural network model of learning on the fly to complex, critical engineering systems, such as aircraft and power plants, with the goal of making them safer, more cost-efficient and easier to operate.

The team has constructed an algorithm that teaches spiking neural networks which information is relevant and how important each factor is to the overall goal. Using computer simulations, they've demonstrated the algorithm on aircraft flight control and robot navigation.

They started, however, with an insect.

"Our method has been tested by training a virtual insect to navigate in an unknown terrain and find foods," said Xu Zhang, a Ph.D. candidate who works on training the spiking neural network. "The nervous system was modeled by a large spiking neural network with unknown and random synaptic connections among those neurons."

Having tested their algorithm in computer simulations, they now are in the process of testing it biologically.

To do so, they will use lab-grown brain cells genetically altered to respond to certain types of light. This technique, called optogenetics, allows researchers to control how nerve cells communicate. When the light pattern changes, the neural activity changes.

The researchers hope to observe that the living neural network adapts over time to the light patterns and therefore has the ability to store and retrieve sensory information, just as human neuronal networks do.

Large-scale applications of small-scale findings

Uncovering the fundamental mechanisms responsible for the brain's learning processes can potentially yield insights into how humans learn--and make an everyday difference in people's lives.

Such insights may advance the development of certain artificial devices that can substitute for certain motor, sensory or cognitive abilities, particularly prosthetics that respond to feedback from the user and the environment. People with Parkinson's disease and epilepsy have already benefited from these types of devices.

"One of the most significant challenges in reverse-engineering the brain is to close the knowledge gap that exists between our understanding of biophysical models of neuron-level activity and the synaptic plasticity mechanisms that drive meaningful learning," said Greg Foderaro, a postdoctoral fellow involved the the research.

"We believe that by considering the networks at several levels--from computation to cell cultures to brains--we can greatly expand our understanding of the system of sensory and motor functions, as well as making a large step towards understanding the brain as a whole."

-- Sarah Bates
-- Silvia Ferrari, Duke University
-- Greg Foderaro, Duke University
-- Xu Zhang, Duke University
Investigators
Silvia Ferrari
Pankaj Agarwal
John Albertson
Craig Henriquez
Gabriel Katul
Ronald Parr
Antonius VanDongen
Related Institutions/Organizations
Duke University

Monday, December 17, 2012

COGNITIVE SIMULATION TOOL MAY HELP IMPROVE CULTURAL INTERACTION

U.S. Military and Provincial Troops in Afghanistan.  Credit:  U.S. Navy.

FROM: U.S. DEPARTMENT OF DEFENSE,  'ARMED WITH SCIENCE'

by jtozer

Top Tech-Cognitive Simulation Tool

Top Technology is an Armed with Science series that highlights the latest and greatest federal laboratory inventions which are available for transfer to business partners.

The Naval Research Laboratory has patented an artificial intelligence and cognitive modeling technology designed to better understand what can happen in culturally diverse circumstances. It's called the Cognitive Simulation Tool, and it could very well change the way we interact with people from different cultures.

So what is it?

The techno-babble version: NRL has patented a learning algorithm grounded in social science that models the interactions of agents/actors from different groups or cultures. The tool embedding this technology uses agent-based simulation of preference-driven agents endowed with cognitive maps representing their causal beliefs.

What does that mean?

That means that this is simulation technology that allows us to get a better understanding of what can happen when two very different groups have to interact with each other. Agents can modify their cognitive maps through social learning, and a user can seed the simulation with diverse belief structures and activate the simulation to predict coalitions/conflicts and shifts of allegiance.

Basically, it’s a what-if social scenario simulator (say that ten times fast).

What does it do?

I don’t think I need to tell you that the balance of social interaction can be a delicate one.

When it comes to speaking or working with foreign nationals, being able to respect them and possibly encourage cooperation to a mutual benefit can be influential, and in some cases necessary. This technology is designed to measure the impact of a foreign presence on a society before systems collide. It can predict coalitions, population attitudes in response to exogenous events, and even visualize group information.

How can this help the warfighter?

Service members typically spend a lot of time interacting with different people from different social, economic and religious backgrounds. Having a better understanding of how to approach people is as valuable as having situational awareness. Indeed, it’s a viable element of SA. This tool could provide service members with the skills they need to interact with diverse groups effectively and positively, while also teaching them how to be more effective at certain forms of communication.

Also it includes a video gaming system, so that’s bound to be fun.

My take?

I think everyone could benefit from a little social interaction training. If people could plan ahead on how to interact with others I think the world would be a less awkward place.

Imagine how different first dates would be if you’d already ruled out all those cheesy one-liners and unfunny jokes. Or an interview simulator that allowed you to figure out if your self-deprecating humor would fall flat or not. Now I’m not saying this Cognitive Simulation Tool is capable of that – it’s certainly not going to fix all the awkward conversations in the world – but it can help service members to cross certain cultural barriers in times where it could really be important to do so.

Like on a deployment. Or establishing new multi-cultural collaborations. Or ordering food in a foreign country.

Now, this is something that falls under the heading of education and homeland security, but really I think it would help our warfighters to become better, more effective ambassadors to these other countries. Part of our mission is to be able to connect to people from other countries. It’s intuitive that we ought to prepare our troops for any and all circumstances they might encounter.

Having adequate training that prepares warfighters for any real-world scenario is important.

Having a social interaction simulator is, in my opinion, a long time coming.

My take on this is make it so, NRL. And besides, you know I’m a fan of anything that brings us that much closer to a holodeck.


Tuesday, August 7, 2012

MAKING A LASER FROM TOY PIECES?

FROM: U.S. DEPARTMENT OF DEFENSE "ARMED WITH SCIENCE"
Here's another view of the Prototyping High Bay in the LEGO model. In the actual Prototyping High Bay, lighting can be adjusted to simulate nighttime conditions. (Photo: U.S. Naval Research Laboratory, Jamie Hartman)
 

LEGO LASER!
By jtozer
For his day time job, William Adams works in the Navy Center for Applied Research in Artificial Intelligence at the Naval Research Laboratory, supporting research in human-robot interaction, sensing, and autonomy. He manages the resources of the Center’s robot lab, and keeps the Center’s Mobile, Dexterous, Social (MDS) robots – Octavia, Isaac, and Lucas – operating and configured to meet research needs.

It was in April of 2012 that NRL opened the brand-new Laboratory for Autonomous Systems Research facility. The building and opening of that one-of-a-kind facility sparked an idea in William’s mind that led to a LEGO model. For those of us who enjoyed simple LEGO projects as children or with our children, the scope of this project is beyond our imagination.

Here’s how William describes the project:
How long did it take you to build the LEGO model of LASR?
It took approximately 120 hours, working a few evenings a week, over the course of three months. It also took seven trips to the three local LEGO stores to buy additional bricks.

Do you know how many pieces are used in the model?
It wasn’t practical to keep an accurate tally during construction, but I have made a rather detailed post-construction estimate of 13,400 pieces.

Tell us about the details from inside some of the LASR rooms. Were you able to build all of the actual LASR environments in your LEGO model?
Limitations on time and brick (the community’s collective term for LEGO pieces) prevented a complete interior, but I tried to represent most of the spaces. The Reconfigurable Prototyping High Bay, Littoral High Bay, Desert High Bay, Tropical High Bay, Power and Energy Lab, two Human-System Interaction Labs, the Machine Shop, Electrical Shop, and changing room all have full interiors.

What sparked the idea for you to attempt making this model?
LEGO recently released a line of architectural kits, all in a very small scale. I had some aging LEGO models in my office that needed replacing and figured I could build a model of the LASR building. Then I thought about the larger models sometimes seen on display and decided to build it larger for the opening of the LASR facility (a deadline I ended up missing). Building at a larger scale of approximately 1:60, rather than LEGO figure ("minifig") scale of approximately 1:48, allowed for detailed interiors while cutting the brick demands roughly in half and keeping the model transportable.
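That "half the bricks" figure checks out as simple volume scaling, by the way. A quick back-of-the-envelope calculation (our arithmetic, not William's):

```python
# Going from minifig scale (~1:48) to ~1:60 shrinks each linear
# dimension of the model by a factor of 48/60, and brick count
# scales roughly with volume (linear ratio cubed).
linear_ratio = 48 / 60            # 0.8
volume_ratio = linear_ratio ** 3  # 0.8 * 0.8 * 0.8
print(round(volume_ratio, 2))     # 0.51 -- about half the bricks
```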

Have you built other models of this scale and complexity?
Not really. When we were kids my siblings and I would build custom castles on the dining room table and lay siege, according to a well-thought-out set of rules inspired by various board games. My brother and I built a model of the National Cathedral that rose with different color strata as we exhausted our brick supply of each.

Several years ago I built a set of detailed models with the theme of a medieval shipyard, each showing a particular trade or technology, but those models were much smaller and could have all fit within the LASR model’s large high bay.

Where is the model located now? Will you keep it as a permanent model?
The model is on display in the front area of the LASR facility, where it will stay until either the LASR Director needs the space, or I need the brick for recycling into a new model. It will probably be there through the holidays this year.

How and when did you start working with LEGOS?
I remember a pre-existing butter-tub of LEGOS from way back. Things really got moving when I was 5, in 1975, and my father took us to the toy store and bought us the moon landing kit; that’s #565 for the AFOLs. Since it kept us kids occupied, LEGO became standard fare for birthday and Christmas presents. After a high school and college hiatus, I picked up the habit again, although now we "kids" never really get the time to build together.
(Editor’s note: "AFOL" refers to "Adult Fan of LEGO" and describes those adult hobbyists who build or collect LEGO.)

Have you started a new LEGO project yet?
I don’t have any specific plans for another LEGO project. I’ll be adding to the LASR model to keep it up to date and keep it interesting.

We look forward to seeing the updates on the LASR model … or William’s next big LEGO project.

 

Monday, June 18, 2012

ARTIFICIAL INTELLIGENCE AND UNDERSTANDING INFERENCES

Icon Credit:  lcb.
FROM:  U.S. DEPARTMENT OF DEFENSE ARMED WITH SCIENCE
DEEP EXPLORATION AND FILTERING TEXT  
Written on JUNE 8, 2012 AT 7:10 AM by JTOZER
Are You Inferring What I think You’re Inferring?
“Much of the operationally-relevant information relied on in support of DoD missions may be implicit rather than explicitly expressed, and in many cases, information is deliberately obfuscated and important activities and objects are only indirectly referenced.”

In short, sometimes the meaning in messages just isn’t clear.  Ironic, isn’t it?
That can be a problem, though, especially on the ground where accurate and timely intelligence can affect the success or failure of a mission.

Through various and sundry means, DoD collects vast sets of data in the form of messages, documents, notes and the like, both on and off the battlefield. Thoroughly and efficiently processing this data to extract valuable content is a challenge based on volume alone, but the problem is magnified when important information within those files is deliberately masked by its authors.

So, what is there to do when you have commanders and warfighters on the front line depending on analysts to help them build informed plans?
Why, you use technology, of course.

DARPA is developing a new type of automated, deep natural-language understanding technology which they say may hold a solution for more efficiently processing text information.  A mumbo-jumbo decomplicator?
Go on…
When processed at its most basic level without ingrained cultural filters, language offers the key to understanding connections in text that might not be readily apparent to humans.  A “just the facts” approach is more effective than the “give us the whole story” angle,
so to speak.  Also, it’s fun to talk like a 1940s detective.
But how do you do that?  Not the detective talk, sweetheart, I mean the dialing-it-down.  Getting to the root of the story.  The meat and potatoes of the whole shebang (okay, I’ll stop).

In short, DEFT.  At length…it makes more sense.
DARPA created the Deep Exploration and Filtering of Text (DEFT) program to harness the power of language. Sophisticated artificial intelligence of this nature has the potential to enable defense analysts to efficiently investigate documents so they can discover implicitly expressed, actionable information contained within them.
Letting the technology do the dirty work, eh? (last one I swear).

Actually that’s a smart idea.  But that implies that the system will be capable of understanding the thought that goes behind human communication, right?  How is an AI going to know what we mean by our vague platitudes or double entendres?  Will it know what I mean when I type LOL?

“DEFT is attempting to create technology to make reliable inferences based on basic text,” said Bonnie Dorr, DARPA program manager for DEFT.  “We want the ability to mitigate ambiguity in text by stripping away filters that can cloud meaning and by rejecting false information.  To be successful, the technology needs to look beyond what is explicitly expressed in text to infer what is actually meant.”

The development of an automated solution may rely on contributions from the linguistics and computer science fields in the areas of artificial intelligence, computational linguistics, machine learning, natural-language understanding, discourse and dialogue analysis, and others.

They could have used this in Star Trek, you know.  Something complicated would happen and then one of the characters would have to sum it up in a simple metaphor.  With DEFT, they could have taken all those pithy one-liners out.  Imagine all the time and energy they could have saved while the warp core breached.  Again.

DEFT will build on existing DARPA programs and ongoing academic research into deep language understanding and artificial intelligence to address remaining capability gaps related to inference, causal relationships and anomaly detection.

“Much of the basic research needed for DEFT has been accomplished, but now has to be scaled, applied and integrated through the development of new technology,” Dorr said.
As information is processed, DEFT also aims to integrate individual facts into large domain models for assessment, planning and prediction. If successful, DEFT will allow analysts to move from limited, linear processing of insurmountable quantities of data to a nuanced, strategic exploration of available information.
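To get a feel for what “integrating individual facts” and inferring unstated information might look like, here’s a toy sketch of our own – emphatically not DARPA’s actual DEFT technology. The entities, relation names, and the single inference rule are all invented for illustration:

```python
# Explicitly stated facts, extracted from text as (subject, relation, object)
# triples. Everything here is hypothetical.
facts = {
    ("courier", "delivered_package_to", "warehouse"),
    ("warehouse", "located_in", "district_7"),
}

def infer(facts):
    """Apply one hand-written rule: if X delivered something to Y,
    and Y is located in Z, then X must have visited Z -- a fact the
    source text never states explicitly."""
    inferred = set()
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == "delivered_package_to" and r2 == "located_in" and b == c:
                inferred.add((a, "visited", d))
    return inferred

print(infer(facts))  # {('courier', 'visited', 'district_7')}
```

The real program aims to do this at scale, with learned rather than hand-written rules, and over messy natural language instead of tidy triples – but the basic move is the same: chain explicit facts together to surface what was only implied.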

So, by reading between the overly-complicated lines, DEFT might actually be able to glean useful information from what would otherwise be muddled and confusing messages.  I wonder if they can turn this technology into a teenage-text filter so I can weed through the bizarre pseudo-cavemen texts I get from my cousin.  Then I’ll finally know what “IDK, TTYL8R” means.
