UCSD Researchers Give Computers Common Sense

Looking at the photo above, you see a person on a tennis court, wielding a tennis racket and chasing a...lemon. Right? Wrong. You don't think it's a lemon. You know it's a tennis ball. Computers with the latest image labeling algorithms don't have the contextual wits to know a lemon is very unlikely in this scene. UCSD computer scientists are looking to change that. Image credit: UC San Diego
by Staff Writers
San Diego CA (SPX) Oct 18, 2007
Using a little-known Google Labs widget, computer scientists from UC San Diego and UCLA have brought common sense to an automated image labeling system. This common sense is the ability to use context to help identify objects in photographs. For example, if a conventional automated object identifier has labeled a person, a tennis racket, a tennis court and a lemon in a photo, the new post-processing context check will re-label the lemon as a tennis ball.

"We think our paper is the first to bring external semantic context to the problem of object recognition," said computer science professor Serge Belongie from UC San Diego.

The researchers show that the Google Labs tool called Google Sets can be used to provide external contextual information to automated object identifiers. The paper will be presented on Thursday 18 October 2007 at ICCV 2007 - the 11th IEEE International Conference on Computer Vision in Rio de Janeiro, Brazil.

Google Sets generates lists of related items or objects from just a few examples. If you type in John, Paul and George, it will return the words Ringo, Beatles and John Lennon. If you type "neon" and "argon," it will give you the rest of the noble gases.
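Google Sets itself has no public API, but the underlying idea of set expansion can be sketched in a few lines: given a few seed items, find the known category that best covers them and return its remaining members. The categories below are illustrative placeholders, not Google's actual data.

```python
# Minimal sketch of the set-expansion idea behind Google Sets.
# The category lists here are assumed for illustration only.
CATEGORIES = {
    "noble gases": {"helium", "neon", "argon", "krypton", "xenon", "radon"},
    "tennis": {"person", "tennis racket", "tennis court", "tennis ball", "net"},
    "fruit": {"lemon", "apple", "orange", "banana"},
}

def expand_set(seeds):
    """Return the non-seed members of the category overlapping the seeds most."""
    seeds = set(seeds)
    best = max(CATEGORIES.values(), key=lambda members: len(members & seeds))
    return sorted(best - seeds)

print(expand_set(["neon", "argon"]))
# → ['helium', 'krypton', 'radon', 'xenon']
```

A real system would back this with web-scale co-occurrence statistics rather than hand-written lists, but the interface (a few seeds in, a contextually related set out) is the same.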

"In some ways, Google Sets is a proxy for common sense. In our paper, we showed that you can use this common sense to provide contextual information that improves the accuracy of automated image labeling systems," said Belongie.

The image labeling system is a three-step process. First, an automated system splits the image into regions through image segmentation. In the photo above, segmentation separates the person, the court, the racket and the yellow sphere.
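The segmentation step can be illustrated with a toy example: group adjacent pixels that share a value into labeled regions (4-connected components). Real segmenters use far richer cues such as color, texture and boundaries, but the output has the same shape: a region label for every pixel.

```python
# Toy image segmentation: flood-fill 4-connected regions of equal value.
def segment(image):
    h, w = len(image), len(image[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for i in range(h):
        for j in range(w):
            if labels[i][j] is not None:
                continue
            # Flood-fill this region with a fresh label.
            stack = [(i, j)]
            labels[i][j] = next_label
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and image[ny][nx] == image[y][x]):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels, next_label

image = [
    [0, 0, 1],
    [0, 2, 1],
    [2, 2, 1],
]
labels, n = segment(image)
print(n)  # → 3 (one region of 0s, one of 1s, one of 2s)
```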

Next, an automated system provides a ranked list of probable labels for each of these image regions.

Finally, the system adds a dose of context by processing all the different possible combinations of labels within the image and maximizing the contextual agreement among the labeled objects within each picture.

It is during this step that Google Sets can be used as a source of context that helps the system turn a lemon into a tennis ball. In this case, these "semantic context constraints" helped the system disambiguate between visually similar objects.
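The context step described above can be sketched as a joint optimization: each region carries ranked candidate labels with appearance scores, and a pairwise bonus (which could be seeded from a resource like Google Sets) rewards labels that belong together. The system picks the combination maximizing the total. All scores and the bonus value below are assumed for illustration.

```python
# Sketch of the context step: maximize appearance scores plus
# pairwise contextual agreement. All numeric values are assumed.
from itertools import combinations, product

# Per-region label candidates with appearance scores (assumed values).
candidates = [
    {"person": 0.9},
    {"tennis court": 0.8},
    {"tennis racket": 0.7},
    {"lemon": 0.6, "tennis ball": 0.5},  # the visually ambiguous region
]

# Labels that co-occur in the "tennis" context earn a pairwise bonus.
TENNIS = {"person", "tennis court", "tennis racket", "tennis ball"}

def context_bonus(a, b):
    return 0.3 if a in TENNIS and b in TENNIS else 0.0

def best_labeling(candidates):
    best, best_score = None, float("-inf")
    for combo in product(*candidates):  # every joint label assignment
        score = sum(candidates[i][lab] for i, lab in enumerate(combo))
        score += sum(context_bonus(a, b) for a, b in combinations(combo, 2))
        if score > best_score:
            best, best_score = combo, score
    return best

print(best_labeling(candidates))
# → ('person', 'tennis court', 'tennis racket', 'tennis ball')
```

Note how "lemon" wins on appearance alone (0.6 vs 0.5), yet "tennis ball" wins jointly because it agrees with every other label in the scene.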

In another example, the researchers show that an object originally labeled as a cow is (correctly) re-labeled as a boat when the other objects in the image - sky, tree, building and water - are considered during the post-processing context step. In this case, the semantic context constraints helped to correct an entirely wrong image label. The context information came from the co-occurrence of object labels in the training sets rather than from Google Sets.

The computer scientists also highlight other advances they bring to automated object identification. First, instead of doing just one image segmentation, the researchers generated a collection of image segmentations and put together a shortlist of stable image segmentations. This increases the accuracy of the segmentation process and provides an implicit shape description for each of the image regions.

Second, the researchers ran their object categorization model on each of the segmentations, rather than on individual pixels. This dramatically reduced the computational demands on the object categorization model.
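The computational saving from classifying regions instead of pixels is easy to see with back-of-the-envelope numbers (all assumed for illustration):

```python
# Illustrative cost comparison: classifier calls per image.
h, w = 480, 640            # assumed image size in pixels
pixel_calls = h * w        # one classifier call per pixel
region_calls = 10 * 12     # e.g. 10 candidate segmentations x ~12 regions each
print(pixel_calls // region_calls)  # → 2560 (times fewer classifier calls)
```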

In the two sets of images the researchers tested, categorization accuracy improved considerably with the inclusion of context. For one image dataset, average categorization accuracy increased by more than 10 percent using the semantic context provided by Google Sets; in a second dataset, it improved by about 2 percent. The improvements were larger when the researchers gleaned context information from the co-occurrence of object labels in the object identifier's training data set.

Right now, the researchers are exploring ways to extend context beyond the presence of objects in the same image. For example, they want to make explicit use of absolute and relative geometric relationships between objects in an image - such as "above" or "inside" relationships. This would mean that if a person were sitting on top of an animal, the system would consider the animal to be more likely a horse than a dog.
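The geometric extension they describe could be sketched as a relation-aware reranking: a spatial predicate such as "person above animal" shifts the scores of the animal's candidate labels. The rule and all numbers below are assumptions for illustration, not the researchers' model.

```python
# Hedged sketch of geometric context: an "above" relation reweights labels.
# Scores and the bonus are assumed values, not from the paper.
priors = {"horse": 0.4, "dog": 0.6}  # animal label scores before context

def rerank(priors, person_above):
    scores = dict(priors)
    if person_above:           # a rider strongly suggests a horse
        scores["horse"] += 0.5
    return max(scores, key=scores.get)

print(rerank(priors, person_above=True))   # → horse
print(rerank(priors, person_above=False))  # → dog
```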

The content herein, unless otherwise known to be public domain, is Copyright 1995-2006 SpaceDaily. AFP and UPI wire stories are copyright Agence France-Presse and United Press International. ESA Portal reports are copyright European Space Agency. All NASA-sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement, agreement or approval of any opinions, statements or information provided by SpaceDaily on any Web page published or hosted by SpaceDaily.