INTERNET SPACE
Object recognition for free
by Staff Writers
Boston MA (SPX) May 14, 2015


The first layers (1 and 2) of a neural network trained to classify scenes seem to be tuned to geometric patterns of increasing complexity, but the higher layers (3 and 4) appear to be picking out particular classes of objects. Image courtesy of the researchers.

Object recognition - determining what objects are where in a digital image - is a central research topic in computer vision. But a person looking at an image will spontaneously make a higher-level judgment about the scene as a whole: It's a kitchen, or a campsite, or a conference room. Among computer science researchers, the problem known as "scene recognition" has received relatively little attention.

Last December, at the Annual Conference on Neural Information Processing Systems, MIT researchers announced the compilation of the world's largest database of images labeled according to scene type, with 7 million entries. Exploiting a machine-learning technique known as "deep learning" - a revival of the classic artificial-intelligence technique of neural networks - they used the database to train the most successful scene-classifier yet, one between 25 and 33 percent more accurate than its best predecessor.

At the International Conference on Learning Representations this weekend, the researchers will present a new paper demonstrating that, en route to learning how to recognize scenes, their system also learned how to recognize objects. The work implies that at the very least, scene-recognition and object-recognition systems could work in concert. But it also holds out the possibility that they could prove to be mutually reinforcing.

"Deep learning works very well, but it's very hard to understand why it works - what is the internal representation that the network is building," says Antonio Torralba, an associate professor of computer science and engineering at MIT and a senior author on the new paper.

"It could be that the representations for scenes are parts of scenes that don't make any sense, like corners or pieces of objects. But it could be that it's objects: To know that something is a bedroom, you need to see the bed; to know that something is a conference room, you need to see a table and chairs. That's what we found, that the network is really finding these objects."

Torralba is joined on the new paper by first author Bolei Zhou, a graduate student in electrical engineering and computer science; Aude Oliva, a principal research scientist, and Agata Lapedriza, a visiting scientist, both at MIT's Computer Science and Artificial Intelligence Laboratory; and Aditya Khosla, another graduate student in Torralba's group.

Under the hood
Like all machine-learning systems, neural networks try to identify features of training data that correlate with annotations performed by human beings - transcriptions of voice recordings, for instance, or scene or object labels associated with images. But unlike the machine-learning systems that produced, say, the voice-recognition software common in today's cellphones, neural nets make no prior assumptions about what those features will look like.

That sounds like a recipe for disaster, as the system could end up churning away on irrelevant features in a vain hunt for correlations. But instead of deriving a sense of direction from human guidance, neural networks derive it from their structure.

They're organized into layers: Banks of processing units - loosely modeled on neurons in the brain - in each layer perform initially random computations on the data they're fed, then pass their results to the next layer, and so on, until the outputs of the final layer are measured against the data annotations. As the network receives more data, it readjusts its internal settings to try to produce more accurate predictions.
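The layered structure described above can be sketched in miniature. This is a toy NumPy illustration, not the researchers' actual network: the two-layer net, the tiny 4-dimensional "images," the hidden labeling rule, and the learning rate are all invented for illustration. It shows the core loop: initially random layers transform the data, the final output is compared against the labels, and the internal settings (weights) are adjusted to reduce the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 four-dimensional "images" with a binary scene label
# generated by a hidden rule the network has to discover.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Two layers of units, initialized randomly (the "random computations").
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)                   # layer 1: bank of units
    p = 1 / (1 + np.exp(-(h @ W2)))       # layer 2: scene prediction
    return h, p.ravel()

def loss(p, y):
    # Cross-entropy: how far predictions are from the human annotations.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

_, p0 = forward(X, W1, W2)
start = loss(p0, y)

lr = 0.1
for _ in range(1000):
    h, p = forward(X, W1, W2)
    # Measure the error at the output, then propagate it back
    # layer by layer and nudge the weights to reduce it.
    d2 = (p - y)[:, None] / len(y)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d2
    W1 -= lr * X.T @ d1

_, p1 = forward(X, W1, W2)
end = loss(p1, y)   # lower than `start`: the net has "readjusted"
```

After training, the loss is lower and most predictions match the labels, even though the network was never told which input dimensions mattered.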

After the MIT researchers' network had processed millions of input images, readjusting its internal settings all the while, it was about 50 percent accurate at labeling scenes - where human beings are only 80 percent accurate, since they can disagree about high-level scene labels. But the researchers didn't know how their network was doing what it was doing.

The units in a neural network, however, respond differentially to different inputs. If a unit is tuned to a particular visual feature, it won't respond at all if the feature is entirely absent from a particular input. If the feature is clearly present, it will respond forcefully.
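That tuning behavior can be shown with a single made-up unit. In this sketch the unit is a small linear filter followed by a rectifier, and its "preferred feature" is a vertical edge; both are illustrative assumptions, not the structure of the MIT network. The point is simply that the unit stays silent when its feature is absent and responds strongly when it is present.

```python
import numpy as np

# A "unit" tuned to one visual feature: here, a vertical bright-dark edge.
vertical_edge = np.array([[1.0, -1.0],
                          [1.0, -1.0]])

def unit_response(patch, filt=vertical_edge):
    # Match the patch against the filter; the rectifier keeps the
    # unit silent (0.0) unless the preferred pattern is present.
    return max(0.0, float(np.sum(patch * filt)))

patch_with_edge = np.array([[1.0, 0.0],
                            [1.0, 0.0]])    # bright column next to dark one
patch_without_edge = np.array([[1.0, 1.0],
                               [1.0, 1.0]])  # uniform: no edge anywhere

strong = unit_response(patch_with_edge)      # forceful response
silent = unit_response(patch_without_edge)   # no response at all
```

Running the unit over many inputs and comparing responses is exactly what lets researchers ask what a unit is tuned to.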

The MIT researchers identified the 60 images that produced the strongest response in each unit of their network; then, to avoid biasing the results, they sent the collections of images to paid workers on Amazon's Mechanical Turk crowdsourcing site, whom they asked to identify commonalities among the images.
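The selection step is easy to state as code. The sketch below uses random numbers in place of real unit activations (the image count, unit count, and scores are all placeholders): for one unit, rank every image by how strongly it drives that unit and keep the top 60, which is the set that would be shown to Mechanical Turk workers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in activations: response of each of 5 units to 1,000 images.
n_images, n_units = 1000, 5
activations = rng.random((n_images, n_units))

def top_images_for_unit(activations, unit, k=60):
    """Indices of the k images that drive this unit hardest."""
    scores = activations[:, unit]
    order = np.argsort(scores)[::-1]   # strongest response first
    return order[:k]

top60 = top_images_for_unit(activations, unit=0)
# These 60 indices are the images a human annotator would inspect
# for a common theme (e.g. "they all contain beds").
```

Every image in the selected set activates the unit at least as strongly as every image left out, which is what makes the human judgment of "what do these have in common?" meaningful.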

Beyond category
"The first layer, more than half of the units are tuned to simple elements - lines, or simple colors," Torralba says. "As you move up in the network, you start finding more and more objects. And there are other things, like regions or surfaces, that could be things like grass or clothes. So they're still highly semantic, and you also see an increase."

According to the assessments by the Mechanical Turk workers, about half of the units at the top of the network are tuned to particular objects. "The other half, either they detect objects but don't do it very well, or we just don't know what they are doing," Torralba says. "They may be detecting pieces that we don't know how to name. Or it may be that the network hasn't fully converged, fully learned."

In ongoing work, the researchers are starting from scratch and retraining their network on the same data sets, to see if it consistently converges on the same objects, or whether it can randomly evolve in different directions that still produce good predictions. They're also exploring whether object detection and scene detection can feed back into each other, to improve the performance of both. "But we want to do that in a way that doesn't force the network to do something that it doesn't want to do," Torralba says.

