Monday, October 8, 2007

Playstation3 helps robots see


EE Times

AUSTIN, Texas — Robots took a tiny step closer to seeing the way humans do, thanks to a team of university researchers who ported new vision algorithms derived from brain research to the Cell processor. The team from Dartmouth and the University of California at Irvine was able to get three networked Playstation3 consoles running its software to recognize a given object in one second.

"We are headed toward a custom chip that can accelerate brain algorithms," said Andrew Felch, an associate research professor at Dartmouth's Neukom Institute for Computational Science, one of four researchers on the project.

The team won a $10,000 award for its work porting the algorithm to the Cell processor as part of a university challenge organized by IBM Corp. As many as 80,000 students from 25 countries competed in the challenge with winners announced at the first Power.org technical conference here Monday (Sept. 24).

The same brain algorithm used for vision is also employed for language processing. Researchers therefore hope their work can ultimately lead to small robots that both respond to commands and autonomously navigate to do useful work, such as delivering small packages.

"We aim to put all the speech and vision recognition into a working robot, so we need real-time performance," said Felch. "DARPA wants to see people create robots that can actually drive a vehicle without harming anyone in the process," he added.

"The hardest part of our effort was in understanding the brain algorithm and translating it into something we could use on the computer," said Jayram Moorkanikara, a doctoral student at UC Irvine. "The language brain researchers speak is completely different from the one computer scientists use," he added.

The team spent about eight months on the project, first implementing the algorithm on a 2-GHz Intel Core 2 Duo processor. Running on the PC, the software took three minutes to recognize a bar stool in an image of an office setting.

Using a network of three Playstation3 consoles linked to a PC, the team was able to cut the recognition time to just one second. "A one-second delay is essentially real-time object recognition, and that is just what humans do," said Felch.
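The article does not detail how the work was divided among the consoles, but a natural reading is that the set of stored shape templates was partitioned across the three machines and searched in parallel. The sketch below illustrates that idea in C; the worker count, template count, and the use of POSIX threads in place of networked consoles are all assumptions for illustration only.

/* Hedged sketch of splitting the matching workload across parallel workers.
 * The real setup networked three PS3 consoles to a PC; here POSIX threads
 * stand in for the three workers, and the per-chunk "work" is a placeholder. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS   3
#define NUM_TEMPLATES 90000  /* assumed template count, for illustration */

typedef struct { int start, end, best_index; } Chunk;

static void *match_chunk(void *arg)
{
    Chunk *c = (Chunk *)arg;
    /* Placeholder: a real worker would score templates c->start..c->end
     * against the input image and record the best match it finds. */
    c->best_index = c->start;
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    Chunk chunks[NUM_WORKERS];
    int per = NUM_TEMPLATES / NUM_WORKERS;

    for (int i = 0; i < NUM_WORKERS; i++) {
        chunks[i].start = i * per;
        chunks[i].end   = (i == NUM_WORKERS - 1) ? NUM_TEMPLATES : (i + 1) * per;
        pthread_create(&workers[i], NULL, match_chunk, &chunks[i]);
    }
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(workers[i], NULL);
        printf("worker %d searched templates [%d, %d)\n",
               i, chunks[i].start, chunks[i].end);
    }
    return 0;
}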

Thanks to its on-board accelerators, the Cell processor in the consoles could complete in three cycles key computations that the Intel chip needed 15 sequential cycles to finish. Overall, the three consoles handled the work at up to 140 times the speed of the single PC processor, Felch said.
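The speedup comes from data parallelism: the Cell's accelerator cores operate on several values per instruction rather than one at a time. As a rough, hedged illustration (not the team's actual code), the C snippet below contrasts a scalar sum-of-squared-differences loop with a SIMD version; x86 SSE intrinsics stand in here for the Cell's SPU intrinsics, and the function names and choice of metric are assumptions.

#include <xmmintrin.h>  /* x86 SSE intrinsics, standing in for Cell SPU intrinsics */

/* Scalar version: one subtract, multiply, and accumulate per element. */
float sum_sq_diff_scalar(const float *a, const float *b, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++) {
        float d = a[i] - b[i];
        s += d * d;
    }
    return s;
}

/* SIMD version: four elements per iteration (n assumed a multiple of 4).
 * This is the same style of data-parallel computation the Cell's
 * vector units provide. */
float sum_sq_diff_simd(const float *a, const float *b, int n)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 d  = _mm_sub_ps(va, vb);
        acc = _mm_add_ps(acc, _mm_mul_ps(d, d));
    }
    float out[4];
    _mm_storeu_ps(out, acc);
    return out[0] + out[1] + out[2] + out[3];
}

Either function could sit inside a template-comparison loop; processing four floats per instruction instead of one is the kind of win the Cell's accelerators deliver.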

The underlying algorithm breaks objects down into a hierarchy of key shapes and objects called line triplets. Those primitives are then compared to similar shapes in a new image. The research effort was focused on speeding up the process of making those comparisons.
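The article does not spell out how a line triplet is encoded or scored. A minimal sketch of the comparison step, under assumed struct fields and an assumed squared-difference similarity metric, might look like the following.

#include <math.h>
#include <stdio.h>

/* Hedged sketch: one "line triplet" as three segment orientations and
 * relative lengths. The researchers' actual encoding is not described. */
typedef struct {
    float angle[3];   /* orientation of each segment, in radians */
    float length[3];  /* relative length of each segment */
} LineTriplet;

/* Assumed similarity metric: smaller distance means more similar. */
static float triplet_distance(const LineTriplet *a, const LineTriplet *b)
{
    float d = 0.0f;
    for (int i = 0; i < 3; i++) {
        float da = a->angle[i]  - b->angle[i];
        float dl = a->length[i] - b->length[i];
        d += da * da + dl * dl;
    }
    return d;
}

/* Scan stored templates for the closest match to a triplet extracted from
 * a new image; this comparison loop is the part the team sped up. */
static int best_match(const LineTriplet *templates, int n,
                      const LineTriplet *query)
{
    int best = -1;
    float best_d = INFINITY;
    for (int i = 0; i < n; i++) {
        float d = triplet_distance(&templates[i], query);
        if (d < best_d) { best_d = d; best = i; }
    }
    return best;
}

int main(void)
{
    LineTriplet templates[2] = {
        { {0.0f, 1.2f, 2.4f}, {1.0f, 0.5f, 0.8f} },
        { {0.3f, 0.9f, 2.0f}, {0.9f, 0.6f, 0.7f} },
    };
    LineTriplet query = { {0.25f, 0.95f, 2.1f}, {0.9f, 0.55f, 0.7f} };
    printf("best match: template %d\n", best_match(templates, 2, &query));
    return 0;
}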

Latency is far less of a constraint for the brain algorithm. Synapses typically have delays of one millisecond, Felch said. That's more than an order of magnitude higher than the latencies in today's fastest computers, he added.
