Vision and Brain
A scientific blog about computer vision (CV) and intelligent transportation systems (ITS). It also aims to cover biological models behind systems that mimic human perception.
By Luciano Oliveira


Volvo S60 with pedestrian detection (May 28, 2010)

Volvo has deployed its novel system based on a radar and a camera. The commercial, at least, makes it look worthwhile; I have not seen any independent test of it yet:

http://www.youtube.com/watch?v=I4EY9_mOvO8
http://www.youtube.com/watch?v=FElNAsKLV48


Sentience intelligent cruise control (March 15, 2009)

This article is taken from http://www.gizmag.com/sentience-adaptive-intelligent-cruise-control-driverless-car/11223/

More and more initiatives keep coming, demonstrating the future of intelligent vehicles. Now, in intelligent cruise control.

-----------------
March 11, 2009 - The driverless car of the future is getting closer every day, as more and more technologies come along that take critical jobs away from the driver and put them in the hands of lightning-fast, all-seeing computers. One of the latest and most ambitious of these systems has just been successfully demonstrated in the UK; the Sentience system is a kind of hyper-intelligent cruise control designed specifically to minimize fuel consumption and emissions. It calculates the best route for you based on traffic, topography, curves, speed limits and a host of other information, and then actually takes over the throttle and brakes for the entire journey. It keeps you strictly within speed limits, slows down for corners, speed bumps and roundabouts, and it even knows when the lights ahead are about to turn red, so you don't waste petrol accelerating towards a stop point. Fuel savings in testing have been between 5% and 24% - a very significant figure - and Sentience is expected to be available on production cars, for a minimal cost, as soon as 2012. Incredible stuff.

Cruise control systems are getting smarter. Adaptive cruise control units can monitor the distance to the car in front of you on the freeway and maintain a safe gap - and they're becoming a fairly common factory option.
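As an aside, the gap-keeping idea behind adaptive cruise control is easy to sketch. The snippet below is a generic illustration only - the time-gap policy, gains and limits are invented for the example, not taken from any production system. The radar reports the gap and the closing speed, and the controller commands an acceleration that holds a fixed time-gap to the lead car:

```python
# Minimal adaptive-cruise-control sketch (illustrative only, not any vendor's system).
# A radar supplies the gap to the lead car and the closing speed; the controller
# nudges acceleration to hold a fixed time-gap behind the lead vehicle.

def acc_command(gap_m, ego_speed_mps, closing_speed_mps,
                time_gap_s=1.8, k_gap=0.23, k_speed=0.74,
                a_min=-3.0, a_max=2.0):
    """Return a commanded acceleration in m/s^2."""
    desired_gap = time_gap_s * ego_speed_mps          # constant time-gap policy
    gap_error = gap_m - desired_gap                   # >0 means we are too far back
    accel = k_gap * gap_error - k_speed * closing_speed_mps
    return max(a_min, min(a_max, accel))              # respect comfort/brake limits

# Example: 30 m behind a car while closing at 2 m/s, travelling at 25 m/s.
print(acc_command(gap_m=30.0, ego_speed_mps=25.0, closing_speed_mps=2.0))  # -3.0
```

Real systems add radar-track filtering, cut-in handling and far more careful actuation limits, but the control loop is essentially this simple.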
Other systems are learning to read speed signs, in order to save drivers from draconian automatic speed enforcement systems.

Now, a UK research partnership has demonstrated a system which goes far beyond anything we've seen before. The ominously titled Sentience system runs from a mobile phone connected to the car's onboard ECU. It sucks in huge amounts of data as you travel, analyzing your planned route in terms of traffic, gradients, curves, speed limits and even probable speed-limiting features such as junctions, crossings, schools, speed bumps, roundabouts and traffic lights. It then manages your acceleration and deceleration in such a way as to deliver maximum efficiency from a hybrid engine, resulting in demonstrated fuel savings of between 5% and 24%, depending on traffic and topography. Scale that out to a large number of vehicles and you're looking at huge benefits, fuel-wise and in terms of emissions.

The Sentience partnership

The Sentience system is the result of a multi-industry research partnership aimed at reducing CO2 emissions and fuel usage in hybrid vehicles. Ricardo and Jaguar-Land Rover brought their knowledge of car electronics and engine management to the table, TRL provided expertise on traffic, traffic signals and road usage patterns, Ordnance Survey contributed a massive breadth and depth of information about the UK road system, including curvature and topography, and Orange Business Services chipped in with their knowledge of mobile phone software and handset connectivity. Between the five major partners, a system has emerged that acts like an intelligent adaptive cruise control that knows the roads you're taking, and exactly how to drive them for maximal energy efficiency and minimal emissions. The Sentience test vehicle was demonstrated earlier this month at a UK test track, with representatives of the media invited to drive a car equipped with the system, running through a Nokia N95 mobile phone.

How it works

Based on route information - which could eventually be integrated with a commercial navigation system - the Sentience vehicle calculates and follows an optimal driving strategy. Its control system adjusts vehicle speed, acceleration and deceleration via its adaptive cruise control and regenerative braking. Using GPS and mapping data, it takes into account speed limits, traffic conditions, the road's gradient and features such as bends and even speed bumps, as well as less predictable features like roundabouts, to determine the most efficient possible route. It's also keyed in to traffic light timing, so it will automatically start decelerating if it knows the green light you're approaching is about to turn red. The driver simply keys in a destination and steers the car without a foot on either pedal, letting the car make the decisions on acceleration and braking. Of course, you'd want to keep your foot close to the brake to override the system in an emergency.
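To make the look-ahead planning concrete, here is a toy sketch of the kind of strategy described above (all names and numbers are invented; Sentience's actual planner is not public). Map data assigns each road segment a speed cap - the legal limit, or a lower value at a bend, bump or light about to turn red - and a backward pass enforces a gentle deceleration limit so the car lifts off early instead of braking hard at the last moment:

```python
# Toy version of the idea behind Sentience (names and numbers are invented):
# map data gives each road segment a speed cap (limit, bend, bump, red light);
# a backward pass enforces a gentle deceleration limit so the car coasts down
# to each cap early instead of braking hard at the last moment.
import math

def speed_profile(segment_len_m, caps_mps, decel=0.8):
    """Return per-segment target speeds given per-segment caps (m/s)."""
    v = list(caps_mps)
    for i in range(len(v) - 2, -1, -1):
        # can't go faster than what still allows slowing to the next
        # segment's target within one segment length: v_i^2 <= v_{i+1}^2 + 2ad
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * decel * segment_len_m))
    return v

# 100 m segments: open road, then a roundabout cap, then open road again.
print([round(s, 1) for s in speed_profile(100, [27.8, 27.8, 8.3, 27.8])])
# -> [19.7, 15.1, 8.3, 27.8]: the car starts easing off two segments early
```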
The Sentience system also concentrates on getting the most out of hybrid drive systems, by optimizing the regenerative braking strategy for the batteries and increasing the availability of electric-only drive mode where possible.

Towards a driverless future

It's a clear step toward a future where your car does the driving and you're just a passenger - and a demonstration of how a computer with detailed route and traffic-signal information can make a huge difference to fuel consumption and emissions. The Sentience system is expected to be made available in new cars as soon as 2012 - and the fuel savings will add up to around UKP500 per year if you spend around UKP50 a week on petrol. Scaled out to a large number of cars across the UK, the system could save between 1.2 and 2.9 million barrels of oil per year.

What are your thoughts?

Assuming it works as well in the real world as it has on the test track, would you sacrifice total control and a few minutes of travel time if the Sentience system could guarantee you the minimum possible fuel consumption, the most fuel-efficient route and an absolute certainty that you won't be getting a speeding ticket? How do you feel about a future where a large number of cars run this sort of system? Let us know in the comments below.

Loz Blain


Volvo City Safety (December 29, 2008)

City Safety, the new system from Volvo. News taken from http://www.thecarthatstopsitself.com/

Videos of the system on the Volvo Channel on YouTube: http://www.youtube.com/user/VolvoXC60

--------------------

OK, so like many of my male friends, we don't want to give up control of anything to anyone, much less a car. I mean, why would I want some high-tech device to stop my car... I'm not good enough? Well, probably not. Last summer my wife and I were heading down to Washington, D.C., stuck in world-famous I-95 traffic. I'm stopped, waiting for some signs of life ahead. I looked over my shoulder at our dog and bumped into the guy in front. I looked up and saw that I was moving... too late. Honestly, it wasn't more than 4 mph. The only damage was to his car, where my license plate bolts scraped his fender. That cost me $550. It wouldn't have happened with City Safety.

In April this year I drove an XC60 with City Safety. After I-95, I think City Safety has my vote. When driving the XC60 with City Safety, I was surprised at how quickly and aggressively the system applies the brakes; at speeds below 9 mph it stopped our XC60 completely. Of course, if there's ice, snow, rain or whatever else makes a road surface slippery, the stopping distance will change. From 10-18 mph the system will reduce vehicle speed and help to reduce vehicle damage and injuries, especially whiplash. I borrowed this from an IIHS (http://www.iihs.org/) report: "Rear-end collisions are frequent, and neck injuries are the most common serious injuries reported in automobile crashes. They account for 2 million insurance claims each year costing at least $8.5 billion. Such injuries aren't life-threatening, but they can be painful and debilitating." Guess I helped contribute to that cost.

City Safety works by calculating the distance to the car ahead and the two vehicles' relative speeds, to determine whether your near future includes bumping into a stranger's vehicle. A laser camera is mounted up behind the windshield, near the inside rear-view mirror. Interesting: all these years we've built safety systems that are hidden - energy-absorbing structures, great brakes, collapsing steering columns, whiplash-protection seats - stuff that's just waiting to be used but that you'll never see. With City Safety it's right in sight; I guess you could use it to impress the neighbors. Hey, no one has this but Volvo.
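As described, the decision logic boils down to time-to-collision. Here is a minimal sketch using the speed bands from the text above; the TTC threshold and everything else is an invented illustration, not Volvo's implementation:

```python
# Sketch of the City Safety decision logic as the article describes it
# (speed bands from the text; the code itself is illustrative, not Volvo's).
MPH = 0.44704  # m/s per mph

def city_safety(gap_m, ego_mps, lead_mps, ttc_brake_s=0.6):
    closing = ego_mps - lead_mps
    if closing <= 0:
        return "no action"               # not closing on the car ahead
    ttc = gap_m / closing                # time to collision at current speeds
    if ttc > ttc_brake_s:
        return "no action"               # driver still has time to react
    if ego_mps <= 9 * MPH:
        return "full autonomous brake"   # can avoid the crash outright
    if ego_mps <= 18 * MPH:
        return "brake to mitigate"       # cut speed, damage and whiplash risk
    return "out of operating range"

print(city_safety(gap_m=1.0, ego_mps=8 * MPH, lead_mps=0.0))  # full autonomous brake
```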
[Image: City Safety]

I asked the engineer who worked on this why there is no red warning light, like a heads-up display. "By the time you see the light, and we wait for you to apply the brakes, you'll have hit the car in front." His logic, and he's right, is that the system works faster than you or I. Our Collision Warning System with Automatic Braking does have a nice red LED 'bar' light to warn you - it looks like someone's brake light right in front, heads-up-display style - but in most of those cases you do have time to react.

Guess we men just have to learn when to bow to something smarter than us. I like City Safety. Well, back to work. James, my counterpart out West, is pestering me for help with the Detroit Auto Show (http://www.naias.com/); we have press days January 11-13 and public days starting the 19th.


ISR-UC ranked as excellent (December 19, 2008)

Amazing news: the Institute of Systems and Robotics of the University of Coimbra (the institute where I am taking my PhD) was recently ranked as Excellent by an international jury (more at: all evaluations, http://www.fct.mctes.pt/unidades/08/areas.asp?aid=%7B3E3D4ADF-C7B6-453A-9AAE-0938CC77F657%7D, and the ISR evaluation, http://www.fct.mctes.pt/unidades/08/painel.asp?uid=%7B60725AF9-92F2-4805-8BB2-8548F78764CF%7D). Below is an email from ISR-UC's director:

"Dear Researchers,

I want to inform you that the International Panel which evaluated the research units in Portugal in the area of Electrical Engineering and Computer Science gave us the top ranking grade - Excellent. Only one unit in 25 received this grade. I want to thank all of you for your hard work and dedication towards producing first-class research results.
We want to keep carrying out research at the same level or even better, and for this purpose your commitment will be essential and most appreciated.

I wish you and your families Happy Christmas Holidays, as well as a Very Good New Year."

Prof. Anibal T. de Almeida
Director
ISR - University of Coimbra
Dep. Electrical Engineering, Polo II
3030 Coimbra, Portugal


Design flaws in computer vision tests (October 16, 2008)

I completely agree with the article below, which I took from http://web.mit.edu/newsoffice/2008/computer-vision-0124.html. In my view the question is still unsolved, since we have no metric for how cluttered an image is, nor any way to point out the characteristics that make it difficult. Read the article below!

-------------------------

For years, scientists have been trying to teach computers how to see like humans, and recent research has seemed to show computers making progress in recognizing visual objects.

A new MIT study, however, cautions that this apparent success may be misleading because the tests being used are inadvertently stacked in favor of computers.

Computer vision is important for applications ranging from "intelligent" cars to visual prostheses for the blind. Recent computational models show apparently impressive progress, boasting 60-percent success rates in classifying natural photographic image sets. These include the widely used Caltech101 database, intended to test computer vision algorithms against the variety of images seen in the real world.

However, James DiCarlo, a neuroscientist in the McGovern Institute for Brain Research at MIT, graduate student Nicolas Pinto and David Cox of the Rowland Institute at Harvard argue that these image sets have design flaws that enable computers to succeed where they would fail with more authentically varied images. For example, photographers tend to center objects in a frame and to prefer certain views and contexts. The visual system, by contrast, encounters objects in a much broader range of conditions.

"The ease with which we recognize visual objects belies the computational difficulty of this feat," explains DiCarlo, senior author of the study in the Jan. 25 online edition of PLoS Computational Biology. "The core challenge is image variation. Any given object can cast innumerable images onto the retina depending on its position, distance, orientation, lighting and background."

The team exposed the flaws in current tests of computer object recognition by using a simple "toy" computer model inspired by the earliest steps in the brain's visual pathway. Artificial neurons with properties resembling those in the brain's primary visual cortex analyze each point in the image and capture low-level information about the position and orientation of line boundaries. The model lacks the more sophisticated analysis that happens in later stages of visual processing to extract information about higher-level features of the visual scene such as shapes, surfaces or the spaces between objects.
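A V1-like front end of this general kind is commonly built as a bank of oriented Gabor filters; the sketch below is a generic stand-in for such a model, not the authors' exact code. Each filter responds to edges of one orientation at each image position:

```python
# A minimal V1-like front end in the spirit the article describes: a bank of
# oriented Gabor filters whose rectified responses capture the position and
# orientation of local edges. Generic sketch, not the authors' exact model.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, size=11, wavelength=4.0, sigma=2.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def v1_responses(image, n_orientations=8):
    """Stack of rectified filter responses, one map per orientation."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return np.stack([np.abs(convolve2d(image, gabor_kernel(t), mode="same"))
                     for t in thetas])

img = np.random.rand(64, 64)      # stand-in for a grayscale photo
print(v1_responses(img).shape)    # (8, 64, 64)
```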
The researchers intended this model as a straw man, expecting it to fail as a way to establish a baseline. When they tested it on the Caltech101 images, however, the model did surprisingly well, with performance similar to or better than five state-of-the-art object-recognition systems.

How could that be? "We suspected that the supposedly natural images in current computer vision tests do not really engage the central problem of variability, and that our intuitions about what makes objects hard or easy to recognize are incorrect," Pinto explains.

To test this idea, the authors designed a more carefully controlled test. Using just two categories - planes and cars - they introduced variations in position, size and orientation that better reflect the range of variation in the real world.

"With only two types of objects to distinguish, this test should have been easier for the 'toy' computer model, but it proved harder," Cox says. The team's conclusion: "Our model did well on the Caltech101 image set not because it is a good model but because the 'natural' images fail to adequately capture real-world variability."

As a result, the researchers argue for revamping the current standards and images used by the computer-vision community to compare models and measure progress. Before computers can approach the performance of the human brain, they say, scientists must better understand why the task of object recognition is so difficult and why the brain's abilities are so impressive.


Computer vision on a brain (October 16, 2008)

A new brain-computer-interface technology could turn our brains into automatic image-identifying machines that operate faster than human consciousness.

Researchers at Columbia University are combining the processing power of the human brain with computer vision to develop a novel device that will allow people to search through images ten times faster than they can on their own.

Darpa, the Defense Advanced Research Projects Agency, is funding research into the system with hopes of making federal agents' jobs easier. The technology would allow hours of footage to be processed very quickly, so security officers could identify terrorists or other criminals caught on surveillance video much more efficiently.

The "cortically coupled computer vision system," known as C3 Vision, is the brainchild of professor Paul Sajda (http://liinc.bme.columbia.edu/mainTemplate.htm?liinc_people_sajda.htm), director of the Laboratory for Intelligent Imaging and Neural Computing (http://newton.bme.columbia.edu/) at Columbia University. He received a one-year, $758,000 grant from Darpa for the project in late 2005.

The system harnesses the brain's well-known ability to recognize an image much faster than the person can identify it. "Our human visual system is the ultimate visual processor," says Sajda. "We are just trying to couple that with computer vision techniques to make searching through large volumes of imagery more efficient."

The brain emits a signal as soon as it sees something interesting, and that "aha" signal can be detected by an electroencephalogram, or EEG, cap. While users sift through streaming images or video footage, the technology tags the images that elicit a signal and ranks them in order of the strength of the neural signatures. Afterwards, the user can examine only the information that their brain identified as important, instead of wading through thousands of images.
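The triage idea can be made concrete with a toy ranking function (purely illustrative; the real C3 Vision decoder is far more sophisticated): score each image's EEG epoch by its amplitude in a late post-stimulus window, where "aha"-type responses such as the P300 appear, and review the images in descending order of score:

```python
# Toy version of the C3 Vision triage idea (illustrative only): score each
# image by the EEG amplitude in a window after it was shown -- the "aha"
# response -- then review the images in that order.
import numpy as np

def rank_images(epochs, fs=250.0, window_s=(0.25, 0.60)):
    """epochs: (n_images, n_samples) EEG, each aligned to image onset."""
    lo, hi = (int(t * fs) for t in window_s)
    scores = epochs[:, lo:hi].mean(axis=1)   # strength of the late response
    order = np.argsort(scores)[::-1]         # strongest "aha" first
    return order, scores

rng = np.random.default_rng(0)
epochs = rng.normal(size=(1000, 250))        # 1000 images, 1 s at 250 Hz
epochs[7, 70:140] += 3.0                     # image 7 elicited a response
order, _ = rank_images(epochs)
print(order[:5])                             # image 7 ranks at the top
```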
No existing computer vision systems connect with the human brain, and computers on their own don't do well at identifying unusual events or specific targets.

"The major weakness of computer vision systems today is their narrow range of purpose," says Steven Gordon (http://faculty.babson.edu/gordon/), an information systems and technology professor at Babson College (http://www.babson.edu/) in Massachusetts. "You cannot take a system that is intended to recognize faces and apply it to recognizing handwriting or identifying whether one object in a photo is behind another. Unlike a computer, which can perform a variety of tasks, a computer vision system is highly customized to the task it is intended to perform. They are limited in their ability to recognize suspicious activities or events."

People, on the other hand, excel at spotting them. The new system's advantage lies in combining the strengths of traditional computer vision with human cortical vision. For example, when a computer searches for vehicles, it will identify and discard parts of the image that contain water. The human user, who is more likely to easily spot oddities, can then look only at the parts of the image that matter. This could allow time-sensitive searches to be performed in real time.

Gordon believes the technology would be most appropriate for analyzing stored video and for intelligence gathering. "Conceivably, the proposed solution could be applied in quasi-real time to allow a single human to monitor ten times as many sites as he or she would otherwise monitor," he says.

The Columbia team is currently working on making the system more robust and reducing false positives. They plan to demonstrate the technology for Darpa in a few months.

Source: http://www.wired.com/medtech/health/news/2006/07/71364


Open submission conferences (October 15, 2008)

Recently I heard that there is a new kind of conference where you can choose between two review processes: the regular one and open submission. The latter brings up a new concept, in which the reviewers are known and their feedback is attached to the final accepted paper when it is published in the proceedings. In the end, we all know the blind-review model is not that perfect!

Everyone in the academic community is somewhat aware of the review process and its possible flaws. Judging is always a hard task, and when it is done carelessly it can lead to injustices in science. By opening up the reviewing process, we also become able to judge who judges, turning it into a democratic framework where everyone is aware of their role in the process, avoiding (or trying to avoid) the little tricks and games of publication statistics without substance.

I would like to congratulate the organizers of the AVSS conference (http://www.cpl.uh.edu/html/LocalUser/avss2008/)!
I wish all other conferences would follow this new concept!


3rd place in the Intel/GV Entrepreneurship and Venture Capital Competition (September 5, 2008)

A long time since the last post! But I definitely did not stop working, and a lot of news is coming. For now, I must say that my great friend Jacques Chicourel and I took 3rd place (with an honorable mention) in the Intel/GV Entrepreneurship and Venture Capital Competition.

More information on the competition: http://www.cepe.fgvsp.br/desafio/
Some videos of the competition: http://www.ustream.tv/search/all/intel%20gv


Control your console games using just your mind (July 30, 2008)

Mind-controlled gaming could be on its way to the Sony PS3, Microsoft Xbox 360 and the Nintendo Wii. OCZ's Neural Impulse Actuator (NIA) interprets electrical signals from your brain in order to input commands into a PC. Using this device, you can play games like Crysis and Unreal Tournament without using your hands. And now it seems the same technology might be developed for the top three game consoles as well.

OCZ on the case

In a meeting today with TechRadar, Tobias Brinkmann, director of marketing at OCZ, told us that the company is actively looking into the possibility of developing the NIA for use with the leading consoles. "It's definitely something we are looking into," he said. "The thing we think would be most cool is to get the NIA working with the Nintendo Wii - that would be good. But of course it would be great if we could get it working with all the consoles." Brinkmann revealed that in the past Microsoft attempted to acquire the NIA technology from OCZ, but the offer was turned down flat. Microsoft has clearly seen the potential this technology has.

The future of gaming?

It would certainly make a lot of sense for OCZ - better known for its high-speed computer memory - to develop the NIA for game consoles. Although PC gaming is still going strong, the force is well and truly with the consoles at present; it's a multi-billion-pound industry. And given that the Nintendo Wii is the best-selling console, and that the Wii is actually the most basic platform out there, it makes sense that this would likely be the first console the NIA is developed for.

From: http://www.techradar.com/news/gaming/consoles/mind-control-coming-to-ps3-xbox-and-wii-432822


ROBOSOFT delivers 3 driverless vehicles to the Vulcania theme park (July 29, 2008)

ROBOSOFT announces the delivery of three VolcanBuls to Vulcania (http://www.vulcania.com/fr/attractions/le-volcanbul.html). These vehicles, driverless and guided exclusively by GPS, take visitors on a 1 km tour through the park to observe the Puy volcanic mountains and to learn about their history.
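Robosoft gives no detail on the guidance, but GPS waypoint following of this kind is classically done with a pure-pursuit steering law. The sketch below is a generic illustration under that assumption; the lookahead, wheelbase and waypoints are made up:

```python
# Generic GPS waypoint following with pure pursuit (an assumption for
# illustration; not Robosoft's published method): steer toward a point a
# fixed distance ahead on the route.
import math

def pure_pursuit_steer(pose, waypoints, lookahead=3.0, wheelbase=1.6):
    x, y, heading = pose
    # the first waypoint at least `lookahead` metres away becomes the target
    target = next((w for w in waypoints
                   if math.hypot(w[0] - x, w[1] - y) >= lookahead),
                  waypoints[-1])
    # angle to the target in the vehicle frame
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    # classic pure-pursuit curvature -> front-wheel steering angle
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

route = [(0, 0), (5, 0), (10, 2), (15, 5)]      # GPS waypoints in metres
print(pure_pursuit_steer((0.0, -1.0, 0.0), route))  # small left correction
```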
Vulcania is the world's first site where such an autonomous transportation system has been installed and certified. Needing no driver and no specific infrastructure, and operating in an area shared with pedestrians, the VolcanBul is particularly well adapted to any site that welcomes the public: city centers, airports, hospitals, campuses, amusement parks, etc.

Video of the vehicles: http://www.youtube.com/watch?v=E-1Qht1fGCY
Homepage: http://www.robosoft.com/eng/actualite_detail.php?id=1018


STI accelerates Cell CPU to 6 GHz, Intel bumps 80-core chip to 4 GHz (July 25, 2008)

Trendwatch, by Wolfgang Gruener, Thursday, January 04, 2007

San Francisco (CA) - Integrated circuit designers will come together next month in San Francisco to get updates on hardware developments at the annual IEEE International Solid-State Circuits Conference (ISSCC). Among the more visible presentations will be a next-generation Cell processor, as well as AMD's quad-core Opteron and Intel's 80-core teraflop processor.

The 119-page program of the conference provides a first glimpse of what we may hear at the ISSCC, which begins on February 11. Sony, Toshiba and IBM, STI for short, will present first details of a 65 nm Cell processor design which, apparently, already runs at 6 GHz and 1.3 volts in STI's labs. The updated Cell will use two power supplies to increase SRAM stability as well as performance, but STI promises that actual logic power consumption will decrease. The current Cell processor, used in Sony's PlayStation 3 game console, is manufactured in a 90 nm process and runs at a clock speed of 3.2 GHz.

AMD plans to provide more details about its native quad-core Opteron processor, code-named "Barcelona." According to the conference materials, the design of Barcelona builds on the current dual-core Opteron generation and "employs power- and thermal-management techniques throughout." AMD is also expected to describe its DDR3 transition plans.

Intel apparently has an update to its 80-core processor that was first presented at the 2006 Fall IDF conference. The processor, which was clocked at 3.1 GHz at IDF, has reached 4 GHz while retaining the original 20 MB of SRAM. Performance climbs from a claimed 1 TFlops at the IDF to 1.28 TFlops in this new version. Most impressive, however, is the described power consumption: according to Intel, the 80-core, 100-million-transistor die is rated at just 98 watts (at 1 TFlops).
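Quick arithmetic on those reported figures puts the efficiency in perspective:

```python
# Back-of-the-envelope efficiency from the numbers quoted above.
flops = 1.0e12               # 1 TFlops, the operating point quoted at 98 W
watts = 98.0
print(flops / watts / 1e9)   # ~10.2 GFlops per watt
```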
Moving by Thoughts (January 15, 2008)

By Sandra Blakeslee, The New York Times, January 15, 2008

If Idoya could talk, she would have plenty to boast about.

[Graphic: Moving by Thought]

On Thursday, the 12-pound, 32-inch monkey made a 200-pound, 5-foot humanoid robot walk on a treadmill using only her brain activity. She was in North Carolina, and the robot was in Japan.

It was the first time that brain signals had been used to make a robot walk, said Dr. Miguel A. L. Nicolelis, a neuroscientist at Duke University whose laboratory designed and carried out the experiment. In 2003, Dr. Nicolelis's team proved that monkeys could use their thoughts alone to control a robotic arm for reaching and grasping.

These experiments, Dr. Nicolelis said, are the first steps toward a brain-machine interface that might permit paralyzed people to walk by directing devices with their thoughts. Electrodes in the person's brain would send signals to a device worn on the hip, like a cell phone or pager, that would relay those signals to a pair of braces, a kind of external skeleton, worn on the legs. "When that person thinks about walking," he said, "walking happens."
Richard A. Andersen, an expert on such systems at the California Institute of Technology in Pasadena who was not involved in the experiment, said that it was "an important advance to achieve locomotion with a brain machine interface." Another expert, Nicho Hatsopoulos, a professor at the University of Chicago, said that the experiment was "an exciting development. And the use of an exoskeleton could be quite fruitful."

A brain-machine interface is any system that allows people or animals to use their brain activity to control an external device. But until ways are found to safely implant electrodes into human brains, most research will remain focused on animals.

In preparing for the experiment, Idoya was trained to walk upright on a treadmill. She held onto a bar with her hands and got treats - raisins and Cheerios - as she walked at different speeds, forward and backward, for 15 minutes a day, 3 days a week, for 2 months.

Meanwhile, electrodes implanted in the so-called leg area of Idoya's brain recorded the activity of 250 to 300 neurons that fired while she walked. Some neurons became active when her ankle, knee and hip joints moved. Others responded when her feet touched the ground. And some fired in anticipation of her movements.

To obtain a detailed model of Idoya's leg movements, the researchers also painted her ankle, knee and hip joints with fluorescent stage makeup and, using a special high-speed camera, captured her movements on video. The video and brain-cell activity were then combined and translated into a format that a computer could read; this model can predict, with 90 percent accuracy, all permutations of Idoya's leg movements three to four seconds before the movement takes place.
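The article does not spell out the decoder, but work of this kind has often used linear, Wiener-filter-style mappings from binned firing rates to kinematics. The sketch below is a generic example of that family (all sizes and data are made up): a ridge-regularized linear model from a short history of spike counts to joint angles.

```python
# Generic linear (Wiener-filter-style) neural decoder sketch -- an assumed
# stand-in, not the lab's actual pipeline. Maps a short history of binned
# spike counts to joint kinematics via ridge regression.
import numpy as np

def fit_linear_decoder(rates, kinematics, lags=5, ridge=1.0):
    """rates: (T, n_neurons) binned spike counts; kinematics: (T, n_joints)."""
    T = rates.shape[0]
    # each row stacks the current bin and the previous `lags - 1` bins
    X = np.hstack([rates[lags - 1 - k:T - k] for k in range(lags)])
    Y = kinematics[lags - 1:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

def decode(rates, W, lags=5):
    T = rates.shape[0]
    X = np.hstack([rates[lags - 1 - k:T - k] for k in range(lags)])
    return X @ W

rng = np.random.default_rng(1)
rates = rng.poisson(3.0, size=(2000, 60)).astype(float)  # toy stand-in for
kin = 0.1 * rates[:, :3] @ rng.normal(size=(3, 2))       # the leg-area cells
W = fit_linear_decoder(rates, kin)
print(decode(rates, W).shape)   # (1996, 2): decoded trajectories of 2 joints
```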
On Thursday, an alert and ready-to-work Idoya stepped onto her treadmill and began walking at a steady pace with electrodes implanted in her brain. Her walking pattern and brain signals were collected, fed into the computer and transmitted over a high-speed Internet link to a robot in Kyoto, Japan.

The robot, called CB for Computational Brain, has the same range of motion as a human. It can dance, squat, point and "feel" the ground with sensors embedded in its feet, and it will not fall over when shoved. Designed by Gordon Cheng and colleagues at the ATR Computational Neuroscience Laboratories in Kyoto, the robot was chosen for the experiment because of its extraordinary ability to mimic human locomotion.

As Idoya's brain signals streamed into CB's actuators, her job was to make the robot walk steadily via her own brain activity. She could see the back of CB's legs on an enormous movie screen in front of her treadmill and received treats if she could make the robot's joints move in synchrony with her own leg movements. As Idoya walked, CB walked at exactly the same pace. Recordings from Idoya's brain revealed that her neurons fired each time she took a step and each time the robot took a step.

"It's walking!" Dr. Nicolelis said. "That's one small step for a robot and one giant leap for a primate."

The signals from Idoya's brain sent to the robot, and the video of the robot sent back to Idoya, were relayed in less than a quarter of a second, he said. That was so fast that the robot's movements meshed with the monkey's experience.

An hour into the experiment, the researchers pulled a trick on Idoya. They stopped her treadmill. Everyone held their breath. What would Idoya do? "Her eyes remained focused like crazy on CB's legs," Dr. Nicolelis said. She got treats galore. The robot kept walking. And the researchers were jubilant.

When Idoya's brain signals made the robot walk, some neurons in her brain controlled her own legs, whereas others controlled the robot's legs. The latter set of neurons had basically become attuned to the robot's legs after about an hour of practice and visual feedback. Idoya cannot talk, but her brain signals revealed that after the treadmill stopped, she was able to make CB walk for three full minutes by attending to its legs and not her own.

Vision is a powerful, dominant signal in the brain, Dr. Nicolelis said. Idoya's motor cortex, where the electrodes were implanted, had started to absorb the representation of the robot's legs, as if they belonged to Idoya herself. In earlier experiments, Dr. Nicolelis found that 20 percent of the cells in a monkey's motor cortex were active only when a robotic arm moved. He said it meant that tools like robotic arms and legs could be assimilated, through learning, into an animal's body representation.

In the near future, Idoya and other bipedal monkeys will be getting more feedback from CB in the form of microstimulation to neurons that specialize in the sense of touch related to the legs and feet. When CB's feet touch the ground, sensors will detect pressure and calculate balance. When that information goes directly into the monkeys' brains, Dr. Nicolelis said, they will have the strong impression that they can feel CB's feet hitting the ground. At that point, the monkeys will be asked to make CB walk across a room by using just their thoughts.

"We have shown that you can take signals across the planet in the same time scale that a biological system works," Dr. Nicolelis said. "Here the target happens to be a robot. It could be a crane. Or any tool of any size or magnitude. The body does not have a monopoly for enacting the desires of the brain."

To prove this point, Dr. Nicolelis and his colleague Dr. Manoel Jacobsen Teixeira, a neurosurgeon at the Sirio-Lebanese Hospital in São Paulo, Brazil, plan to demonstrate by the end of the year that humans can operate an exoskeleton with their thoughts.

It is not uncommon for people to have their arms ripped from their shoulder sockets during a motorcycle or automobile accident, Dr. Nicolelis said. All the nerves are torn, leaving the arm paralyzed but in chronic pain. Dr. Teixeira has been implanting electrodes on the surface of these patients' brains and stimulating the underlying region where the arm is represented; the pain goes away. By pushing the same electrodes slightly deeper into the brain, Dr. Nicolelis said, it should be possible to record the brain activity involved in moving the arm and intending to move the arm. The patients' paralyzed arms will then be placed into an exoskeleton, or shell, equipped with motors and sensors. "They should be able to move the arm with their thoughts," he said. "This is science fiction coming to life."
Open position at the Institute of Systems and Robotics (December 21, 2007)

We are looking for an outstanding student, a graduate in Computer Science or a related field, to work on a 6-month project (renewable) running computer vision algorithms on a PlayStation 3. The project will run at ISR, Coimbra, Portugal. The following skills are required:

- Programming in C/C++
- Computer architecture
- Distributed systems
- Computer vision (not required, but an added value)
- English (at least good reading skills)
- A high undergraduate score (we encourage candidates to take a PhD in our institute)

The goal of this project is to use the PS3 to run computer vision algorithms such as:

- 3D reconstruction from monocular cameras
- Moving camera / moving object segmentation
- Tracking as detection
- Pattern recognition
- Scene analysis

If you are interested, please send an email to: lreboucas@isr.uc.pt


CUDA and TESLA from NVIDIA (December 7, 2007)

SUPERCOMPUTING 2007, RENO, NEVADA, NOVEMBER 13, 2007 - Popular Science has announced that the NVIDIA CUDA C compiler and software development kit (SDK) has been recognized as one of the top 100 innovations of the year, winning the coveted "Best of What's New" award. CUDA was selected for its ability to transform a graphics processing unit (GPU) into a supercomputer and to deliver the level of performance normally found in large and expensive datacenter clusters to the desktops of scientists and engineers around the world.

"Running electromagnetic simulations using NVIDIA's compute hardware and CUDA accelerates processing times by factors of 25 or more - applying a level of complexity to the analysis and optimization of medical products which nobody dreamed of, even two years ago," said Ryan Schneider, CTO of Acceleware Corp.

"Many of the molecular structures we analyze are so large that they can take weeks of processing time to run the calculations required for their physical simulation," said John Stone, senior research programmer at the University of Illinois at Urbana-Champaign. "GPUs have given us a 100-fold increase in some of our programs, and this is on desktop machines where previously we would have had to run these calculations on a cluster."

"For 20 years, Popular Science's Best of What's New awards have honored the innovations that make a positive impact on life today and change our views of the future," says Mark Jannot, editor-in-chief of Popular Science. "Popular Science's editors evaluate thousands of products each year to develop this thoughtful list; there's no higher accolade Popular Science can give."
For more information on NVIDIA CUDA and NVIDIA Tesla GPUs for HPC, please visit: http://www.nvidia.com/object/tesla_computing_solutions.html

About Best of What's New
Each year, the editors of Popular Science review thousands of products in search of the top 100 tech innovations of the year: breakthrough products and technologies that represent a significant leap in their categories. The winners, the Best of What's New, are awarded inclusion in the much-anticipated December issue of Popular Science, which has been the most widely read issue of the year since the debut of Best of What's New in 1987. Best of What's New awards are presented to 100 new products and technologies in 10 categories: Automotive, Aviation & Space, Computing, Engineering, Gadgets, Green Tech, Home Entertainment, Home Tech, Personal Health and Recreation.

About Popular Science
Founded in 1872, Popular Science is the world's largest science and technology magazine, with a circulation of 1.3 million and 6.8 million monthly readers. Each month, Popular Science reports on the intersection of science and everyday life, with an eye toward what's new and why it matters. Popular Science is published by Bonnier Active Media, a subsidiary of Bonnier Corporation.

NVIDIA Corporation
NVIDIA Corporation is the worldwide leader in programmable graphics processor technologies. The company creates innovative, industry-changing products for computing, consumer electronics, and mobile devices. NVIDIA is headquartered in Santa Clara, CA and has offices throughout Asia, Europe, and the Americas. For more information, visit www.nvidia.com.


Analyzing scenes by monocular perspective (December 5, 2007)

A very interesting video, putting 2D scenes into 3D perspective: http://www.youtube.com/watch?v=vXTqsCapanE

Pretty amazing!


NiSIS Competition (November 20, 2007)

I am very happy to announce that I won the NiSIS Competition (http://www.nisis.risk-technologies.com/msc/competition2007.aspx?compid=2). My proposed model is based on an ensemble method, which will be published later. Its error rate was 4.03%, the best accuracy among all competing methods on the DaimlerChrysler image dataset provided by the competition organizers.
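My model is not published yet, so no details here; but as a generic illustration of what "ensemble method" means, the simplest possible rule combines several independently trained classifiers by majority vote:

```python
# Generic illustration of ensemble classification (not my competition model):
# combine several independently trained classifiers by majority vote.
import numpy as np

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) array of 0/1 labels."""
    votes = predictions.sum(axis=0)
    return (votes * 2 >= predictions.shape[0]).astype(int)

# three weak classifiers, six test samples
preds = np.array([[1, 0, 1, 1, 0, 0],
                  [1, 1, 0, 1, 0, 1],
                  [0, 0, 1, 1, 1, 0]])
print(majority_vote(preds))   # [1 0 1 1 0 0]
```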
Facts Prove No Match for Gossip, It Seems (October 17, 2007)

By John Tierney, The New York Times, October 16, 2007

Until now, I was firmly pro-gossip. I welcomed the theory that gossip was the reason language developed. I cheered on researchers who believed gossip was the great evolutionary leap that enabled human apes to live peacefully in large groups, develop moral codes, build civilizations and, eventually, sell supermarket tabloids.

But now I wonder if we've leaped too far, and it's not because I've been watching "Gossip Girl." In a paper on gossip published yesterday, evolutionary biologists in Germany and Austria identified a vulnerability that might be called the Chico Marx Paradox, for reasons that will be clear once you hear about this experiment.

The researchers set out to test the power of gossip, which has been exalted by theorists in recent decades. Language, according to the anthropologist Robin Dunbar, evolved because gossip is a more efficient version of the "social grooming" essential for animals to live in groups. Apes and other creatures solidify their social bonds by cleaning and stroking one another, but the size of the group is limited because there's not enough time in the day to groom a large number of animals.

Speech enabled humans to bond with lots of people while going about their hunting and gathering. Instead of spending hours untangling hair, they could bond with friendly conversation ("Your hair looks so unmatted today!") or by picking apart someone else's behavior ("Yeah, he was supposed to share the wildebeest, but I heard he kept both haunches").

Gossip also told people whom to trust, and the prospect of a bad reputation discouraged them from acting selfishly, so large groups could peacefully cooperate. At least, that was the theory: gossip promoted the "indirect reciprocity" that made human society possible.

To test it, researchers at the Max Planck Institute for Evolutionary Biology and the University of Vienna gave 10 euros apiece to 126 students and had them play a game that put them in a dilemma. On each turn, the players would be paired off, and one of them was offered a chance to give 1.25 euros to the other. If he agreed, the researchers added a bonus of 0.75 euro, so that the recipient ended up gaining 2 euros. If the first player refused to give the money, he'd save 1.25 euros, but if others found out about his miserliness they might later withhold money from him. As the game progressed, with the players changing partners frequently and alternating between the donor and recipient roles, the players were given information about their partners' past decisions.

Sometimes the donor was shown a record of what the partner had done previously while playing the donor role. The more generous this partner had been toward other players, the more likely the donor was to give him something. Sometimes the donor was shown gossip about the partner from another player. When the partner was paid a compliment like "spendabler Spieler!" - generous player! - the donor was more likely to give money. But the donor turned stingy when he saw gossip like "übler Geizkragen" - nasty miser.

So far, so good. As predicted, gossip promoted indirect reciprocity. The research, published in the Proceedings of the National Academy of Sciences, showed that most people passed on accurate gossip and used it for the common good. They rewarded cooperative behavior even when they themselves weren't directly affected by the behavior.
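The game is easy to re-create in simulation. The payoffs below are the paper's (a 1.25-euro gift becomes 2 euros for the recipient after the experimenters' 0.75-euro bonus); the simple "give to good reputations" strategy is an invention for illustration, not the subjects' actual behavior:

```python
# Toy re-creation of the donation game. Payoffs are from the article; the
# "give to good reputations" strategy is an illustrative assumption.
import random

def play(n_players=126, rounds=2000, endowment=10.0, seed=0):
    random.seed(seed)
    money = [endowment] * n_players
    reputation = [1.0] * n_players          # fraction of past offers accepted
    gives = [[] for _ in range(n_players)]  # history of give/refuse decisions
    for _ in range(rounds):
        donor, recipient = random.sample(range(n_players), 2)
        give = reputation[recipient] >= 0.5  # help those seen helping others
        gives[donor].append(give)
        reputation[donor] = sum(gives[donor]) / len(gives[donor])
        if give:
            money[donor] -= 1.25
            money[recipient] += 2.0          # 1.25 gift + 0.75 bonus
    return sum(money) / n_players

print(play())   # cooperation grows the average pot well past the endowment
```

With everyone honoring reputations, each round of giving injects the 0.75-euro bonus into the pool, so after a few thousand rounds the average holdings roughly double, just as in the experiment.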
If a cooperation game like this were played without consequences for the players' reputations - as has been done in other experiments - most players would be miserly, and cooperation would collapse. In this experiment they were generous most of the time, and on average ended up with twice as much money as they had at the beginning of the game.

But here's the disconcerting news from the experiment. In a couple of rounds, each donor was given both hard facts and gossip. He was shown a record of how his partner had behaved previously, as well as some gossip - positive gossip in one round, negative in another.

The donor was told that the source of the gossip didn't have any extra information beyond what the donor could already see for himself. Yet the gossip, whether positive or negative, still had a big influence on the donors' decisions, and it didn't even matter whether the source of the gossip had a good reputation himself. On average, cooperation increased by about 20 percent if the gossip was good, and fell by 20 percent if the gossip was negative.

Now, you might think the gossip mattered just in borderline cases - when the partner had a mixed record of generosity, and the donor welcomed outside guidance in making a tough decision. But the gossip had an impact in other situations, too. Even when a player saw that his partner had a record of consistent meanness, he could be swayed by positive gossip to reward the partner anyway. Or withhold help from a perfectly nice partner just on the basis of malicious buzz.

This result may come as no shock to fans of "Gossip Girl," or to publicists trying to plant items in Page Six about the charitable works of despicable clients. But it seemed surprising to the researchers, according to the lead author, Ralf D. Sommerfeld of the Max Planck Institute. "If you know you already have the full information about someone," he said, "rationally you shouldn't care so much what someone else says."

So why do we? "It could be," he suggested, "that we are just more adapted to listen to other information than to observe people, because most of the time we're not able to observe how other people are behaving. Thus we might believe we have missed something."

This makes a certain sense, but I still wonder if evolution has taken a Chico Marxist turn here. In "Duck Soup," Chico tries to pass himself off as Groucho's character, complete with moustache and cigar, but encounters a skeptical Margaret Dumont, who protests that she just saw Groucho leave the room. "Well, who you gonna believe, me or your own eyes?" Chico asks.
Now, at last, we know the answer.


Computational photography (October 12, 2007)

Adobe has been working on a new concept of lens that allows a user to take pictures from different angles. These lenses are composed of a 3D apparatus that can focus the image from different points of view; software can later apply transformations to modify the picture.

One more interesting idea from Adobe. See the complete article at: http://gizmodo.com/gadgets/3d-magic/adobe-tinkering-with-3d-image-manipulation-using-camera-and-software-308659.php


PlayStation 3 helps robots see (October 8, 2007)

By Rick Merritt, EE Times (09/24/2007)

AUSTIN, Texas - Robots took a tiny step closer to seeing the way humans do, thanks to a team of university researchers who ported to the Cell processor new vision algorithms derived from brain research. The team, from Dartmouth and the University of California at Irvine, got three networked PlayStation 3 consoles to recognize a given object in one second using their software.

"We are headed toward a custom chip that can accelerate brain algorithms," said Andrew Felch, an associate research professor at Dartmouth's Neukom Institute for Computational Science, one of four researchers on the project.
The team won a $10,000 award for its work porting the algorithm to the Cell processor as part of a university challenge organized by IBM Corp. As many as 80,000 students from 25 countries competed in the challenge, with winners announced at the first Power.org technical conference (http://www.power.org/devcon/07) here on Monday (Sept. 24).

The same brain algorithm used for vision is also employed for language processing. Researchers therefore hope their work can ultimately lead to small robots that can both respond to commands and autonomously navigate their way to do useful work, such as delivering small packages. "We aim to put all the speech and vision recognition into a working robot, so we need real-time performance," said Felch. "DARPA wants to see people create robots that can actually drive a vehicle without harming anyone in the process," he added.

"The hardest part of our effort was in understanding the brain algorithm and translating it into something we could use on the computer," said Jayram Moorkanikara, a doctoral student at UC Irvine. "The language brain researchers speak is completely different from the one computer scientists use," he added.

The team spent about eight months on the project, first implementing the algorithm on a 2 GHz Intel Core 2 Duo processor. Using the PC, the team showed machine vision that could recognize, in three minutes, a bar stool in an image of an office setting. Using a network of three PlayStation 3 consoles linked to a PC, the team was able to speed the recognition up to just one second. "A one-second delay is essentially real-time object recognition, and that is just what humans do," said Felch.

Thanks to its on-board accelerators, the Cell processor in the consoles was able to handle key computations in three cycles that the Intel chip had to compute sequentially in 15 cycles. Overall, the three consoles handled the work at rates up to 140 times the speed of the single PC processor, Felch said.

The underlying algorithm breaks objects down into a hierarchy of key shapes called line triplets. Those primitives are then compared to similar shapes in a new image. The research effort was focused on speeding up the process of making those comparisons.
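The article doesn't define a line triplet precisely, so purely to make the idea concrete, here is a toy descriptor for a triplet of line segments - relative orientations plus normalized midpoint geometry - of the sort one could match between images by nearest neighbor. This is speculation for illustration, not the team's algorithm:

```python
# Toy "line triplet" descriptor (an illustrative guess at the idea, not the
# published method): encode each triplet of line segments by the segments'
# relative orientations and normalized midpoint geometry.
import itertools
import numpy as np

def triplet_descriptors(segments):
    """segments: list of (angle, mid_x, mid_y). Returns (n_triplets, 4)."""
    out = []
    for a, b, c in itertools.combinations(segments, 3):
        angles = np.array([a[0], b[0], c[0]])
        mids = np.array([a[1:], b[1:], c[1:]], dtype=float)
        rel = np.sort((angles - angles[0]) % np.pi)[1:]   # 2 relative angles
        d = np.linalg.norm(mids - mids.mean(axis=0), axis=1)
        out.append(np.concatenate([rel, np.sort(d)[1:] / (d.max() + 1e-9)]))
    return np.array(out)

segs = [(0.0, 0, 0), (1.2, 4, 1), (0.6, 2, 3), (2.0, 5, 5)]
print(triplet_descriptors(segs).shape)   # (4, 4): one row per triplet
```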
Luciano Oliveirahttp://www.blogger.com/profile/06774077093507244120noreply@blogger.com0tag:blogger.com,1999:blog-5469925856162782564.post-91718359537163589652007-10-08T13:32:00.000-07:002007-10-08T13:43:39.488-07:00Playstation 3 used as a supercomputerResearchers at Unicamp (Brazil) are using Playstation 3 consoles, twelve of them in fact, connected as a network to analyze biological data!<br /><br />They harness the power of each console's Cell processor, with its eight vector cores, and the machines' memory to do the job!<br /><br />The complete article can be found at<br /><a href="http://g1.globo.com/Noticias/Ciencia/0,,MUL146410-5603,00.html">http://g1.globo.com/Noticias/Ciencia/0,,MUL146410-5603,00.html</a><br />or<br /><a href="http://br-linux.org/linux/unicamp-usa-playstation-3-para-realizar-pesquisas">http://br-linux.org/linux/unicamp-usa-playstation-3-para-realizar-pesquisas</a>Luciano Oliveirahttp://www.blogger.com/profile/06774077093507244120noreply@blogger.com0tag:blogger.com,1999:blog-5469925856162782564.post-28570164127747003712007-08-29T12:38:00.000-07:002007-10-08T13:39:34.527-07:00Image retargetingOutstanding! Dr. Shai Avidan has created a fantastic algorithm, seam carving, that automatically retargets images to new sizes without distortion! He developed it at Mitsubishi Electric Research Labs (MERL).<br /><br />See the video below!<br /><br /><a href="http://www.youtube.com/watch?v=qadw0BRKeMk">http://www.youtube.com/watch?v=qadw0BRKeMk</a><br /><br />More info on this blog: <a href="http://blogs.adobe.com/jnack/2007/08/holy_crapworthy.html">http://blogs.adobe.com/jnack/2007/08/holy_crapworthy.html</a>Luciano Oliveirahttp://www.blogger.com/profile/06774077093507244120noreply@blogger.com0tag:blogger.com,1999:blog-5469925856162782564.post-19147976770898305162007-07-29T06:07:00.000-07:002007-07-29T06:29:39.178-07:00We accept the reality that it is presented to us"We accept the reality of the world with which we're presented" - that is the striking line from the director character in The Truman Show. Unfortunately, we humans are prone to deceive ourselves, and we prefer whichever reality is most convenient for us. For instance, we accept all the "truths" that the media brings us... And, throughout history, the media has not always been TV... the fourth estate has been shaped into many forms...<br /><br />But the question is: what is this doing in a scientific blog? The answer is simple: the same thing happens in academia... and in scientific work...<br /><br />We tend to accept, or to see, only what our "brain" or our "eyes" like to see. For example, there are classical techniques for evaluating an image classifier: ROC curves, DET curves, recall-precision curves, and so on. But each of them measures only a glimpse of the classifier in one particular situation! Each is just one angle on reality, and most of the time that angle can be manipulated by our own desire to show big results, as the sketch below illustrates!<br /><br />To carry a research project through, then, honesty is necessary! We must be honest with ourselves and investigate every point of view in order to present honest results...
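<p>To make the point about evaluation curves concrete, here is a minimal sketch (assuming scikit-learn is available; the data is synthetic) of how one and the same classifier can look excellent through the ROC "angle" while the recall-precision "angle" exposes much weaker behavior on a heavily imbalanced problem:</p><pre>
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# A synthetic, heavily imbalanced problem: roughly 2% positives.
X, y = make_classification(n_samples=20_000, weights=[0.98], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Two summaries of the same classifier on the same test set:
print("ROC AUC:           %.3f" % roc_auc_score(y_te, scores))
print("Average precision: %.3f" % average_precision_score(y_te, scores))
</pre>
<p>Reporting only the flattering curve is precisely the kind of "convenient reality" discussed above.</p>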
And we must also see further, because even in the scientific world the view of the crowd persists, and all around we can find many people who simply "accept the reality that is presented to us"!!Luciano Oliveirahttp://www.blogger.com/profile/06774077093507244120noreply@blogger.com0tag:blogger.com,1999:blog-5469925856162782564.post-55976729577914418312007-07-16T07:24:00.000-07:002007-10-08T13:40:57.082-07:00PhD ComicsSearching the internet, all of a sudden I found this: <a href="http://www.phdcomics.com">http://www.phdcomics.com</a>!<br /><br />I have to admit that I bought the first two books. Fantastic! The first year is rather boring, most probably because Jorge Cham was just getting started... But in the following years he professionalized it, and it really becomes amazing!<br /><br />Just a sample, which can be found on the site:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE_vVvRvZVK8ClbnSSYT1aPfzqzBkvrXDGk1kCEON8NbBlDg0-d3A4ICt8Injz-1EL0Ks_d_CUEmZDVTViSH1AgP0bz3mIqMfM2f55sZjIm9P2-S9d9k12y8uupLHRBywemvHf50HbYjQ/s1600-h/phd071307s.gif"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE_vVvRvZVK8ClbnSSYT1aPfzqzBkvrXDGk1kCEON8NbBlDg0-d3A4ICt8Injz-1EL0Ks_d_CUEmZDVTViSH1AgP0bz3mIqMfM2f55sZjIm9P2-S9d9k12y8uupLHRBywemvHf50HbYjQ/s320/phd071307s.gif" alt="" id="BLOGGER_PHOTO_ID_5087801767560505346" border="0" /></a>Luciano Oliveirahttp://www.blogger.com/profile/06774077093507244120noreply@blogger.com1tag:blogger.com,1999:blog-5469925856162782564.post-67885486134017812082007-06-29T07:34:00.000-07:002007-10-08T13:41:27.848-07:00More photos of the Symposium in ChinaOn the site below, more photos of the symposium in Shanghai can be found:<br /><br /><a href="http://cyberc3.sjtu.edu.cn/photo/AEIV07/24AM/index.html">http://cyberc3.sjtu.edu.cn/photo/AEIV07/24AM/index.html</a>Luciano Oliveirahttp://www.blogger.com/profile/06774077093507244120noreply@blogger.com0tag:blogger.com,1999:blog-5469925856162782564.post-31141771054919758412007-05-29T14:00:00.000-07:002007-05-29T14:42:21.516-07:00A meeting in ChinaShanghai... A big city, with all kinds of good and bad things! I made a lot of friends, and I could see the strength of the Chinese people, who rely pretty much on their own efforts!<br /><br />I went there for a European-Asian project meeting... I could learn a lot of interesting things there.
There is still a meeting to come, to put all the sub-projects together and do some brainstorming!<br /><br />My demonstration was quite successful, and this time I really did deliver the presentation in PDF format only (smiles).<br /><br />Interesting things happened there: a police cybercar, successful demonstrations, and good sightseeing in amazing, typical local places!<br /><br />I really brought back some of the Chinese atmosphere...<br /><br />Concerning my project, I have almost decided to use HOG features as the discriminant features in my object recognition system; they have shown excellent results! A short sketch of extracting them appears after this post.<br /><br />Below, some pictures!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdrX7FQKW7KRbaANqfXK9ozwG0D1nSsXlfUVJ48uXqe9EjxZzmfHyUl2M8dmJZ4piNcFAknakkpiSMB6hBXAR5DpJNtU0uEGQhZNGK_Y3P75BTrN_X6xr5tvzf2bsyzVmruHSfkCFuDZc/s1600-h/dsc02612.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdrX7FQKW7KRbaANqfXK9ozwG0D1nSsXlfUVJ48uXqe9EjxZzmfHyUl2M8dmJZ4piNcFAknakkpiSMB6hBXAR5DpJNtU0uEGQhZNGK_Y3P75BTrN_X6xr5tvzf2bsyzVmruHSfkCFuDZc/s320/dsc02612.jpg" alt="" id="BLOGGER_PHOTO_ID_5070098545506619410" border="0" /></a><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNBNiLS4m7e11-_w748xlnN1d8auQgghhYVpHBgWBDiEPSyzNTLI6CL5H0v7eYd3JdupxrGantbslBvYtG_Se0Yu00PKUHmAxiptU2F_GtVlCEyKRW4zYCYVqX-ochdiUPbhB_LlWKq5M/s1600-h/dsc02519.jpg"><img style="cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNBNiLS4m7e11-_w748xlnN1d8auQgghhYVpHBgWBDiEPSyzNTLI6CL5H0v7eYd3JdupxrGantbslBvYtG_Se0Yu00PKUHmAxiptU2F_GtVlCEyKRW4zYCYVqX-ochdiUPbhB_LlWKq5M/s320/dsc02519.jpg" alt="" id="BLOGGER_PHOTO_ID_5070096999318392834" border="0" /></a>Luciano Oliveirahttp://www.blogger.com/profile/06774077093507244120noreply@blogger.com0
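<p>As a footnote to the HOG remark in the post above, here is a minimal sketch of extracting a HOG descriptor, assuming the scikit-image library and the usual Dalal-Triggs parameters; it is only an illustration, not the exact setup used in my recognition system:</p><pre>
from skimage import color, data
from skimage.feature import hog

# A grayscale test image standing in for a detection window.
image = color.rgb2gray(data.astronaut())

# Dalal-Triggs-style parameters: 9 orientation bins, 8x8-pixel cells,
# 2x2-cell blocks with L2-Hys block normalization.
descriptor = hog(image,
                 orientations=9,
                 pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2),
                 block_norm='L2-Hys')

# The result is one long feature vector, ready to feed an SVM or a boosted classifier.
print(descriptor.shape)
</pre>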