Volvo S60 with pedestrian detection
This is a scientific blog with information about computer vision (CV) and intelligent transportation systems (ITS). The goal is also to cover biological models underlying systems that mimic human perception.
Volvo has deployed its novel system, based on a radar and a camera. The commercial, at least, makes it look worthwhile; I haven't seen any independent test of it yet:
Posted by Luciano Oliveira at 1:01 AM 0 comments
This article was taken from http://www.gizmag.com/sentience-adaptive-intelligent-cruise-control-driverless-car/11223/
More and more initiatives are appearing that point to the future of intelligent vehicles. Now, it is the turn of intelligent cruise control.
-----------------
March 11, 2009 The driverless car of the future is getting closer every day, as more and more technologies come along that take critical jobs away from the driver and put them in the hands of lightning-fast, all-seeing computers. One of the latest and most ambitious of these systems has just been successfully demonstrated in the UK; the Sentience system is a kind of hyper-intelligent cruise control system designed specifically to minimize fuel consumption and emissions. It calculates the best route for you based on traffic, topography, curves, speed limits and a host of other information, and then actually takes over the throttle and brakes for you for the entire journey. It keeps you strictly within speed limits, slows down for corners, speed bumps and roundabouts, and it even knows when the lights ahead are about to turn red, so you don't waste petrol accelerating towards a stop point. Fuel savings in testing have been between 5% and 24% - a very significant figure - and Sentience is expected to be available on production cars, for a minimal cost, as soon as 2012. Incredible stuff.
Cruise control systems are getting smarter. Adaptive cruise control units can monitor the distance to the car in front of you on the freeway and maintain a safe gap - and they're becoming a fairly common factory option. Other systems are learning to read speed signs, in order to save drivers from draconian automatic speed enforcement systems.
Now, a UK research partnership has demonstrated a system which goes far beyond anything we've seen before. The ominously titled Sentience system runs from a mobile phone that's connected to the car's onboard ECU. It sucks in huge amounts of data as you travel, analyzing your planned route in terms of traffic, gradients, curves, speed limits and even probable speed limiting features such as junctions, crossings, schools, speed bumps, roundabouts and traffic lights. It then manages your acceleration and deceleration in such a way as to deliver maximum efficiency from a hybrid engine, resulting in demonstrated fuel savings of between 5% and 24%, depending on traffic and topology. Scale that out to a large number of vehicles and you're looking at huge benefits, fuel-wise and in terms of emissions.
The Sentience partnership
The Sentience system is the result of a multi-industry research partnership aimed at reducing CO2 emissions and fuel usage in hybrid vehicles. Ricardo and Jaguar-Land Rover brought their knowledge of car electronics and engine management to the table, TRL provided expertise on traffic, traffic signals and road usage patterns, Ordnance Survey contributed a massive breadth and depth of information about the UK road system, including curvature and topography information, and Orange Business Services chipped in with their knowledge of mobile phone software and handset connectivity.
Between the five major partners, a system has emerged that acts like a kind of intelligent adaptive cruise control system that knows the roads you're taking, and how exactly to drive them for maximal energy efficiency and minimal emissions. The Sentience test vehicle was demonstrated earlier this month at a UK test track, with representatives of the media invited to drive a car equipped with the system, running through a Nokia N95 mobile phone.
How it works
Based on route information – which could eventually be integrated with a commercial navigation system – the Sentience vehicle will calculate and follow an optimal driving strategy. Its control system adjusts vehicle speed, acceleration and deceleration via its adaptive cruise control and regenerative braking. Using GPS and mapping data it takes into account the speed limits, traffic conditions, the road’s gradient and features including bends and even speed bumps, as well as less predictable road features including roundabouts, to determine the most efficient possible route.
It's also keyed in to traffic light timing, so it will automatically start decelerating if it knows the green light you're approaching is about to turn red. The driver simply keys in a destination, and steers the car without a foot on either pedal, letting the car make the decisions on acceleration and braking. Of course, you'd want to keep your foot close to the brakes to over-ride the system in case of an emergency situation.
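The traffic-light behaviour described above can be sketched in a few lines. This is only an illustration of the idea, not the actual Sentience implementation; the function name and all numbers are assumptions.

```python
# Illustrative sketch (NOT the actual Sentience code): stop accelerating
# toward a green light that will turn red before the car can reach it.
# All names and numbers are assumptions.

def should_coast(distance_m, speed_mps, seconds_until_red):
    """True if the light ahead will already be red when we arrive,
    so accelerating toward it would only waste fuel."""
    if speed_mps <= 0:
        return False
    time_to_light = distance_m / speed_mps
    return time_to_light > seconds_until_red

# 200 m from the light at 14 m/s (~50 km/h), with the green ending in 10 s:
# we would arrive in ~14.3 s, after the change, so coast now.
print(should_coast(200, 14, 10))   # True
```

The real system would fold this decision into a whole-route optimization rather than a single greedy check, but the per-light logic reduces to this comparison.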
The Sentience system also concentrates on getting the most out of hybrid drive systems, by optimizing the regenerative braking strategy for the batteries and increasing the availability of electric-only drive mode where possible.
Towards a driverless future
It's a clear step forward toward a future where your car will do the driving and you'll just be a passenger - and a demonstration of how a computer with detailed route and traffic signal information can make a huge difference to fuel consumption and emissions.
The Sentience system is expected to be made available in new cars as soon as 2012 - and the fuel savings will add up to around UKP500 per year if you spend around UKP50 a week on petrol. Scaled out to a large number of cars across the UK, the system could save between 1.2 and 2.9 million barrels of oil per year.
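The quoted savings figure is easy to sanity-check; the arithmetic below simply restates the article's own numbers.

```python
# Sanity check of the article's savings claim, using its own numbers.
weekly_spend = 50.0                      # UKP per week on petrol
annual_spend = weekly_spend * 52         # UKP 2600 per year
claimed_saving = 500.0                   # UKP per year
fraction = claimed_saving / annual_spend
print(f"{fraction:.1%}")                 # 19.2%, inside the quoted 5-24% band
```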
What are your thoughts?
Assuming it works as well in the real world as it has on the test track, would you sacrifice your total control and a few minutes of travel time if the Sentience system could guarantee you the minimum possible fuel consumption, the most fuel-efficient route and an absolute certainty that you won't be getting a speeding ticket? How do you feel about a future where a large number of cars run this sort of system? Let us know in the comments below.
Loz Blain
Posted by Luciano Oliveira at 5:00 AM 0 comments
City Safety, the new system from Volvo. News taken from http://www.thecarthatstopsitself.com/
Videos of the system are available on Volvo's YouTube channel.
--------------------
Ok, so like many of my male friends, I don’t want to give up control of anything to anyone, much less to a car. I mean, why would I want some high tech device to stop my car…I’m not good enough? Well probably not. Last summer my wife and I were heading down to Washington, D.C., stuck in world famous I-95 traffic. I’m stopped waiting for some signs of life ahead. I looked over my shoulder at our dog and bumped into the guy in front. I looked up and saw that I was moving….too late. Honestly, it wasn’t more than 4 mph. The only damage was to his car, where my license plate bolts scraped his fender. That cost me $550. That wouldn’t have happened with City Safety.
In April this year I drove an XC60 with City Safety. After I-95, I think City Safety has my vote. When driving the XC60 with City Safety, I was surprised at how quickly and aggressively the system applies the brakes; at speeds less than 9 mph it stopped our XC60 completely. OK, if there’s ice, snow, rain or whatever makes a road surface slippery, the stopping distance will change. From 10-18 mph the system will reduce vehicle speed and help to reduce vehicle damage and injuries, especially whiplash. I borrowed this from an IIHS report: “Rear-end collisions are frequent, and neck injuries are the most common serious injuries reported in automobile crashes. They account for 2 million insurance claims each year costing at least $8.5 billion. Such injuries aren’t life-threatening, but they can be painful and debilitating.” Guess I helped contribute to that cost next go around.
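The speed bands described above map naturally onto a small decision function. The thresholds come from the post; the function itself is only an illustration, not Volvo's actual control logic.

```python
# Illustrative mapping of the speed bands described in the post; the 9 and
# 18 mph thresholds are the post's figures, not Volvo's published spec.
def city_safety_response(speed_mph):
    if speed_mph < 9:
        return "full automatic stop"
    if speed_mph <= 18:
        return "brake to reduce impact speed"
    return "no automatic intervention"

print(city_safety_response(8))    # full automatic stop
print(city_safety_response(15))   # brake to reduce impact speed
```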
City Safety works by calculating the distance to the forward vehicle and the relative speed of the two vehicles to determine if your near future includes bumping into a stranger’s vehicle. Mounted up behind the windshield, near the inside rear-view mirror, is a laser camera. Interesting – all these years we’ve built safety systems that are hidden, like energy absorbing systems, great brakes, collapsing steering columns, whiplash seats, stuff that’s just waiting to be used, but you’ll never see. With City Safety it’s right in sight; guess you could use it to impress the neighbors. Hey, no one has this but Volvo.
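The distance-and-relative-speed calculation described here is essentially a time-to-collision (TTC) estimate. The sketch below shows the idea; the threshold value and all names are assumptions, not Volvo's published parameters.

```python
# Time-to-collision sketch of the idea behind City Safety; the 0.6 s
# threshold and the function names are illustrative assumptions.

def time_to_collision(gap_m, own_speed_mps, lead_speed_mps):
    """Seconds until impact, or None if we are not closing the gap."""
    closing = own_speed_mps - lead_speed_mps
    if closing <= 0:
        return None
    return gap_m / closing

def autobrake(gap_m, own_speed_mps, lead_speed_mps, ttc_threshold_s=0.6):
    ttc = time_to_collision(gap_m, own_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s

# Creeping at ~4 mph (1.8 m/s) toward stopped traffic 1 m ahead:
print(autobrake(1.0, 1.8, 0.0))   # True: TTC is about 0.56 s, brake now
```

This also explains the engineer's point in the next paragraph: at a TTC well under a second, there is no time for a warning light plus human reaction, so the system brakes directly.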
I asked the engineer who worked on this, why no red warning light, like a “heads up display?” “By the time you see the light, and we wait for you to apply brakes, you’ll have hit the car in front.” His logic, and he’s right, is that the system works faster than you or I. Our Collision Warning System with Automatic Braking does have a nice red LED ‘bar’ light to warn you, looks like someone’s brake light right in front – heads up display style, but then in most cases you do have time to react.
Guess we men just have to learn when to bow to something smarter than us. I like City Safety. Well, back to work. James, my counterpart out West, is pestering me for help with the Detroit Auto Show; we have press days January 11-13 and public days start the 19th.
Posted by Luciano Oliveira at 7:11 AM 0 comments
Amazing news: the Institute of Systems and Robotics of the University of Coimbra (the institute where I am taking my PhD) was recently ranked as Excellent by an international jury (more in: All evaluations and ISR evaluation). Below is an email from ISR-UC's director:
"Dear Researchers
I want to inform you that the International Panel which evaluated the research units in Portugal in the area of Electrical Engineering and Computer Science gave us the top ranking grade - Excellent. Only one unit in 25 received this grade. I want to thank all of you for your hard work and dedication towards producing first class research results. We want to keep carrying out research work at the same or an even better level, and for this purpose your commitment will be essential and most appreciated.
I wish you and your families Happy Christmas Holidays, as well as a Very Good New Year."
Prof. Anibal T. de Almeida
Director
ISR - University of Coimbra
Dep. Electrical Engineering , Polo II
3030 Coimbra, Portugal
Posted by Luciano Oliveira at 3:03 AM 0 comments
I completely agree with the article below! I took it from http://web.mit.edu/newsoffice/2008/computer-vision-0124.html. In my view, the question is unsolvable for now, since we have no metric for quantifying how cluttered an image is or for pointing out which of its characteristics make it difficult. Read the article below!
-------------------------
For years, scientists have been trying to teach computers how to see like humans, and recent research has seemed to show computers making progress in recognizing visual objects.
A new MIT study, however, cautions that this apparent success may be misleading because the tests being used are inadvertently stacked in favor of computers.
Computer vision is important for applications ranging from "intelligent" cars to visual prosthetics for the blind. Recent computational models show apparently impressive progress, boasting 60-percent success rates in classifying natural photographic image sets. These include the widely used Caltech101 database, intended to test computer vision algorithms against the variety of images seen in the real world.
However, James DiCarlo, a neuroscientist in the McGovern Institute for Brain Research at MIT, graduate student Nicolas Pinto and David Cox of the Rowland Institute at Harvard argue that these image sets have design flaws that enable computers to succeed where they would fail with more-authentically varied images. For example, photographers tend to center objects in a frame and to prefer certain views and contexts. The visual system, by contrast, encounters objects in a much broader range of conditions.
"The ease with which we recognize visual objects belies the computational difficulty of this feat," explains DiCarlo, senior author of the study in the Jan. 25 online edition of PLoS Computational Biology. "The core challenge is image variation. Any given object can cast innumerable images onto the retina depending on its position, distance, orientation, lighting and background."
The team exposed the flaws in current tests of computer object recognition by using a simple "toy" computer model inspired by the earliest steps in the brain's visual pathway. Artificial neurons with properties resembling those in the brain's primary visual cortex analyze each point in the image and capture low-level information about the position and orientation of line boundaries. The model lacks the more sophisticated analysis that happens in later stages of visual processing to extract information about higher-level features of the visual scene such as shapes, surfaces or spaces between objects.
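A front end like the one described, with artificial neurons tuned to line position and orientation, is often modelled with Gabor-like filters. The sketch below is a crude illustration of that idea, not the authors' exact model; the filter size and parameters are assumptions.

```python
# Crude sketch of a V1-like front end: a bank of oriented, Gabor-like
# filters. Sizes and parameters are illustrative assumptions.
import numpy as np

def oriented_filter(theta, size=7, sigma=2.0, freq=0.5):
    """A simple Gabor-like filter responding to edges at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    return envelope * np.cos(2 * np.pi * freq * xr)     # oriented grating

# A bank of four orientations, as in coarse models of primary visual cortex.
bank = [oriented_filter(t) for t in np.linspace(0, np.pi, 4, endpoint=False)]

# A vertical edge drives the vertically tuned filter (theta = 0) more
# strongly than the horizontally tuned one (theta = pi/2).
edge = np.zeros((7, 7))
edge[:, 3:] = 1.0
responses = [abs(np.sum(f * edge)) for f in bank]
```

Stacking such responses gives exactly the kind of low-level position/orientation code the paragraph describes, with none of the higher-level shape or surface analysis of later visual areas.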
The researchers intended this model as a straw man, expecting it to fail as a way to establish a baseline. When they tested it on the Caltech101 images, however, the model did surprisingly well, with performance similar to or better than that of five state-of-the-art object-recognition systems.
How could that be? "We suspected that the supposedly natural images in current computer vision tests do not really engage the central problem of variability, and that our intuitions about what makes objects hard or easy to recognize are incorrect," Pinto explains.
To test this idea, the authors designed a more carefully controlled test. Using just two categories--planes and cars--they introduced variations in position, size and orientation that better reflect the range of variation in the real world.
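The kind of controlled variation the authors introduced can be mimicked crudely in a few lines: render the same object template at random positions and scales on a blank canvas. Everything below is illustrative, not the paper's actual test protocol.

```python
# Illustrative stand-in for the paper's controlled-variation test: place
# one object template at random positions and scales. Not the authors'
# actual stimuli or protocol.
import numpy as np

rng = np.random.default_rng(0)

def render_with_variation(template, canvas_size=64):
    """Paste the template into a blank canvas at a random position/scale."""
    scale = int(rng.integers(1, 4))                     # 1x to 3x
    obj = np.kron(template, np.ones((scale, scale)))    # nearest-neighbour upscale
    canvas = np.zeros((canvas_size, canvas_size))
    r = int(rng.integers(0, canvas_size - obj.shape[0] + 1))
    c = int(rng.integers(0, canvas_size - obj.shape[1] + 1))
    canvas[r:r + obj.shape[0], c:c + obj.shape[1]] = obj
    return canvas

plane = np.ones((8, 8))   # stand-in "plane" template
samples = [render_with_variation(plane) for _ in range(10)]
```

Even this toy generator defeats any recognizer that relies on objects sitting centered at a fixed size, which is exactly the photographer's bias the authors identified in Caltech101.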
"With only two types of objects to distinguish, this test should have been easier for the 'toy' computer model, but it proved harder," Cox says. The team's conclusion: "Our model did well on the Caltech101 image set not because it is a good model but because the 'natural' images fail to adequately capture real-world variability."
As a result, the researchers argue for revamping the current standards and images used by the computer-vision community to compare models and measure progress. Before computers can approach the performance of the human brain, they say, scientists must better understand why the task of object recognition is so difficult and the brain's abilities are so impressive.
Posted by Luciano Oliveira at 7:39 AM 0 comments