Saturday 31 August 2013

3D maps created in real time could aid disaster search and rescue


Highly detailed 3D maps of indoor and outdoor environments can now be created in real time, with no drift, thanks to a new mapping algorithm. The maps could be created in the immediate aftermath of a disaster to help search and rescue teams navigate and understand hazardous or unknown environments.

Researchers from MIT and the National University of Ireland used a low-cost Kinect camera to test the algorithm, filming environments and creating richly detailed 3D maps as they went. The camera recognises locations it has seen before, so when it returns to its starting point it can close the loop and stitch the images together.
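The article doesn't spell out how the camera "recognises" a place it has seen before. One common approach, assumed here purely for illustration rather than as the team's actual method, is to reduce each frame to a compact descriptor and compare it against descriptors stored for earlier keyframes, declaring a loop closure when the similarity is high enough:

```python
# Toy place-recognition sketch (an assumed approach, not the researchers' code):
# compare a compact descriptor of the current frame against stored keyframes.
import numpy as np

def descriptor(frame, grid=16):
    """Reduce a greyscale frame (2D array) to a small, normalised thumbnail vector."""
    h, w = frame.shape
    small = frame[:h - h % grid, :w - w % grid].reshape(
        grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    v = small.ravel().astype(float)
    v -= v.mean()
    return v / (np.linalg.norm(v) + 1e-9)

def find_loop_closure(current, keyframes, threshold=0.9):
    """Return the index of the best-matching earlier keyframe, or None if no match."""
    cur = descriptor(current)
    scores = [float(cur @ descriptor(k)) for k in keyframes]
    if not scores:
        return None
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```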

Some of the scenarios in which this technology could be used include "architecture and surveying-type operations, autonomous robotics settings and disaster management scenarios," Thomas Whelan, a PhD student at NUI, tells Wired.co.uk. "For example, areas of a building could be scanned in real-time by an architect to quickly design, visualise, and evaluate a renovation project. Or, a human operator could scan in parts of a building during a disaster scenario to evaluate the damage after an earthquake."

3D mapping is nothing new, but it has long suffered from a phenomenon known as "drift", in which the small errors in the estimated camera path accumulate and produce a disjointed map. To generate an accurate map, you have to know which of the millions of points need aligning. In the past this has been tackled by reprocessing the data over and over, but that isn't a practical approach to making maps in real time.
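To see why drift is unavoidable without a correction step, consider that each frame-to-frame motion estimate carries a tiny error, and the map is built by chaining hundreds of those estimates together. The minimal sketch below (an illustration written for this article, not the researchers' code) simulates a camera walking a closed square loop: with perfect estimates it would end exactly where it started, but with small per-step noise the end of the loop no longer lines up with the start.

```python
# Illustrative sketch of drift: small per-frame pose errors compound along a path.
import numpy as np

rng = np.random.default_rng(0)

def pose(theta, x, y):
    """2D rigid-body transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Ground truth: 400 small steps tracing a 10 m square loop back to the start.
side = [pose(0.0, 0.1, 0.0)] * 99 + [pose(np.pi / 2, 0.1, 0.0)]
true_steps = side * 4

world = np.eye(3)
for step in true_steps:
    # Each frame-to-frame estimate is slightly wrong: compose the true motion
    # with a small random error in rotation and translation.
    error = pose(rng.normal(0, 0.002), rng.normal(0, 0.002), rng.normal(0, 0.002))
    world = world @ step @ error

# With perfect estimates this would be (0, 0); the residual offset is the drift.
print("End-of-loop offset from the start (m):", world[:2, 2])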

"Being able to map in real-time has a number of significant advantages, such as being able to perform decision making regarding the mapped area as it is being explored, which again is important for autonomous robotics and search and rescue type situations," says Whelan.

The new algorithm, however, keeps track of the camera's pose and position throughout its journey, so that when the camera returns to its starting point it knows which adjustments to make. A Kinect camera captures images at 30 frames per second, which allows the algorithm to measure the camera's movement between each frame. It can then fix the points where walls and stairways don't meet and untangle warped pathways, manipulating them so they accurately represent the space the camera has moved through.
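The published system deforms the full dense map, which is considerably more involved than anything shown here. As a much simpler illustration of the underlying idea, assumed for this article rather than taken from the team's method, the sketch below spreads the end-of-loop error back along an estimated trajectory once the loop is recognised, so the final pose coincides with the first.

```python
# Illustrative loop-closure correction (an assumption, not the published algorithm):
# distribute the known end-of-loop error back along the estimated trajectory.
import numpy as np

def close_loop(positions):
    """Given estimated positions where the last should equal the first,
    spread the closing error linearly along the path."""
    positions = np.asarray(positions, dtype=float)
    error = positions[-1] - positions[0]             # how far the loop failed to close
    weights = np.linspace(0.0, 1.0, len(positions))[:, None]  # 0 at start, 1 at end
    return positions - weights * error               # later poses absorb more correction

# Example: a drifting square path whose end misses the start by (0.3, -0.2).
drifted = [(0, 0), (10, 0.1), (10.2, 10), (0.1, 10.1), (0.3, -0.2)]
corrected = close_loop(drifted)
print(corrected[-1])  # now coincides with the start: [0. 0.]
```

Real systems correct rotation as well as position and warp the dense surface model along with the trajectory, but the principle is the same: the recognised revisit tells the algorithm exactly how much error accumulated, and that error is pushed back through the map.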

"The density and fidelity of the maps enables the use of advanced object detection and semantic analysis techniques, providing very high level information about the area explored."

Maps of various indoor and outdoor locations in London, Sydney, Germany and Ireland, as well as at MIT itself, have been created using the technique. The team have made a video demonstrating the method, which shows maps being created and twisted into shape in real time.

The most exciting part of the work is probably the idea that autonomous robots could use the technology to make decisions about which direction to travel in and to gain a deeper understanding of their environments, but it has the potential to be used in all sorts of situations.

"I have this dream of making a complete model of all of MIT," says Whelan's colleague John Leonard, a professor of mechanical engineering at MIT. "With this 3D map, a potential applicant for the freshman class could sort of 'swim' through MIT like it's a big aquarium. There's still more work to do, but I think it's doable."

http://www.wired.co.uk/news/archive/2013-08/30/real-time-3d-mapping
