Computer vision system to automate acquisition of detailed building information

Source: Xinhua   2016-07-02 07:51:54

SAN FRANCISCO, July 1 (Xinhua) -- Researchers at Stanford University in northern California have developed a system that uses three-dimensional (3-D) sensing technologies and a computer vision algorithm to acquire detailed building information.

Newer buildings often have computerized blueprints and records, including details such as the number of rooms, doors and windows, and the square footage of floors, ceilings and walls. But such information may not exist for older buildings, necessitating the time-consuming and difficult task of collecting these details manually for remodeling or refurbishing purposes.

The new system, presented by Stanford researchers at the Institute of Electrical and Electronics Engineers (IEEE) Conference on Computer Vision and Pattern Recognition, which started Sunday and ended Friday, is designed to automate the process of gathering detailed building information. It first uses light to measure every feature of a building's interior, room by room and floor by floor, creating a massive data file that captures the spatial geometry of the building, and then feeds that raw data file into a new computer vision algorithm.

The algorithm identifies structural elements such as walls and columns, as well as desks, filing cabinets and other furnishings.

"Renovation projects live and die by the quality of information," according to Martin Fischer, a Stanford professor of civil and environmental engineering.

The new process is the brainchild of Stanford doctoral student Iro Armeni, with interdisciplinary oversight from Silvio Savarese, a Stanford assistant professor in computer science who leads the Computational Vision and Geometry Lab, and Fischer, who heads the Center for Integrated Facility Engineering.

"People have been trying to do this on a much smaller scale, just a handful of rooms," said Savarese. "This is the first time it's possible to do it at the scale of whole buildings, with hundreds of rooms."

Armeni, once an architect on the Greek island of Corfu, used to work on custom renovations of historical buildings hundreds of years old. She and her colleagues used tape measures to redraw building plans, a practice that is time-consuming and often inaccurate.

She began by replacing her tape measure with laser scanners and 3-D cameras, which use light to take measurements with up to millimeter accuracy. When placed inside a building, they send out pulses of light in all directions, bathing every interior surface. By recording precisely how long it takes for each beam of light to hit a point in the room and bounce back, they create a data file consisting of millions of measurements, each marking a specific point where a beam of light met a surface. This massive data file is called a raw point cloud.
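
To make the time-of-flight idea concrete, the sketch below shows how a single pulse's round-trip time and beam direction could be converted into one (x, y, z) point of a point cloud. It is a minimal illustration in Python with NumPy; the scanner geometry, function names and example values are assumptions for illustration, not the researchers' actual scanning software.

```python
# Minimal sketch: turning one time-of-flight measurement into a 3-D point.
# All names and values here are illustrative assumptions.
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def pulse_to_point(round_trip_time_s, azimuth_rad, elevation_rad, origin=(0.0, 0.0, 0.0)):
    """Convert a pulse's round-trip time and beam direction into an (x, y, z) point."""
    distance = SPEED_OF_LIGHT * round_trip_time_s / 2.0  # half the round trip
    direction = np.array([
        np.cos(elevation_rad) * np.cos(azimuth_rad),
        np.cos(elevation_rad) * np.sin(azimuth_rad),
        np.sin(elevation_rad),
    ])
    return np.asarray(origin) + distance * direction


# Millions of such points, one per pulse, make up the "raw point cloud".
point_cloud = np.stack([
    pulse_to_point(t, az, el)
    for t, az, el in [(2.0e-8, 0.1, 0.0), (3.5e-8, 1.2, 0.4)]  # two example pulses
])
```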

However, humans had to look at the point cloud on a computer screen to identify building elements such as windows, walls, hallways and furniture and then type that information into software tools. To the computer, the point cloud was an undifferentiated mass of data.

The Stanford team's innovation was to develop a computer vision system that can analyze the point cloud for a building, distinguish the rooms, and then categorize each element in each room, automating the second half of the process and removing the need for humans to annotate the data. Because buildings vary in many ways, including room size, purpose and interior decoration, this is where machine learning and computer vision came in.

To train their computer vision system, the researchers collected a large amount of 3-D point cloud data that humans had annotated. These annotations specified all sorts of building features. Armeni managed the task of feeding this annotated point cloud data to the algorithm.
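
As an illustration of what training on human-annotated points can look like, the sketch below fits an off-the-shelf per-point classifier (a scikit-learn random forest) to synthetic labeled data. The article does not describe the Stanford system's actual learning method, so this is a stand-in, and the data, features and labels are hypothetical.

```python
# Minimal sketch of supervised training on annotated point cloud data.
# The classifier, features and labels are stand-ins, not the Stanford method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical annotated data: one row of features per point (e.g. xyz plus an
# estimated surface normal) and one integer label per point supplied by humans
# (e.g. 0 = wall, 1 = column, 2 = desk, 3 = filing cabinet).
point_features = rng.normal(size=(10_000, 6))
point_labels = rng.integers(0, 4, size=10_000)

classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(point_features, point_labels)

# Once trained, the classifier can label every point of a new, unannotated scan.
new_scan = rng.normal(size=(5, 6))
predicted_elements = classifier.predict(new_scan)
```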

Through repetition, the system "learned" to recognize different building elements. Ultimately, the researchers created an algorithm that can analyze raw point cloud data from an entire building and, without human assistance, identify the rooms, enter each room, and detail the structural elements and furniture.
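
The skeleton below sketches that two-stage flow: split a building-scale point cloud into rooms, then label the elements inside each room. Both stages are deliberately simple placeholders (a crude floor-height heuristic and a generic trained per-point classifier), since the article does not detail the actual algorithms.

```python
# Conceptual skeleton of the two-stage pipeline: room segmentation, then
# per-room element labeling. Both stages are placeholder assumptions.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Room:
    points: np.ndarray                            # (N, 3) points in this room
    elements: dict = field(default_factory=dict)  # element label -> point indices


def split_into_rooms(building_points: np.ndarray) -> list[Room]:
    """Placeholder segmentation: crudely slice points into ~3 m horizontal bands,
    one per floor; real room segmentation would be far more sophisticated."""
    bands = np.round(building_points[:, 2] / 3.0)
    return [Room(points=building_points[bands == b]) for b in np.unique(bands)]


def label_elements(room: Room, classifier) -> Room:
    """Placeholder labeling: apply a trained per-point classifier (integer class ids assumed)."""
    labels = classifier.predict(room.points)
    for label in np.unique(labels):
        room.elements[int(label)] = np.flatnonzero(labels == label)
    return room


# Usage, given an (N, 3) array `building_points` and a classifier trained on xyz features:
# rooms = [label_elements(room, classifier) for room in split_into_rooms(building_points)]
```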

"This kind of geometric, contextual reasoning is one of the most innovative parts of the project," Savarese was quoted as saying by a news release from Stanford.

Armeni hopes to move the project forward and create an algorithm that can track the whole life cycle of a building, through design, construction, occupation and demolition. "As engineers, we shouldn't lose time trying to find the current status of our building," she said. "We should invest this time in doing something creative and making our buildings better."

Editor: xuxin