Global detection in cloud2model
This year we have started developing the next generation of tools for cloud2model. They represent a radical change in the current concept of automatic detection of BIM objects from point clouds.
The current tools.
Tools like WallByCloud2P and ColumnByCloud are the backbone of the automatic detection functionality of cloud2model. They are fast, reliable and fully configurable by the user.
The automatic detection is near instantaneous. And the configuration options allow the user to create the BIM objects directly, as finished as possible: new types are created if needed, with full control of the rounding and orthogonal tolerance.
But the user must create the elements one by one: picking a point for each column when using ColumnByCloud, and two points for each wall with WallByCloud2P. This is of course much faster than creating the elements manually, and far more so if you try to set the rounding and orthogonal tolerance precisely by hand.
Would it be possible to keep all the current advantages of the automatic detection of objects, but without the need to create them one by one?
Global detection.
The concept of global automatic recognition of possible BIM objects has already been implemented for some years, specifically in the MEP field, to assist in pipe/duct creation for industrial buildings. But the current implementations of the concept are far from ideal:
They force the user into a very rigid workflow in which the automatic detection must be run over the whole point cloud at once. They are not designed for partial approaches.
The detection is sometimes done in a separate app, with the result simply exported to the normal modelling environment (Revit).
The processing of the automatic detection can take a very long time, even several hours.
Because the whole point cloud is processed, it is very likely that we will end up with “false positives”: objects created by error. The user is expected to review the result in full, and it is important to note that there can be hundreds or thousands of objects to review.
Of course the current applications with global detection are interesting and powerful tools. But they are rigid, forcing users to follow very strict workflows. And they are very difficult to use only partially, or in fluid combination with other kinds of tools or workflows.
A different approach.
The research for the next generation of cloud2model detection tools is based on a different approach to global detection. The user directly defines the area or region where objects should be found. And this area can be as small or as large as the user decides, fitting any kind of partial or global workflow, or even being used for the detection of individual objects.
The video below shows a prototype for global recognition of columns. It is a simple test to evaluate the technology: it only draws the rectangular shape, instead of creating the actual Revit BIM column object. But it shows the possibilities of the new system.
The columns are detected as fast as with the conventional ColumnByCloud, even when selecting the whole floor plan area. And we can use it as partially as we want, even for individual columns. The text note for each column contains the detected dimensions and a percentage indicating the reliability of the detection.
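To illustrate the idea, here is a minimal sketch of region-limited detection: points are filtered to the user-selected area and clustered into column candidates, each reported with fitted dimensions and a reliability value. All names, the grid-bucket clustering and the confidence formula are assumptions for illustration only, not the actual cloud2model algorithm.

```python
# Hypothetical sketch, NOT the cloud2model implementation: the clustering
# and confidence formula are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class ColumnCandidate:
    x: float           # centre X of the fitted rectangle
    y: float           # centre Y of the fitted rectangle
    width: float       # fitted rectangle dimensions
    depth: float
    confidence: float  # 0.0 - 1.0 reliability estimate

def detect_columns(points, region, cell=0.6):
    """Cluster the points inside the user-selected region and fit an
    axis-aligned rectangle to each cluster (toy approach)."""
    xmin, ymin, xmax, ymax = region
    inside = [(x, y) for x, y in points
              if xmin <= x <= xmax and ymin <= y <= ymax]
    # Naive clustering: bucket points into coarse grid cells.
    buckets = {}
    for x, y in inside:
        buckets.setdefault((int(x // cell), int(y // cell)), []).append((x, y))
    candidates = []
    for pts in buckets.values():
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        # Toy confidence: denser clusters score higher (assumed heuristic).
        conf = min(1.0, len(pts) / 50)
        candidates.append(ColumnCandidate(
            (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2,
            max(xs) - min(xs), max(ys) - min(ys), conf))
    return candidates
```

The key point of the sketch is the first filtering step: only the points inside the region the user picked are ever processed, which is what keeps the detection fast and usable for partial workflows.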
The guidelines.
The development of the new cloud2model global tools is based on the following guidelines:
Fully integrated in the normal modelling environment, with the same convenience as the current tools: automatic creation of new types if needed, and direct control of the rounding and orthogonal tolerance.
Flexibility in defining the area where objects are to be found. The user will directly select the area to check, so the tools can be used as partially or globally as needed. They must also be practical for individual elements.
Fast performance. Unless the selected area is very large, the automatic detection must be near instantaneous, or at least take no more than one second. These must be “snappy” productivity tools.
User feedback about the reliability of the detection. It does not make sense for the user to check every detected object one by one. The tools must clearly show which objects have high confidence (review only optional) and which have low confidence (review recommended). The percentage shown in the prototype above is intended to be replaced by a simple color code.
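The planned color code could be as simple as a threshold mapping from the reliability value to a review color. The thresholds and color names below are assumptions to illustrate the guideline, not confirmed cloud2model values:

```python
# Hedged sketch: thresholds and colours are illustrative assumptions only.
def confidence_color(confidence: float) -> str:
    """Map a detection reliability (0.0 - 1.0) to a review colour."""
    if confidence >= 0.9:
        return "green"   # high confidence: review optional
    if confidence >= 0.6:
        return "yellow"  # medium confidence: quick check advised
    return "red"         # low confidence: review recommended
```

A glance at the colors then tells the user which of the hundreds of detected objects actually deserve a manual check.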