2018, Revista Internacional De Metodos Numericos Para Calculo Y Diseno En Ingenieria
https://doi.org/10.23967/J.RIMNI.2018.06.001…
Digital Photogrammetry and Remote Sensing '95, 1995
Automated digital photogrammetric systems are considered passive three-dimensional vision systems, since they obtain object coordinates using only the information contained in intensity images. Active 3-D vision systems, such as laser scanners and structured light systems, obtain object coordinates from external information such as scanning angle, time of flight, or the shape of projected patterns. Passive systems provide high accuracy on well-defined features, such as targets and edges; however, unmarked surfaces are hard to measure. These systems may also be difficult to automate in unstructured environments, since they are highly affected by ambient light. Active systems provide their own illumination and the features to be measured, so they can easily measure surfaces in most environments. However, they have difficulties with varying surface finish or sharp discontinuities such as edges. Each type of sensor is therefore better suited to specific types of objects and features, and the two are often complementary. This paper compares the measurement accuracy, as applied to various types of features, of some technologically different 3-D vision systems: photogrammetry-based (passive) systems, a laser scanning system (active), and a range sensor using a mask with two apertures and structured light (active).
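The passive/active distinction can be made concrete with the triangulation formulas both sensor families rely on. The sketch below is illustrative only (function names, geometry, and numbers are not from the paper): passive stereo recovers depth from the disparity between two intensity images, while an active laser triangulation sensor recovers it from the known projection angle of its own light source.

```python
import math

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Passive stereo: depth from the disparity between two intensity
    images of cameras separated by a known baseline."""
    return focal_px * baseline_m / disparity_px

def laser_depth(scan_angle_rad, baseline_m):
    """Active laser triangulation: depth from the known scanning angle,
    for a point on the camera's optical axis (simplified geometry)."""
    return baseline_m * math.tan(scan_angle_rad)
```

In both cases the second "view" closes the triangle: in passive stereo it is another intensity image, so a well-defined image feature is needed, while the active sensor supplies its own feature (the projected spot), which is why it copes with unmarked surfaces.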
This paper presents an active approach to computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.
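The moving-edge idea can be sketched as a 1-D search along the edge normal: for each sample point on the predicted limb, the tracker looks for the strongest intensity transition in a small window perpendicular to the edge. The code below is a simplified illustration of that principle, not the authors' implementation.

```python
import numpy as np

def track_edge_point(image, pt, normal, search_range=5):
    """Re-locate an edge point by searching along its normal direction.

    Samples the image at integer offsets along the (unit) normal and
    returns the sub-sample position of the strongest intensity transition,
    taken as the midpoint of the sample pair with the largest difference.
    """
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    base = np.asarray(pt, dtype=float)
    samples = []
    for k in range(-search_range, search_range + 1):
        x, y = (base + k * normal).round().astype(int)
        samples.append(float(image[y, x]))
    grads = np.abs(np.diff(samples))          # response between neighbours
    best = int(np.argmax(grads))              # strongest transition
    offset = best + 0.5 - search_range        # midpoint of that pair
    return base + offset * normal
```

Because only a short 1-D window is scanned per point, the cost per frame is low, which is consistent with the real-time performance the abstract reports.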
2002
Three-dimensional surface reconstruction using a handheld scanner is a process with great potential for use in different fields of research, commerce, and industrial production. In this article we describe the evolution of a project comprising the study and development of a system that implements the aforementioned process based on two-dimensional images. We present our current work on the development of a fully portable, handheld system using cameras, projected structured light, and attitude and positioning sensors: the Tele-3D scanner.
This paper describes the implementation of a 3D handheld scanning system based on visual-inertial pose estimation and the structured light technique. The scanning system is composed of a stereo camera, an inertial navigation system (INS), and an illumination projector to collect high-resolution data for close-range applications. The proposed algorithm for visual pose estimation is based either on feature matching or on an accurate target object. The integration of the INS enables the scanning system to provide fast and reliable pose estimates that support the visual ones. A block matching algorithm was used to render two-view 3D reconstruction. For the multi-view 3D approach, rough registration and final alignment of the point clouds using the iterative closest point (ICP) algorithm further improve the scanning accuracy. The proposed system is potentially advantageous for the generation of 3D models in biomedical applications.
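The final alignment step mentioned above is the iterative closest point algorithm: repeatedly match each source point to its nearest target point, solve for the rigid motion that best superimposes the matches, and apply it. A minimal point-to-point version (illustrative only, with a brute-force nearest-neighbour search rather than the authors' implementation):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ≈ src @ R.T + t (Kabsch/SVD method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: match, solve, apply, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(dists, axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

ICP only converges from a good initial guess, which is exactly why the pipeline above performs a rough registration before the final ICP alignment.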
Transactions of the VŠB - Technical University of Ostrava, Mechanical Series, 2015
The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis is placed on the segmentation of a depth image and on noise filtration. MS Kinect was used to evaluate the potential of object localization based on the depth image. This tool, an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on locating objects in its field of view. In our case, balls of fixed diameter were used as the objects for 3D localization.
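Once the depth image is segmented into candidate 3D points, locating a ball of known diameter reduces to fitting a sphere. A standard algebraic least-squares fit (a sketch of the general technique, not necessarily the paper's method) rewrites |p - c|² = r² as the linear system 2 p·c + (r² - |c|²) = |p|²:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (n, 3) point array.

    Solves 2 p·c + k = |p|² for the centre c and k = r² - |c|²,
    then recovers the radius; the fitted radius can be checked
    against the known ball diameter to reject bad segments.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius
```

With noisy Kinect depth data this fit would typically be wrapped in RANSAC, using the known radius as an additional consistency test.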
in VISAPP 2006-First International …, 2006
In this paper we describe the development of a computer platform whose goal is to recover the three-dimensional (3D) structure of a scene or the shape of an object using Structure From Motion (SFM) techniques. SFM is an active computer vision technique that requires neither contact nor energy projection. The main objective of this project is to recover the 3D shape of an object or scene using the movement of the camera(s) or of the object, without imposing any kind of restriction on it. Starting from an uncalibrated sequence of images, this movement is extracted, as well as the camera parameters, and finally the 3D geometry of the object or scene is inferred. Briefly, in the first section of this paper the goals are defined; in the second, the computer platform is presented, together with some experimental results; in the third and last section, conclusions about the study and work done are drawn and some perspectives on future work are given.
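At the core of any SFM pipeline, once camera motion and parameters are recovered, is triangulation: each 3D point is intersected from its projections in two or more views. A minimal linear (DLT) two-view triangulation, shown as an illustration of the general technique rather than the platform's actual implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point.

    P1, P2 are 3x4 projection matrices; x1, x2 are the point's
    normalized image coordinates in each view. Each observation
    contributes two homogeneous equations; the 3D point is the
    null vector of the stacked system (smallest singular vector).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```

With an uncalibrated sequence, as in the abstract, the projection matrices are themselves estimated first (up to a projective ambiguity), and triangulation is then applied view pair by view pair.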
In this paper we investigate an active vision technique implemented in an embedded system for 3D shape reconstruction. The main objective of the work is to balance the accuracy of all components of the system, where the size and autonomy of such an embedded sensor are hard constraints. This is achieved by improving the pre-processing algorithms, reducing the time needed to compute the spot centers. In addition, the lens distortion of the camera is included in the model to increase accuracy when reconstructing objects. Experimental evaluation shows that size and processing time are reduced and precision is increased, while the resources spent on processing remain acceptable in comparison to the benefits.
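Computing the spot centers, the step the abstract says was accelerated, is commonly done with an intensity-weighted centroid, which gives sub-pixel precision at very low cost. A minimal sketch of that common approach (not the authors' optimized version):

```python
import numpy as np

def spot_center(patch, threshold=0.0):
    """Sub-pixel spot centre as the intensity-weighted centroid.

    Pixels at or below the threshold are ignored so that background
    noise does not pull the centroid; returns (x, y) in patch
    coordinates.
    """
    w = np.where(patch > threshold, patch.astype(float), 0.0)
    total = w.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (xs * w).sum() / total, (ys * w).sum() / total
```

On an embedded target, the cost of this step is dominated by how many pixels are visited, so restricting the computation to small thresholded patches is the usual way to cut processing time.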
3D perception has seen impressive advances in the past three years, owing to several technological improvements and to many new development teams providing open-source software. First of all, researchers in robotics and 3D perception have benefited from the Kinect sensor; some earlier works were devoted to 3D cameras using more expensive time-of-flight optical devices. Another common way to acquire dense 3D data is to scan the environment with a laser range finder (LRF), for example the tilting Hokuyo LRF integrated on the PR2 robot by Willow Garage. To build a dense geometrical model of an indoor environment, several sensors could be selected for acquiring 3D data. This paper aims to give some insights into this selection, presenting pros and cons for the Kinect, Hokuyo, and ToF optical sensors.
2011 15th International Conference on Advanced Robotics (ICAR), 2011
In this paper we present an approach to localizing planar furniture parts in 3D range camera data for autonomous robot manipulation, estimating both their six degree of freedom (DoF) poses and their dimensions. Range cameras are a promising sensor category for mobile robotics. Unfortunately, many of them come with considerable measurement noise, which leads to difficulties when trying to detect objects or their parts, e.g. using canonical methods for range image segmentation. In contrast, our approach is able to overcome these issues by combining concepts of 2D and 3D computer vision and by integrating intensity and depth data at several levels of abstraction. It is therefore not restricted to range sensors with high image quality and also scales to cameras with lower image quality. The concept is generic and has been implemented for elliptical object parts as a proof of concept.
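For planar parts, a 6-DoF pose estimate typically starts from a least-squares plane fit to the segmented depth points: the centroid fixes the translation and the plane normal fixes two of the three rotational degrees of freedom. A minimal SVD-based sketch of that standard building block (illustrative, not the paper's full pipeline):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (n, 3) point cloud.

    Returns (centroid, unit normal). The normal is the right singular
    vector of the centred data with the smallest singular value, i.e.
    the direction of least variance; this is robust to moderate noise
    because it averages over all points in the segment.
    """
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]
```

The remaining in-plane rotation and the part's dimensions would then be recovered in 2D, after projecting the points onto the fitted plane, which matches the 2D/3D combination the abstract describes.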
Optics and Lasers in Engineering
Full-field optical techniques can be reliably used for 3D measurement of complex shapes by multi-view processes, which require the computation of transformation parameters relating the different views to a common reference system. Although several multi-view approaches have been proposed, the alignment process remains the crucial step of shape reconstruction.
IEEE Transactions on Instrumentation and Measurement, 2015
IEEE Computer Graphics and Applications, 1998
IEEE Transactions on Robotics and Automation, 1991
International Conference on …, 2005
International Journal of Engineering Research and, 2020
Proceeding on Electronic Imaging Science and …
Journal of Physics: Conference Series, 2011
Videometrics and Optical Methods for 3D Shape Measurement, 2000
Sensors (Basel, Switzerland), 2017
ipi.uni-hannover.de, 2011
Scientific Bulletin of Valahia University - Materials and Mechanics