3D Video

Lucas, Laurent
Rémion, Yannick
Loscos, Céline

108,58 € (VAT incl.)

While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has sparked an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a broad spectrum of areas connected to 3D video, presented both theoretically and technologically, while taking into account physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide the necessary elements for understanding the underlying computer-based science of these technologies. They consider applications and perspectives previously unexplored due to technological limitations. This book guides the reader through the production process of 3D video, from acquisition through data processing and representation to 3D display. Several types of camera system are considered (multiscopic or multiview), each leading to different acquisition, modeling and storage-rendering solutions. The application of these systems is also discussed to illustrate their varying performance benefits, making this book suitable for students, academics and those involved in the film industry.

Contents

Part 1. 3D Acquisition of Scenes
1. Foundation, Laurent Lucas, Yannick Remion and Céline Loscos.
2. Digital Cameras: Definitions and Principles, Min H. Kim, Nicolas Hautière and Céline Loscos.
3. Multiview Acquisition Systems, Frédéric Devernay, Yves Pupulin and Yannick Remion.
4. Shooting and Viewing Geometries in 3DTV, Jessica Prévoteau, Laurent Lucas and Yannick Remion.
5. Camera Calibration: Geometric and Colorimetric Correction, Vincent Nozick and Jean-Baptiste Thomas.

Part 2. Description/Reconstruction of 3D Scenes
6. Feature Points Detection and Image Matching, Michel Desvignes, Lara Younes and Barbara Romaniuk.
7. Multi- and Stereoscopic Matching, Depth and Disparity, Stéphanie Prévost, Cédric Niquin, Sylvie Chambon and Guillaume Gales.
8. 3D Scene Reconstruction and Structuring, Ludovic Blache, Muhannad Ismael and Philippe Souchet.
9. Synthesizing Intermediary Viewpoints, Luce Morin, Olivier Le Meur, Christine Guillemot, Vincent Jantet and Josselin Gautier.

Part 3. Standards and Compression of 3D Video
10. Multiview Video Coding (MVC), Benjamin Battin, Philippe Vautrot, Marco Cagnazzo and Frédéric Dufaux.
11. 3D Mesh Compression, Florent Dupont, Guillaume Lavoué and Marc Antonini.
12. Coding Methods for Depth Videos, Elie Gabriel Mora, Joël Jung, Béatrice Pesquet-Popescu and Marco Cagnazzo.
13. Stereoscopic Watermarking, Mihai Mitrea, Afef Chammem and Françoise Prêteux.

Part 4. Rendering and 3D Display
14. HD 3DTV and Autostereoscopy, Venceslas Biri and Laurent Lucas.
15. Augmented and/or Mixed Reality, Gilles Simon and Marie-Odile Berger.
16. Visual Comfort and Fatigue in Stereoscopy, Matthieu Urvoy, Marcus Barkowsky, Jing Li and Patrick Le Callet.
17. 2D-3D Conversion, David Grogna, Antoine Lejeune and Benoît Michel.

Part 5. Implementation and Outlets
18. 3D Model Retrieval, Jean-Philippe Vandeborre, Hedi Tabia and Mohamed Daoudi.
19. 3D HDR Images and Videos: Acquisition and Restitution, Jennifer Bonnard, Gilles Valette, Céline Loscos and Jean-Michel Nourrit.
20. 3D Visualization for Life Sciences, Aassif Benassarou, Sylvia Piotin, Manuel Dauchez and Dimitri Papathanassiou.
21. 3D Reconstruction of Sport Scenes, Sébastien Mavromatis and Jean Sequeira.
22. Experiments in Live Capture and Transmission of Stereoscopic 3D Video Images, David Grogna and Jacques G. Verly.

About the Authors

Laurent Lucas currently leads the SIC research group and is in charge of the virtual reality platform of the URCA (University of Reims Champagne-Ardenne) in France. His research interests include visualization and cooperation between image processing and computer graphics, particularly in 3DTV, and their applications. Céline Loscos is Professor at the URCA, within the CReSTIC laboratory, and teaches computer science at the University Institute of Technology (IUT) in Champagne-Ardenne, France. Yannick Remion's research interests include dynamic animation, simulation and cooperation between image processing and computer graphics, as well as 3D vision.

Detailed Contents

Chapter 1. Fundamentals. 1.1. Introduction. 1.2. A short history. 1.2.1. Pinhole model. 1.2.2. 3D and binocular vision. 1.2.3. Reconstruction. 1.3. Stereopsis and 3D physiological aspects. 1.4. 3D computer vision. 1.5. Conclusion. 1.6. Bibliography.

Chapter 2. Digital cameras: definitions and principles. 2.1. Introduction. 2.2. Acquiring light: physics fundamentals. 2.2.1. Radiometry and photometry. 2.2.1.1. Scene illumination. 2.2.2. Wavelengths and color spaces. 2.3. Digital cameras. 2.3.1. Optical components. 2.3.1.1. Camera optics. 2.3.1.2. Errors and corrections. 2.3.2. Electronic components. 2.3.2.1. Camera sensors. 2.3.2.2. Digital noise and noise removal algorithms. 2.3.3. Main camera functions and control. 2.3.3.1. Autobracketing. 2.3.4. Image storage formats. 2.4. Camera, human vision and color. 2.4.1. Adapting optics and electronics to human perception. 2.4.2. Color control. 2.4.2.1. Camera response. 2.4.2.2. Color characterization. 2.5. Outperforming. 2.5.1. HDR imaging. 2.5.2. Hyperspectral acquisition. 2.6. Conclusion. 2.7. Bibliography.

Chapter 3. Multiview acquisition systems. 3.1. Introduction: what is a multiview acquisition system? 3.2. Binocular systems. 3.3. Lateral or directional multiview systems. 3.4. Surrounding or omnidirectional multiview systems. 3.5. Hybrid systems: RGBZ and TOF. 3.6. Conclusion. 3.7. Bibliography.

Chapter 4. Shooting and viewing geometry for 3D TV. 4.1. Introduction. 4.2. Output geometry of imaginary relief. 4.2.1. Description. 4.2.2. Possible modelling. 4.3. Capture geometry of imaginary relief. 4.3.1. Type of geometry to be used. 4.3.2. Possible modelling. 4.4. Link between output and capture geometry. 4.4.1. Geometric characterization of the imaginary relief experience. 4.4.2. Distortion models. 4.5. Methodology for specifying multiscopic acquisition. 4.5.1. Controlling relief distortion. 4.5.2. Perfect relief effect. 4.6. Implementation in OpenGL. 4.7. Conclusion. 4.8. Bibliography.

Chapter 5. Geometric and colorimetric calibration and rectification. 5.1. Introduction. 5.2. Camera calibration. 5.2.1. Introduction. 5.2.2. Camera model. 5.2.3. Calibration with a target. 5.2.4. Automatic methods. 5.3. Radial distortion. 5.3.1. Introduction. 5.3.2. When should distortion be corrected? 5.3.3. Radial distortion correction models. 5.4. Image rectification. 5.4.1. Introduction. 5.4.1.1. Problematics. 5.4.2. Image-based methods. 5.4.3. Camera-based methods. 5.4.4. Rectification of more than two images simultaneously. 5.5. Camera colorimetric aspects. 5.5.1. Applied colorimetry. 5.5.2. Camera colorimetric calibration. 5.5.2.1. Estimation of F(k) and S(k). 5.5.2.2. In practice. 5.6. Conclusion. 5.7. Bibliography.

Chapter 6. Feature point detection and image matching. 6.1. Introduction. 6.2. Feature points. 6.2.1. Point detectors. 6.2.1.1. Differential operators: autocorrelation, Harris and Hessian. 6.2.1.2. Scale invariance using multi-scale analysis. 6.2.1.3. Corner intensity model. 6.2.2. Contour and feature point detection. 6.2.2.1. Shape detectors. 6.2.2.2. Curvature and scale space. 6.2.3. Stable regions: IBR, MSER. 6.3. Feature point descriptors. 6.3.1. Scale-invariant feature transform: SIFT. 6.3.2. Gradient Local Orientation Histogram: GLOH. 6.3.3. DAISY descriptor. 6.3.4. Speeded Up Robust Features: SURF. 6.3.5. Multi-scale Oriented PatcheS: MOPS. 6.3.6. Shape context. 6.4. Image matching. 6.4.1. Descriptor matching. 6.4.2. Estimation of the geometric transform: grouping matches. 6.4.2.1. Generalized Hough transform. 6.4.2.2. Graph matching. 6.4.2.3. RANSAC and variants. 6.5. Conclusion.

Chapter 7. Multi- and stereoscopic matching, depth and disparity. 7.1. Introduction. 7.2. Difficulties, primitives and density of stereoscopic matching. 7.2.1. Difficulties. 7.2.2. Primitives and density. 7.3. Simplified geometry and disparity. 7.4. Description of stereoscopic and multiscopic methods. 7.4.1. Local and global matching algorithms. 7.4.2. Principal constraints. 7.4.3. Energy costs. 7.5. Methods with explicit handling of occlusions. 7.5.1. Local stereoscopic method: propagation of seeds. 7.5.1.1. Initialization of seeds. 7.5.1.2. Propagation approach. 7.5.1.3. Regulation by region sounding. 7.5.2. Global multiscopic method. 7.5.2.1. Formulation of multiscopic matching. 7.5.2.2. Energy function and geometric consistency constraint. 7.5.2.3. Global selection and partition construction. 7.5.2.4. Results. 7.6. Conclusion. 7.7. Bibliography.

Chapter 8. Multiview reconstruction. 8.1. Problematics. 8.2. Visual hull-based reconstruction. 8.2.1. Methods for extracting visual hulls. 8.2.2. Reconstruction methods. 8.2.3. Improving volume reconstruction. 8.2.3.1. Voxel coloring. 8.2.3.2. Space carving. 8.3. Industrial implementation. 8.3.1. Hardware acceleration. 8.3.2. Results. 8.4. Temporal structuring of reconstructions. 8.4.1. Extraction of a generic skeleton. 8.4.2. Computation of motion fields. 8.5. Conclusion. 8.6. Bibliography.

Chapter 9. Synthesis of intermediate views. 9.1. Introduction. 9.2. Interpolation/extrapolation view synthesis. 9.2.1. Direct and inverse projections. 9.2.1.1. Equations of direct projection. 9.2.1.2. Direct projection artefacts. 9.2.1.3. Inverse projection. 9.2.2. Limiting view synthesis artefacts. 9.2.2.1. Cracks. 9.2.2.2. Ghost outlines. 9.2.2.3. Open zones. 9.2.3. View interpolation. 9.2.3.1. Fusion of virtual views. 9.2.3.2. Detection and smoothing of interpolation artefacts. 9.2.3.3. Floating textures. 9.2.3.4. View extrapolation. 9.3. Open zone filling. 9.3.1. State of the art in 2D inpainting techniques. 9.3.1.1. Diffusion-based methods. 9.3.1.2. Similarity-based methods. 9.3.2. 3D inpainting. 9.3.2.1. Extension of Criminisi et al. [CRI 04] to the 3D context. 9.3.2.2. Global optimisation-based inpainting. 9.4. Conclusion. 9.5. Bibliography.

Chapter 10. Encoding multiview videos. 10.1. Introduction. 10.2. Compression of stereoscopic videos. 10.2.1. 3D formats. 10.2.1.1. Frame compatible. 10.2.1.2. Mixed resolution stereo. 10.2.1.3. 2D-plus-depth. 10.2.2. Associated coding techniques. 10.2.2.1. Simulcast. 10.2.2.2. MPEG-C and H.264/AVC APS. 10.2.2.3. H.264/MVC Stereo Profile. 10.3. Compression of multiview videos. 10.3.1. 3D formats. 10.3.1.1. MVV and MVD. 10.3.1.2. LDI and LDV. 10.3.1.3. DES. 10.3.2. Associated coding techniques. 10.3.2.1. H.264/MVC Multiview Profile. 10.3.2.2. LDI-dedicated methods. 10.4. Conclusion. 10.5. Bibliography.

Chapter 11. 3D mesh compression. 11.1. Introduction. 11.2. Background on coding: rate-distortion theory. 11.3. Multi-resolution coding of surface meshes. 11.4. Topological and progressive coding. 11.4.1. Mono-resolution compression. 11.4.2. Multi-resolution compression. 11.4.2.1. Connectivity-driven approaches. 11.4.2.2. Geometry-driven approaches. 11.5. Compression of mesh sequences. 11.5.1. Definitions. 11.5.2. Spatio-temporal prediction methods. 11.5.3. Segmentation-based methods. 11.5.4. Transformation-based methods. 11.6. Quality assessment: classical and perceptual metrics. 11.6.1. Classical metrics. 11.6.2. Perceptual metrics. 11.7. Conclusion. 11.8. Bibliography.

Chapter 12. Depth video coding technologies. 12.1. Introduction. 12.2. Analysis of depth map characteristics. 12.3. Depth video coding tools. 12.3.1. Tools that exploit the inherent characteristics of depth maps. 12.3.1.1. Above block-level coding tools. 12.3.1.2. Block-level coding tools. 12.3.2. Tools that exploit the correlation with the associated texture. 12.3.2.1. Prediction mode inheritance/selection. 12.3.2.2. Prediction information inheritance. 12.3.2.3. Spatial transforms. 12.3.3. Tools that optimize depth video coding for virtual view quality. 12.3.3.1. View synthesis optimization. 12.3.3.2. Distortion models. 12.4. Conclusion. 12.5. Bibliography.

Chapter 13. Stereoscopic watermarking. 13.1. Introduction. 13.2. Stereoscopic watermarking constraints. 13.2.1. Theoretical framework. 13.2.2. Properties. 13.2.2.1. Transparency. 13.2.2.2. Robustness. 13.2.2.3. Data payload. 13.2.2.4. Computational cost. 13.2.3. Corpus. 13.2.3.1. Design criteria. 13.2.3.2. Processed corpora. 13.2.4. Conclusion. 13.3. State of the art in stereoscopic watermarking. 13.4. Comparative study. 13.4.1. Transparency. 13.4.1.1. Subjective evaluation. 13.4.1.2. Objective evaluation. 13.4.2. Robustness. 13.4.3. Computational cost. 13.4.4. Conclusion. 13.5. Conclusion and perspectives. 13.6. References.

Chapter 14. 3D HD TV and autostereoscopy. 14.1. Introduction. 14.2. Technological principles. 14.2.1. Stereoscopic devices with glasses. 14.2.2. Autostereoscopic devices. 14.2.3. Optics. 14.2.4. Measurements of autostereoscopic displays. 14.3. Mixing filters. 14.4. Generating and interlacing views. 14.4.1. Virtual view generation. 14.4.2. Interlacing views. 14.5. Future developments. 14.6. Conclusion. 14.7. Bibliography.

Chapter 15. Augmented and/or mixed reality. 15.1. Introduction. 15.2. Real-time pose computation. 15.2.1. Requirements for pose computation. 15.2.2. Model/image feature matching. 15.2.2.1. Iterative tracking methods. 15.2.2.2. Recognition methods. 15.2.2.3. The real-time constraint. 15.2.3. Pose computation: the main PnP algorithms. 15.2.3.1. Reprojection error minimization. 15.2.3.2. Direct methods. 15.2.4. Pose computation and planar surfaces. 15.3. Model acquisition. 15.3.1. Offline modeling. 15.3.2. Online modeling. 15.4. Conclusion. 15.5. Bibliography.

Chapter 16. Visual comfort and visual fatigue in stereoscopic restitution. 16.1. Introduction. 16.2. Visual comfort and fatigue: definition and evaluation. 16.2.1. Visual fatigue. 16.2.2. Visual comfort and discomfort. 16.2.3. Assessment and evaluation of fatigue and discomfort. 16.3. Symptoms and signs of fatigue and discomfort. 16.3.1. Ocular and oculomotor fatigue. 16.3.2. Cognitive fatigue. 16.3.3. Symptoms and signs of discomfort. 16.4. Sources of fatigue and discomfort. 16.4.1. Ocular constraints. 16.4.2. Cognitive constraints. 16.5. Application to 3D displays and contents. 16.5.1. Comfort zone. 16.5.2. Restitution defects. 16.5.3. Accommodation and blur. 16.5.4. Visual attention. 16.5.5. Null or erroneous motion parallax. 16.5.6. Exposure duration and training effects. 16.6. Predicting visual fatigue and discomfort: emerging models. 16.7. Conclusion. 16.8. Bibliography.

Chapter 17. 2D to 3D conversion. 17.1. Introduction. 17.2. 2D-3D conversion workflow. 17.3. Content preparation for conversion. 17.3.1. Depth script. 17.3.2. The advantage of video over still images. 17.3.3. The decoy of automatic conversion. 17.3.4. Special cases of automatic conversion. 17.3.5. Optimal content for 2D-3D conversion. 17.4. Conversion steps. 17.4.1. Segmentation step. 17.4.2. Depth map computation and propagation. 17.4.3. Missing image generation. 17.5. 3D-3D conversion. 17.6. Conclusion. 17.7. Bibliography.

Chapter 18. 3D model retrieval. 18.1. Introduction. 18.2. General principles of shape retrieval. 18.3. Global 3D shape descriptors. 18.3.1. Shape descriptor histogram. 18.3.2. Spherical harmonics. 18.4. 2D view-based methods. 18.5. Local 3D shape descriptors. 18.5.1. 3D shape spectrum descriptor. 18.5.2. 3D shape context. 18.5.3. Spin images. 18.5.4. Heat kernel signature. 18.6. 3D shape similarities. 18.6.1. Reeb graphs. 18.6.2. Bag-of-Words. 18.7. 3D shape retrieval in 3D videos. 18.7.1. Action recognition in 3D videos. 18.7.2. Facial expression recognition in 3D videos. 18.8. Performance evaluation of shape retrieval methods. 18.8.1. Statistical tools for evaluation. 18.8.2. Benchmarks. 18.9. Applications. 18.9.1. Browsing a collection of 3D models. 18.9.2. Modeling by example. 18.9.3. Decision aid. 18.9.4. 3D face recognition. 18.10. Conclusion. 18.11. References.

Chapter 19. 3D HDR images and videos: acquisition and restitution. 19.1. Introduction. 19.2. HDR and 3D acquisition. 19.2.1. Subspace 1D: HDR images. 19.2.2. Subspace 2D: HDR videos. 19.2.3. Subspace 2D: 3D HDR images. 19.2.3.1. Stereo matching for HDR reconstruction. 19.2.3.2. Discussion on color data consistency. 19.2.4. Extension to the whole space: 3D HDR videos. 19.3. 3D HDR rendering. 19.3.1. Rendering on a 3D-dedicated display. 19.3.2. Rendering on an HDR-dedicated display. 19.4. Conclusion. 19.5. Bibliography.

Chapter 20. 3D TV visualization for life sciences. 20.1. Introduction. 20.2. Scientific visualization. 20.2.1. 3D construction. 20.2.2. Interactivity. 20.2.3. 3D visualization. 20.3. Medical imaging. 20.3.1. Volume visualization. 20.3.2. 3D medical imaging. 20.3.2.1. Teaching. 20.3.2.2. Diagnosis. 20.3.2.3. Therapy. 20.4. Molecular modeling. 20.4.1. Classical modes of visualization. 20.4.2. Molecular modeling in relief. 20.5. Conclusion. 20.6. Bibliography.

Chapter 21. 3D reconstruction of sport scenes. 21.1. Introduction. 21.2. Automatic selection of the region of interest. 21.2.1. Role and characteristics of the region of interest. 21.2.2. Color space segmentation. 21.2.3. Spatial consistency. 21.3. Primitive extraction using the Hough transform. 21.3.1. Ellipsoid segment detection. 21.4. Primitive/model matching. 21.4.1. Line beams. 21.5. Conclusion. 21.6. Bibliography.

Chapter 22. Experimental live retransmissions in stereoscopic 3D (S-3D). 22.1. Introduction. 22.2. Show retransmissions. 22.3. Surgery retransmissions. 22.4. Steadicam magazine retransmissions. 22.5. Transatlantic video-presentation retransmission. 22.6. Bicycle competition retransmissions. 22.7. Conclusion. 22.8. Bibliography.

  • ISBN: 978-1-84821-507-8
  • Publisher: ISTE Ltd.
  • Binding: Hardcover
  • Pages: 326
  • Publication date: 25/10/2013
  • Number of volumes: 1
  • Language: English