Hi! I’m an Applied Research Scientist at Apple. I work primarily in the field of 3D scene understanding, using deep learning methods. My previous research addressed the scarcity of manually labelled 3D data through alternative learning approaches, resulting in several publications developing novel weakly and self-supervised learning pipelines for image, point cloud and geometric mesh data. Prior to Apple, I was a Senior Research Scientist at Fujitsu Research of Europe.

I completed a Ph.D. in 3D computer vision at UCL, London, where I was supervised by Dr. Jan Boehm and collaborated closely with Prof. Tobias Ritschel. I spent the summer of 2021 as an intern in Adobe's Creative Intelligence Lab, London, where I worked on geometrically-driven single-image relighting, supervised by Dr. Julien Philip.

Download CV



OutCast: Outdoor Single Image Relighting with Cast Shadows

David Griffiths, Tobias Ritschel, Julien Philip

Eurographics 2022

We address the problem of single-image relighting. Our work shows that monocular depth estimators can provide sufficient geometry when combined with our novel 3D shadow map prediction module.

Curiosity-driven 3D Object Detection without Labels

David Griffiths, Jan Boehm, Tobias Ritschel

International Conference on 3D Vision (3DV) 2021

A novel method for self-supervised monocular 3D object detection, achieved through differentiable rendering and a GAN-like critic loss.

Semantic Segmentation of Terrestrial LIDAR Data Using Co-Registered RGB Data

Erick Sanchez, David Griffiths, Jan Boehm

Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.

A pipeline demonstrating that Terrestrial Laser Scanning (TLS) 3D data can be automatically labelled using off-the-shelf 2D semantic segmentation networks. With only a simple panoramic image projection, strong results can be achieved with no additional training.
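The core of such a projection can be sketched in a few lines. This is a minimal illustration, not the paper's code: the function name and the equirectangular mapping are assumptions, standing in for however the co-registration is actually performed.

```python
import numpy as np

def project_labels(points, label_pano):
    """Transfer 2D panoramic segmentation labels to 3D TLS points via
    a simple equirectangular (spherical) projection.

    points:     (N, 3) xyz coordinates in the scanner's frame
    label_pano: (H, W) integer class labels predicted by a 2D network
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)        # horizontal angle in [-pi, pi]
    elevation = np.arcsin(z / r)      # vertical angle in [-pi/2, pi/2]

    # Map angles to panorama pixel coordinates and sample the labels.
    h, w = label_pano.shape
    u = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (h - 1)).astype(int)
    return label_pano[v, u]           # (N,) per-point class labels
```

A point straight ahead of the scanner (along +x, level with it) lands in the centre of the panorama, so it inherits the centre pixel's label; every 3D point is labelled by a single lookup, with no 3D training involved.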

SynthCity: A Large Scale Synthetic Point Cloud

David Griffiths, Jan Boehm

arXiv preprint

We release a synthetic Mobile Laser Scanning (MLS) point cloud named SynthCity. Every point carries a class and instance label, along with colour, return intensity, end-of-line indicator and time.

Weighted point cloud augmentation for neural network training data class-imbalance

David Griffiths, Jan Boehm

Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.

A key issue when training deep neural networks on outdoor point clouds is the inevitably large class imbalance. For example, a typical street scene contains orders of magnitude more ground points than street furniture. We develop a novel solution that applies weighted augmentation to reduce this class imbalance.
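The idea can be sketched as follows. This is a minimal illustration under assumed details, not the paper's exact scheme: the inverse-frequency weighting, the jitter augmentation and the function name are all stand-ins for whatever the published pipeline actually uses.

```python
import numpy as np

def weighted_augment(points, labels, rng=None):
    """Augment a labelled point cloud so that rare classes are duplicated
    (with small Gaussian jitter) more often than common ones.

    points: (N, 3) xyz coordinates, labels: (N,) integer class ids.
    Returns the original cloud plus jittered copies of rare-class points.
    """
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(labels, return_counts=True)

    # Inverse-frequency weight in [0, 1): the rarer the class,
    # the higher the probability that its points are duplicated.
    weights = 1.0 - counts / counts.sum()
    per_point_w = weights[np.searchsorted(classes, labels)]

    keep = rng.random(len(points)) < per_point_w
    extra = points[keep] + rng.normal(scale=0.01, size=(keep.sum(), 3))
    return (np.vstack([points, extra]),
            np.concatenate([labels, labels[keep]]))
```

On a cloud with 100 ground points and 10 furniture points, furniture points are duplicated with probability ~0.91 versus ~0.09 for ground, so the augmented cloud's class distribution shifts towards the rare class without discarding any original data.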