Deep Learning, Generative Modeling, Computer Vision, UI Design, Robotics, Architectural Design

Our aim has been to create a tactile design tool with which a user can generate synthetic landscapes from a sand-crafted topography. To this end, we integrate physical depth sensing with a neural network trained to infer landscape imagery from real topography. The outcome of this project demonstrates the novelty and capability of neural networks in a design process. Our approach challenges the control-oriented paradigm of many computerized architectural applications and instead proposes a dialogue of development between the machine and the designer.
The project consists of two parts: training the autoencoder, and programming the interface loop, in which an Xbox 360 Kinect scans topography data from a sandbox and feeds it to the trained autoencoder. Training the autoencoder required data that could be readily obtained through the Google Maps API: corresponding heightmap data and satellite imagery of the Baden-Württemberg region were extracted per geo-coordinate-specified square, each of size 5 km × 5 km.
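The tiling of the training region can be sketched as follows. This is a minimal illustration, not the project's actual extraction script: the bounding box for Baden-Württemberg is approximate, and the tile spacing uses the flat-Earth approximation of about 111.32 km per degree of latitude, with longitude spacing scaled by the cosine of the box's central latitude.

```python
import math

def tile_coordinates(lat_min, lat_max, lon_min, lon_max, tile_km=5.0):
    """Generate south-west corners of square tiles covering a bounding box.

    Approximation: 1 degree of latitude ~ 111.32 km; longitude step is
    widened by 1/cos(latitude) at the centre of the box.
    """
    km_per_deg_lat = 111.32
    lat_step = tile_km / km_per_deg_lat
    mid_lat = 0.5 * (lat_min + lat_max)
    lon_step = tile_km / (km_per_deg_lat * math.cos(math.radians(mid_lat)))

    coords = []
    lat = lat_min
    while lat < lat_max:
        lon = lon_min
        while lon < lon_max:
            coords.append((round(lat, 6), round(lon, 6)))
            lon += lon_step
        lat += lat_step
    return coords

# Rough bounding box of Baden-Württemberg (assumed for illustration)
tiles = tile_coordinates(47.5, 49.8, 7.5, 10.5)
```

Each returned coordinate pair would then parameterize one request for a heightmap tile and its matching satellite image.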
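The sensing half of the interface loop amounts to turning a raw Kinect depth frame into a heightmap the autoencoder can consume. The sketch below illustrates this preprocessing step only; the depth band (`z_near`, `z_far`), the 128×128 model resolution, and the block-averaging downsampling are illustrative assumptions, not the project's recorded parameters.

```python
import numpy as np

def depth_to_heightmap(depth_mm, z_near=800.0, z_far=1200.0, out_size=128):
    """Convert a raw depth frame (millimetres) into a normalized heightmap.

    Sand closer to the sensor (smaller depth) becomes higher terrain:
    depths are clipped to the [z_near, z_far] band of the sandbox and
    inverted/scaled to [0, 1]. The frame is then downsampled to
    out_size x out_size by block averaging (the frame is cropped so its
    sides are divisible by out_size).
    """
    d = np.clip(depth_mm.astype(np.float32), z_near, z_far)
    height = (z_far - d) / (z_far - z_near)  # invert: near -> high
    h, w = height.shape
    crop = height[: (h // out_size) * out_size, : (w // out_size) * out_size]
    bh, bw = crop.shape[0] // out_size, crop.shape[1] // out_size
    blocks = crop.reshape(out_size, bh, out_size, bw)
    return blocks.mean(axis=(1, 3))

# Simulated square crop of a Kinect depth frame over the sandbox
frame = np.random.uniform(700.0, 1300.0, size=(512, 512))
heightmap = depth_to_heightmap(frame)
```

In the running loop, each such heightmap would be passed through the trained autoencoder and the resulting landscape image projected back for the designer.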


