Visuospatial Cities: Interdependencies between Visual Perception and Urban Environment


DC I/O 2020 proceedings paper by Piyush Prajapati.


Abstract

“... the visual modes of representation have been replaced by sensorial-neuronal modes of simulation ...” - The Posthuman [Braidotti, R., 2013]


There has been evident dissatisfaction with our understanding of visual environments over the past few decades. For years, Rudolf Arnheim, a theorist and perceptual psychologist, argued persuasively that educators have failed to recognise visual thinking as the most powerful aspect of human cognition. This has left visuospatial potential unexplored in the fields of architecture and urban design. The research examines methodologies from past and present decades in which human and digital modes of visual perception are quantified. The study highlights an important missing link between the spatial environment and visual perception, one that arises from differences in testing platforms. With this paper, I aim to simulate the quantitative information of the built environment and of visual perception, to produce qualitative knowledge as a whole.

Psychological understanding, the mind and the world of information processing are not confined by the skin. Cyberception, the understanding of visibility through machines, has opened up new horizons for learning the in-depth relationships between visual perception and its quality in spaces. Case studies such as On Broadway and the photo trials by Lev Manovich, together with YOLO, a convolutional neural network algorithm, illustrate a similar methodology. They are studied not only as tools to quantify and qualify visibility, but also as a way to identify the knowledge gap between visual representation and its simulation.
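To make the role of a detector such as YOLO concrete, the minimal sketch below tallies the object classes a pretrained model sees in a single street image, which is one way visibility can be quantified. The `ultralytics` package, the `yolov8n.pt` weights and the sample filename are illustrative assumptions, not the paper's actual toolchain.

```python
# Minimal sketch: quantifying visibility by counting detected
# street-level object classes with a pretrained YOLO model.
from collections import Counter
from ultralytics import YOLO  # assumed package, not the paper's toolchain

model = YOLO("yolov8n.pt")  # small model pretrained on COCO (assumption)

def visible_object_counts(image_path: str) -> Counter:
    """Run detection on one street image and tally visible object classes."""
    result = model(image_path)[0]
    labels = [result.names[int(c)] for c in result.boxes.cls]
    return Counter(labels)

# Hypothetical usage on one street-view frame:
print(visible_object_counts("street_view_sample.jpg"))
# e.g. Counter({'person': 7, 'car': 4, 'traffic light': 2})
```

Per-class counts like these can then be aggregated per location to give a crude, comparable measure of what is visible where.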

The study then emphasises human navigation as a bridging relationship. Cognising it reveals physical and perceptual characteristics, which are then unified with the urban environment using deep learning methods. The visuospatial intelligence thus perceived is the cumulative result of human interaction, visual preferences and spatial built forms. A design study has helped turn the research into an active design project. The project analyses visual perception by fetching resources from Google Street View and social media platforms, and simulates visual navigation using GPS impressions and humanoid agents. The resulting prototype is a qualitative output encompassing the characteristics of the simulated built environment and of visuospatial perception.
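As an illustration of the data-gathering step, the hypothetical sketch below fetches street imagery along a short GPS trace via the Google Street View Static API. The coordinates, API key and output filenames are placeholders; this is a sketch of the general approach, not the project's published pipeline.

```python
# Hypothetical sketch: collecting street imagery along a GPS trace
# via the Google Street View Static API.
import requests

API_URL = "https://maps.googleapis.com/maps/api/streetview"
API_KEY = "YOUR_API_KEY"  # placeholder

gps_trace = [(51.5074, -0.1278), (51.5080, -0.1269)]  # sample lat/lon pairs

for i, (lat, lon) in enumerate(gps_trace):
    params = {
        "size": "640x640",            # image dimensions in pixels
        "location": f"{lat},{lon}",   # one point on the navigation trace
        "heading": 0,                 # camera bearing in degrees
        "pitch": 0,                   # camera tilt
        "fov": 90,                    # field of view
        "key": API_KEY,
    }
    response = requests.get(API_URL, params=params)
    with open(f"streetview_{i}.jpg", "wb") as f:
        f.write(response.content)
```

Images gathered this way can be fed to the detection step sketched above, linking each perception measurement back to a point on the navigated route.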

Keywords

Visuospatial, Visual Perception, Visual Quantification, Cyberception, Deep Learning, Neural Networks, Machine Learning, Human Navigation, Big Data.

Keyphrases

visual perception (210), neural network (150), human navigation (130), deep learning (120), visual characteristic (120), social media (100), social media image (95), urban layer (90), built environment (90), visuospatial intelligence (90), google street view (79), visual understanding (70), visual connectivity map (63), visuospatial understanding (60), street view (60), spatial environment (60), visuospatial city (50), design output (50), built form (50), urban environment (50), visual quality (50), visual preference (50), activation function (50), hidden layer (50), human vision (50), spatial intelligence (50), deep learning method (47), spatial perception measurement (47), building height map (47), google street (45)

Topics

Adaptive ML, DigitalOps, Architecture, Artificial Intelligence in Design, Assisted Design Decision Making, Calculation and Design Analysis, Computational Creativity, Data Visualization and Analysis for Design, Design Cognition, Responsive Computer-Aided Design, Urban Design.

Reference

DOI: https://doi.org/10.47330/DCIO.2020.WLUY7415

Video Presentation: https://youtu.be/Zuo9m8IZNuc

Full text in: Maciel, A. (Ed.), 2020. Design Computation Input/Output 2020, 1st ed. Design Computation, London, UK. ISBN: 978-1-83812-940-8, DOI:10.47330/DCIO.2020.QPRF9890