Visuospatial Cities: Interdependencies between Visual Perception and Urban Environment
by PIYUSH PRAJAPATI.
“... the visual modes of representation have been replaced by sensorial-neuronal modes of simulation ...”
- The Posthuman [Braidotti, R., 2013]
There has been an evident dissatisfaction with the way visual environments are understood over the past few decades. For years, Rudolf Arnheim, a theorist and perceptual psychologist, argued persuasively that educators have failed to recognize visual thinking as one of the most powerful aspects of human cognition. This neglect has left visuospatial potential unexplored in the fields of architecture and urban design. The research examines methodologies from the present and past decades in which human and digital modes of visual perception are quantified. The study highlights an important missing link between the spatial environment and visual perception, arising from differences in testing platforms. With this paper, I aim to simulate the quantitative information of the built environment and visual perception together, to produce qualitative knowledge as a whole.
Psychologically, the mind and the world of information processing are not confined by the skin. Cyberception, the understanding of visibility through machines, has opened up new horizons for learning the in-depth relationships between visual perception and the quality of spaces. Case studies such as On Broadway, the photo trials by L. Manovich, and YOLO, a convolutional neural network algorithm, illustrate a similar methodology. These are studied not only as tools to quantify and qualify visibility, but also to identify the knowledge gap between visual representation and its simulation.
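As an illustration of how such a quantification step might look in practice, the sketch below counts the object classes a pre-trained YOLO detector finds in a single street photograph. The ultralytics package, the model weights and the image path are assumptions made for the example, not details drawn from the case studies above.

# Minimal sketch: quantifying which object classes are visible in one
# street-level photograph using a pre-trained YOLO detector.
# The package, weights file and image path are illustrative assumptions.
from collections import Counter
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # small pre-trained COCO model (assumption)
results = model("street_scene.jpg")    # hypothetical street-level photograph

# Tally how often each class (person, car, traffic light, ...) appears,
# giving a simple quantitative description of what is visible in the scene.
names = results[0].names
counts = Counter(names[int(box.cls)] for box in results[0].boxes)
print(counts)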
The study then emphasises human navigation as a bridging relationship. Cognising this exposes physical and perceptual characteristics, which are then unified with the urban environment using deep learning methods. The visuospatial intelligence thus perceived is the cumulative result of human interaction, visual preferences and spatial built forms. The design study has helped turn the research into an active design project. The project analyses visual perception by fetching resources from Google Street Maps and social media platforms, and by simulating visual navigation using GPS impressions and humanoid agents. The resulting prototype is the qualitative outcome, encompassing the characteristics of the simulated built environment and visuospatial perception.
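A minimal sketch of the imagery-gathering step is given below, assuming the street-level source is the Google Street View Static API; the API key, sample coordinates and output filenames are placeholders, and the sampling of four headings per GPS point stands in for the simulated gaze of a navigating agent.

# Minimal sketch: downloading street-level views along a GPS trace so they can
# be fed to a visual-perception model. Endpoint and parameters follow the public
# Google Street View Static API; key, coordinates and filenames are placeholders.
import requests

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def fetch_street_image(lat: float, lng: float, heading: int, api_key: str) -> bytes:
    """Download one street-level view for a GPS point and viewing direction."""
    params = {
        "size": "640x640",
        "location": f"{lat},{lng}",
        "heading": heading,   # viewing direction of the simulated agent
        "fov": 90,
        "key": api_key,
    }
    response = requests.get(STREETVIEW_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.content

if __name__ == "__main__":
    trace = [(51.5033, -0.1196), (51.5045, -0.1187)]  # placeholder GPS trace
    for i, (lat, lng) in enumerate(trace):
        for heading in (0, 90, 180, 270):
            image = fetch_street_image(lat, lng, heading, api_key="YOUR_API_KEY")
            with open(f"view_{i}_{heading}.jpg", "wb") as f:
                f.write(image)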
Keywords
Visuospatial, Visual Perception, Visual Quantification, Cyberception, Deep Learning, Neural Networks, Machine Learning, Human Navigation, Big Data.
Topics
Adaptive ML, DigitalOps, Architecture, Artificial Intelligence in Design, Assisted Design Decision Making, Calculation and Design Analysis, Computational Creativity, Data Visualization and Analysis for design, Design Cognition, Responsive computer-aided design, Urban Design.
Reference
Maciel, A. (Ed.), 2020. Design Computation Input/Output 2020, 1st ed. Design Computation, London, UK. https://www.designcomputation.org/dcio2020