Spatial Computing: Creating Intersubjectivity Between Humans & Machines

Matterless
5 min read · May 30, 2023


Between all of us lies a network of invisible threads

Imagine a reality where humans and machines engage in a profound dance of understanding, seamlessly communicating and collaborating in real time. The interplay between carbon- and silicon-based participants is no longer the stuff of science fiction; it is a new type of intersubjectivity made possible by the transformative force of spatial computing.

Spatial Computing: AI’s Perception of Environments

Spatial computing blends the physical world with the virtual world, enabling machines to understand and interact with the physical space around us.

Imagine wearing special glasses or using a smartphone to project digital images onto the real world. With spatial computing, you can see and interact with virtual, matterless objects and information as if they were part of your physical surroundings, and digital becomes real.

An intersubjective reality transcending human senses

For example, you could battle your friends on the living room floor, each controlling a digital spacecraft shooting lasers at the other. You could see digital information displayed on walls or objects in your environment, like a supermarket, a shopping mall, or a museum.

Spatial computing allows us to interact with digital information and experiences naturally and intuitively, layering additional context onto the physical space around us.

And it is precisely this naturalness and immersiveness that will help us unlock the true potential of AI, AR, and robotics. Spatial computing is the conduit through which machines can perceive and act within our world, forever changing how we interact with technology.

Spatial computing could enable AI to better perceive, understand, and interact with the physical environment. This would rely on a combination of computer vision, sensing, and connectivity, which together build an image of a physical space, track your movements, and overlay digital content seamlessly onto the material world.
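
To make that loop concrete, here is a minimal sketch of the "track and overlay" step in Python with OpenCV. It matches visual features between a known reference surface (say, a poster on a wall) and a camera frame, estimates how that surface sits in the current view, and warps a piece of digital content onto it. The file names and parameter values are placeholders for illustration, not part of any particular product.

```python
# A rough sketch of "track and overlay": find a known physical surface
# in a camera frame and attach digital content to it.
# All file names below are placeholders.
import cv2
import numpy as np

reference = cv2.imread("reference_surface.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg")
content = cv2.imread("digital_content.png")

# Detect and describe distinctive visual features on the reference
# surface and in the camera frame.
orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frame, des_frame = orb.detectAndCompute(
    cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)

# Match the features and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frame),
                 key=lambda m: m.distance)[:50]
src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate the homography that maps the reference surface into the
# current camera view, then warp the digital content onto it.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
content = cv2.resize(content, (reference.shape[1], reference.shape[0]))
h, w = frame.shape[:2]
warped = cv2.warpPerspective(content, H, (w, h))

# Composite the warped content over the frame wherever it has pixels.
mask = warped.any(axis=2)
frame[mask] = warped[mask]
cv2.imwrite("augmented_frame.jpg", frame)
```

A real spatial computing system repeats this continuously, frame by frame and in three dimensions, but the principle is the same: perceive the space first, then anchor digital content to it.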

Decoding the Invisible: How Computer Vision Parses the World

Computer vision extracts vivid insights from complex visual data

Computer vision (CV) helps machines “see” and make sense of the visual world, enabling computers to understand and interpret visual information much as humans do with their eyes. By delving into the realm of pixels, colors, and shapes, it can analyze images or videos and extract essential details from visual data.

First, the computer acquires visual data from sources like cameras or existing images. Then it processes the data to improve quality and remove unwanted noise. After that, it looks for specific features in the images, such as edges, colors, shapes, or movements. These features help the computer recognize objects, understand scenes, and classify images into different categories.
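
As an illustration, the sketch below runs a stripped-down version of that pipeline with OpenCV: acquire an image, suppress noise, extract edge features, and group them into candidate shapes that later stages could classify. The input file name is a placeholder.

```python
# A stripped-down computer vision pipeline: acquire, denoise,
# extract edge features, and group them into candidate shapes.
# "scene.jpg" is a placeholder input image.
import cv2

# 1. Acquire visual data (here, from an existing image file).
image = cv2.imread("scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 2. Process the data to suppress unwanted noise.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Look for specific features -- edges, in this case.
edges = cv2.Canny(blurred, 50, 150)

# 4. Group the edges into contours: candidate object shapes that a
#    later stage could recognize or classify.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"Found {len(contours)} candidate shapes in the scene")
```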

Computer vision can do more than identify objects in still pictures. It can track moving objects across video frames and even help robots or self-driving cars perceive and navigate their surroundings.
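
Tracking, too, can be sketched in a few lines, assuming two consecutive frames saved as placeholder files: pick distinctive points in one frame and estimate where they moved in the next using Lucas-Kanade optical flow.

```python
# Estimating motion between two consecutive frames with Lucas-Kanade
# optical flow. Frame file names are placeholders.
import cv2
import numpy as np

prev_gray = cv2.cvtColor(cv2.imread("frame_000.jpg"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("frame_001.jpg"), cv2.COLOR_BGR2GRAY)

# Pick distinctive corner points worth following.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)

# Estimate where each point moved in the next frame.
new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                 points, None)

# Keep only the points that were successfully tracked.
ok = status.flatten() == 1
motion = new_points[ok] - points[ok]
print(f"Tracked {int(status.sum())} points, "
      f"mean motion {np.linalg.norm(motion, axis=-1).mean():.2f} px")
```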

Connectivity and Sensing: Two Sides of the Spatial Computing Coin

For machines to become intersubjective with us, they must first connect and sense each other and the world.

Computer vision lets machines make sense of the world visually, but there also has to be a layer that enables machines to sense and connect with one another. Connectivity lets devices interact with each other and the broader ecosystem over Wi-Fi, Bluetooth, and cellular networks. Sensing is the capability of devices to gather data from their environment or their users.
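
A toy example of that division of labor, with a hypothetical device ID, sensor reading, and message format: one function “senses” a measurement from the environment, and another “connects” by broadcasting it to nearby devices on the local network.

```python
# A toy split between sensing and connectivity. The device ID, sensor
# name, and message format are hypothetical.
import json
import socket
import time


def sense_ambient_light() -> float:
    """Stand-in for a real sensor driver; returns a lux reading."""
    return 142.0  # fixed value in place of actual hardware access


def broadcast_reading(value: float, port: int = 5005) -> None:
    """Share the reading with nearby devices over the local network."""
    message = json.dumps({
        "device": "demo-sensor-01",
        "sensor": "ambient_light_lux",
        "value": value,
        "timestamp": time.time(),
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, ("255.255.255.255", port))


broadcast_reading(sense_ambient_light())
```

In a real deployment the transport might be Bluetooth, a mesh protocol, or a cloud relay rather than a UDP broadcast, but the split holds: sensing gathers the data, connectivity shares it.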

One company recognized the potential of spatial computing long ago. A pioneer in this realm, Apple has underscored the significance of both fields by placing them under the purview of engineer Ron Huang, its VP for Connectivity and Sensing.

Inter-device connectivity allows users to share files wirelessly through features like AirDrop and synchronize data using iCloud, while the HomeKit framework lets them control compatible smart home devices from their Apple devices. Biometric sensors such as Touch ID and Face ID provide secure authentication. Environmental sensing, such as the LiDAR scanner’s pulsed laser depth measurements and the motion sensors, tailors the device experience to the surrounding environment.

The world is connected in more ways than we can intuit

By pairing connectivity with sensing, Apple aims to create an all-encompassing ecosystem where machines seamlessly interact with the world, empowering users with intuitive and context-aware experiences.

The Fabric of Shared Perception

In the social sciences, intersubjectivity refers to the shared understanding and meaning that emerge through social interaction between individuals: the mutual recognition and interpretation of subjective experiences, beliefs, and intentions, leading to a collective understanding of the social world.

Redefining intersubjectivity through a spatial computing lens, we can consider it the collaborative construction of shared virtual experiences and perceptions within a mixed-reality environment. Spatial computing facilitates the creation of immersive and interactive digital spaces where individuals can interact and communicate with virtual objects and environments. In this context, intersubjectivity extends beyond traditional social interactions to encompass the shared interpretation and co-creation of virtual realities, bridging the physical and virtual realms.

How immense is the potential for human-computer interaction?

The inherent link between AI, AR, and robotics lies at the heart of this paradigm shift. These three transformative technologies are no longer independent entities. Very soon, they will be interdependent collaborators harnessing the potential of spatial computing in ways that will, from today’s perspective, feel like magic.

Through collaborative interactions, individuals can collectively shape and give meaning to their digital surroundings, influencing and being influenced by the experiences and perspectives of others. This redefinition highlights how spatial computing technology enables the co-construction of shared virtual realities, fostering new intersubjective understanding and social interaction that transcend physical limitations.

As this field progresses, the potential applications of intersubjectivity in human-computer interaction are vast. From virtual assistants that adapt to our moods and personalize their responses to intelligent systems that engage in more natural and empathetic conversations, the boundaries between human and machine understanding are gradually blurring.

Intersubjectivity is key to creating more meaningful and satisfying interactions with technology, enhancing our everyday experiences, and fostering a deeper connection between humans and machines. By bridging the gap between our subjective world and the realm of digital intelligence, we are shaping a future where computers truly understand us in ways that were once reserved for the realm of science fiction.

Soon, it will be real.

— Damir, Auki Labs Head of Communications, Matterless co-founder


Matterless

Matterless is building digital toys and companions in shared augmented reality. Play with magic.