Spatial Computing: A Primer
Setting the terms of a new world
One of the most important tasks in shaping a new pattern of computing is to define its parameters. With so many new technologies in development, it’s vital to establish a shared vocabulary. With solid guideposts in place, we can dive deeper into how these technologies are being used in the real world — and how far they can take us into the future.
Spatial computing is a pattern of computing in which a system has spatial awareness of a physical environment, the things within it, and the interactions and behaviors that occur there. The system uses sensors to gain this spatial acuity, which informs its processing and output; that output may in turn feed back into the physical environment or influence the behaviors of the things in it. The concept originates from the 2003 Ph.D. thesis of Simon Greenwold at MIT’s Program in Media Arts and Sciences.
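The sense → process → feedback loop described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real spatial computing API: the `Reading` and `SpatialSystem` names, the fan/person scenario, and the 1-meter threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One hypothetical sensor observation of a thing in the environment."""
    thing: str
    x: float  # position in meters
    y: float

class SpatialSystem:
    """Toy sketch of the spatial computing loop: sense, model, act."""

    def __init__(self):
        # The system's spatial awareness: thing name -> last known position.
        self.model = {}

    def sense(self, readings):
        """Ingest sensor data to build and refresh spatial awareness."""
        for r in readings:
            self.model[r.thing] = (r.x, r.y)

    def act(self):
        """Produce output that feeds back into the physical environment,
        e.g. switch on a fan when a person is within a meter of it."""
        actions = []
        fan = self.model.get("fan")
        person = self.model.get("person")
        if fan and person:
            dist = ((fan[0] - person[0]) ** 2 + (fan[1] - person[1]) ** 2) ** 0.5
            if dist < 1.0:
                actions.append("fan_on")
        return actions

system = SpatialSystem()
system.sense([Reading("fan", 0.0, 0.0), Reading("person", 0.5, 0.2)])
print(system.act())  # the person is within a meter of the fan
```

The point of the sketch is the shape of the loop, not the specifics: sensors populate a spatial model, and the system's output (here, an action list) closes the loop by changing the physical environment it observes.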
Originating in the labs at Xerox PARC in the 1980s, ambient or ubiquitous computing is a concept that challenges a computer’s form and purpose. The idea was born largely from a desire to break a computer’s dependency on a screen and, as a result, be less disruptive and more integrated into the physical environment. Output is less pixel-based and engages other human senses such as sound, smell, taste, or touch (haptics). Sight-based output is still common, but it might express as a blinking light or a mechanical action, like triggering a fan to blow. With ambient computing, anything and everything can be a computer, and these computers can work together to create an emergent computing experience.
Augmented Reality (AR) is the overlay of digital content on the physical world. Users (humans) view this overlay through a device or medium such as a smartphone or other mobile device, a head-mounted display (HMD), or projected light. AR’s origins trace back to Ivan Sutherland, who created the first HMD in the late 1960s, but it was Steve Mann (one of the original MIT cyborgs) who, in the early 1980s, made the most significant contributions to AR as it exists today.
Mixed Reality (MR) expands on Augmented Reality. The term originates from a 1994 paper by Paul Milgram et al., where MR describes a range of experiences that mix virtual and physical environments. In the last decade, however, the term has narrowed to describe experiences in the middle of Milgram’s original continuum. In today’s version of mixed reality, digital content is aware of things (humans, animals, objects, etc.) in the physical world and reacts to them, whereas in AR it is not. For example, digital content might behave according to real-world physics or “walk” around a physical object so that it doesn’t “bump” into it. MR often incorporates Spatial Computing to make digital content feel more integrated into the real-world environment.
Virtual Reality (VR) is where a digital environment is the base or foundation of the experience. In AR and MR, the experience is grounded in the physical world; in VR, all elements of the experience are digitally created. Users interact with the digital environment indirectly, through peripheral devices or sensors. Jaron Lanier is widely considered the pioneer of today’s VR for his work in the 1980s, but the first known use of the term to describe illusory (and non-digital) experiences was by French playwright Antonin Artaud in 1938. The most common representation of VR is a head-mounted device that blocks the user’s sense of their physical environment and focuses them entirely on the digital environment, to create as deep a feeling of immersion as possible.
The Metaverse as a term and concept originates from Neal Stephenson’s sci-fi novel Snow Crash. Inspired by the book’s imagery, technologists and fans are attempting to create the virtual environment where much of the story takes place.
Recently, Matthew Ball authored an in-depth account of the movement to build a real Metaverse. Ball presents a thorough definition of the Metaverse, one likely to become the standard if it hasn’t already.
“A massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.” — Matthew Ball, The Metaverse
However, Ball details extensively throughout his book that there are significant and likely insurmountable obstacles to building a system that satisfies this definition.
Placefulness is a concept that originated here at argodesign. The idea is that these new interaction modes allow the physical space you are in to participate in computing. Digital information is viewed in context next to physical objects, in your physical place, combining the real world with the digital. It has a sense of place and a communion with the physical environment.
Placefulness removes multiple steps and adds an abundance of intuition. It allows for digital workflows with much lower friction than their fully immersive counterparts. It aligns the metaphor and interface of computing with humanity. Most significantly, it makes human-computer interaction more natural (for the human) and promises to expand computing’s ability to amplify our lives.
Digital Twins are digital representations or reproductions of physical objects or systems in which the digital entity has dynamic behaviors driven by real-world data or by simulation in a virtual environment. The fidelity of detail (across characteristics such as form, behavior, intelligence, autonomy, etc.) may capture only a subset of the physical entity. The difference between a Digital Twin and a digital model is that the twin is dynamic, as opposed to the static nature of a model: the twin replicates behavior and states, whereas a model statically recreates the entity’s form or shape.
Digital Twins are used as part of digital visualization or simulation. Once the entity is in digital form, it may stay connected to, or become independent of, the original. In a connected relationship, the digital entity updates its state to mirror the physical entity. In contrast, a disconnected twin independently changes or evolves by simulating the original entity’s behaviors in a virtual environment. The purpose of a Digital Twin is to monitor the physical entity’s state, or to simulate, test, detect, prevent, predict, or optimize its behaviors.
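The connected versus disconnected distinction can be made concrete with a small sketch. This is an illustrative toy, not a real digital twin framework: the `ThermostatTwin` class, its method names, and the cooling model are all assumptions made for the example. A connected twin mirrors live sensor readings; a disconnected twin evolves its state by simulating the entity's behavior instead.

```python
class ThermostatTwin:
    """Toy digital twin of a hypothetical thermostat-monitored room."""

    def __init__(self, temperature: float):
        self.temperature = temperature  # the twin's mirrored/simulated state

    def sync(self, sensor_reading: float):
        # Connected mode: update state to mirror the physical entity.
        self.temperature = sensor_reading

    def simulate_step(self, ambient: float, rate: float = 0.1):
        # Disconnected mode: evolve the state with a simple physical model
        # (a crude Newton's-law-of-cooling step) instead of live data.
        self.temperature += rate * (ambient - self.temperature)

twin = ThermostatTwin(temperature=21.0)
twin.sync(19.5)                       # connected: mirror a live reading
for _ in range(3):
    twin.simulate_step(ambient=15.0)  # disconnected: predict cooling
print(round(twin.temperature, 2))     # roughly 18.28 after three steps
```

The same object supports both relationships: `sync` keeps the twin faithful to its physical counterpart for monitoring, while `simulate_step` lets it run ahead of reality for prediction and testing, which is the dual purpose described above.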
If the original entity is a physical object, its digital form can be created by 3D scanning the object or from an existing digital model, especially if the physical entity was produced using that model as the source. If the original entity is more abstract, like a system or a process acted out or invoked in the physical world, a human manually creates the digital entity’s form.
As new patterns of computing continue to evolve, there will be fidelity and nuance to add to these terms. Establishing these baselines gives us a common shorthand on which to build the new digital world at hand.