Friday, May 10, 2013

Modeling the Game World: Sensory Systems

An often overlooked aspect of game systems, whether they're meant for simulation or entertainment, is the means by which agents or entities perceive the environment or game model.


In this model -- the familiar AI picture of an agent perceiving and acting on an environment -- the agent is both a controlling intelligence and an entity in the environment. Percepts are the information the entity can interpret; from them, the agent evaluates its situation and selects, from among its actuators, how it will change the environment. From a logical point of view, this is just a mapping from percepts to actions -- input to output.
 
While this is a popular abstraction for AI, it doesn't provide us with the whole picture. There are important abstractions within it that allow us to compartmentalize the perception and decision-making process. We can start by making a distinction between Emissions and Sensory Systems. An emission is information left by an entity in the environment that may be perceived by other entities. It could be reflected light, a scent trail, or a sung song. The Sensory System then describes the manner in which another agent acquires the raw data from the emission. This provides the agent with a sensation, or data that is almost ready to be mapped to an actuator.
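Here's a rough Python sketch of that split, just to pin the idea down -- the class names, fields, and the "scent" example are my own placeholders, not a prescription for how the data should actually look:

from dataclasses import dataclass, field

@dataclass
class Emission:
    """Raw information an entity leaves in the game world."""
    kind: str              # e.g. "light", "scent", "sound"
    source_id: int         # the entity that produced it
    position: tuple        # where it was emitted
    intensity: float       # strength at the source

@dataclass
class Sensation:
    """What a sensory system hands to the agent after filtering an emission."""
    kind: str
    intensity: float
    data: dict = field(default_factory=dict)

class SensorySystem:
    """Turns emissions the agent can perceive into sensations; everything else is dropped."""
    accepts: set = set()   # emission kinds this system can pick up

    def sense(self, emission: Emission) -> Sensation | None:
        if emission.kind not in self.accepts:
            return None
        return Sensation(kind=emission.kind, intensity=emission.intensity)

class Olfaction(SensorySystem):
    accepts = {"scent"}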
 
I came up with this abstraction by reverse engineering the logic used in the real world. So to make sense of it all, here's a breakdown of how human sensory systems and sensations work.

The five human Sensory Systems:
Vision - Sense of sight
Taction - Sense of touch
Audition - Sense of sound
Olfaction - Sense of smell
Gustation - Sense of taste

These Sensory Systems detect emissions for the following human Sensations:

Photoreception - Brightness and Color of light. (Vision)
Chemoception - Chemicals. (Olfaction, Gustation)
Nociception - Pain (All)
Electroreception - Electrical signals (All, not trusted by instinctual brain)
Mechanoreception - Physical interaction, including sound (Audition, Taction)
Thermoreception - Temperature (Taction)
Proprioception - Kinesthetic sense (Taction)
Equilibrioception - Balance (Audition)
Magnetoreception - Magnetic fields (Vision, not trusted by instinctual brain)
Chronoception - Time and circadian rhythms (All via Zeitgebers, mainly Vision via daylight)

Note: It's interesting that our pleasure from food is derived from the intersection of chemoception from our olfactory and gustatory systems -- if both the taste and the smell converge, we know it is likely safe to eat, but if they diverge, it may not be, and we find it unpleasant.
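If we wanted that mapping as data rather than prose, a plain lookup table is enough. The strings just mirror the lists above, and the "untrusted" set carries along the instinctual-brain note:

# Which sensory systems can deliver each sensation, straight from the lists above.
SENSATION_TO_SYSTEMS = {
    "photoreception":    {"vision"},
    "chemoception":      {"olfaction", "gustation"},
    "nociception":       {"vision", "taction", "audition", "olfaction", "gustation"},
    "electroreception":  {"vision", "taction", "audition", "olfaction", "gustation"},
    "mechanoreception":  {"audition", "taction"},
    "thermoreception":   {"taction"},
    "proprioception":    {"taction"},
    "equilibrioception": {"audition"},
    "magnetoreception":  {"vision"},
    "chronoception":     {"vision", "taction", "audition", "olfaction", "gustation"},
}

# Sensations the instinctual brain doesn't trust, per the notes above.
UNTRUSTED = {"electroreception", "magnetoreception"}

def systems_for(sensation: str) -> set:
    return SENSATION_TO_SYSTEMS.get(sensation, set())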
 
The Sensory Systems filter the Emissions from the Environment to determine what Sensations the agent experiences. In this way, the Sensory Systems describe the ranges of information that an Agent can receive. After Emissions are filtered, we can think of the information locally in terms of Sensations. Abstractly, it doesn't really matter what Sensory Systems and Emissions we use to describe our game world. Most games rely on a one-to-one mapping based on Line of Sight, but even a simple game can benefit from such an abstraction -- effects that limit line of sight or cause blindness, for example, are managed directly in an entity's sensory systems, so there is always a clear place for this sort of thing to happen.
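As a deliberately simple version of that filtering step: blindness and view distance live inside the vision system, so nothing downstream has to know about them. The dict-shaped emissions and the flat 2D range check are stand-ins for whatever the game actually uses:

import math

class VisionSystem:
    """Filters light emissions into sensations; range limits and blindness live here."""

    def __init__(self, view_distance: float):
        self.view_distance = view_distance
        self.blinded = False

    def sense(self, emission: dict, observer_pos: tuple) -> dict | None:
        if self.blinded or emission["kind"] != "light":
            return None                                   # not perceptible by this system
        dx = emission["position"][0] - observer_pos[0]
        dy = emission["position"][1] - observer_pos[1]
        if math.hypot(dx, dy) > self.view_distance:
            return None                                   # outside line-of-sight range
        return {"kind": "photoreception", "intensity": emission["intensity"]}

# A blindness effect only has to touch the sensory system; decision-making code is untouched.
eyes = VisionSystem(view_distance=10.0)
eyes.blinded = True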
 
We're not quite done yet. We need to know how an agent evaluates a Sensation prior to making a decision. For a player, this more or less describes what information is sent to the UI, inclusive of any entity-specific evaluations -- for example, whether our Entity can distinguish between rotting eggs, a small volcanic sulfur gas vent, or a carbon monoxide leak (assuming there's sulfur gas mixed in). Additionally, evaluation modes are what tell us whether we can recognize that we're sick, how much HP we have, and perhaps the current state of wellness of another entity. This includes predictive knowledge, like knowing what chance to hit and range of damage we could deal to an opponent. I've considered three realistic evaluation modes:

Cognition - Conscious analysis of data.
Intuition - Subconscious inference of data.
Instinction - Unconscious mapping of data to evaluation.
These more or less correspond to the variety of familiar psychological trinities.
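One way to pin those modes down is to treat each as a different function from sensation to output. The outputs below (a report, a hunch, a flinch) are purely illustrative:

from enum import Enum, auto

class EvalMode(Enum):
    COGNITION = auto()     # conscious analysis of data
    INTUITION = auto()     # subconscious inference of data
    INSTINCTION = auto()   # unconscious mapping of data to evaluation

def evaluate(sensation: dict, mode: EvalMode) -> dict:
    """Toy evaluation: the same sensation produces a different kind of output per mode."""
    if mode is EvalMode.COGNITION:
        # Deliberate reasoning: labeled facts the player or AI can inspect.
        return {"report": f"{sensation['kind']} of intensity {sensation['intensity']}"}
    if mode is EvalMode.INTUITION:
        # Vague inference: a hunch with a confidence, but no explanation attached.
        return {"hunch": "something is nearby", "confidence": min(1.0, sensation["intensity"])}
    # INSTINCTION: no information at all, just an automatic reaction.
    return {"reaction": "flinch" if sensation["intensity"] > 0.5 else None}

print(evaluate({"kind": "mechanoreception", "intensity": 0.7}, EvalMode.INTUITION))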

While the quality of input can't be improved (except by artificial means), our processing of that data can be trained. It may not be important to have more than one evaluation mode in a game, but they help to rationalize certain elements of the game. One possible application may involve the perception of ghosts. A ghost may provide little raw data to analyze or react to, but we may be able to intuit it regardless -- not because our senses are strong, but because we are sensitive to subtle influences. A few examples to emphasize the distinction:

Sympathy - Cognitive. We consciously rationalize how another person's state would feel (sensation reasoned within our imagination).
Empathy - Intuitive. We feel what another person's state actually is (sensation mapped to emotion).
Fear Sense - Instinctive. We can innately detect, on a continuum, how afraid a person is through a mapping from our sensory input. It doesn't provide information, just automatic reactions (sensation mapped to reaction) -- a chill down the spine, or a surge of hormones.
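In code, all three are just different evaluations of the same incoming data. The "distress" value and the sensitivity stat below are invented for the example:

def sympathize(distress: float) -> str:
    """Cognitive: consciously reason about the other's state; the output is readable information."""
    return f"They appear to be about {int(distress * 100)}% distressed."

def empathize(distress: float, sensitivity: float) -> float:
    """Intuitive: map the other's state onto our own emotional state, scaled by training."""
    return distress * sensitivity            # this becomes *our* fear level

def fear_sense(distress: float) -> list:
    """Instinctive: no information at all, only automatic reactions."""
    reactions = []
    if distress > 0.3:
        reactions.append("chill down the spine")
    if distress > 0.7:
        reactions.append("surge of hormones")
    return reactions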

These concepts overlap in application, but different evaluations and weights may supply information that gives the agent an incentive to behave in a particular way. Since the player's avatar is the aspect of the player Agent that acquires information from the game world, it's also partially the Agent's job to interpret that data. From a UI point of view, providing the player with raw sensation data makes the player responsible for interpreting its meaning. While this can be very interesting, we typically want the Agent's abilities, stats, skills, qualities, etc. to provide some of that interpretation. It could be a lot of fun to play a character that is manically afraid of all things, especially if the player doesn't know this -- instead of peaceful townsfolk, you suspect they want to dismember you in a ritual sacrifice to their bloodthirsty deity. Okay, that may be TMI, but the avatar could fail to properly interpret their friendliness and trick the player into slaughtering them all. Evaluation includes misinformation -- which can be a lot of fun.

Another, more sense-related example may be understanding why a sensation exists. The Aurora Borealis is the result of particle radiation (primarily from the sun) slipping past the earth's magnetosphere and interacting with the upper atmosphere. Suppose our Avatar sees a flash of light -- how is the avatar to evaluate that information so the player can make reasonable decisions from it? The player will always be able to guess, but a well-designed game will not give too great an advantage to an experienced player (we don't want features to become arbitrary at different player skill levels). Is it a magical spell? Divine judgment? A flash of magnesium? Bright light may have special meaning relative to that Agent. This can become a beautiful part of the player's interpretation of the Agent's unique narrative.

Telepathy could be rationalized not by adding new senses or sensations, but as a property within an intuitive evaluation mode. Suppose, for example, that Telepathy is derived from Magnetoreception. The fields emitted by cognitive beings (higher wavelengths, or something) may have subtle effects on the ambient magnetic fields, and with enough training we might develop an intuition for these subtle fluctuations -- thereby inferring the presence of nearby entities. We may even be able to further evaluate this information to deduce what those entities are thinking. In many ways, cognition, intuition, and instinction just describe different facets of the same ideas.
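Under that rationalization, telepathy isn't a new sense at all, just a trained intuition sitting on top of an existing sensation. The skill threshold and the "fluctuation" model below are placeholders:

def intuit_minds(magnetic_samples: list, telepathy_skill: float) -> dict | None:
    """Infer nearby cognitive entities from fluctuations in ambient magnetoreception data.

    magnetic_samples: recent readings from the (existing) magnetoreception sensation.
    telepathy_skill:  0.0 (untrained) .. 1.0 (fully trained) -- gates how subtle a
                      fluctuation the agent can turn into an intuition.
    """
    if len(magnetic_samples) < 2:
        return None
    baseline = sum(magnetic_samples) / len(magnetic_samples)
    fluctuation = max(abs(s - baseline) for s in magnetic_samples)
    # An untrained agent never notices; a trained one needs less and less fluctuation.
    threshold = 1.0 - telepathy_skill
    if fluctuation < threshold:
        return None
    return {"hunch": "a thinking being is near", "confidence": min(1.0, fluctuation)}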

Creating meaningful evaluation modes really just depends upon how senses, sensations, and other factors are mapped together. I probably wouldn't ever try to implement a fully realistic sensory model, but I thought the abstraction might be useful to others. Evaluation modes have much more to do with which emission patterns correspond to which properties of creatures and objects -- that is, the evaluation modes are what tell us it's a humanoid or a goblin, friendly or hostile, dangerous or pathetic, etc.

To summarize:
Entities leave emissions in the game world; emissions are detected by the sensory systems of other entities in the form of sensations, which are then rationalized by evaluation modes and presented to the UI or AI for decision-making. Sensations that can't be completely rationalized are provided as raw data to the UI in a form relevant to that Sensory System. For example, if we hear something but don't know what it is, we might notify the map -- with a direction, a specific location, or a general location, and maybe the type and intensity of the sound as well. If we hear it over time, we may store the information or selectively filter the emissions to improve the specificity of the evaluations.

Whipped this up in Google Drawings:
At which point, the Agent will consider what it already knows about the environment before going through something like subsumption architecture and then mapping it all to an actuator.
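Here's the whole loop from the summary squeezed into one toy pass, with the subsumption step reduced to a priority-ordered list of behaviors (a big simplification of the real architecture) and the data shapes invented for illustration:

def sense_all(emissions, accepted_kinds):
    """Sensory-system step: keep only emissions this agent's systems can pick up."""
    return [e for e in emissions if e["kind"] in accepted_kinds]

def evaluate_all(sensations):
    """Evaluation step: rationalize what we can; unrecognized sensations pass through as raw data."""
    evaluations = []
    for s in sensations:
        if s["kind"] == "sound" and s["intensity"] > 0.8:
            evaluations.append({"meaning": "threat", "where": s["position"]})
        else:
            evaluations.append({"meaning": "unknown", "raw": s})   # hand raw data to the UI
    return evaluations

def decide(evaluations, behaviors):
    """Subsumption-ish step: the first behavior (highest priority) that fires wins."""
    for behavior in behaviors:
        action = behavior(evaluations)
        if action is not None:
            return action
    return "idle"

behaviors = [
    lambda evs: "flee" if any(e["meaning"] == "threat" for e in evs) else None,
    lambda evs: "investigate" if any(e["meaning"] == "unknown" for e in evs) else None,
]

emissions = [{"kind": "sound", "intensity": 0.9, "position": (3, 4)},
             {"kind": "scent", "intensity": 0.2, "position": (1, 1)}]
print(decide(evaluate_all(sense_all(emissions, {"sound", "light"})), behaviors))   # -> "flee"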


On the most mundane level, this is trivially implemented with singletons. As a model, it's both very simple and extensible. Most games won't find a need to implement complex networks of sensory systems, but it's something that hasn't been well explored in game design. Perception is typically treated as part of the UI rather than part of the narrative, yet it's an idea that could bring a whole new level of immersion to video games -- something we haven't really seen outside of fun gimmicks.
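For the mundane version, one shared board of emissions that every entity's sensory systems read from would do; a module-level instance is the simplest Python stand-in for a singleton here:

class EmissionBoard:
    """Single shared place where entities post emissions and sensory systems read them."""

    def __init__(self):
        self._emissions = []

    def post(self, emission: dict) -> None:
        self._emissions.append(emission)

    def query(self, kinds: set) -> list:
        return [e for e in self._emissions if e["kind"] in kinds]

# Module-level instance acting as the singleton the whole game shares.
EMISSIONS = EmissionBoard()

EMISSIONS.post({"kind": "scent", "intensity": 0.4})
print(EMISSIONS.query({"scent"}))   # every entity's sensory systems read from the same board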

I'm designing a component-based framework that will use something like this, in conjunction with a similarly modular AI system, to achieve some pretty interesting player experiences. Unfortunately I'm spreading myself thin with other projects, so hopefully somebody finds this idea useful.
