I promised him that it would be used in a series of interactive essays illustrating some dynamics in a particular abstract model of communication that is both pedagogically useful and intellectually interesting.
I had not yet made good on that promise.
This is the first step.
I call this formal framework for communication the meaning-action-observation framework. It is not new; in fact, it is a similar abstraction to that used in a number of fields (such as Shannon-Weaver information/communication theory and Bayesian pragmatics) to formalize both human and machine communication. What is at least partially new is the way in which we will interpret this model, and how we will leverage the particular properties of this abstraction in order to gain insight into a number of features of communication that are otherwise difficult to see and to understand. We will make use of this visualization library to illustrate these dynamics, so as to pull them out of pure abstractia and into directly perceivable reality.
Before delving into the details of the dynamics, we first have to explore the nature of this model and build some common ground about concepts and terminology. We will do so by directly leveraging the visualization suite, which will allow us to avoid math for the most part and make use of the human proclivity for interpreting visual information. This post will be limited to this introduction, and it will serve as the groundwork for future ventures involving visualizing and understanding communication.
The first component of the meaning-action-observation framework is of course “meaning.” I put it in quotes because I want to identify that the sense of “meaning” being used here is distinct from the slew of disparate colloquial usages of the word. By “meaning” I simply mean a piece of communicata. It is both a thing that one might want to communicate and an interpretation of someone’s attempt at communication. In the most basic version of the meaning-action-observation framework, meanings are atomic units drawn from a set of possible meanings. In more complicated realizations of the framework, a meaning will take the form of a structured unit that might be more akin to a complex concept than a simple item. Let’s ground this a bit more by putting it into pictures.
In the graphic below, the meanings are represented by the symbols on the left and right sides of the graph. The nodes on the left represent the set of meanings that the speaker might wish to convey to the listener, and the nodes on the right represent the set of meanings that the listener might interpret the speaker as trying to convey. While this post will only include cases where these speaker and listener meaning sets are equivalent and directly correspondent to one another, future ones will not hold to this constraint, and we will make use of this flexibility in order to capture the complex relationships between structured speaker intentions and listener interpretations. In the same graphic, the actions and observations are coincident and are represented by the middle layer of nodes. In this example, the actions are utterances that the speaker might produce, and the observations are these same utterances as observed by the listener. This set need not be limited to linguistic utterances; it may include any action that a communicating agent (a “speaker”) might choose to take. Additionally, the actions and observations need not be equivalent: certain models, such as noisy-channel models of communicative perception, do not make this assumption. For the most part, however, we will keep to the assumption that the listener perceives exactly what the speaker produces, as this causes no loss of generality for the purposes of the present series of articles.
Now for the first visualization! Press the play button to try it out! (The slider bar at the top controls the speed of the simulation.)
This simple simulation shows the speaker’s intended meaning being translated into an utterance choice (an action), which the listener then observes and uses to generate his inferred interpretation. Simple, isn’t it?!
The visualizations also double as representations of conditional probability distributions, where the choice of each layer is probabilistically dependent on the outcome of the layer beforehand. They are equivalent to a certain subclass of Bayesian networks, which we might call layer-based Bayesian networks. While this first visualization has deterministic mappings from the first layer to the second layer and from the second to the third, we will see that this need not always be the case.
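To make the deterministic case concrete, here is a minimal sketch of such a three-layer network in Python. The meanings, utterances, and mappings are all invented for illustration; they are not taken from the visualization library itself.

```python
# A minimal sketch of the forward model with deterministic mappings:
# meaning -> action (utterance) -> observation -> interpreted meaning.
# All names here are hypothetical examples, not part of any real library.

MEANINGS = ["circle", "square", "triangle"]

# Deterministic convention: each meaning maps to exactly one utterance,
# and the listener's interpretation simply inverts the speaker's mapping.
production = {"circle": "blick", "square": "dax", "triangle": "wug"}
interpretation = {utt: mng for mng, utt in production.items()}

def communicate(intended_meaning):
    """Run one round of the forward model."""
    action = production[intended_meaning]  # speaker chooses an utterance
    observation = action                   # noiseless channel: heard as produced
    return interpretation[observation]     # listener maps it back to a meaning

# With deterministic mappings, communication always succeeds.
for m in MEANINGS:
    print(m, "->", communicate(m))
```

Because each layer's choice is a function of the previous layer's outcome, the interpreted meaning always matches the intended one in this deterministic setting.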
In the last visualization, the speaker was equally likely to intend to mean any of the meanings from the speaker meaning set. This does not have to be the case, however, since some meanings are more likely than others, both a priori and in context. Let’s illustrate that in a similar visualization with the size of the meaning nodes representing the prior probability of the speaker wanting to convey each meaning.
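A nonuniform prior over meanings can be sketched in a few lines; the particular meanings and weights below are invented for illustration, and sampling uses Python's standard `random.choices`.

```python
import random

# Hypothetical prior over speaker meanings: "circle" is a priori far more
# likely to be intended than "triangle". Weights are invented for illustration.
MEANINGS = ["circle", "square", "triangle"]
PRIOR = [0.6, 0.3, 0.1]

def sample_intended_meaning(rng=random):
    """Draw the speaker's intended meaning from the prior."""
    return rng.choices(MEANINGS, weights=PRIOR, k=1)[0]

random.seed(0)
counts = {m: 0 for m in MEANINGS}
for _ in range(10_000):
    counts[sample_intended_meaning()] += 1

# Empirical frequencies should roughly track the prior (about 6000/3000/1000).
print(counts)
```

In the visualization, these prior weights are what the differing node sizes encode.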
In addition to modulating the prior probability of each meaning, we can also have a system where the speaker doesn’t always produce the same signal to communicate the same meaning. Such a system is equivalent to adding noise to the conventional manner of communication. It is also possible to imagine a system where a listener has a noisy interpretation process as well. The following visualization incorporates both speaker production and listener interpretation noise.
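Both kinds of noise can be sketched as follows. The noise model (deviate from the convention with some probability and pick uniformly at random) and all parameters are invented for illustration; many other noise models are possible.

```python
import random

# Sketch of a noisy forward model: with probability p_noise the speaker's
# production deviates from the convention, and independently, with probability
# p_noise, the listener's interpretation deviates as well. All names and
# parameters are hypothetical.
MEANINGS = ["circle", "square", "triangle"]
UTTERANCES = ["blick", "dax", "wug"]
production = {"circle": "blick", "square": "dax", "triangle": "wug"}
interpretation = {utt: mng for mng, utt in production.items()}

def noisy_communicate(meaning, p_noise=0.1, rng=random):
    utterance = production[meaning]
    if rng.random() < p_noise:            # production noise
        utterance = rng.choice(UTTERANCES)
    interpreted = interpretation[utterance]
    if rng.random() < p_noise:            # interpretation noise
        interpreted = rng.choice(MEANINGS)
    return interpreted

random.seed(1)
successes = sum(noisy_communicate("circle") == "circle" for _ in range(10_000))
success_rate = successes / 10_000
print(success_rate)  # high, but noticeably below 1.0
```

Even modest noise at both ends compounds: the overall success rate is roughly the product of the per-stage success probabilities.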
So far we have only looked at what might be called conventionalized systems of communication. In such systems, there is a communicative norm: a conventional manner of taking an action to communicate a meaning. Mirroring this speaker’s production convention is the listener’s interpretation convention, which uses knowledge of the speaker’s norm to map observations to interpretations and thereby reconstruct what the speaker meant by what she said. Though these conventionalized systems pervade human communication, they are certainly not the only kind of communicative system. As we will see in a later post, it is actually an important philosophical question how speakers and listeners, without ever seeing each other’s “meanings,” can come to have such a conventional system at all! The following example shows a communicative system with no convention, in contrast with the conventionalized systems we have seen so far.
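A convention-free system can be sketched by making the action independent of the meaning, in which case the listener can do no better than guessing. The setup below is an invented illustration.

```python
import random

# Sketch of a system with NO convention: the utterance carries no information
# about the meaning, so the observation is uninformative and the listener can
# only guess. All names are hypothetical.
MEANINGS = ["circle", "square", "triangle"]
UTTERANCES = ["blick", "dax", "wug"]

def communicate_without_convention(meaning, rng=random):
    utterance = rng.choice(UTTERANCES)  # action chosen independently of meaning
    # The utterance is discarded here on purpose: it tells the listener nothing,
    # so the interpretation is just a uniform guess over the meanings.
    return rng.choice(MEANINGS)

random.seed(2)
hits = sum(communicate_without_convention("circle") == "circle"
           for _ in range(9_000))
print(hits)  # hovers around chance: 1/3 of 9,000 trials, i.e. about 3,000
```

Without a shared mapping, accuracy collapses to the chance rate, which is what separates these systems from the conventionalized ones above.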
Finally, we can have a system with a misaligned convention, where the listener’s beliefs about the convention that the speaker is using to produce language are incorrect. Such a system can cause irreconcilable miscommunication, as there may be no way to observe the speaker’s actual convention and thereby repair the misalignment. In real human communicative processes, such robust misalignments seem to be very rare, as there are always other factors available to help repair the system. Though we will investigate this issue further in a later post on conventionalization, learnability, and interactive inference, it is worth illustrating an example of misalignment here so that we can get a feel for what it looks like when the presumption of convention goes wrong.
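Misalignment can be sketched as two mappings that disagree: the convention the speaker actually uses, and the (wrong) convention the listener believes she uses. The mappings below are invented for illustration.

```python
# Sketch of a misaligned convention (all names hypothetical): the speaker uses
# one mapping, but the listener's model of the speaker has two utterances
# swapped. There is no noise, yet some meanings are *always* misunderstood.
MEANINGS = ["circle", "square", "triangle"]

speaker_convention = {"circle": "blick", "square": "dax", "triangle": "wug"}
# The listener believes "blick" means square and "dax" means circle.
listener_belief = {"blick": "square", "dax": "circle", "wug": "triangle"}

def communicate(meaning):
    return listener_belief[speaker_convention[meaning]]

results = {m: communicate(m) for m in MEANINGS}
print(results)  # "circle" and "square" are systematically swapped
```

Because the error is systematic rather than random, repeated trials never reveal it from within the system: every exchange is self-consistent from each party's point of view.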
What we have seen so far are just forward models. They illustrate the communication process from what you might call the “objective perspective,” where it is possible to see all the dynamics, even though they are never all visible to any single person. Because we are highlighting the flow of communication from intended meaning to interpreted meaning, we have also refrained from illustrating the dynamics of how a listener actually derives his interpretation model from beliefs about the speaker. These interpretation inferences, which can be shown more readily in an inverse model, make up a large component of the contemporary study of linguistic pragmatics. You can read more about them in my in-prep paper, in the literature, or in my next blog post on communication, which I will devote to the illustration of pragmatic interpretation.