Augmented Perception. Circumventing the Augmented Reality Metonymy
David "DeMO" Martínez Oliveira

Augmented Reality is once again a hot topic; everybody is talking about it. However, the term itself no longer matches its literal meaning. In this short dissertation we are going to revisit the AR concept and look past the semantic corruption the term has suffered over the last few years.
If you check any of the augmented-reality concepts on Wikipedia (Augmented Reality, Mixed Reality, Mediated Reality, Augmented Virtuality, ...), all of them are, nowadays, related to visualization. However, Reality is actually a bit more than that.

Probably because of our heavy dependency on the visual system, it is easier to understand reality as what we see. But, as we said above, and as you surely know, reality is full of stuff we cannot see. Are visually impaired people living in an unreal world just because they cannot see? In a sense, we are taking a whole (the Reality) and substituting it with just a part of what it really is (what we see).

This linguistic phenomenon is known as metonymy: for different reasons, a word or phrase slowly drifts from its literal meaning toward a different (but related) semantic concept. In the specific case of Augmented Reality, the term is no longer related at all to the augmentation of reality; it refers to the augmentation of, let's say, the visual reality. Just for illustration, let's reproduce here what Wikipedia says:

“Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data.”

Sure, the definition mentions sound, but not in the sense of augmenting our hearing; rather, in the sense of using sound to augment visual information. An example of this could be a navigation system providing voice instructions to get to a restaurant.

However, the concept itself is a lot bigger, and I will try to elaborate on this a little below.


Let's start by defining a simple model for us, humans. As far as I know, we do not yet fully understand how a human works, and I certainly do not know those details myself. It is for that reason that I am going to use a black-box model to represent human beings. In a black-box model, the system you want to simulate (the human) is a black box whose internals are unknown. You only know what gets into the box and what comes out of it. Nothing else.

So our first model approach is to have a black box surrounded by a huge bunch of reality.

That is pretty accurate, but we can actually do much better.

The inputs to our human black-box model, for the time being and until somebody demonstrates otherwise, are just five. Reality can only get into the black box through our sensory system, so our inputs are whatever our five senses deliver to the black box. We should note that, strictly speaking, what actually gets into the black box is not reality. It is the perception of Reality, whatever that is, that our senses can deliver to us. This perception is effectively our Reality, because we cannot perceive reality in any other way.

So when we see something RED, what is actually happening is that our sensors, in this case our eyes, are capturing a small piece of Reality and transforming it into something our human black box can understand. In general, every person perceives a different RED, because every single eye is different, and so are the underlying brain structures. But we all kind of agree on what RED means, or at least most people do.
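The model above can be sketched in a few lines of code. Everything in it (the sense names chosen, the thresholds, the labels) is invented purely for illustration; the point is only that each sense applies its own lossy transform, and whatever falls outside a sense's range never becomes part of our perceived Reality.

```python
# Illustrative black-box model: reality enters only through the
# sensory channels, each applying its own (lossy) transform.
# All thresholds and labels below are made up for illustration.

def perceive(stimulus: dict) -> dict:
    """Transform raw physical quantities into perceived labels."""
    senses = {
        "sight":   lambda nm: "red" if 620 <= nm <= 750 else "not red",
        "hearing": lambda hz: "audible" if 20 <= hz <= 20000 else "silent",
        "touch":   lambda c: "hot" if c > 40 else "comfortable",
        "taste":   lambda ph: "sour" if ph < 5 else "neutral",
        "smell":   lambda ppm: "smelly" if ppm > 0.1 else "odorless",
    }
    # Only stimuli a sense can transform ever reach the black box.
    return {s: f(stimulus[s]) for s, f in senses.items() if s in stimulus}

print(perceive({"sight": 700, "hearing": 30000}))
```

A 700 nm wavelength comes out as "red", while a 30 kHz tone, beyond the ear's range, is perceived as silence: part of Reality simply never makes it inside the box.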


Based on what we have just described, from a human point of view, reality is actually the output of our sensory system, and, you know what? We have four senses in addition to sight. This is why considering only sight when talking about Augmented Reality is quite inaccurate.

Unfortunately, technology is way behind nature in this area. We have some good sensors for visual and audio data, and we are starting to get somewhere with touch (the so-called haptic technology). For those senses we can even think of actual augmentation. For instance, we have thermal or UV cameras to "see" electromagnetic radiation that, in principle, our own sensors (the eyes) cannot process.
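To make the "seeing the invisible" idea concrete, here is a toy false-color mapping of the kind thermal cameras use to present infrared readings to the eye. The temperature range and the linear blue-to-red ramp are arbitrary choices for this sketch, not how any particular camera does it:

```python
def temperature_to_rgb(celsius: float) -> tuple:
    """Map a temperature reading onto a blue-to-red false-color scale,
    so radiation the eye cannot sense becomes something it can."""
    lo, hi = 0.0, 100.0                                  # display range (arbitrary)
    t = max(0.0, min(1.0, (celsius - lo) / (hi - lo)))   # clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))         # cold=blue, hot=red

print(temperature_to_rgb(0))    # pure blue
print(temperature_to_rgb(100))  # pure red
```

This is sensory augmentation in the sense used above: a transducer shifts information from a channel we lack into one we have.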

On the other hand, we are mostly in the dark with regard to taste and smell sensors. Being optimistic, we are more or less midway towards Augmented Reality, because we can roughly manage 2.5 senses out of 5. In other words, we have the means to augment only half of the Reality we perceive.


Well, Augmented Reality seems to be a well-established term by now. Everybody has agreed on this metonymy, and it does not make much sense to change it. However, a term for the more general concept, one that involves all of our senses and maybe other things we do not know about yet, would actually be handy.

A good candidate could be Augmented Perception (AP). This term specifically refers to our senses which, in the end, are our only way to get in touch with the real world. Unlike the original Augmented Reality term, the words themselves do not suggest purely visual augmentation, and the name remains consistent with technologies dealing with all the other senses.


This was what I wanted to say today. Just to finish this post, let's present a couple of AP applications that would fit the definition we have just introduced.

  • A translation application. We have all seen those videos of an AR application that translates written messages into a different language. Its audio counterpart would be a good AP application candidate.
  • We can consider audio translation in a very wide sense: for instance, translation of Morse code into words, or music scores being automatically played as music.
  • Touch sensors that can, for instance, take a drop of blood, analyze it, and report with a vibration if some substance (cholesterol, glucose, ...) is too high or too low. Or report the result as audio instead of as a number on an invasive/obtrusive head-mounted display.
  • A hypothetical taste transducer (an electronic tongue) able to taste food for us and produce, for instance, a sweet taste in our mouth if the food is in good condition, or an acid or salty taste if it is poisoned or transgenic.
  • A smell transducer able to convert a dangerous odorless gas into a nasty smell that makes us move away quickly. The intensity of the smell could continuously report the gas concentration as we move, helping us get farther from or closer to the source, depending on what we want to do (save our lives or fix the problem).
  • Such smell transducers would even let us smell toxic substances without actually inhaling them, removing the danger of working in a hazardous place while keeping us continuously aware of the concentration of the substance in the surrounding air.
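The Morse-to-text idea from the list above is simple enough to sketch. The helper below is illustrative only; it assumes the symbols have already been extracted from the audio, with letters separated by spaces and words by " / ":

```python
# International Morse code, letters A-Z.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode_morse(signal: str) -> str:
    """Decode a Morse string (letters split by spaces, words by ' / ')."""
    words = signal.strip().split(" / ")
    return " ".join(
        "".join(MORSE.get(symbol, "?") for symbol in word.split())
        for word in words
    )

print(decode_morse(".... . .-.. .-.. --- / .-- --- .-. .-.. -.."))
```

An AP device would pair something like this with a microphone on the input side and speech synthesis on the output side, so the translation lands directly in the hearing channel.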
Yes, a whole universe of possibilities opens up whenever we start thinking of Augmented Perception instead of just Augmented Reality.
