One of the columns missing from the last post was UI. That's mainly because I don't know enough about its history to divide it into generations. But I think there are two general trends: one in expressiveness (blinking lights → black-and-white → color → really big color, holographic, etc.), and one in ubiquity (giant fixed → giant movable → portable → wearable). It's hard to say which of these will win out in the end, or indeed whether they are in competition at all or one is simply a lagged version of the other; but there is tension between a “Matrix-like” mind-machine interface and a “networked world” tool for accessing data. The question is whether the display takes an active or a passive role: does it modify the user's perceptions, or does it merely occupy perceptual space?
This tension is probably most obvious in audio, where the space is well understood. A human has two audio inputs, a left ear and a right ear, and given a sufficiently expensive sound system we can set those inputs to anything we like. Suppose we are given an audio environment, say a music concert. What do we play? One approach says to play from the source: present the music as its creator hears it, immersed in the mood. If the creator hears the flutes from the left and the synthesizers from the right, so be it, even if no such instruments are present when the recording is played back. The other says to consider the context: a jogger is listening to the music merely as a distraction, not as inspiration for her next composition, so a simple playback is all that is needed, as though the music were coming from a miniature music box in front of her.
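To make the two renderings concrete, here is a minimal sketch in Python. The `pan` function and its constant-power pan law are my own illustration, not anything from a real playback system: the same two mono sources can either be placed spatially, as the creator heard them, or collapsed into a single centered “music box” image.

```python
import math

def pan(sample, position):
    """Constant-power pan of a mono sample into (left, right).
    position runs from -1.0 (hard left) to +1.0 (hard right);
    cos/sin gains keep total power constant across positions."""
    theta = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return (sample * math.cos(theta), sample * math.sin(theta))

# Instantaneous mono sample values for two hypothetical instruments.
flute, synth = 0.8, 0.6

# "From the source": flute placed left, synthesizer placed right.
fl, fr = pan(flute, -0.7)
sl, sr = pan(synth, +0.7)
left, right = fl + sl, fr + sr   # mix into the two ear channels

# "Music box in front": everything centered, equal power to both ears.
cl, cr = pan(flute + synth, 0.0)
```

Either way the listener's two inputs are fully determined; the choice is purely one of presentation.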
Perhaps the distinction I am trying to make is a false one; I certainly hope so. It is really a question of how the content is intended to be presented, whether by its creator or by its remixer. Every playback of a recording is really the creation of a new song: the audio is filtered through the quality of the playback system and then mixed with the ambient background noise. Even a perfect playback system does not remove the need to consider how multiple audio sources are mixed together, or how to selectively mute or master them. But who makes these choices? And how do we provide security, e.g. so that a user is not lulled into thinking there is a floor where in reality there is a sheer cliff?
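The mixing and selective muting above amount to a weighted sum per sample. This tiny mixer is purely illustrative (the names and the clamp-to-[-1, 1] choice are mine): it blends a recording with the ambient noise of the room, then mutes the ambient channel simply by zeroing its gain.

```python
def mix(sources, gains):
    """Mix equal-length mono sources sample-by-sample with per-source gains.
    A gain of 0.0 selectively mutes a source; the output is clamped to
    [-1, 1] so a loud sum cannot overflow the playback range."""
    out = []
    for frame in zip(*sources):  # one sample from each source at a time
        s = sum(g * x for g, x in zip(gains, frame))
        out.append(max(-1.0, min(1.0, s)))
    return out

music = [0.5, -0.5, 0.25]   # the recording itself
ambient = [0.1, 0.1, 0.1]   # background noise present at playback
combined = mix([music, ambient], [0.9, 1.0])  # playback mixed into the room
muted = mix([music, ambient], [0.9, 0.0])     # ambient channel muted
```

The open question in the text is not the arithmetic but who gets to set the gains.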
Collaboration has a natural analogy from real space to virtual space: two people in a room, whether that room is real or virtual. But the analogy is imperfect; communications have latency, people are not always present, and you may not even know who or what the other person is. In the end, we are all just bunches of neurons, approximately 100 billion each, whereas there are only 7 or 8 billion people. So one could conceivably devote a neuron to each person in the world and network them together using a brain-computer interface and high-speed networking; such experiments have apparently been done on a small scale with rats (unfortunately I've lost the link; it used to be on Wikipedia) and shown to give a limited form of telepathy: the rats could coordinate and “swarm” more effectively than they could individually. Would this be the “ultimate” in collaboration? Perhaps not; maybe we would need to link more neurons, or come up with other tools to communicate. But it seems like a good goal to strive for, in concept if not in implementation.
That, I think, satisfies ubiquity; what interface is more usable than your own body? It is less clear that it satisfies expressiveness; perhaps there are concepts that cannot be communicated by thought but must be written down. Would poetry have meaning to beings that feel each other's emotions? Humans do not record their senses the way video cameras do; on the contrary, their skill lies in ignoring most of the outside world until it becomes necessary to act. So whatever the interface, there must still be some way to share reality as well as humanity; 3D holographic smell-o-vision, as an old teacher of mine used to say. From looking at a few sites, smell seems a little impractical, but 3D is making its way into the consumer market, and real-time holographic displays are in the this-is-cool-but-how-can-we-mass-produce-it stage, i.e. the technology is there and proven but still too expensive (and too big) for the average consumer. Haptics are progressing from simple vibrations to tactile surfaces, but again, nothing is consumer-ready. Matrix-like direct interfaces that override the body's senses still seem to be science fiction, unfortunately; I guess people don't like being cyborgs.