Theory and Practice
Weeks 6-7, February 23 - March 2, 1999
Critique of character
- Scott Fisher,
"Virtual Interface Environments"
Since the dawning of human consciousness, man has used technology to enhance
sensory awareness and the cognitive abilities of the mind and body. Scott
Fisher's work at NASA's Virtual Interface Environment Workstation (VIEW)
project during the 1980s extended this notion into realms that could not
have been imagined prior to his research. He essentially unlocked virtual
worlds that could be seen, touched, and heard by interfacing our sensory
system to new digital "organs," a cyborgian vision of the
future in which the human body is literally coupled with the machine.
Fisher is one of a new breed of artist-engineers who came out of the
MIT Center for Advanced Visual Studies, the venerable art and technology
think tank that evolved into the Media Lab headed up by Nicholas Negroponte
in 1986. Fisher's scientific work is informed by the inquisitive, aesthetic
sensibilities he brings to his VR experiments from his training as an
artist. During the period in the late 1980s, when virtual reality began
to receive mainstream press, and much of the hype centered around digital
guru Jaron Lanier’s vision of a virtualized society of cybernauts, it
was Scott Fisher at his NASA research lab in Mountain View, who was quietly
inventing the medium that came to be known as virtual reality.
While Lanier actually coined the term 'virtual reality' around 1989, reflecting
his interest in its mind-altering possibilities, Fisher prefers the
term 'virtual environment,' "to emphasize the ability to completely
immerse a subject in a simulated space with its attendant realities."
Fisher was never so concerned with the psychological and paranormal ramifications
of cyberspace exploration. Rather, his work has always focused on the
pragmatics of sending the human being into virtual space and giving him
the tools to interact with these digitally constructed worlds.
Fisher's work evolved from his clear sense of historical precedents in the field
of immersive simulation. While a student at MIT in the 1970s, he worked
closely with Negroponte and Michael Naimark on the Aspen Movie Map project,
in which the viewer-participant navigates a representation of Aspen, Colorado
via laserdisc. This form of environmental manipulation, dubbed by Naimark
as 'surrogate travel,' was a strong influence on Fisher’s subsequent work.
Drawing in part from such curious experiments as the 19th Century Zoetrope,
the Nickelodeon, the short-lived Cinerama, and Morton Heilig's Sensorama
arcade, Fisher developed his system based on the seminal research in head-mounted
displays conducted by Ivan Sutherland in the late 1960s. It was from this
panoply of diverse influences that Fisher formulated his broader vision
of the capabilities of immersive simulation and the idea of 'telepresence.'
While Fisher developed the VIEW system based on the prior research of Sutherland
and his predecessors at NASA, he also worked closely with his contemporaries.
An important exchange took place in collaboration with Lanier and his
company VPL Research, Inc., where the key component known as the 'dataglove'
was created. This extraordinary device serves as an artificial limb that
literally extends the 'immersant's' reach into the immaterial realm of
cyberspace, through which digital objects can be touched and manipulated.
Lanier, who often combined his talents as musician and computer scientist,
had originally invented the dataglove as a means to perform virtual musical
instruments. Once Fisher had implemented sight (head-mounted display) and touch (dataglove),
he achieved a more total sense of immersion by adding sound (binaural
headphones) and speech recognition (microphone). He began to experiment
with such ideas as full-body tracking, in which viewers would interact
with "life-sized representations and electronic persona." Far
from the notion of interface we associate with the graphical user interface
of the personal computer, he had achieved an advanced degree of simulation
that would literally transport the viewer beyond the interface, through
the screen, or as Fisher describes it, "a human interface that disappears
– a doorway to other worlds."
By the early 1990s, once the hype of virtual reality had subsided and the
medium had assimilated into the mainstream, its impact on contemporary
life could be felt in numerous ways. Almost overnight, amusement parks
and video arcades had incorporated immersive simulations; films such as
The Lawnmower Man dramatized the dangers of virtual reality; and the Guggenheim
Museum proclaimed VR as a new avenue for artistic expression with its
"Virtual Reality: An Emerging Medium" exhibition in 1993. Today,
just as Fisher had envisioned, even the field of medicine is undergoing
a transformation through the incorporation of virtual surgery.
We now live in a changing world, one in which the barrier between the real and the
virtual is rapidly eroding. The research of such pioneers as Morton Heilig,
Ivan Sutherland, and Scott Fisher has opened doors to new forms of representation
that bring into question the very foundation of our sense of reality,
epistemological concerns that will assuredly confound future generations.
Or perhaps we are embarking on a form of communication that will allow
us to decode the cryptic paintings mysteriously constructed on the walls
of Lascaux some 17,000 years ago, where human consciousness first made
itself felt in the immersive environs of those dimly lit caves.
The History of Virtual Reality
Morton Heilig's Experience Theater: Sensorama (1960)
"I became fascinated
by Cinerama after reading about it. I was already involved with cameras
and film and the technological influences in film. On a visit back to
new York I went down to see Cinerama. this was a pivotal experience in
my life. The narrator described the scene, the curtain swept back, revealing
a screen four times bigger than normal and they showed a roller coaster
ride. you no longer identified with some actor who has having your
experience, you had the experience yourself." -- Morton Heilig
Cinerama excited a young cinematographer named Morton Heilig, who believed
the future of cinema lay in creating films that could present a total
illusion of reality. He diagrammed the various elements he felt were necessary
to create that total illusion, such as the brain's sensory channels and
the body's motor network. He called his end product "experience theater."
Here is a passage from Heilig's "The Cinema of the Future,"
first published in 1955. "When a primitive man desired to convey
to another man the emotional texture of an experience, he tried to reproduce,
as closely as possible, the elements that generated his own emotions.
His Art was very simple, being limited to the means provided by his own
body... With time, a specific word-sound became associated with the impressions,
objects, and feelings in man's experience. Words were useful in conveying
the general structure of an event... [but] even then not a thousand of
his choicest words could convey the sensation of yellow better than one
glance at yellow, or the sound of high-C better than listening for one
second to high-C. And so side by side with verbal language they evolved
more direct forms of communication, painting, sculpture, song and dance...
For all the apparent variety of the art forms, there is one thread uniting
all of them. And that is man, with his particular organs of perception
and action. Art is like a bridge connecting what man can do to what he wants to do."
Heilig's research led to
"Sensorama," a VR-type arcade attraction he designed and patented
in 1962. Sensorama simulated all the sensory experiences of a motorcycle
ride by combining 3-D movies, stereo sound, wind, and aromas. By gripping
the handlebars on a specially equipped motorcycle seat and wearing Viewmaster-type
goggles, the "passenger" could travel through scenes including
California sand dunes and Brooklyn streets. Small grills near the viewer's
nose and ears emitted breezes and authentic aromas. Sensorama was extremely
complex for the arcade environment, and funding never materialized for
the simplified version Heilig later developed, but his version of a medium
that combined multisensory artificial experiences became a reality in
the 1990's. It was outfitted with handlebars, a binocular-like viewing
device, a vibrating seat, and small vents that could blow air, stereophonic
speakers and a device for generating odors. One of the rides was a motorcycle
trip through Brooklyn.
Film techniques such as three-dimensional (3-D) movies and stereophonic sound
that developed in Hollywood during the early 1950's also influenced VR's
future. Cinerama, one of these technologies, sought to expand the movie-going
experience by filling a larger portion of the audience's visual field.
Three cameras, shooting from slightly different angles, were used to film
each scene in a Cinerama movie. The film was then synchronized and projected
onto three large screens that curved inward, wrapping around the audience's
peripheral visual field. Cinerama's technology proved too costly to be
embraced by most commercial theaters, but the theory of visual immersion
became an important VR element.
Ivan Sutherland (b. 1938)
The Head-Mounted Display
"We live in a
physical world whose properties we have come to know well through long
familiarity. We sense an involvement with this physical world which gives
us the ability to predict its properties well. For example, we can predict
where objects will fall, how well-known shapes look from other angles,
and how much force is required to push objects against friction. We lack
corresponding familiarity with the forces on charged particles, forces
in nonuniform fields, the effects of nonprojective geometric transformations,
and high-inertia, low-friction motion. A display connected to a digital
computer gives us a chance to gain familiarity with concepts not realizable
in the physical world. It is a looking glass into a mathematical wonderland."
-- Ivan Sutherland, 1965, from the article "The Ultimate Display."
In the early 1960's a graduate student named Ivan Sutherland
presented a Ph.D. thesis in this area that demonstrated a new way to interact
with computers by using graphics. Sutherland believed that display screens
and digital computers could offer a means of gaining familiarity with
concepts not realizable in the physical world by providing a window, or
looking glass of sorts, into the mathematical wonderland of a computer.
Sketchpad, Sutherland's groundbreaking interactive software system developed
at MIT, launched the field of computer graphics.
Sutherland next focused on developing technology that would allow computer
users to actually enter the world of computer-generated graphics. In 1968,
with support from the Department of Defense's Advanced Research Projects
Agency (ARPA) and the Office of Naval Research, Sutherland unveiled the
head-mounted display (HMD), which took users inside a three-dimensional
world by limiting visual contact to the displays shown by small computer
screens mounted in binocular glasses. It became a cornerstone of VR technology.
The display earned the nickname the "Sword of Damocles" due to the mass
of hardware that was supported from the ceiling above the user's head.
The weight of the HMD was too much to bear without some additional support.
A mechanical apparatus determined where the viewer was looking, and monoscopic
wire-frame images were generated using two small cathode-ray tubes (CRT's)
mounted alongside each ear.
Optics focused the image onto half-silvered mirrors placed directly in
front of the eyes. The mirrors allowed the computer-generated images to
overlay the view of the world (in contrast, most of today's VR systems
obscure the view of the outside word). Users of the system viewed a wire-frame
cube floating in space in the middle of the lab. By moving their head
around they could see different aspects of the glowing cube and determine
its size and placement.
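The viewing behavior described above (turn your head, see another face of the cube) can be sketched as a counter-rotation of the scene followed by a perspective projection. The cube geometry, viewing distance, and yaw-only tracking below are illustrative simplifications, not Sutherland's actual system.

```python
import math

# A unit wire-frame cube centered 4 units in front of the viewer.
# Geometry and distances are illustrative, not Sutherland's actual values.
CUBE = [(x, y, z + 4.0) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def project(vertices, head_yaw):
    """Counter-rotate the scene by the head's yaw, then perspective-project.

    Turning the head right makes the cube slide left in the display,
    letting the viewer inspect it from different angles.
    """
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    screen = []
    for x, y, z in vertices:
        xr = c * x + s * z           # rotate about the vertical axis
        zr = -s * x + c * z
        screen.append((xr / zr, y / zr))  # perspective divide
    return screen

straight = project(CUBE, 0.0)   # looking straight ahead
turned = project(CUBE, 0.3)     # head turned about 17 degrees to the right
```

A full system would track pitch and roll as well and redraw both eye views continuously from the tracker's readings.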
HMD - Flight Simulator (1979)
As early as 1979, the military was experimenting with head-mounted displays.
If an effective one could be built, it would significantly reduce the
expense and physical size of the simulation system. By projecting the
image directly into the pilot's eyes, bulky screens and projection systems
could be eliminated. One of the first of these, McDonnell Douglas's VITAL
helmet, used an electromagnetic head tracker to sense where the pilot was
looking. Dual monochromatic cathode-ray tubes were mounted next to the
pilot's ears, projecting the image onto beam splitters in front of his
eyes. This allowed the pilot to view and manipulate mechanical controls
in the cockpit while seeing the computer-generated image of the outside
world. Problems with bulky headgear and the unnaturalness of viewing through
beam splitters, however, limited the acceptance of these early head-mounted
displays. For over 20 years, America's armed forces have been manufacturing
realities in order to improve the effectiveness of training their personnel.
This military development of flight simulation had a significant impact
on the future of arcade and later computer games.
Michael Naimark -
Aspen Movie Map (1979) - Surrogate Travel
MIT Architecture Machine Group
In 1978, Michael
Naimark, under the direction of Andrew Lippman in the MIT Architecture
Machine Group, pushed hypertext into the hypermedia arena, combining
photographic images and text, when he directed the production of the
Aspen Movie Map. Aspen was a simulated application that allowed the user
to drive through the city of Aspen on a computer. The system was implemented
using a non-linear hypermedia database. The user simulated driving a car
by means of a joystick: pointing the joystick in a particular direction
activated a link to the photograph or film segment for that turn. One
application of this kind of hypermedia simulation is training, e.g. pilot
training or driver training.
The Movie Map used a set of videodiscs containing photographs of all the
streets of Aspen, Colorado. Recording was done by means of four cameras,
each pointing in a different direction, mounted on a truck. Photos were
taken every 3 meters. The user could always continue straight ahead, back
up, or move left or right; each photo was linked to the other photos relevant
to these movements. In theory the system could display 30 images per second,
simulating a speed of about 200 mph (330 km/h), so playback was artificially
slowed to at most 10 images per second, or about 68 mph (110 km/h).
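The speed figures above follow from the 3-meter photo spacing: playback rate times spacing gives the apparent driving speed. A quick sketch of the arithmetic:

```python
SPACING_M = 3.0  # one photograph captured every 3 meters of street

def simulated_speed_mph(frames_per_second):
    """Convert a playback frame rate into the apparent driving speed."""
    meters_per_second = frames_per_second * SPACING_M
    return meters_per_second * 3600 / 1609.344  # m/s to mph

full_rate = simulated_speed_mph(30)  # roughly 200 mph, far too fast to be usable
capped = simulated_speed_mph(10)     # roughly 67 mph after the artificial cap
```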
Scott Fisher and
the VIEW-Virtual Environment Workstation Project (1985)
NASA/Ames - Mountain View, CA
In the mid-1980's,
the different technologies that enabled the development of VR converged
to create the first true VR system. Researchers at NASA's Ames Research
Center in Mountain View, California, charged with creating an affordable
pilot training system for manned space missions, developed the Virtual
Interface Environment Workstation. It was the first system that combined
such standard elements as computer graphics and video imaging, 3-D sound,
voice recognition and synthesis, and a head-mounted display. A data glove,
based on an invention designed to play air guitar, completed the system.
In 1981 Michael McGreevy began a program of research
in spatial information transfer at NASA Ames, emphasizing the interpretation
of 3-D displays. Aware of Sutherland's pioneering work on head-mounted displays, McGreevy
put forth a proposal in 1984 to craft a similar system for NASA called
a virtual workstation.
Building on VCASS, a helmet display system used for Air Force pilots, he
built a small, inexpensive display that could be worn on the head, using
black-and-white handheld LCD televisions ("Watchman"-type pocket TVs).
The displays were mounted on a frame similar to a scuba mask; special
optics in front of the displays focused and expanded the image so it could
be viewed comfortably. The result was dubbed the Virtual Visual Environment
Display (VIVED). Two video displays were then mounted to create independent
left- and right-eye images, or stereo pairs. Their first production was a
walking tour from NASA's human factors lab, through the offices of the
division, and on to the hangar. Finally they patched together a Picture
System 2 graphics computer from Evans and Sutherland, two 19-inch display
monitors, a DEC PDP-11/40 host computer, and a head tracker. The Evans and
Sutherland graphics system generated separate (stereo) wide-angle perspective
images on each of the two display monitors.
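Stereo pairs like these amount to rendering the same scene from two viewpoints offset horizontally by the distance between the eyes. The sketch below assumes a typical 6.5 cm interocular distance and a simple pinhole projection; the actual Evans and Sutherland rendering parameters are not given in this account.

```python
# Hypothetical stereo-pair projection: one pinhole camera per eye,
# offset horizontally by a typical human interocular distance.
EYE_SEPARATION = 0.065  # meters (an assumed value)

def project_stereo(point):
    """Project a 3-D point (x, y, z meters in front of the viewer)
    into left-eye and right-eye image coordinates."""
    x, y, z = point
    half = EYE_SEPARATION / 2
    left_eye = ((x + half) / z, y / z)   # left camera: point appears shifted right
    right_eye = ((x - half) / z, y / z)
    return left_eye, right_eye

near_left, near_right = project_stereo((0.0, 0.0, 1.0))
disparity = near_left[0] - near_right[0]  # shrinks as objects recede
```

The left-right disparity is what the brain fuses into a sense of depth; doubling an object's distance halves its disparity.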
Fisher attended the Massachusetts Institute of Technology, where he
held a research fellowship at the Center for Advanced Visual Studies from
1974 to 1976 and was a member of the Architecture Machine Group from 1978
to 1982. There he participated in development of the `Aspen Movie Map'
surrogate travel videodisc project and several stereoscopic display systems
for teleconferencing and telepresence applications. He received the Master
of Science degree in Media Technology from MIT in 1981. His research interests
focus primarily in stereoscopic imaging technologies, interactive display
environments and the development of media technology for representing
`first-person' sensory experience.
From 1985 to 1990, Mr. Fisher was Founder and Director of the Virtual Environment
Workstation Project (VIEW) at NASA's Ames Research Center in which the
objective was to develop a multisensory virtual environment workstation
for use in Space Station teleoperation, telepresence and automation activities.
The VIEW Project pioneered the development of many key VR technologies
including head-coupled displays, datagloves, and 3-D audio technology.
In 1990, he co-founded Telepresence Research to continue research on first-person
media, and to develop Virtual Environment and Remote Presence systems
and applications. Prior to joining the Ames Research Center, Mr. Fisher served
as Research Scientist with Atari Corporation's Sunnyvale Research Laboratory
and has provided consulting services for several other corporations in
the areas of spatial imaging and interactive display technology. His work
has been recognized internationally in numerous invited presentations,
professional publications and by the popular media.
Attached to the HMD was a small microphone that allowed you to give simple voice
commands to the computer. Fisher had simply purchased a commercially available
voice-recognition package and connected it to the system. Voice input
was important because, once you put the HMD on, you could no longer use
the keyboard or find any buttons to control your environment.
By the end of 1986, the NASA team had assembled a virtual environment
that allowed users to issue voice commands, hear synthesized speech and
3-D sound sources, and manipulate virtual objects directly by grasping
them with their hand. Cybernauts
venturing into NASA's virtual worlds had to outfit themselves with a collection
of gear that a scuba diver might recognize, particularly because the original
design used a scuba-mask frame to mount the LCD displays. Instead of a
glass window into the undersea world, the displays were glass windows
into the virtual world.
The cybernaut's lifeline was a series of cables that led from the headgear
and DataGlove to an array of computers and control boxes. Just as early
divers used compressor pumps and air hoses, virtual explorers were similarly
connected to their reality-generating machines. In their exploration of
these new virtual environments, cybernauts were like divers descending
alone into the undersea realm. Holding up your gloved hand in front of you,
you would see a simple, blocky wire-frame hand, and the disembodied hand
would mimic your motion. Using fiber-optic sensors to measure the flex of
each finger joint, plus an additional position and orientation sensor, the
computer knew exactly where your hand was and what movements your fingers made.
Scott Fisher also
worked on another virtual reality system known as the "Boom,"
in which the display is mounted on a stand, much like a microphone boom
stand. This system frees the viewer from being encumbered by the head-mounted
display, allowing greater movement and flexibility. Fake Space Labs' BOOM
system offered high-resolution mobile VR. In such systems, a 3-D display
device is mounted on a counterweighted arm (the boom) that can be manipulated
in space by the VR participant. As an alternative to head-mounted displays
or EyePhones, boom-mounted displays can carry heavier, higher-resolution
CRT monitors. With position sensors attached to the shafts of the boom
itself, latency or frame-lag (the disparity between the participant's
movement and the corresponding movement of the virtual world) can be kept
to a minimum. Boom-mounted VR displays provide a useful halfway house
between desktop VR and full sensory immersion.
Jaron Lanier and VPL Research
From there, it was only a matter of time before
VR programs began appearing in settings ranging from virtual reality theme
parks to operating rooms, largely aided by products developed by Jaron
Lanier, whose programming language operated the first data glove at
the NASA research center. Lanier and his company, VPL Research, Inc.,
were at the forefront of the VR industry, designing the DataGlove used
in many virtual reality applications. VPL also developed VR software for
clients ranging from automobile manufacturers to entertainment companies.
Lanier is probably best known for his work in Virtual Reality. He coined
the phrase 'Virtual Reality' in 1989, and helped found the VR industry.
A data input device developed by Jaron Lanier
and Thomas Zimmerman of VPL Research, the DataGlove was designed to
translate the movements of the hand and fingers into a code readable
by a computer. The DataGlove has absolute position sensors attached to
it, and is lined with fiber-optic cables that run along the glove's fingers.
These transmit light from an electronic light source at one end of the
cable, which is read by an electronic photosensor at the other end. When
the hand is flexed, light is released from precisely calibrated incisions
at each knuckle; the amount of light released corresponds to the degree
that the finger is crooked. The DataGlove opened up the possibility of
a whole new range of gestural interfaces for interactive multimedia and
virtual environments.
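The sensing principle can be modeled in a few lines: the more a knuckle bends, the more light escapes through the incisions, so the photosensor reading drops. The linear calibration and the sensor values below are invented for illustration; VPL's real per-wearer calibration would differ.

```python
# Assumed photosensor readings for a straight and a fully bent finger.
STRAIGHT_READING = 1.0
FIST_READING = 0.4

def flex_angle_degrees(reading):
    """Map a photosensor reading onto a 0-90 degree joint angle,
    assuming light loss grows linearly with bend (an illustrative model)."""
    lost = STRAIGHT_READING - reading
    full_range = STRAIGHT_READING - FIST_READING
    fraction = max(0.0, min(1.0, lost / full_range))
    return 90.0 * fraction
```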
Seeing the representation of your hand suddenly
changes your perspective. You now have a perceptual anchor in the virtual
world. You're actually inside the computer because you can see your hand
in there. To move about the tinker-toy world, you simply point with one
gloved finger in the appropriate direction and the angle of your thumb
controls the speed of your flight. The computer had been taught to recognize
that gesture as the desire for movement. Other gestures were possible;
for example, closing your fist caused you to grab any object that your
hand intersected. As long as you kept your hand closed, the object stayed
stuck to it; this allowed you to move objects around. Opening your hand
released the object.
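The pointing-to-fly and fist-to-grab mapping described above reduces to thresholding per-finger flex values. The thresholds and helper functions below are invented for illustration; they mirror the behavior described, not NASA's actual code.

```python
def classify_gesture(flex):
    """Classify a hand pose from five flex values (thumb first),
    where 0.0 means straight and 1.0 means fully bent."""
    fingers = flex[1:]
    if all(f > 0.8 for f in flex):
        return "grab"   # closed fist: grab any intersected object
    if fingers[0] < 0.2 and all(f > 0.8 for f in fingers[1:]):
        return "fly"    # index finger pointing: move in that direction
    return "idle"

def flight_speed(thumb_flex, max_speed=5.0):
    """A straighter thumb means faster flight (illustrative mapping)."""
    return max_speed * (1.0 - thumb_flex)
```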
Assignments for this week:
Complete Fisher and Novak readings for next week (Krueger is delayed)
Complete character development and begin character home page
(due March 9th)