Trevor Harris (West Virginia University) '''Putting things in their place: augmented reality, real-time spatially embedded media, and Enhanced Location Based Services'''
Trevor spoke about his use of:
· virtual reality
· augmented reality
Virtual reality: Moving the emphasis away from space, which is where GIS has traditionally been situated, to think about elements of place. Linking phenomenology to a hard-core, positivist technology.
Looking at linking the visible and invisible narratives of place – connecting human emotion and the physical landscape. Questioning how we tell a spatial story, moving towards a sensuous and reflective GIS that draws on elements other than the visual – “qualitative” GIS.
Emphasis upon geo-visualisation: Trevor’s earlier work involved creating dynamic virtual environments. On this project, Trevor is moving more towards embodiment, immersion and experience – trying to position ourselves within the imagery of the information being presented.
Work in the CAVE environment – using stereo glasses to create an interactive environment. Realms of the historical: using maps to draw up and create 3D virtual worlds as an immersive environment – the user is immersed in the map, rather than separated from it.
Using GIS to portray historical information relevant to certain spaces.
Have used the work of novelists, trying to capture their sense of place within the virtual world (e.g. adding sound, smell). Capturing emotion within a GIS – a beating heart that gets faster as certain areas are approached. Using the senses to create interpretations of the landscape, culture and environment other than just physical representation.
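The beating-heart idea above is essentially a mapping from proximity to pulse rate. A minimal sketch of that mapping, assuming flat 2D coordinates in metres and invented parameter values (the talk did not specify an implementation):

```python
import math

def heart_rate(user, hotspot, base_bpm=60.0, max_bpm=120.0, radius=100.0):
    """Heart-rate sonification: beats per minute rise as the user
    approaches an emotionally charged area. Coordinates are (x, y)
    in metres; base_bpm/max_bpm/radius are illustrative defaults."""
    distance = math.hypot(user[0] - hotspot[0], user[1] - hotspot[1])
    # 1.0 at the hotspot, falling linearly to 0.0 at the radius edge
    closeness = max(0.0, 1.0 - distance / radius)
    return base_bpm + (max_bpm - base_bpm) * closeness

print(heart_rate((0, 0), (0, 0)))    # 120.0 – standing on the hotspot
print(heart_rate((0, 300), (0, 0)))  # 60.0 – well outside the radius
```

A linear ramp is the simplest choice; an exponential curve would make the quickening more dramatic close in.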
Can also link this to real-time sensor feeds (e.g. displays tied to weather conditions in real time) – populating the virtual world with a sense of reality, pulled through from live data.
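One way to read the idea of live feeds populating the virtual world is as a mapping from current observations to rendering parameters. This is a hypothetical sketch only – the `observation` fields and scene parameters are invented for illustration, not taken from the project:

```python
def scene_from_weather(observation):
    """Map a live weather observation (hypothetical fields) onto
    rendering parameters for the virtual environment."""
    condition = observation.get("condition", "clear")
    skyboxes = {"clear": "blue_sky", "rain": "overcast_rain", "fog": "grey_fog"}
    return {
        "skybox": skyboxes.get(condition, "blue_sky"),
        # Dim the ambient light as cloud cover increases, with a floor of 0.2
        "ambient_light": max(0.2, 1.0 - observation.get("cloud_cover", 0.0)),
        "precipitation": condition == "rain",
    }

# e.g. a rainy, heavily clouded observation pulled from a live feed
params = scene_from_weather({"condition": "rain", "cloud_cover": 0.9})
print(params["skybox"])  # overcast_rain
```

In a real system the observation dict would come from a polled weather service and the parameters would drive the renderer each frame.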
Superimposing elements onto the real world, via the use of augmented reality – trying to incorporate historical elements within a real world, rather than virtual world, environment.
Concept of a virtual geo-point that can be seen through the glasses: its geo-reference is sent back to the server, and images, photographs etc. can be tagged onto this geo-point.
Issue – how to use vision, and perhaps a finger-mouse, to interact with the objects on this geo-point?
User-generated content can also be sent back to the server.
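The geo-point workflow above (a geo-referenced anchor, a server it reports to, and media – including user-generated content – tagged onto it) can be sketched as a simple data structure. All names and coordinates here are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class GeoPoint:
    """A geo-referenced anchor that AR glasses can resolve in the field of view."""
    point_id: str
    lat: float
    lon: float
    media: list = field(default_factory=list)  # images, photographs, audio, etc.

class GeoPointServer:
    """Minimal server-side registry: clients send back a geo-reference and
    attach media (including user-generated content) to the matching point."""
    def __init__(self):
        self.points = {}

    def register(self, point):
        self.points[point.point_id] = point

    def tag_media(self, point_id, item):
        # Content sent back from the glasses is stored on the geo-point
        self.points[point_id].media.append(item)

# Usage: register a point and tag a historical photograph onto it
server = GeoPointServer()
server.register(GeoPoint("morgantown-01", 39.6295, -79.9559))
server.tag_media("morgantown-01", "photo:old_mill_1890.jpg")
print(server.points["morgantown-01"].media)  # ['photo:old_mill_1890.jpg']
```

The open interaction question (vision plus a finger-mouse) would sit on the client side, selecting which tagged item on the geo-point to open.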
Additional value of the visual wearable – a camera can be mounted on it. Pattern recognition and facial recognition can then be linked to the database – this could be used by border patrol officers etc.