Where next for virtual worlds? A look at some current technology developments that will impact the use of virtual worlds in higher education – or present challenges as we try to integrate a wider range of technologies with current web and 3D learning environments.
Daniel Livingstone's presentation from the Eduserv workshop "Where next for virtual worlds"
(See notes for text to accompany the presentation)
Invariants: "These notes I copied 10 years ago will do"; "Movable type will never replace illuminated manuscript"
Editor's Notes
No, this is not the SLOODLE final report – but that is now available from www.sloodle.org!
By way of brief background, I've been working for the past few years on a project to integrate learning activities and data across web-based and 3D learning environments. It is in light of this work that I will try to consider future developments – so in what follows I'm largely thinking about how virtual worlds might merge with other systems and other technologies across varying time scales.
A wide range of tools have been created for this, a few of which are shown here. We can archive virtual world chat in the web-based VLE, and run quizzes in a virtual world using web-based authoring and gradebooks. We can link a student's avatar to their user account on the institutional system, and we can pass various bits of information back and forth: using the VLE to provide alternative access to 3D activities, or as a development environment for activities that will take place in the 3D virtual world. But there are some current trends in education technology leading away from VLEs altogether...
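The avatar-to-account linking mentioned above can be sketched very simply: the in-world object reports an avatar identifier, and the web side maps it to a VLE user account. This is a minimal illustrative sketch, not SLOODLE's actual implementation; all names here are hypothetical.

```javascript
// Minimal sketch of avatar-to-VLE-account linking.
// Hypothetical names throughout; not SLOODLE's real API.
const registrations = new Map(); // avatar UUID -> VLE username

// Called once a student has authenticated on the web side and
// confirmed which avatar is theirs.
function linkAvatar(avatarUuid, vleUser) {
  registrations.set(avatarUuid, vleUser);
}

// Called by the in-world object (e.g. a quiz tool) to resolve an
// avatar back to the student's institutional account.
function lookupUser(avatarUuid) {
  return registrations.get(avatarUuid) || null; // null = not yet enrolled
}
```

In a real deployment the map would of course live in the VLE's database, and the in-world object would query it over HTTP rather than in-process.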
The notion of the Personal Learning Environment has been gaining a lot of traction in recent years – and while there is some disagreement about what a PLE is precisely, a basic definition is that someone's PLE encompasses all the technologies they use in their learning. This example diagram is quite a good one – note the very conspicuous presence of a lot of Web 2.0 technologies, which suggests this is probably the PLE of a technologist or educational researcher. There is also a range of non-digital items in the environment – notes and books – as well as alternative digital devices – a mobile phone is there. And in a small corner at the top is all the stuff that comes from the university – the VLE would be somewhere in that box. But this doesn't mark the end for the VLE yet – I attended the 'The VLE is dead' panel at last year's ALT-C, and I felt that what came through was a challenge for VLEs to adapt to this cloud-based Web 2.0 world, where student work and student learning can happen anywhere out there on the web.
And VLEs are responding – Moodle 2.0, for example, is introducing a new repository API. This is an explicit recognition that a great deal of useful material – including student work – lives on the open web, outside the VLE, and it helps reposition the VLE as a useful component of a student's PLE: one that helps the student (and staff) collate and organise resources and work from a wide variety of places online. This is one way in which VLEs are responding now to changes in technology over the past few years. But there are more changes coming. How will VLEs respond to the changes in technology that are happening now?
Let's consider some of the changes currently in the pipeline for HTML5 – the common language of the web. The specifications are still being written, but even so it is possible to download browsers which implement parts of them today – such as the current 3.6 release of Firefox.
Some of the most widely noted features of HTML5 relate to media and graphics: the new video element will enable native playback of video content without plugins, and the canvas element, with the WebGL API, brings 3D hardware-accelerated graphics to the browser – without a need for Flash or other plugins. Alternative approaches to speeding up 3D in the browser include Google's O3D, which is more powerful than bare WebGL but is in effect another plug-in. Unity is another lightweight cross-platform plug-in technology that is proving popular for bringing accelerated 3D graphics to browsers, with several virtual world projects using Unity – including at least one attempt to build a browser-based OpenSim client (from ReactionGrid).
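To give a feel for what "bare WebGL" means: WebGL ships with no built-in camera or fixed-function pipeline, so even a minimal 3D client must build its own projection matrix and hand it to a shader. A sketch of the standard perspective projection in JavaScript, with the actual (browser-only) WebGL calls shown in comments:

```javascript
// Standard perspective projection matrix, column-major as WebGL expects.
// This is the kind of plumbing a browser-based 3D world client needs
// to supply itself when using bare WebGL.
function perspective(fovYRadians, aspect, near, far) {
  const f = 1 / Math.tan(fovYRadians / 2);
  const nf = 1 / (near - far);
  return [
    f / aspect, 0, 0,                      0,
    0,          f, 0,                      0,
    0,          0, (far + near) * nf,     -1,
    0,          0, 2 * far * near * nf,    0,
  ];
}

// In a WebGL-capable browser this would be uploaded along the lines of:
//   const gl = canvas.getContext("webgl");
//   gl.uniformMatrix4fv(projLoc, false,
//       new Float32Array(perspective(Math.PI / 4, 16 / 9, 0.1, 100)));
// (projLoc is a hypothetical uniform location in the vertex shader.)
```

Libraries built on O3D or Unity hide exactly this sort of boilerplate, which is part of their appeal.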
The HTML5 WebSockets API has received less attention. Currently, browser apps use a mixture of methods to give the appearance of a persistent web connection – AJAX apps rely on a bag of tricks for this, tricks that may be less than suitable for creating shared 3D virtual worlds running natively in the browser. WebSockets bring the capability to create persistent network connections to JavaScript running in a web browser, with significant potential for improving client-server connections. Together, these incremental improvements to HTML will mean that users could connect to virtual worlds – even Second Life itself – without having to install any software at all, not even a browser plugin. One problem facing some educators – that of the locked-down PC environment – might become a distant memory. And we might finally be able to access virtual worlds from any machine, anywhere. Assuming the firewall doesn't block the socket connection, of course... :-) But away from the browser, what else is happening...
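How might a browser-based virtual world client use such a connection? A minimal sketch: encode avatar updates as JSON messages and push them over a WebSocket. The message format and endpoint here are invented for illustration; they are not any real virtual world protocol.

```javascript
// Hypothetical message helpers for a browser-based virtual world client.
// The {type, avatar, x, y, z} shape is an assumption for illustration.
function encodeMove(avatar, x, y, z) {
  return JSON.stringify({ type: "move", avatar: avatar, x: x, y: y, z: z });
}

function decodeMessage(raw) {
  return JSON.parse(raw);
}

// In a browser supporting the WebSockets API, the wiring would look like:
//   const ws = new WebSocket("ws://example.org/world"); // hypothetical endpoint
//   ws.onopen    = () => ws.send(encodeMove("student01", 128, 128, 22));
//   ws.onmessage = (ev) => {
//     const msg = decodeMessage(ev.data);
//     // ...update the 3D scene from msg...
//   };
```

Contrast this with AJAX long-polling, where the client must repeatedly re-issue HTTP requests to simulate the server being able to push updates.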
Well, there are a number of mobile-phone-based virtual world projects. VWs built specifically for mobile phones include Bobba (from the makers of Habbo Hotel) and TibiaME, plus a few projects which provide limited access to SL via phone apps. Bobba (as seen here) has clients for a range of phones – iPhone, iPod Touch and a large number of phones using the Symbian Series 60 OS. So virtual worlds might soon fit (natively) in the browser, but already they can fit in the pocket. Though I have to say, this is not necessarily a comfortable fit – a social virtual world which relies on text chat is not so easy when you have a standard phone keypad, and the small screen gives (in my experience) a much less immersive experience. I found Bobba less immersive than a text-based virtual world – it's hard to get drawn in when you have to concentrate hard on data entry. TibiaME is a phone-based hack-and-slash MMORPG – running since 2006, it's more successful than Bobba, but still feels very casual compared to desktop and browser-based VWs. But what if we can break away from the tiny mobile screen?
Some of you will have seen the amazing TED Talks video with Pattie Maes and Pranav Mistry where they demonstrate the SixthSense project – using mobile phones, cameras and projectors to explore how mobile computing might develop in coming years. I think it interesting to imagine how we might travel inside a virtual world as we travel through physical space, projecting that virtual world over the physical one. It's amazing what SixthSense has achieved already, and wonderful to think where it might lead... but how far out is this? Light Touch was demonstrated earlier this month at CES; it is a product for system builders to develop consumer devices – e.g. portable computers that project their screen onto any surface, where that projected screen is itself the input device for interacting with the computer.
And the time finally seems right for augmented reality, which after many years of obscurity in the computer lab has become the hyped technology of the moment – in part because technology has finally reached a point where AR apps can run on easily accessible hardware, and middleware is there to make those applications easier to develop. So as you stand in Abbey Road looking towards the zebra crossing, your phone might show you where the Beatles once walked, with virtual images overlaid on top of the view from the phone's camera. The Layar app makes it easy to develop your own mobile AR application. (And the much-rumoured Mac tablet may turn out to be a killer device for this kind of app.)
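At the core of a location-based AR layer like the Abbey Road example is a simple computation: given the phone's position, select the points of interest near enough to display over the camera view. A generic sketch of that logic – this is not Layar's actual API, and the POI data is made up:

```javascript
// Generic nearby-POI filter for a location-based AR layer.
// Illustrative only; not Layar's real interface.
const EARTH_RADIUS_M = 6371000;

// Great-circle distance between two lat/lon points (haversine formula).
function haversineMetres(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Keep only the points of interest within radiusMetres of the viewer,
// ready to be drawn over the camera image.
function nearbyPois(pois, lat, lon, radiusMetres) {
  return pois.filter((p) => haversineMetres(lat, lon, p.lat, p.lon) <= radiusMetres);
}
```

A real AR middleware layer adds the hard parts on top of this – compass and camera registration, and rendering the overlays in the right screen position.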
This actually put me in mind of a different application – a different form of data layering. We can overlay digital data on top of the already digital. So we can take a web page...
And with Diigo, we can annotate it and overlay additional content. This ability to create 'hidden' data overlying web pages has been used in the creation of PMOGs – passively multiplayer online games, such as The Nethernet, which turn the web into a multiplayer game. With character classes and powers that include the ability to build paths for other players to follow, this starts to hint at how a virtual reality can be layered on what is already a virtual substrate. Building your own missions for other players is a core part of The Nethernet, and hints at the educational uses of PMOGs, a distinct class of virtual world. As ever, this can be taken further...
as this image from Opensource Obscure, an Italian SLer, demonstrates – we can layer virtual realities inside virtual worlds. This is perhaps taking things too far... But as many museum projects and a small number of education projects have shown, there is a lot of potential in AR as an educational technology – though how best to blend this with VWs is perhaps a little more open.
As a different example, this is a physical virtual patient – a rubber patient in an authentic pretend hospital ward. The equipment is real, the procedures are real, the patients are fake and the professionals are students. Clinical simulation is widely used in medical education, and virtual worlds increasingly so; the two are generally not well integrated with each other, nor with VLEs, but they could be – and could benefit from more integration. What could AR bring here? More realism, more action, more depth to the simulation. What would the world be like if persistent virtual worlds played out not only when we sat at a desktop to immerse ourselves, but whenever we browsed the web or (thanks to AR) walked down the street viewing our surroundings via some mobile device?
Perhaps a bit like the world envisaged in the science-fiction thriller 'Halting State' by Charles Stross: a world where the information and navigation feeds provided by AR have made augmented viewing of the world the norm, with layers of digital information and entertainment available – allowing people to select a reality first thing in the morning.
Now this again is stretching things, isn't it? Getting quite far-fetched. But let's consider an analogy for a moment. Until about 150 years ago, if you wanted to listen to music you had one option – you had to be somewhere where music was being played.
Then, about 150 years ago, the first device able to record sound to a physical medium was invented – the phonautograph. Twenty years later, a device was invented which was able to play back sounds that had been recorded. In the past 40 years, audio technology became personal thanks to transistors, with radios and tape players small enough to carry. Today music players are so small they sometimes seem to disappear altogether. Music can be an ever-present layer on top of physical reality, one we can carry around with us wherever we go. AR offers the possibility of making access to layers of information and entertainment as ever-present as music is today.
More modestly – Tapped In reimagines the VLE as a (text-based) virtual world. How would the VLE look as a fully integrated 3D + text (+ AR?) virtual world?
But there are some things that take longer to change. This is a picture of a lecture from around 1350 – almost 700 years ago. We can see the students at the back chatting and sleeping, some at the front pretending to pay attention, and the lecturer droning on at the front. What is the lecturer thinking? This is what I think he's thinking. As much as we complain about IT support or institutional leadership blocking and slowing down the adoption of virtual worlds, the truth is it's the faculty who are responsible. If enough lecturers and academics really wanted to use virtual worlds (or other technology), then it would happen.