Thursday, September 27, 2007
We have been looking at what is possible with Virtual Worlds but for the next few minutes let us step out of the domain of what is possible now and explore what could become possible in the next couple of years. From engineering, let us step into the domain of imagineering -- which is what this blog is all about !
Why are we constrained to 2D displays ? We are inherently 3D animals ... and the world that we are simulating virtually is supposed to be 3D. So why should we stick to traditional computer displays that render 2D images ? There are display technologies available that create 3D, hologram-style images ... that one can 'almost' walk around and see ( though not quite touch as yet ). Go to Google and search for 3D displays and you will see stuff like what you see above and below.
Just imagine what your SecondLife, or whatever 3D world you prefer to live in, will look like when you view it on a display screen like this. And interestingly enough, the cost of these display devices is not astronomical. They are obviously more expensive than standard flat screen monitors ... but certainly very affordable.
Yet the technology is not quite rocket science. We have had 3D movies using polarised light for decades. Please see the diagram below. Modern technology has used the same principles of physics and made the devices both affordable and convenient to use.
But our next question is do you need a display at all ?
Recent advances in medical science, exploring ways to make the blind see again, have created what are known as Bionic eyes. If you look at the structure and mechanics of the human eye, you would notice that optical signals ( or, more precisely, electromagnetic energy ) are sensed on a photosensitive surface -- that is the human retina -- and converted to electrochemical energy at one end of the optic nerve. The other end of the optic nerve is connected to the human brain which can sense the electrochemical signals that are transmitted down the nerve.
Then the cognitive process of the human brain interprets these electrochemical signals and causes the person to perceive a vision of what lies in front of the retina.
Can this not be replicated using a known technology ?
Of course it can be done. All that you need is a camera that captures the image, converts it into a series of electrical impulses and then sends them down the optic nerve. Simple ?
Not quite ?
There is a huge amount of image and signal processing involved ! Each kind of shape, colour, texture creates a different pattern of signal -- but what causes what ? This is not quite known as yet .. and so when we do it for the first time, the brain cannot make sense of the signals that it is being fed. But this is a matter of time. Currently we have systems that allow the brain to recognise the presence and absence of light and vague fuzzy shapes. But even this is of great benefit to those who are completely blind. I am sure it is a matter of time before the image processing software becomes sophisticated enough so that the signals are parsed and formed in a manner that the brain can make sense of and hence recognise a range of shapes, sizes and colours.
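To get a feel for why even crude shape recognition is hard-won, here is a toy sketch in Python -- emphatically not any real implant's algorithm -- of the kind of processing involved : squeezing a detailed image down to the very coarse grid of on / off stimulation points that today's implants can deliver.

```python
# A toy sketch (not any real implant's algorithm) of the kind of image
# processing a bionic eye must do: reduce a detailed image to a very
# coarse grid of on/off stimulation points.

def to_electrode_grid(image, grid_w=8, grid_h=5, threshold=128):
    """Downsample a grayscale image (list of rows of 0-255 values)
    to a grid_w x grid_h pattern of on/off electrode stimulations."""
    h, w = len(image), len(image[0])
    grid = []
    for gy in range(grid_h):
        row = []
        for gx in range(grid_w):
            # average the brightness of the patch of pixels
            # that maps onto this electrode
            y0, y1 = gy * h // grid_h, (gy + 1) * h // grid_h
            x0, x1 = gx * w // grid_w, (gx + 1) * w // grid_w
            patch = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            avg = sum(patch) / len(patch)
            row.append(1 if avg >= threshold else 0)
        grid.append(row)
    return grid

# a synthetic 20x32 image: a bright vertical bar on a dark background
img = [[255 if 12 <= x < 20 else 0 for x in range(32)] for y in range(20)]
for row in to_electrode_grid(img):
    print(''.join('#' if v else '.' for v in row))
```

Even this crude 8 x 5 pattern is enough to convey "there is a vertical bar in front of you" -- which is roughly the level of vision that current systems restore.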
The problem is difficult but not intractable. No known laws of physics are being violated nor are astronomical amounts of energy required. It will happen ... and it will happen soon.
Which leads us to the first level of convergence ... that is between 3D display technology and bionic eyes.
When a 3D monitor displays an image, what is it that it actually does ? The computer program generates a pattern of signals that is converted to a pattern of light ( electromagnetic radiation ) that travels across the distance between the screen and the user. This light is then converted back to electrochemical signals in the optic nerve either (a) through the human retina or (b) through the camera of the bionic eye. So there are two conversions : electrical signals => optical signals => electrochemical signals. Question : do we need the intermediate optical signal at all ? What value is it adding to the process ? Can we do away with it totally ? See the figure below ..
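The chain of conversions can be caricatured in a few lines of Python -- all the names below are made up for illustration, and nothing here models real optics or neurons. The point is simply that the optical hop, in an idealised world, is a lossless round trip .. which is exactly why it is a candidate for elimination.

```python
# A caricature of the signal chain. The optical hop carries the same
# information in, the same information out -- so bypassing it loses nothing.

def render(scene):
    # computer: scene -> electrical signal (illustrative encoding)
    return [sum(ord(c) for c in obj) % 256 for obj in scene]

def to_light(electrical):
    # display: electrical -> optical (idealised: no information change)
    return list(electrical)

def to_nerve(optical):
    # retina / bionic-eye camera: optical -> electrochemical
    return list(optical)

scene = ["tree", "avatar", "bridge"]
with_display = to_nerve(to_light(render(scene)))   # the long way
direct       = to_nerve(render(scene))             # bypassing the display
print(with_display == direct)                       # the two paths agree
```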
What we suggest is that the 3D display can be done away with ! But not the technology that 'renders' the scene in 3D. That is still required to create the set of electrical patterns that represents the virtual world in all its exquisite detail ... it is just not converted to ( or 'shown' as ) a light signal. Instead the electrical signals are fed into the processing unit of the bionic eye, which is led to believe that the signal has come from the camera of the bionic eye !
So it processes these signals ( and this is no easy processing, mind you ... this is heavy duty stuff ) and passes it on to the optic nerve .. which in turn is led to believe that the signals have originated from the living retina !! This is layers and layers of deception ... but all for a good and noble cause ...
and what is that cause ? Total Immersion ..
Total immersion means that the human brain has lost the ability to distinguish between electrical signals that originate from a computer and optical signals that originate from the environment. Like the Turing test where you claim that Artificial Intelligence has been attained when you cannot distinguish between the responses from a human and those from a machine .. this Total Immersion is when you cannot distinguish between stimuli from machines or from the real environment.
The line between the real and the virtual is becoming increasingly blurred !!
But why should signals move in only one direction ? Why not the reverse ? Why can signals originating from the brain not be used to control the environment ? This is thought control we are talking about !!! Remember the novel / movie Firefox ... not the browser, but the thought-controlled fighter aircraft that was developed by the USSR and stolen by the US ? That was science fiction in 1982 .. but it can become fact in 2012 ..
Consider the following ..
This is again a piece of technology from the domain of medicine ... designed to allow paralysed people or quadriplegics to move .. by allowing them to control their wheelchairs with their thoughts. First thought-controlled wheelchairs, then we will have thought-controlled fighter aircraft !
Again the principles are astonishingly simple though the implementation could be fiendishly difficult. When you want to move an arm or a finger, a signal is generated in the brain that travels down a specific nerve as an electrochemical impulse and causes a movement of the limb.
All that we are trying to do is to sense the same signal and cause an electro-mechanical device to do the same thing as a limb would do ... for example move a joystick ! And if you can do that, you have a thought-controlled device.
But again there are implementation issues. The signal has to be picked up from a probe inserted into the brain -- which can be uncomfortable -- and then heavy duty signal processing software has to be used to distinguish irrelevant signals ( or noise ) from the actual signal. If this does not happen .. then the intention to move a finger can be misinterpreted as an intention to move a leg ... or perhaps not understood at all.
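Here is a toy illustration in Python of why that signal processing matters. No real brain-computer interface works exactly like this, but the idea of smoothing a noisy stream and looking for a sustained burst of activity is the essence of it.

```python
import random

# A toy "intent detector" (not a real BCI algorithm): smooth a noisy
# stream of readings with a moving average and flag the positions where
# the smoothed signal crosses a threshold -- a sustained burst, not a
# single noisy spike, is what counts as a command.

def detect_intent(samples, window=5, threshold=0.6):
    """Return start indices of windows whose average exceeds threshold."""
    hits = []
    for i in range(len(samples) - window + 1):
        avg = sum(samples[i:i + window]) / window
        if avg > threshold:
            hits.append(i)
    return hits

random.seed(42)
# background neural "noise": weak random activity
noise = [random.uniform(0.0, 0.4) for _ in range(30)]
# inject a sustained burst of activity -- the "intent" to move
for i in range(12, 18):
    noise[i] = 0.9

print(detect_intent(noise))
```

A single stray spike never fills a whole window, so it is ignored ; only the sustained burst around sample 12 is reported. Real systems face the same trade-off, just with vastly messier signals.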
Obviously more research is needed but again the principles that we are dealing with do not violate any known laws of physics though they could be computationally intensive. So it is a matter of time indeed before we have ...
As technology moves forward, the intrusive, painful brain probes can be replaced with simpler and more comfortable cap-based sensors of the kind shown ( and demonstrated ) above.
So now we have four pieces of technology ... namely
- Virtual Worlds like SecondLife
- 3D Display technology -- both hardware and software -- that can create a near perfect illusion of solid objects
- Bionic Eyes that allow the display to be replaced with technology that allows total immersion of the user inside the Virtual World
- Thought sensors that can "read" thoughts and make things happen in the Virtual World
So what do you get when you assemble all these technologies ? Why, The Matrix of course !
Friday, June 29, 2007
Movies created without a camera or human actors are nothing new -- from animated cartoons by Walt Disney to the dinosaurs in Spielberg’s Jurassic Park.
All this however pales into insignificance when we consider the immense potential of virtual worlds technology – as implemented in environments like Second Life and Active Worlds. Movie making as we know it today is set to change beyond recognition as producers and cinematographers realize the disruptive impact that this is going to have in the future.
Virtual worlds have their origin in interactive computer games of the category commonly referred to as Massively Multiplayer Online Role Playing Games. Technology that first appeared in games like Everquest and World of Warcraft is now being used to create virtual worlds, or multiuser online collaborative platforms, like Second Life. Individuals and corporates alike are participating in these platforms, which form the basis of the 3D Internet.
A virtual world exists as a 3D simulation of a familiar physical world -- or, more often than not, a fantasy world -- to which individuals connect, in the way that one connects to a mundane chat server, and then ‘emerge’ inside it as avatars : 3D representations of their personas that can interact with the environment or with other avatars representing other individuals who have connected to this world at the same time. Controlled by the human being at the keyboard, the avatar can perform a range of activities that includes but is not limited to walking, flying, making gestures, talking to other avatars, picking up and manipulating ‘solid’ objects … the list can go on. And capturing all this frantic activity is possible not with a traditional optical camera but with a low cost screen capture program that can store it all for posterity in any of the digital movie formats like mpeg, avi or wmv. That in a nutshell is machinima, which stands for both (a) the process of creating movies in virtual worlds and (b) the actual movie itself. Machinima as a concept is not very new, but the process of creating realistic movies with significant dramatic content throws up some challenges. Let us see how these will be overcome in the very near future.
First : the characters seem rather wooden today. While physical appearances are infinitely customizable – height, body bulk, shape of head, colour of hair, even ‘skins’ that can create near look-alikes of any real person – and a wide variety of dresses is available for purchase, the behaviour is still limited. Avatars move stiffly and have a small repertoire of gestures – which may be fine for dedicated gamers but would be a put-off for a movie viewer who is more interested in the dramatic content and less in the esoteric technology behind it.
However the evolution of artifacts called ‘animations’, and of small bundles of these animations arranged in a sequence called ‘gestures’, can create a fairly smooth sequence of movements like [smile] + [wave] + [say ‘hello’] + [handshake]. Using an inventory of animations one can potentially create a virtually infinite collection of gestures, most of which can be unique to an individual avatar, and with some deftness on the keyboard these can be played out in a manner that would be very, very realistic.
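The idea can be sketched in a few lines of Python -- the animation names, durations and the 'play' interface below are invented for illustration and are not Second Life's actual API.

```python
# A sketch of the idea above: a 'gesture' is just a named sequence of
# 'animations' played in order. Names, durations and the play() call
# are all illustrative, not Second Life's real interface.

ANIMATIONS = {
    "smile":     1.0,   # name -> duration in seconds (made up)
    "wave":      2.0,
    "say_hello": 1.5,
    "handshake": 3.0,
}

def make_gesture(name, animation_names):
    """Bundle animations into a gesture, validating each name."""
    for a in animation_names:
        if a not in ANIMATIONS:
            raise ValueError(f"unknown animation: {a}")
    return {"name": name, "sequence": list(animation_names)}

def play(gesture):
    """Pretend to play a gesture; return its total duration."""
    total = 0.0
    for a in gesture["sequence"]:
        print(f"playing {a} ({ANIMATIONS[a]}s)")
        total += ANIMATIONS[a]
    return total

greeting = make_gesture("greeting",
                        ["smile", "wave", "say_hello", "handshake"])
print(f"total: {play(greeting)}s")
```

With even a modest inventory of animations, the number of possible ordered sequences explodes combinatorially -- which is why every avatar's gestures can be unique.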
Animations can be purchased, but learning to assemble them into personalized gestures is the digital equivalent of going to an acting school.
Next is the creation of sets. In real life, sets are built of wood, paper, bricks and what not. Or one goes to a film studio where these are already built and the real life actors go through their motions inside these sets. In a virtual world, building sets – houses, rivers, trees, cars, bridges and so on – is simple 3D modeling with some embedded scripted programs that do things like cause doors to open, rivers to ripple and flow, and bridges to collapse and fall. In real life, good sets need effort and money. So it is in virtual worlds – except that the cost is far, far less ! Something as big as a huge hotel can be built, down to the last detail, by 4 people in just about 2 – 3 weeks. Building sets needs virtual land, which can be bought or leased at a nominal cost.
With actors, actresses and sets in place, the next piece of the puzzle is the actual recording, which is again very intuitive. The avatar representing the ‘camera-operator’ must be present inside the set where the avatars of the actors are playing their roles, and all that is visible to this avatar can be captured in a digital file using available screen capture technology. In fact, here virtual worlds are far superior to real worlds since the ‘camera-operator’ can fly in the air or change his viewpoint from a close-up to a long-shot at the roll of a mouse ! Multiple avatars representing multiple camera-operators can be present to capture various angles, as is the case in real life cinematography.
And all this is possible without anyone – actors, set builders, camera-operators – ever leaving their homes in real life ! All can work from home, or from a standard office environment, as long as they have computers with the free virtual world client and a broadband connection to the internet ! Imagine how convenient all this is for the producer's budget !
Movies however need more than actors, sets and camera-operators to be successful – you need a good script, smart direction and tight editing. These requirements remain whether the movie is shot in the real world or in the virtual world. But by significantly reducing the cost and the physical effort required to create movies, virtual worlds will let creativity in the real sense flourish. Directors would be able to do what they had always dreamt of but were held back from by the irritating constraints of the real world.
And maybe from 2009 onwards the Filmfare awards will have additional categories for the best film shot in virtual worlds, and for the best male and female avatars in lead and supporting roles .. the possibilities are endless. James Cameron, the director of Titanic and other
Monday, June 25, 2007
The obvious answer is in the domain of entertainment. But how ? Today's Business Standard carries an article that explains how game developers are planning to work with the Mumbai film industry ( painfully referred to as Bollywood ! ) to develop interactive games based on actual movies ... and possibly using the names and images of well known film stars.
This is good, but it has been done before, with Angelina Jolie and some of her movies. The real challenge is to take it to the next level ...
Why not shoot real movies, that is movies that will be shown in real life, using settings and actors in virtual worlds ?
The movie 300 has been in the news recently because of the extensive use of digital technology for creating the sets, but the actors were real people, who played out their roles in a bleak and empty aircraft hangar. Subsequently, their images were layered on to the digital sets using fairly advanced technology.
The next frontier is when the actors themselves will be represented by their avatars in the virtual world. Can this technology be used to create full length movies without ANY optical camera at all ? Certainly, if you consider the following ..
- The Avatars can be made to look extremely realistic and lifelike. Today, most avatars have a doll-like look but that is a matter of choice not necessity. It is not at all difficult to create 'skins' that look like very real people, if not specific individuals, like Amitabh Bachchan or Madhuri Dixit.
- Gestures and animations are already available and a clever use of these can make avatars shake hands, dance and do many other human-like activities.
- More importantly, tools have emerged to display emotions like anger or a smile. The Mystic HUD gizmo that I have recently bought for my avatar gives me two-key-press access to many of these emotions.
But going forward, we can anticipate the arrival of professional actors in Second Life. What are the characteristics that they must possess ?
- Unlike Real Life, they need not look good but they should have either bought or developed excellent 'skins' that make them look as grand and magnificent as any real life actor or actress
- Instead of going to the gym to keep their bodies muscular or otherwise attractive, they should be knowledgeable enough to 'edit' their avatars to achieve the right physique. In fact they can also hire professional 'avatar editors' in Second Life to edit their bodies ... just as we have professional hair dressers and make up men in real life
- They should acquire a good inventory of gestures, animations and emotions and have these available in their inventory .. so that they can create a range of emotions as and when the situation demands.
- Finally, these people should have the dexterity to quickly press the right keys so that the right emotions appear on their avatars in the right sequence. This is analogous to going to a School of Acting or a School of Dancing and learning the correct steps.
Going forward, we can envisage the entire movie industry getting metamorphosed into Second Life where we will have a full cast and crew of
- Actors and actresses .. who will play out their roles using ONLY the keyboard. This will include not only the lead players but also the junior artists ( or extras)
- Support crew like make up artists and set designers who will not work in real life but instead work through their avatars in second life to design dresses, hairstyles and the virtual sets where the action will take place
- Photographers who will not use 'optical light' at all ! So they cannot really be called photographers. Instead they will use non-optical moving image capture devices like screen grabbers .. like they do today when they create machinimas
And in the competition for the Oscar for the Best Actor and Best Actress, we will have nominations from people in Real Life as well as avatars in SecondLife ...
And may the best candidate ( person or avatar ) win !!
Thursday, May 31, 2007
Read the full article here.
Friday, April 20, 2007
Individual organisations maintain their own physical servers and these can be accessed through the TCP/IP protocol.
Currently, on these servers, we run the same HTTP application server (the 'web server') .. and anyone anywhere in the world can connect to the HTTP application server through the HTTP client ( the 'browser')
Similarly, going forward, organisations can run their own SL servers on their own hardware and allow ( or disallow ) individuals to connect their SL clients .. and thus 'visit' SIMs ( just as we visit websites today )
On current HTTP application servers we run fancy stuff like Java applets, Flash animations, RealAudio and YouTube-style applications ... provided they comply with the correct protocols and the clients have the required plug-ins
Similarly on our SIMs we can run fancy stuff ( not sure what ? ) as long as it complies with the protocols and the SL clients are configured to access it.
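The client / server part of this analogy can be sketched with a plain TCP socket in Python. This is just a toy greeter on localhost -- not the real Second Life protocol -- but it captures the shape of the idea : an organisation runs a server on its own hardware, and any client anywhere can connect over TCP/IP.

```python
import socket
import threading

# A toy "SIM server" (plain TCP greeter, not the real SL protocol):
# the organisation binds a listening socket on its own machine and
# any client can connect over TCP/IP, just like a web server.

def run_sim_server(host="127.0.0.1"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _addr = srv.accept()
        conn.sendall(b"welcome to this SIM\n")
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]    # the port the OS assigned

# a "visitor" connects, exactly as a browser connects to a web server
port = run_sim_server()
cli = socket.create_connection(("127.0.0.1", port))
print(cli.recv(64).decode().strip())
cli.close()
```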
One major difference is that browsing on the web is an anonymous exercise ... the server has no way of knowing who I am ... also, when I am browsing, artefacts that belong to me ( cookies etc ) remain on the client machine ..
In SL that is different .. we need an identification and also a place to store our assets ...
So there has to be a central identification management agency that will ensure uniqueness of avatarIDs
In certain countries, the SSN could be a source of uniqueness ( though revealing it would be a big blow to our privacy ) but that is not universal. So it is likely that a parallel ID system will be created ( do I see the beginnings of a global SSN ? )
The concept of a central identity management mechanism is an intriguing possibility ... going forward, avatars will have a global ID and they will also need a global "warehouse" where they can store their inventory of artefacts .. and i suppose there will be competition from different agencies to act as the "warehouse" ... just as banks compete with each other to be the custodians of our cash.
Who will run this central identity management service ? Would it be something central ? Or would it be something hierarchical like the DNS service, with a core group of identity servers ? Would our avatarIDs become something like prithwis.ibm.sl ( provided by our employers ) and would there be people like Yahoo and Google who will tempt us with ( free ? ) identities like BigBoss.yahoo.sl or SmartOne.gmail.sl ? And will these link to our current names like Calcutta Cyclone ?
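If such a hierarchical scheme ever comes to pass, resolving an avatarID could look a lot like a DNS lookup. Here is a speculative Python sketch -- the '.sl' zones, the registries and the warehouse names below are all invented for illustration, no such service exists.

```python
# A speculative sketch of DNS-style hierarchical avatar ID resolution.
# Every zone ('ibm.sl', 'yahoo.sl', ...) runs its own registry, which
# answers queries for the avatars it is responsible for.

REGISTRIES = {
    "ibm.sl":   {"prithwis": "asset-warehouse-7.example"},
    "yahoo.sl": {"BigBoss":  "asset-warehouse-2.example"},
}

def resolve(avatar_id):
    """Split 'name.org.tld' at the first dot (like a DNS label) and
    look the name up in the registry responsible for that zone.
    Returns the avatar's asset 'warehouse', or None if unknown."""
    name, _, zone = avatar_id.partition(".")
    registry = REGISTRIES.get(zone)
    if registry is None or name not in registry:
        return None
    return registry[name]

print(resolve("prithwis.ibm.sl"))   # the 'ibm.sl' zone answers
print(resolve("nobody.ibm.sl"))     # unknown avatar -> None
```

Just as with DNS, no single server needs to know every avatar ; each zone is authoritative only for its own names, and the "warehouse" plays the role that banks play for our cash.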
The possibilities are enormous and extremely exciting ...
Sunday, April 01, 2007
The original page is no more available but you can see a cached version of the page here.
Given the confusion with transient links ... here is a 'permanent' image of the article.