Thursday, September 27, 2007

Beyond Virtual Worlds : Patterns of the Future


We have been looking at what is possible with Virtual Worlds, but for the next few minutes let us step out of the domain of what is possible now and explore what could become possible in the next couple of years. From engineering, let us step into the domain of imagineering -- which is what this blog is all about !




Why are we constrained to 2D displays ? We are inherently 3D animals ... and the world that we are simulating virtually is supposed to be 3D. So why should we stick to traditional computer displays that render 2D images ? There are display technologies available that create 3D hologram-style images ... that one can 'almost' walk around and see ( though not quite touch as yet ). Go to Google and search for 3D displays and you will see stuff like what you see above and below.



Just imagine what your SecondLife, or whatever 3D world you prefer to live in, will look like when you view it on a display screen like this. And interestingly enough, the cost of these display devices is not astronomical. They are obviously more expensive than standard flat screen monitors ... but certainly affordable.

Yet the technology is not quite rocket science. We have had 3D movies using polarised light for decades. Please see the diagram below. Modern technology has used the same principles of physics and made the devices both affordable and convenient to use.
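To make the principle concrete, here is a minimal sketch ( in Python ) of how a stereo pair is produced : render the same scene from two slightly offset viewpoints and let each eye see only one of them. The red / cyan ( anaglyph ) separation below does with colour channels what a polarised cinema does with polarising filters ; the toy renderer and the eye-separation value are illustrative stand-ins, not real components.

# A minimal sketch of the stereo principle behind 3D displays : render the same
# scene from two horizontally offset viewpoints and let each eye see only one of
# them. The renderer and the eye separation value are illustrative assumptions.

import numpy as np

EYE_SEPARATION = 0.065  # assumed average interpupillary distance, in metres

def render_scene(camera_x_offset, width=320, height=240):
    """Toy stand-in for a real 3D engine : a synthetic scene whose content
    shifts slightly with the camera offset, mimicking parallax."""
    xs = np.linspace(0.0, 1.0, width) + camera_x_offset
    row = (xs * 255).clip(0, 255).astype(np.uint8)
    grey = np.tile(row, (height, 1))
    return np.stack([grey, grey, grey], axis=-1)  # H x W x 3 RGB image

def make_anaglyph():
    left = render_scene(-EYE_SEPARATION / 2)   # what the left eye should see
    right = render_scene(+EYE_SEPARATION / 2)  # what the right eye should see
    anaglyph = np.zeros_like(left)
    anaglyph[..., 0] = left[..., 0]            # red channel -> left eye
    anaglyph[..., 1:] = right[..., 1:]         # green + blue ( cyan ) -> right eye
    return anaglyph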



But our next question is : do you need a display at all ?

Recent advances in medical science, exploring ways to make the blind see again, have created what are known as bionic eyes. If you look at the structure and mechanics of the human eye, you will notice that optical signals ( or, more precisely, electromagnetic energy ) are sensed on a photosensitive surface -- the human retina -- and converted to electrochemical signals at one end of the optic nerve. The other end of the optic nerve is connected to the human brain, which can sense the electrochemical signals that are transmitted down the nerve.

Then the cognitive process of the human brain interprets these electrochemical signals and causes the person to perceive a vision of what lies in front of the retina.

Can this not be replicated using a known technology ?



Of course it can be done. All that you need is a camera that captures the image, converts it into a series of electrical impulses and then sends them down the optic nerve. Simple ?

Not quite ?

There is a huge amount of image and signal processing involved ! Each kind of shape, colour and texture creates a different pattern of signals -- but what causes what ? This is not fully known as yet ... and so when we do it for the first time, the brain cannot make sense of the signals it is being fed. But this is only a matter of time. Currently we have systems that allow the brain to recognise the presence or absence of light and vague, fuzzy shapes. Even this is of great benefit to those who are completely blind. I am sure it is only a matter of time before the image processing software becomes sophisticated enough for the signals to be parsed and formed in a manner that the brain can make sense of, and hence recognise a range of shapes, sizes and colours.
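To get a feel for what that processing involves, here is a minimal sketch ( Python ) of the very first step : collapsing a camera frame down to the handful of electrodes that actually sit on the retina. The grid size and current ceiling are illustrative assumptions, not the specification of any real implant -- but they show why today's devices convey only light, dark and fuzzy shapes.

# A minimal sketch of reducing a camera frame to a coarse electrode pattern.
# All numbers below are illustrative assumptions.

import numpy as np

ELECTRODE_ROWS, ELECTRODE_COLS = 10, 6   # a few dozen electrodes, not megapixels
MAX_CURRENT_UA = 100.0                   # hypothetical stimulation ceiling, microamperes

def frame_to_stimulation(frame):
    """Collapse a greyscale camera frame ( H x W, values 0-255 ) into a coarse
    grid of per-electrode stimulation currents."""
    h, w = frame.shape
    row_patches = np.array_split(np.arange(h), ELECTRODE_ROWS)
    col_patches = np.array_split(np.arange(w), ELECTRODE_COLS)
    grid = np.zeros((ELECTRODE_ROWS, ELECTRODE_COLS))
    for i, rows in enumerate(row_patches):
        for j, cols in enumerate(col_patches):
            grid[i, j] = frame[np.ix_(rows, cols)].mean()  # average brightness per patch
    return grid / 255.0 * MAX_CURRENT_UA                   # brightness -> current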

The problem is difficult but not intractable. No known laws of physics are being violated nor are astronomical amounts of energy required. It will happen ... and it will happen soon.


Which leads us to the first level of convergence ... that is between 3D display technology and bionic eyes.

When a 3D monitor displays an image, what does it actually do ? The computer program generates a pattern of signals that is converted into a pattern of light ( electromagnetic radiation ) that travels across the distance between the screen and the user. This light is then converted back into electrochemical signals in the optic nerve, either (a) through the human retina or (b) through the camera of the bionic eye. So there are two conversions : electrical signals => optical signals => electrochemical signals. Question : do we need the intermediate optical signal at all ? What value is it adding to the process ? Can we do away with it totally ? See the figure below ..


What we suggest is that the 3D display can be done away with ! But not the technology that 'renders' the scene in 3D. That is still required to create the set of electrical patterns that represent the virtual world in all its exquisite detail ... it is just not converted into ( or 'shown' as ) a light signal. Instead, the electrical signals are fed into the processing unit of the bionic eye, which is led to believe that the signal has come from the camera of the bionic eye !
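Here is a minimal sketch of that bypass. Every function is a hypothetical placeholder : the display path turns the rendered frame into light and captures it again, while the direct path hands the frame straight to the implant's processing stage.

# A minimal sketch of the bypass : the renderer's output goes straight to the
# stage that would normally receive the bionic eye's camera frame, skipping the
# screen-and-camera round trip. Every function here is a hypothetical placeholder.

import numpy as np

def render_virtual_world():
    """Stand-in for the 3D engine : returns a greyscale frame of the scene."""
    return np.random.randint(0, 256, size=(240, 320)).astype(np.uint8)

def frame_to_stimulation(frame):
    """Stand-in for the implant's own processing stage ( see the earlier sketch )."""
    return frame[::24, ::32].astype(float) / 255.0   # crude downsample to an electrode grid

def display_path():
    frame = render_virtual_world()
    light = frame        # the monitor turns the frame into light ...
    captured = light     # ... which the implant's camera turns back into a frame
    return frame_to_stimulation(captured)

def direct_path():
    frame = render_virtual_world()
    return frame_to_stimulation(frame)   # same end result, no light in between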

The processing unit then processes these signals ( and this is no easy processing, mind you ... this is heavy duty stuff ) and passes them on to the optic nerve ... which in turn is led to believe that the signals have originated from the living retina !! This is layers and layers of deception ... but all for a good and noble cause ...

And what is that cause ? Total Immersion ..


Total immersion means that the human brain has lost the ability to distinguish between electrical signals that originate from a computer and optical signals that originate from the environment. Just as the Turing test says that Artificial Intelligence has been attained when you cannot distinguish the responses of a human from those of a machine ... Total Immersion has been attained when you cannot distinguish stimuli coming from a machine from stimuli coming from the real environment.

The line between the real and the virtual is becoming increasingly blurred !!

..............................

But why should signals move in only one direction ? Why not the reverse ? Why can signals originating in the brain not be used to control the environment ? This is thought control we are talking about !!! Remember the novel / movie Firefox ... not the browser, but the thought-controlled fighter aircraft that was developed by the USSR and stolen by the US ? That was science fiction in 1982 ... but it could become fact in 2012 ..

Consider the following ..

This is again a piece of technology from the domain of medicine ... designed to allow paralysed people and quadriplegics to move ... by allowing them to control their wheelchairs with their thoughts. First thought-controlled wheelchairs, then thought-controlled fighter aircraft !

Again the principles are astonishingly simple though the implementation can be fiendishly difficult. When you want to move an arm or a finger, a signal is generated in the brain that travels down a specific nerve as an electrochemical impulse and causes the limb to move.

All that we are trying to do is to sense that same signal and cause an electro-mechanical device to do what the limb would have done ... for example, move a joystick ! And if you can do that, you have a thought-controlled device.

But again there are implementation issues. The signal has to be picked up from a probe inserted into the brain -- which can be uncomfortable -- and then heavy duty signal processing software has to be used to distinguish irrelevant signals ( or noise ) from the actual signal. If this does not happen ... then the intention to move a finger can be misinterpreted as an intention to move a leg ... or perhaps not understood at all.
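As a rough illustration of that kind of processing, here is a minimal sketch ( Python ) of filtering a noisy trace and deciding whether a movement intent is present. The frequency band, sample rate and threshold are illustrative assumptions that a real brain-computer interface would calibrate per user.

# A minimal sketch of the signal-processing step : keep only the frequency band
# assumed to carry movement intent, then decide whether its power crosses a
# threshold. All parameters below are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

SAMPLE_RATE_HZ = 256
MOTOR_BAND_HZ = (8.0, 30.0)    # assumed band carrying movement-related activity
POWER_THRESHOLD = 0.5          # would be tuned per user in practice

def detect_move_intent(raw_samples):
    """Return True if the filtered signal power suggests an intended movement."""
    nyquist = SAMPLE_RATE_HZ / 2.0
    b, a = butter(4, [MOTOR_BAND_HZ[0] / nyquist, MOTOR_BAND_HZ[1] / nyquist],
                  btype="bandpass")
    filtered = filtfilt(b, a, raw_samples)      # strip out-of-band noise
    power = float(np.mean(filtered ** 2))       # average power in the band of interest
    return power > POWER_THRESHOLD

# e.g.  if detect_move_intent(latest_samples): wheelchair.move_forward()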

Obviously more research is needed but again the principles that we are dealing with do not violate any known laws of physics though they could be computationally intensive. So it is a matter of time indeed before we have ...


As technology moves forward, the intrusive, painful brain probes can be replaced with simpler and more comfortable cap-based sensors of the kind shown ( and demonstrated ) above.



So now we have four pieces of technology ... namely
  • Virtual Worlds like SecondLife
  • 3D Display technology -- both hardware and software -- that can create a near perfect illusion of solid objects
  • Bionic Eyes that allow the display to be replaced altogether, giving total immersion of the user inside the Virtual World
  • Thought sensors that can "read" thoughts and make things happen in the Virtual World
And mind you, all this with technology that is "almost" available today ! At the risk of sounding repetitious, I need to point out once again that the technology to do all this does not violate laws of physics or need huge amounts of energy. Nor does it require any deep and difficult-to-understand models of human cognition -- as is the case with Artificial Intelligence. All it needs is some powerful image and signal processing algorithms and some powerful hardware to crunch through all that data -- both of which lie well within the domain of feasibility.
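Put together, the whole loop might look something like the sketch below -- every component is a hypothetical placeholder standing in for one of the four technologies listed above.

# A minimal sketch of the closed loop : thought sensor in, virtual world update,
# render, and the rendered frame straight into the bionic-eye stage.

class MatrixLoop:
    def __init__(self, thought_sensor, virtual_world, renderer, bionic_eye):
        self.thought_sensor = thought_sensor   # e.g. the cap-based sensor
        self.virtual_world = virtual_world     # e.g. a SecondLife-like engine
        self.renderer = renderer               # the 3D rendering stage
        self.bionic_eye = bionic_eye           # the implant's processing unit

    def tick(self):
        intent = self.thought_sensor.read()               # brain -> machine
        self.virtual_world.apply(intent)                  # intent changes the world
        frame = self.renderer.render(self.virtual_world)  # world -> electrical pattern
        self.bionic_eye.stimulate(frame)                  # machine -> brain, no screen needed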

So what do you get when you assemble all these technologies ? Why The Matrix of course !

3 comments:

Calcutta said...

Brain Computer Interface in Second Life is almost here .. see

http://www.virtualworldsnews.com/2007/10/brain-computer-.html

Calcutta said...

Thought-controlled MMORPG games will be available very soon. Follow this link to learn more :
http://www.physorg.com/news124723221.html

Robert A Vollrath said...

I believe this could limit our imaginations more than expand it.

This would make new tools for our imaginations and save time and materials but what would be the down side to all this?