Dec 1, 2010

Loyola Marymount University in #SecondLife

When: Thursday, December 2nd @ 7:45PM EST


Where: Loyola Marymount University SLURL


Why: I’ll be attending as a guest speaker


Notice: The following post is quite long. If you’re not into reading today, feel free to stop at the first question marker in the post and call it a day. Pictures for this post were taken at LMU Psychology Island in Second Life using Kirstens S20 (42), with all shaders enabled except Global Illumination (Depth of Field is also enabled).


LMU Psychology Auditorium - All Shaders 1


Sometimes the future is a scary thing to think about, especially when you consider that this Thursday I’ll be the guest speaker at Loyola Marymount University in Second Life. Just imagine a room full of college students from various disciplines, all eagerly awaiting their turn to ask me questions about technology topics and virtual environments.


Yeah, I had to stop and think about that too. Is it really a good idea to let somebody like me play a part in shaping young minds? I thought about this when writing the book chapter as well, and the best answer I could muster was “as long as these students are willing to think for themselves.” As an aside, the book seems to have finally been released (I noticed it available on Amazon recently).


It’s one thing to lecture for a university class or a business audience, but something has always made me uneasy about the prospect of influencing future generations of young minds. Nobody really knows the future, and the best we can ever hope for is an educated guess. This is why I sincerely hope that the students attending on Thursday do not take everything I say as gospel, and are willing to challenge it and push further on their own.


Dr. Richard Gilbert (Professor of Psychology at LMU) is, to say the least, a really interesting guy. He’s the head of the P.R.O.S.E. Project at LMU (Psychological Research on Synthetic Environments), but even more interesting is that he holds a Grammy Award for co-writing a song in the movie Flashdance (1984). Naturally, this is the same guy behind the SLAnthem.com contest, and I can only sit and wonder what sort of life has led him through such varied accomplishments.


This is the man who approached me about being a guest speaker for one of his classes, and I gladly accepted (as a bonus, the university is offering an honorarium for my time).


The prospect of speaking to this class didn’t seem too out of sorts when I accepted, but then I began to really think about it. As the professor of the class, Dr. Gilbert would naturally assign homework and research prior to Thursday so that the students could prepare questions and topics to discuss with me. That alone is what got me…


I’m still trying to wrap my head around the fact that a class full of students from various related disciplines is busy, as I write this, doing homework assignments centered on my being Thursday’s guest speaker. I can imagine twenty or more students (maybe) sitting in their dorm rooms tonight, researching ideas and questions with me on their mind.


Maybe it’s a bit of empathy, putting myself in their shoes?


This is, after all, college. So chances are that most of those students are probably drunk and partying right now. (laughs).


LMU Psychology Island with All Shaders


Dr. Gilbert has, thankfully, provided me with a list of expected topics to be covered over the course of the two-hour class. In addition, the class will break for a short recess about halfway through (at least, that is what I was told). I don’t know how well I’d hold up under two straight hours of students barraging me with questions and conversation, but to be honest I seemed to hold up well at the Friday Night Talk Show (which ran nearly four hours of audience conversation).


Let’s take a moment to go over the topics presented to me for Thursday night’s class:


1. Issues of server architecture, so you can address the cascading structure you advocate.


This topic stems from my advocacy of a hybrid, decentralized server structure to properly handle load and bandwidth through massive parallel fabric computing. On the surface it sounds a lot like I’m suggesting everything be done in the Cloud, but it’s a bit more than that.


With Cloud Computing (as we’d normally expect it), there is still a centralized datacenter someplace. The only thing that’s really changed is the hardware and how it is utilized, insofar as the software is executed entirely server-side and streamed to the user via a client. Cascading architecture is like an evolution of Cloud Computing, because it assumes that not only is the central datacenter involved, but each individual user in the system is also a repository and relay.


Each virtual environment user holds redundant information, called a cache. This information can be readily passed along to others nearby in a virtual space without needing to ask a central server for it. There is also the current practice of telling a central server that you have moved so that the server can tell the 50 other users near you that you’ve moved (which, to be honest, seems silly).


Could we not connect via a cascading architecture in virtual peer clusters, informing each other of actions that do not need authorization? Surely 100 simultaneous users in an area are capable of relaying this information to each other, not to mention sharing their redundant cache data as well.
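To make the idea concrete, here’s a minimal, purely illustrative sketch in Python. It is not a real network protocol; the Peer class, its methods, and the central-server fallback are hypothetical names invented for this example.

```python
# A minimal in-memory simulation of the peer-cluster idea: events that
# need no central authorization (like avatar movement) fan out directly
# between nearby peers, and redundant cache data is served peer-to-peer,
# falling back to the central datacenter only when no neighbor has it.

class Peer:
    def __init__(self, name):
        self.name = name
        self.cache = {}      # redundant asset data this peer already holds
        self.neighbors = []  # peers in the same virtual cluster

    def move(self, position):
        # Movement needs no authorization, so relay it peer-to-peer
        # instead of asking the server to rebroadcast it to everyone.
        for peer in self.neighbors:
            peer.on_peer_moved(self.name, position)

    def on_peer_moved(self, who, position):
        print(f"{self.name}: saw {who} move to {position}")

    def request_asset(self, asset_id):
        # Ask neighbors for cached data first; only fall back to the
        # central server when nobody in the cluster already has it.
        for peer in self.neighbors:
            if asset_id in peer.cache:
                return peer.cache[asset_id]
        return fetch_from_central_server(asset_id)  # hypothetical fallback

def fetch_from_central_server(asset_id):
    return f"<{asset_id} fetched from the datacenter>"

# Three users standing in the same region form a cluster.
a, b, c = Peer("Alice"), Peer("Bob"), Peer("Carol")
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
b.cache["grass_texture"] = "<grass texture bytes>"

a.move((128, 64))                        # relayed directly, no server hop
print(a.request_asset("grass_texture"))  # served from Bob's cache
```

The point of the sketch is simply that the server never appears in the movement path at all, and only appears in the asset path as a last resort.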


Better yet, why do we construct simulation systems in a manner that requires brute force and a lot of centralized bandwidth? Surely by now we should have realized this will ultimately fail to scale.


2. A status report on the quest for a universal format for 3D, à la HTML and JavaScript for 2D


Being part of the IEEE Virtual Worlds Standards Group, I can say that the closest thing that has been agreed upon as a universal 3D format is the Collada file format. Past that, I have yet to see anything else solidly proposed.


My younger brother (who is in college) once proposed that files like Collada could be reduced algorithmically to a single decimal number, to be decoded and expanded back to full resolution by reversing the algorithm. While this would require brute-force computation on the user’s part to make use of those files, it offers an interesting glimpse into procedural methodologies for indefinite fidelity.


We see such things today with Allegorithmic.com and their procedural texture system: 2 MB of texture data in 2 KB. I’ve also seen some interesting things from DirectX 11 tessellation algorithms, which increase the fidelity of 3D models beyond what the model data actually stores. I really think these procedural methods for fidelity are the future.
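As a toy illustration of the principle (not Allegorithmic’s actual algorithm, which is proprietary), here is a short Python sketch where a handful of constants stand in for megabytes of pixels, because texels are computed on demand rather than stored:

```python
import math

def procedural_texel(x, y, seed=42):
    # The entire "texture" is defined by this deterministic formula,
    # so it can be sampled at any resolution without storing pixels.
    v = math.sin(x * 0.07 + seed) * math.cos(y * 0.05 + seed * 1.3)
    v += 0.5 * math.sin((x + y) * 0.11 + seed * 0.7)
    return int((v + 1.5) / 3.0 * 255)  # map to a 0-255 grey value

# Sampling this at 4096x4096 would yield ~16 MB of pixel data, yet the
# stored representation is just the few constants above.
print([procedural_texel(x, 100) for x in range(8)])
```

Real procedural systems use far more sophisticated noise and filtering, but the trade is the same: storage shrinks to almost nothing in exchange for computation at decode time.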


3. Graphical Developments and progress toward photorealism


Which, of course, leads us into this topic. As I said, I truly believe procedural methods will win out in the end. Interestingly enough, shortly after I mentioned the push for photorealism in Second Life, I saw a video posted by @oobscure on Twitter about the Depth of Field viewer in the development channel. It looks really nice in the video, and first-hand it’s just as stunning when combined with the shadows, lighting, SSAO, and Mesh abilities, as seen in the snapshots in this post from Kirstens S20 (42).


However, there is still quite a lot of progress to be made in the photorealism department. Higher realism obviously requires more computational power, and that doesn’t change. I believe, though, that we are quickly reaching a point where older methodologies for graphics must be approached in a more intelligent manner.


Static images aren’t as good for fidelity as procedural, dynamic textures, and what we consider high definition today, a 1024x1024 or 2048x2048 PNG or TGA, is comparatively very low resolution. I do understand that graphics cards aren’t really meant to handle 10,000x10,000 textures, but who is to say they actually have to handle the entire image all at once?


.debris Procedural Demo | 177kb Executable Size


Therein lies why I believe procedural methods will win in the end. You can essentially have a seamless 32,000x32,000 texture in 10 KB, where the algorithm knows to show you only the highest resolution your graphics card can comfortably display, and only the area you can actually see (as in, not trying to load the entire grass texture for the State of California all at once).


It’s all about intelligently streaming only the information we need at any given moment.
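For illustration, here is a hypothetical sketch of that tile-and-mip selection logic in Python. The tile size, the level-selection rule, and all the names are assumptions made up for this example, not any particular engine’s implementation:

```python
import math

TEXTURE_SIZE = 32768  # a 32k x 32k virtual texture
TILE_SIZE = 256       # streamed in 256x256 tiles

def mip_level(texels_per_screen_pixel):
    # Each mip level halves resolution; skip detail the screen
    # can't resolve anyway.
    return max(0, int(math.log2(max(1.0, texels_per_screen_pixel))))

def visible_tiles(view_x, view_y, view_w, view_h, level):
    # Only tiles intersecting the visible window at this mip are fetched.
    tile_span = TILE_SIZE * (2 ** level)  # texels covered per tile
    x0, y0 = view_x // tile_span, view_y // tile_span
    x1 = (view_x + view_w - 1) // tile_span
    y1 = (view_y + view_h - 1) // tile_span
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

# A viewer whose window covers 2048x2048 texels at ~4 texels per screen
# pixel needs mip 2 and only a handful of tiles, not the whole image.
level = mip_level(4.0)
tiles = visible_tiles(8192, 8192, 2048, 2048, level)
print(f"mip {level}: fetch {len(tiles)} tiles instead of "
      f"{(TEXTURE_SIZE // TILE_SIZE) ** 2} at full resolution")
```

In this toy case the viewer fetches 4 tiles instead of 16,384, which is the whole argument in miniature: resolution lives in the algorithm, and bandwidth is spent only on what is actually on screen.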


4. Developments in shared media and integrating 2D applications into immersive settings


I’ll leave this one to answer in class, but you can assume I’ll talk a bit about the developments with the koios media center in Second Life.


5. Current Comparisons between SL and other platforms


I’d say Second Life is a median system, kind of like choosing Mario in SMB2 when you have Luigi, Princess, and Toad at your disposal. Graphically it’s starting to catch up to the likes of BlueMars without going into graphical overkill. It’s fairly powerful as a platform, open enough to do many things, and it has average strengths across the board. For the time being, Second Life is the all-around solution I’d recommend for virtual environments.


However, this doesn’t mean I’d split hairs and differentiate between Second Life and OpenSim, InWorldz, SpotOn3D, etc. It’s all essentially based on the same underlying technology, despite the bells, whistles, and pinstripes painted on the sides.


Other technologies I’ll cover in class.


6. Your projections for the near- and moderate-term future for SL and the wider field of 3D worlds.


Concerning Second Life, I believe the open source community will probably make many more strides to push the technology forward than Linden Lab will. That’s not necessarily a bad thing, as I really do like open source software and crowdsourcing. As for the wider field of 3D worlds… I’ll cover that in class.


I will say, however, that I don’t believe virtual worlds have a future on their own. Like any good technology, they will mature and become ubiquitous. The future of virtual worlds is not virtual worlds in the sense that we know them today.
