NVIDIA GTC 2010 Wrapup

by Ryan Smith on October 10, 2010 12:56 AM EST

Today we’re wrapping up our coverage of last month’s NVIDIA GPU Technology Conference, including the show’s exhibit hall. We came to GTC to get a better grasp on just where things stand for NVIDIA's still-fledgling GPU compute efforts, along with the wider industry as a whole, and we didn’t leave disappointed. Besides seeing some interesting demos – including the closest thing you’ll see to a holodeck in 2010 – we had a chance to talk to Adobe, Microsoft, CyberLink, and others about where they see GPU computing going in the next couple of years. The GPU-centric future as NVIDIA envisions it may be taking a bit longer than we had hoped, but it looks like we may finally be turning the corner on GPU computing breaking into more than just the High Performance Computing space.

Scalable Display Technologies’ Multi-Projector Calibration Software: Many Projectors, 1 Display

Back in 2009, when we were first introduced to AMD’s Eyefinity technology by Carrell Killebrew, he threw out the idea of the holodeck: using single large surface technologies like Eyefinity along with video cards powerful enough to render the graphics for such an experience, a holodeck would become a possibility within the next seven years, once rendering and display technologies could work together to create and show a 100 million pixel environment. The GPUs necessary for this are still years off, but it turns out the display technologies are much closer.

One of the first sessions we saw at GTC was from Scalable Display Technologies, an MIT spinoff based in Cambridge, MA. In a session titled Ultra High Resolution Displays and Interactive Eyepoint Using CUDA, Scalable discussed their software-based approach to merging a number of displays into a single surface. In a nutshell, the easiest way to create a single large display today is to use multiple projectors, each projecting a portion of an image onto a screen. The problem with this approach is that calibrating the projectors is a time-consuming process: not only do they need to be image-aligned, but care must also be taken to achieve the same color output from each projector so that minute differences between projectors don’t become apparent.

Scalable, however, has an interesting solution that does this in software, relying on nothing more on the hardware side than a camera to give their software vision. With a camera in place, their software can see a multi-projector setup and immediately begin calibrating it by adjusting the image sent to each projector, rather than trying to adjust each projector itself. Specifically, the company takes the final output of a GPU and texture maps it onto a mesh, which they then deform to compensate for the imperfections the camera sees, while also adjusting the brightness of sections of the image so that overlapping regions blend together. This rendered mesh is used as the final projection, and thanks to the intentional deformation it cancels out the imperfections in the projector setup. The result is a perfect single surface, corrected in the span of 6 seconds versus the minutes or hours it takes to adjust the projectors themselves.
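To make the approach concrete, here’s a minimal CPU-side sketch of the warp-and-blend idea: the rendered frame is resampled through a per-pixel deformation map (built from what the camera sees) and then attenuated by a blend mask so overlapping projector edges sum to a uniform brightness. The function names, the nearest-neighbor sampling, and the simple one-edge ramp are our own illustrative assumptions; Scalable’s actual implementation does this on the GPU via CUDA and texture-mapped meshes.

```python
import numpy as np

def warp_and_blend(frame, map_x, map_y, blend_mask):
    """Resample one projector's rendered frame through a deformation map and
    apply a brightness mask so overlapping projector edges blend smoothly.

    frame:       (H, W, 3) rendered output destined for one projector
    map_x/map_y: (H, W) source coordinates derived from the camera calibration
    blend_mask:  (H, W) per-pixel brightness weights (0..1) for edge blending
    """
    h, w = frame.shape[:2]
    xs = np.clip(map_x.round().astype(int), 0, w - 1)
    ys = np.clip(map_y.round().astype(int), 0, h - 1)
    warped = frame[ys, xs]                      # nearest-neighbor resample
    return (warped * blend_mask[..., None]).astype(frame.dtype)

# Example: identity warp plus a soft left-edge blend ramp for a 720p projector.
h, w = 720, 1280
map_y, map_x = np.mgrid[0:h, 0:w].astype(float)
blend = np.ones((h, w))
blend[:, :100] = np.linspace(0.0, 1.0, 100)     # fade-in over the overlap zone
corrected = warp_and_blend(np.full((h, w, 3), 255, np.uint8), map_x, map_y, blend)
```

In a real setup the deformation map and blend mask would come out of the six-second camera calibration pass, after which the correction is just another per-frame rendering step.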

Along with discussing their technology, at GTC Scalable showed off a custom demonstration unit using three 720p projectors to project a single image onto a curved screen. Why curved? Because their software can correct for both curved and flat screens, generating an image that is perspective-correct even on a curved screen. The company also discussed some of the other implementations of their technology, and as it turns out their software has already been used to build Carrell’s holodeck for a military customer: a 50 HD projector setup (103.6MPixels) used in a simulator and kept in calibration with Scalable’s software. Ultimately Scalable is looking not only to enable large projection displays, but to do so cheaply: with software calibration it’s no longer necessary to use expensive enterprise-grade projectors, allowing customers to use cheaper consumer-grade projectors that lack the kind of hardware calibration features such a display would normally require. Case in point, their demo unit used very cheap $600 projectors. For that matter it doesn’t even have to be a projector – their software works with any display type, although for the time being only projectors can deliver a seamless image.
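As a quick sanity check on that pixel count (our arithmetic, not Scalable’s spec sheet), 50 projectors at 1920×1080 each works out almost exactly to the quoted figure:

```python
# Back-of-the-envelope pixel count for the 50-projector simulator, assuming
# each "HD" projector runs at 1920x1080; the overlap regions consumed by
# edge blending would trim the usable total slightly.
projectors, width, height = 50, 1920, 1080
total_pixels = projectors * width * height
print(f"{total_pixels / 1e6:.2f} MPixels")   # 103.68 MPixels
```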

Wrapping things up, we asked the company whether we’d see their software used in the consumer space, as at the moment it’s principally found in custom one-off setups for specific customers. The long and the short of it is that as they’re merely a software company, they don’t have a lot of control over that. It’s their licensees that build the final displays, so one of them would need to decide to bring this to market. Given the space requirements for projectors it’s not likely to replace the multi-LCD setup any time soon, but it’s a good candidate for the man cave, where there would be plenty of space for a triple-projector setup. We’ve already seen NVIDIA demonstrate the concept this year with 3D Vision Surround, so there may very well be a market for it in the consumer space.

Micoy & Omni-3D

The other company on hand showing a potential holodeck-like technology was Micoy, who like Scalable is a software firm. Their focus is on writing the software necessary to properly build and display a 3D environment on an all-encompassing (omnidirectional) surface such as a dome or CAVE, as opposed to 3D originating from a 2D surface such as a monitor or projector screen. The benefit of this method is that it can encompass the viewer’s entire field of view, eliminating the edge-clipping issues that come with placing a 3D object above-depth, in front of the screen; in other words, it makes it practical to render objects right in front of the viewer.

At GTC Micoy had an inflatable tent set up, housing a projector with a 180° lens and a suitable screen, which in turn was being used to display a rolling demo loop – in practice, a half-dome with 3D material projected onto it. The tent may have caught a lot of eyes, but it was the content of the demo that really attracted attention, and it’s a shame here that pictures simply can’t convey the experience, so words will have to do.

I personally have never been extremely impressed with stereoscopic 3D viewing before – it’s a nice effect in movies and games when done right, but since designers can’t seriously render things above-depth due to edge-clipping issues, it’s never been an immersive experience for me; instead it has merely been a deeper experience. This, on the other hand, was the most impressive 3D presentation I’ve ever seen. I’ve seen CAVEs, OMNIMAX domes, 3D games, and more; this does not compare. Micoy had the honest-to-goodness holodeck, or at least the display portion of it. It was all-encompassing, blocking out any sense that I was anywhere else, and with items rendered above-depth I could reach out and sort of touch them, and other people could walk past them (at least until they interrupted the projection). To be quite clear, it still needs much more resolution and something to remedy the color/brightness issues of shutter glasses, but still, it was the prototype holodeck. When Carrell Killebrew talks about building the future holodeck, this is no doubt what he has in mind.

I suppose the only real downside is that Micoy’s current technology is a tease. Besides the issues listed earlier, their technology currently doesn’t work in real-time, which is why they were playing a rolling demo. It’s suitable for movie-like uses, but there isn’t enough processing power right now to do the required computation in real-time. Real-time rendering is where they want to go in the future, along with a camera system to allow users to interact with the environment, but they aren’t there yet.

Ultimately I wouldn’t expect this technology to be easily accessible for home use due to the costs and complexities of a dome, but in the professional world it’s another matter. This may very well be the future in another decade.


19 Comments


  • adonn78 - Sunday, October 10, 2010 - link

    This is pretty boring stuff. I mean the projectors on the curved screens were cool, but what about gaming? Anything about Nvidia's next gen? They are really falling far behind and are not really competing when it comes to price. I for one cannot wait for the debut of AMD's 6000 series. CUDA and PhysX are stupid proprietary BS.
  • iwodo - Sunday, October 10, 2010 - link

    What? This is GTC, it is all about the Workstation and HPC side of things. Gaming is not the focus of this conference.
  • bumble12 - Sunday, October 10, 2010 - link

    Sounds like you don't understand what CUDA is, by a long mile.
  • B3an - Sunday, October 10, 2010 - link

    "teh pr0ject0rz are kool but i dun understand anyting else lolz"

    Stupid kid.
  • iwodo - Sunday, October 10, 2010 - link

    I was about to post that rendering on the server is fundamentally flawed, but the more I think about it the more it makes sense.

    However, defining a codec takes months, and actually refining and implementing one takes YEARS.

    I wonder what the client would consist of. Do we need a CPU to do any work at all? Or would EVERYTHING be done on the server other than booting up and acquiring an IP?

    If that is the case, maybe an ARM A9 SoC would be enough to do the job.
  • iwodo - Sunday, October 10, 2010 - link

    Just started digging around. LG has a Network Monitor that lets you use RemoteFX with just an Ethernet cable!

    http://networkmonitor.lge.com/us/index.jsp

    And x264 can already encode at sub-10ms latency! I can imagine IT management would be like a trillion times easier with centrally managed VMs like RemoteFX. No longer upgrading every client's computer. Stuff in a few HDSL RevoDrives and let everyone enjoy the benefits of SSDs.

    I have a question about how it will scale, though; with over 500 machines you've effectively used up all your bandwidth...
  • Per Hansson - Sunday, October 10, 2010 - link

    I've been looking forward to this technology since I heard about it some time ago.
    Will be interesting to test how well it works with the CAD/CAM software I use, most of which is proprietary, machine-builder-specific software...
    There was no mention of OpenGL in this article, but from what I've read that is what it is supposed to support (OpenGL rendering offload).
    At least that's what like 100% of the CAD/CAM software out there uses, so it had better be if MS wants it to be successful :)
  • Ryan Smith - Sunday, October 10, 2010 - link

    Someone asked about OpenGL during the presentation and I'm kicking myself for not writing down the answer, but I seem to recall that OpenGL would not be supported. Don't hold me to that, though.
  • Per Hansson - Monday, October 11, 2010 - link

    Well I hope OpenGL will be supported, otherwise this is pretty much a dead tech as far as enterprise industries are concerned.

    This article has a reply by the author Brian Madden in the comments regarding support for OpenGL; http://www.brianmadden.com/blogs/brianmadden/archi...

    "For support for apps that require OpenGL, they're supporting apps that use OpenGL v1.4 and below to work in the VM, but they don't expect that apps that use a higher version of OpenGL will work (unless of course they have a DirectX or CPU fallback mode)."
  • Sebec - Sunday, October 10, 2010 - link

    Page 5 -"... and the two companies are current the titans of GPU computing in consumer applications."

    Current the titans?

    "Tom believes that ultimately the company will ultimately end up using..."
