Some illusions...

At first, I was pretty confident about mental ray's future. Each new version added a few cool little features, and the enemy to be defeated was Autodesk, which was unable to properly integrate the engine into its software. You had to tinker to use the latest features, but you could get by. (Read the very informative history of mental ray's integration in Maya, written by one of its devs.)

I saw many active threads about various "heads" leaving the project, and about the future of mental ray following its acquisition by Nvidia.

Nvidia had Gelato but, with the people behind that project leaving it, found itself without any GPU rendering solution at hand; hence the acquisition of mental images, to get hold of the mental ray devs and put them to work on CUDA.

Nvidia_Cuda_logo.jpg

At first, everyone thought, eyes full of stars, that mental ray would become "CUDA accelerated", but you only had to think about it for two seconds to realize that the problem was actually more complicated... :tuComprendRien:

Paolo Berto (jupiterjazz), an influential member working for the Jupiter Jazz group (THE vfx dev group), and obviously well informed, wrote this:

No, mental ray won't be ever ported on cuda, there is no intention and it simply can't. Only some parts could, and if so it will be done just for marketing reasons.

A member asked him about it, and he gave a few more details:

So, to make things work on cuda you need to have a lot of *coherent* computations to perform: the problem of mental ray is that it is casting one ray then calling a native C shader to shade the intersection then maybe the shader casts another batch of rays and so on. Very non coherent.
This is just not suited for gpu computation unless if a massive rewrite/refactoring of the pipeline is done by mental images, which will not happen soon enough (I am speaking about years).
Some specific task like fast AO could be definitely done though (and also this won't happen soon).

Also another problem of mental ray is that the codebase is 15 years old (and it's not a Whiskey), native C for the CPU, full of pointers mem alloc and other shit, so to use an euphemism is a fcking mess.

It is quite clear:

  • Not designed for the GPU.
  • Too old.
  • "Stop dreaming!"

Everyone wondered what Nvidia was going to do with a CPU-only render engine that did not seem able to change...

Everyone indulged in speculation, but until then, there was nothing to hint at what would happen.

My doubts really began in 2009, at Nvidia's GPU Technology Conference. I saw one of the few remaining "old devs" of mental ray and learned that he had spent "several years" working on the thing we had all been waiting for (irony): iRay and Reality Server.

Maybe I was the only one, but I immediately thought that putting the last "heads" (there aren't many) on a full-CUDA project meant to sell entire render farms, aimed more at "arch & viz" than anim/vfx, did not bode well for mental ray.

...to disillusions

mental_image_logo.png

And that's what happened. Nvidia very politely announced that it had dismantled reorganized mental images, and I quote:

integrating it into our other activities focused on software solutions for design professionals

It continues:

The combined group brings together mental images with related efforts in the Quadro group focused on the world’s most demanding design professionals, from feature film artists to architects and product designers.

Basically, they are moving mental images closer to the "Quadro group" to "focus" on the needs of film artists... :septic:

Hmm... Personally, the more I can avoid Quadros, CUDA and other proprietary technologies that are a misery to deploy and very expensive, the happier I am...

So this time, it's a complete miss... I imagine plenty of big CAD companies invest in that sort of thing. But a VFX studio? Seriously?

Nvidia left mental images alone for a while (about four years, during which I kept hoping), letting them honor their commitments (as often happens with big acquisitions), but now they are taking back the reins, and it hurts:

Mental images is gone. Rumor has it ~30 people were laid off, management was dispersed. The corporate bullshit version: http://blogs.nvidia.com/2011/05/nvi...

While we're on the subject of rumors: it seems the mental ray team has not been dismantled (although there were some voluntary departures). But since Nvidia is a very opaque company, you can't get any more information.

That said, I would be arguing in bad faith if I claimed the latest mental ray feature list was empty; far from it!

Mental ray is a very good render engine (I suspect it is faster than Vray at pure raytracing...), but the big studios don't build productions on a "good engine"; they build them on "the engine they consider the best". Which mental ray no longer is.

If you have some time, read this short four-message thread... It perfectly sums up what mental images is sinking into.

In short, someone asks the iray devs a simple, well-argued question about OpenCL (it even references Chaos Group). No dev responds (which is pretty rare; they often take the time to answer such questions, even briefly), but someone who appears to be more of an Nvidia salesman than a dev replies with a marketing pitch promoting CUDA over OpenCL:

iray uses C for CUDA because it needs the highest performance and greatest capabilities available to it.

While NVIDIA leads the industry in the broadest OpenCL support, the language is several years behind C in both capabilities and tools, and it advances at the speed of open standards. A CPU fallback is unnecessary for iray as it supports x86 directly - far more efficiently than a fallback could. In using C for CUDA, iray ensures you have the very latest GPU capabilities as soon as they come online, while having direct influence on its evolution.

With C for CUDA, there are over 1/2billion NVIDIA GPUs that can increase iray performance. I believe you would find the %increase from AMD to be quite small as their OpenCL support is limited to their latest offerings.

As for CPUs, iray runs as well on AMD as Intel, taking full advantage of multiple cores and sockets.

- Phil NVIDIA

For a support forum, I think that looks bad. Especially since the defenders of CUDA on the Chaos Group forum spout the same kind of nonsense, only to get firmly put in their place by Vlado, senior member of the Chaos Group forum:

- (Member): I had a feeling CUDA is gonna kick OpenCL in the ass and it is ! I hope that OpenCL will be able to share memory too at some point...
- (vlado): We do have a CUDA version of V-Ray RT that we use internally, so if we see that there are significant benefits of going this way, it is certainly something that we would do without too much hesitation.
- (Member): Wow... Release it ploxxxxxxxxx, isn't it a lot more responsive(refresh speed) then opencl btw?
- (vlado): No, not really. In some of the last tests, OpenCL was a tad bit faster.

So there we are: for Chaos Group, releasing a CUDA version of V-Ray RT is pointless because it is less efficient...

And while we're on the subject: the arrival of Vray for Maya could have gone by without much noise... Many were waiting for it, but not necessarily the VFX industry at the time. Chaos Group seems to have understood that the needs of Maya users are not quite the same as those of 3dsMax users, where Vray was considered the rendering engine "for archviz"...

Who would have thought, when Vray for Maya was released, that Digital Domain would use it to make shots for Tron Legacy:

Indeed, this version is very "prod oriented". They did not simply "copy" their render engine from 3dsMax to Maya; rather than making it uniform, they adapted it. And adapted it very cleverly...

Conclusion

In short, the purpose of this post was not to praise Vray, but to explain why I no longer believe in mental ray...

The fact that it's still an excellent render engine, integrated into both Maya and 3ds Max, leaves it a bright future, and it will not disappear overnight. But I think studios will slowly turn away from it, just as I, faithful for years, have begun to do.