Nvidia's new chief scientist talks

Nvidia's Chief Scientist Bill Dally on GPU technology, DirectX 11 and Intel's Larrabee

PC Games Hardware had the chance to conduct an extensive interview with Nvidia's new chief scientist Bill Dally. He talks about his view on GPU technology, necessary improvements, DirectX 11 and the possible threat posed to Nvidia by Intel's Larrabee project.

Nvidia's Chief Scientist Bill Dally took the time during a visit to Nvidia's office in Munich, Germany, to talk to PC Games Hardware about his vision for future developments in the GPU area, the threat that Larrabee might pose to the comparatively traditional GPUs Nvidia is making, and several other topics covered in the full interview below.

The following interview was conducted during a visit to Nvidia's local office in Germany before a roundtable talk with key representatives of the press from all over Europe. It is a transcript of a live conversation, so please pardon any grammatical errors we may have made.


Nvidia's Chief Scientist Bill Dally [picture]
PCGH: Bill, thank you for taking the time to give us this interview and welcome to Germany! You took over the position of chief scientist at Nvidia in January, I think.
Bill Dally: That's correct.

PCGH: So, could you please introduce yourself to our readers and tell us a bit about your background?
Bill Dally: My name is Bill Dally, and up until January I was chairman of the computer science department at Stanford University. I've been in the academic world since the mid-eighties: I was a professor at MIT in Boston from 1986 to 1997, then I joined the Stanford faculty in 1997 and have been there ever since.

My expertise is mostly in the area of parallel computing. I've built a number of experimental parallel machines over the years: the J-Machine, the M-Machine, the Imagine Stream Processor and most recently a processor called ELM. I've also done a lot of work on interconnection networks, designing a number of networks that were used in Cray supercomputers in the 1990s, and more recently coming up with the Flattened Butterfly and Dragonfly topologies.

At Nvidia I'm chief scientist, which really involves three pieces. One is to set technical direction and consult with the product groups to influence their technologies and make our products better going forward. The other is to lead Nvidia Research, which has the goal of looking 5-10 years ahead, identifying challenges and opportunities, and developing strategic technologies to make Nvidia products more competitive. Finally, there is an outreach component where I meet with customers, partners and university researchers, and evangelize GPU computing and the CUDA programming language.

PCGH: What were your main reasons to leave the academic field and join Nvidia? What do you think you can do at Nvidia that you couldn't do in the academic field or at another company?
Bill Dally: Good question. I thought it was a really compelling opportunity to influence the future of computing. I think we're at this critical juncture in computing, where we're going from serial computing to parallel computing. And Nvidia is uniquely positioned, I think, to play a very important role as the bulk of computing becomes parallel. And I thought it was a great opportunity to be a part of that.

PCGH: Well, you already told us a bit about being chief scientist. Does that also have something to do with actual microprocessor design, or are you more like the visionary for Nvidia Research who says "Well, let's head in that direction"?
Bill Dally: It's a little bit of both. It's sort of more on the visionary end, and I can sort of think the big thoughts. Nvidia has a very large number of extremely talented engineers and engineering managers that do the detailed design. I do get involved with them, though, when there are ideas that I want to see incorporated into future generation products, or if there's a particular area where I think I can lend some expertise to help make the product better.

PCGH: You also mentioned that you have a strong background in parallel computing, and you have already headed more than one company producing actual microprocessors, like Stream Processors Inc. and Velio Communications.
Bill Dally: Yes, I've had several start-up-companies over the years.

PCGH: Do you think that is going to be a major factor of influence on your work at Nvidia? Especially the network component between distributed processing nodes maybe?
Bill Dally: I think so. You know, you learn something from every company you're involved with and every job you have in your career, and so I'm bringing everything I've learned to Nvidia and trying to make the best use of that to make Nvidia's products better.

PCGH: Ok, but you're not saying "I'm only concentrating on building a very fast network of highly specialized processors" to sort of replace GPUs in the traditional sense?
Bill Dally: No, I mean that would be like a solution looking for a problem. I like to look at it the other way. Our products are very good right now. So if you want to make any changes, you have to be careful that those changes are strategic and well thought out. We don't just want to apply technology because the technology is there; we want to see where the need is, how we can make our products better, and what the appropriate technology is to meet that need.

PCGH: You have replaced David Kirk as chief scientist. How do you think you will be a different kind of chief scientist? What will you do differently than your predecessor?
Bill Dally: David is an Nvidia Fellow now. And I think that I complement him well. I mean, David is a real expert on graphics, in particular on ray tracing, and I am a real expert on parallel computing. Both graphics and parallel computing are very important areas for Nvidia. And so I think I'm gonna continue many of the things David did. I think he did a wonderful job as chief scientist. I'm gonna try to expand our research in particular into a lot of areas that have to do with how we implement our GPUs.

He created Nvidia Research, and he focused it very much on the application end of things: how to deliver better graphics, a number of things having to do with GPU computing, and in particular ray tracing. We're continuing those activities and actually still consider them really important areas. But also in our mission of looking 5-10 years forward, it's very important for us to look at things like VLSI design, computer architecture, compilers and programming systems. And so I'm expanding Nvidia Research into those directions, so we can have a long-term view and identify strategic opportunities there as well as in the application spaces.

PCGH: Now you've mentioned it two times: ray tracing. I'll have to go into that direction a bit. Intel made a lot of fuss about ray tracing in the last 18 months or so. Do you think that's going to be a major part of computer graphics, and especially gaming graphics, in the foreseeable future, until 2015 maybe?
Bill Dally: It's interesting that they've made a big fuss about it, while we demonstrated real-time ray tracing at Siggraph last year. It's one thing making a fuss; it's another thing demonstrating it running in real time on GPUs.

But to answer that: what I see as most likely for game graphics going forward is hybrid graphics, where you start out by rasterizing the scene and then make a decision at each fragment whether that fragment can be rendered just with a shader calculating from local information, or whether it's a specular surface, a transparent surface, or there's a silhouette edge where soft shadows are important. In those cases you may need to cast rays to compute a very accurate and photorealistic color for that point. So I think it's gonna be a hybrid approach where some pixels are rendered conventionally and some pixels involve ray tracing, and that gives us the most efficient use of our computational resources: using ray tracing where it does the most good.
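For readers who want to picture what such a per-fragment decision could look like, here is a minimal CUDA-style sketch. It is purely illustrative and not Nvidia code: the Fragment layout and the helpers shade_local, trace_ray and needs_ray are assumptions made up for this example.

#include <cuda_runtime.h>

// Illustrative fragment record; a real engine's G-buffer layout would differ.
struct Fragment {
    float3 normal, albedo;
    bool   specular, transparent, on_silhouette;
};

// Placeholder for conventional local shading (the ordinary pixel-shader path).
__device__ float3 shade_local(const Fragment &f) {
    float ndotl = fmaxf(f.normal.z, 0.0f);   // toy light coming from +z
    return make_float3(f.albedo.x * ndotl, f.albedo.y * ndotl, f.albedo.z * ndotl);
}

// Placeholder for the expensive path that would cast a ray for this fragment.
__device__ float3 trace_ray(const Fragment &f) {
    return f.albedo;   // a real version would traverse an acceleration structure
}

// Cast rays only where local information is not enough for a correct result.
__device__ bool needs_ray(const Fragment &f) {
    return f.specular || f.transparent || f.on_silhouette;
}

// One thread per rasterized fragment decides which path to take.
__global__ void shade_fragments(const Fragment *frags, float3 *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const Fragment f = frags[i];
    out[i] = needs_ray(f) ? trace_ray(f) : shade_local(f);
}

The point is simply that the branch is taken per fragment, so only the pixels that actually need it pay the cost of ray tracing.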

PCGH: That doesn't sound like the traditional hybrid approach, where you have the overhead of the sparse octree and do the geometry stuff on the ray tracing side of the engine and the pixel stuff on the rasterization side. You said you're going down to fragment level, which sounds like running CUDA kernels for each fragment.
Bill Dally: We do that today. It's called pixel shaders. [laughs]

PCGH: Yes, but a different kind of pixel shader.
Bill Dally: Right, one thing a pixel shader could do is cast a ray, and only then do you go through the space partition tree, which is the acceleration structure.
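As a rough illustration of what "going through the space partition tree" inside such a shader or kernel could involve, the sketch below traverses a simple BVH-like structure iteratively with an explicit stack. The Ray and Node layouts and the helpers hit_box, hit_tri and cast_ray are assumptions for the example, not a description of Nvidia's implementation.

#include <cuda_runtime.h>

struct Ray  { float3 origin, inv_dir; };   // inv_dir = 1 / direction, precomputed
struct Node { float3 lo, hi; int left, right, first_tri, tri_count; };

// Slab test: does the ray hit the node's bounding box before t_max?
__device__ bool hit_box(float3 lo, float3 hi, const Ray &r, float t_max) {
    float tx0 = (lo.x - r.origin.x) * r.inv_dir.x, tx1 = (hi.x - r.origin.x) * r.inv_dir.x;
    float ty0 = (lo.y - r.origin.y) * r.inv_dir.y, ty1 = (hi.y - r.origin.y) * r.inv_dir.y;
    float tz0 = (lo.z - r.origin.z) * r.inv_dir.z, tz1 = (hi.z - r.origin.z) * r.inv_dir.z;
    float tmin = fmaxf(fmaxf(fminf(tx0, tx1), fminf(ty0, ty1)), fminf(tz0, tz1));
    float tmax = fminf(fminf(fmaxf(tx0, tx1), fmaxf(ty0, ty1)), fmaxf(tz0, tz1));
    return tmax >= fmaxf(tmin, 0.0f) && tmin < t_max;
}

// Placeholder triangle test; a real kernel would use e.g. Moeller-Trumbore.
__device__ float hit_tri(const float3 *tri, const Ray &r) { return 1e30f; }

// Iterative traversal of the acceleration structure for one ray.
__device__ float cast_ray(const Node *nodes, const float3 *tris, const Ray &ray) {
    int   stack[64];              // explicit stack instead of recursion
    int   top   = 0;
    float t_hit = 1e30f;          // nearest hit found so far
    stack[top++] = 0;             // start at the root node
    while (top > 0) {
        const Node n = nodes[stack[--top]];
        if (!hit_box(n.lo, n.hi, ray, t_hit)) continue;   // prune this subtree
        if (n.tri_count > 0) {    // leaf: test the triangles it references
            for (int i = 0; i < n.tri_count; ++i)
                t_hit = fminf(t_hit, hit_tri(&tris[3 * (n.first_tri + i)], ray));
        } else {                  // inner node: visit both children
            stack[top++] = n.left;
            stack[top++] = n.right;
        }
    }
    return t_hit;                 // distance to the nearest intersection, if any
}

Such a traversal only runs for fragments that actually cast a ray, which is why the acceleration structure cost stays proportional to the pixels that need it.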

PCGH: So you would like to move the decision from the game developer to the compiler/driver level?
Bill Dally: No, the game developer can write the shader and the shader is making this decision.

PCGH: While we're at it: Intel also made a big fuss about Larrabee.
Bill Dally: M-hm.

PCGH: They are aiming for a mostly programmable architecture there. According to Intel, only 10 percent of the whole die is dedicated to graphics, the rest being completely programmable. And still they want to compete in the high end with your GPUs. Do you think that's feasible right now?
Bill Dally: First of all, right now Larrabee is a bunch of viewgraphs. So until they actually have a product, it's difficult to say how good it is or what it does. You have to be careful not to read too much into viewgraphs: it's easy to be perfect when all you have to do is be a viewgraph. It's much harder when you have to deliver a product that actually works.

But to the question of the degree of fixed-function hardware: I think it puts them at a very serious disadvantage. Our understanding of Larrabee, which is based on their paper at Siggraph last summer and the two presentations at the Game Developers Conference in April, is that they have fixed-function hardware for texture filtering, but they do not have any fixed-function hardware for either rasterization or compositing, and I think that puts them at a very serious disadvantage. Because for those parts of the graphics pipeline they're gonna have to pay 20 times or more energy than we will for those computations. And so, while we also have the option of doing rasterization in software if we want (we can write a kernel for that running on our Streaming Multiprocessors), we also have the option of using our rasterizer to do it, and do it far more efficiently. So I think it puts them at a very big disadvantage power-wise not to have fixed-function hardware for these critical functions. Because everybody in a particular power envelope is dominated by their power consumption, it means that at a given power level they're going to deliver much lower graphics performance.
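To make the software-rasterization option concrete, here is a toy sketch of what a rasterization kernel running on the shader cores might look like: one thread per pixel evaluates edge functions for a single counter-clockwise triangle. The names edge and rasterize_tri are illustrative assumptions, and the code ignores everything a real rasterizer must handle (tiling, depth, attribute interpolation, fill rules).

#include <cuda_runtime.h>

// Signed area of (a, b, p): positive when p lies to the left of edge a->b.
__device__ float edge(float2 a, float2 b, float2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Toy software rasterizer: one thread per pixel, coverage for one triangle.
__global__ void rasterize_tri(float2 v0, float2 v1, float2 v2,
                              unsigned char *coverage, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    float2 p = make_float2(x + 0.5f, y + 0.5f);   // sample at the pixel center
    bool inside = edge(v0, v1, p) >= 0.0f &&
                  edge(v1, v2, p) >= 0.0f &&
                  edge(v2, v0, p) >= 0.0f;
    coverage[y * width + x] = inside ? 255 : 0;
}

Doing this work in programmable cores is possible, but dedicated rasterization hardware performs the same operations at a fraction of the energy, which is the disadvantage Dally describes.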

I also think that the fact that they've adopted the x86 instruction set puts them at a disadvantage. It's a complex instruction set, it's got instruction prefixes, it only has eight registers, and while they claim that this gives them code compatibility, it gives them code compatibility only if they want to run one core without the SIMD extension. To use the 32 cores or the 16-wide SIMD extension, they have to write a parallel program, so they have to start over again anyway. And they might as well have started over with a clean instruction set and not carried the area and power cost of interpreting a very complicated instruction set. That puts them at a disadvantage as well.

So while we pay close attention to Larrabee (Intel is a very capable company, and you always worry when a very capable company starts eating your lunch), we're not too worried about it, at least based on what they've disclosed so far.




--
Author: Carsten Spille (Aug 11, 2009)






