  1. #1
    Joined
    Nov 2007
    Posts
    1,520

    Dr. Kirk is right, IMO.

    Once again, I seem to go with the less popular opinion.

    I've personally always been unimpressed with ray tracing, especially when it's done on the CPU.

    What do you think? Should rendering be the task of the CPU like it was back in the day, or are GPUs still the best way to go?

    I'm skeptical of how much better things will be with 8 CPU cores. My skepticism may turn out to be wrong, but I think otherwise.

    Games almost always depend more on a good GPU than on your CPU. There are exceptions (Supreme Commander), but they're few and far between.

    Anyway, to conclude, CPU-based ray tracing will probably win out, since that's exactly what I don't want =(

    I guess those who want CPU-based ray tracing to win out could point out that Doom was among the most technically impressive games ever, and your system RAM and video DAC were really all that mattered besides your CPU. I have to admit that's true.

  2. #2
    Joined
    May 2000
    Location
    Florence, KY, USA
    Age
    37
    Posts
    5,143

    Re: Dr. Kirk is right, IMO.

    Yes, I'd like to see some more reader responses.

    Just in case, here is the article he's referring to: http://www.pcper.com/article.php?aid=530

    Also, if you guys like the article, please DIGG and SLASHDOT us:

    http://slashdot.org/firehose.pl?op=view&id=559366
    http://digg.com/hardware/NVIDIA_Comm...ization_Debate
    Last edited by Ryan; 03-07-2008 at 10:28 AM.
    Ryan Shrout
    Owner, PC Perspective
    rshrout -at- pcper -dot- com
    --= Follow me on Twitter =--


  3. #3
    Joined
    Nov 2007
    Posts
    1,520

    Re: Dr. Kirk is right, IMO.

    Thanks, Ryan =)

  4. #4
    Joined
    Jul 2003
    Location
    Long Beach CA
    Age
    59
    Posts
    4,773

    Re: Dr. Kirk is right, IMO.

    Well, what a shock: I don't disagree with VRP on this issue. I think there are obvious reasons why Intel would want to push ray tracing and multiple-CPU solutions as opposed to GPU solutions. I guess my answer would be that until I can see it demonstrated as a better and more cost-effective way to render graphics, I will judge it to be inferior to our current GPU-based methods.

    Might ray tracing be the answer in the future? Perhaps, but it would certainly narrow our options and cut down on competition, and we all know what happens then.


    You see, I don't have anything against you personally, VRP.
    Game - BIOSTAR TA785 A2+ / Phenom II X4 965 Black Edition Deneb 3.4GHz 125W / Patroit 8GB DDR2 800 CAS 4 Timing: 4-4-4-12 / Sapphire HD 6870 1GB / Antec Three Hundred Case / PC Power & Cooling S61EPS 610W / X-Fi XtremeGamer / Western Digital 640GB 7200 RPM SATA 3.0Gb/s /Windows Vista home premium 64 SP1

    Back up - Biostar 6100-939 / A64 3800 Venice(Stock 2.4) / 1.5 GB Corsair XMS 3200 / HD3870

  5. #5
    Joined
    Sep 2002
    Location
    Kansas, Wheat Country Baby!
    Posts
    739

    Re: Dr. Kirk is right, IMO.

    I thought it was a great article. It was nice to see a good point / counter-point look at ray tracing. Great work PCPER staff for putting this together!
    AMD Thuban 1055T @ 4ghz
    Gigabyte 890GPA-UD3
    4 Gig Mushkin Memory
    500 Gig WD Caviar Black HDD
    Palit GTX460 Sonic Platinum
    Antec 300 Case
    BFG 750 watt SLI Powersupply

  6. #6
    Joined
    Nov 2007
    Posts
    1,520

    Re: Dr. Kirk is right, IMO.

    Quote Originally Posted by RatboyX View Post
    Well, what a shock: I don't disagree with VRP on this issue. I think there are obvious reasons why Intel would want to push ray tracing and multiple-CPU solutions as opposed to GPU solutions. I guess my answer would be that until I can see it demonstrated as a better and more cost-effective way to render graphics, I will judge it to be inferior to our current GPU-based methods.

    Might ray tracing be the answer in the future? Perhaps, but it would certainly narrow our options and cut down on competition, and we all know what happens then.


    You see, I don't have anything against you personally, VRP.
    Thanks=)

  7. #7
    Joined
    Sep 2003
    Location
    Independence, MO
    Posts
    365

    Re: Dr. Kirk is right, IMO.

    I must say, it looks impressive, but running a game as old as Q4 at an HD resolution isn't exactly earth-shattering. I'd be curious to see how it would do rendering a newer game like Crysis.

    I guess I'm not terribly shocked to see Intel pushing for a move away from GPU rendering. Intel has already proven that they don't seem willing/able to produce even a decent IGP (and drivers!), much less a standalone GPU. However, Intel is very good at making CPUs. It only makes sense for them to promote a standard that benefits them and their industry lead. That's the nature of business: it's often not about what is best for the consumer, it's about what is best for the business (ironically, both AMD and Intel have benefited the most when they gave the industry what it wanted!).

    With Intel's only real competitor buying a very established GPU company, Intel knows it is now way behind in producing a decent GPU, much less a competitive one. That's not really an insult, as even ATI and nVidia have exchanged successes and failures over the years. Where Intel (and maybe nVidia) really should be worried is with AMD's Fusion. I think the concept of merging the GPU and CPU is going to produce some very good things for several industries, as I feel core specialization is the future. Symmetric cores are nice for now, but we already see instances where GPUs can do some things far better than an x86 CPU could ever hope to do. Intel seems to be leaning towards creating one "general use" core design and then increasing the number of those cores in the system. This might work if pure raytracing wins out. If a hybrid Raytracing/rasterization concept unfolds, AMD would be sitting in a very favorable position. I already suspected that Fusion would be a compelling solution (due to low-latency connections between cores), but things may have gotten even more interesting.

    It's curious to see three rendering approaches developing, and it's all based on what each company does best. We have Intel pushing CPU-based raytracing, AMD developing Fusion, and nVidia sticking to traditional standalone products. When AMD bought ATI and started talking about Fusion, it appeared that AMD was taking a big risk. However, they seem to be in the safest position now, since they should be able to produce a powerful product regardless of which approach comes out ahead. They might even be in the best situation if a hybrid technology wins out.

  8. #8
    Joined
    May 2000
    Location
    Florence, KY, USA
    Age
    37
    Posts
    5,143

    Re: Dr. Kirk is right, IMO.

    Intel has the money and willpower to get themselves out of a lot of tough situations. Don't forget that though this is being shown on CPUs now, the Larrabee project isn't exactly a CPU - it's an 80-core processor, and that should be the target for these ray tracing applications.


  9. #9
    Joined
    Jul 2003
    Location
    Long Beach CA
    Age
    59
    Posts
    4,773

    Re: Dr. Kirk is right, IMO.

    Quote Originally Posted by Ryan View Post
    Intel has the money and willpower to get themselves out of a lot of tough situations. Don't forget that though this is being shown on CPUs now, the Larrabee project isn't exactly a CPU - it's an 80-core processor, and that should be the target for these ray tracing applications.
    80 cores… like I said, when this technology can be shown to be superior AND cost effective, I might say it's the best solution. What would an 80-core GPU be capable of doing? I'm going to be able to run my "Holo Deck" on 80 cores, but it won't be cheap.

  10. #10
    Joined
    Sep 2003
    Location
    Independence, MO
    Posts
    365

    Re: Dr. Kirk is right, IMO.

    Well, it's ironic, since GPUs are essentially a bunch of simple cores working as a group towards a common result. They are very specific hardware, but unified shaders perform more than one task, it's just that those tasks are typically bound to graphics.

  11. #11
    Joined
    Apr 2001
    Location
    Northern California
    Age
    38
    Posts
    799

    Re: Dr. Kirk is right, IMO.

    I think ray tracing has potential, but I tend to agree with Dr. Kirk as well. We aren't going to just jump right to full ray tracing; more likely we will see hybrid techniques emerge first.

    I like the fact that Intel is pushing it forward, but I'm sure a lot of the reason for that is so they can justify yet more cores in their CPUs, when we can't even really utilize what we have effectively yet.
    Last edited by zeejay; 03-08-2008 at 02:32 PM.
    -Zeejay

  12. #12
    Joined
    Mar 2008
    Posts
    2

    Re: Dr. Kirk is right, IMO.

    Dr. Kirk is talking out of his arse. Of course Nvidia would say something like this, because their future depends on it. They are fragging scared of what Intel is doing in its labs - you can see that from the way they're acquiring companies before Intel comes around. Nvidia is trapped in its mindset just as SGI and 3DFX were before, and they might drop out of the market just as those companies did.
    Besides, GPUs are becoming monsters: >200 W power-eating monsters with ridiculously huge dies, ridiculous cooling solutions, and ridiculously big cards. And their general-purpose processing power is a myth. GPUs are still completely shit compared to CPUs when it comes to handling out-of-order execution. Multi-GPU scaling is shit compared to CPU-core scaling as well, because rasterization simply does not scale that well: the execution and memory-access patterns are more complicated.

  13. #13
    Joined
    Jul 2003
    Location
    Long Beach CA
    Age
    59
    Posts
    4,773

    Re: Dr. Kirk is right, IMO.

    Quote Originally Posted by DrBalthar View Post
    Dr. Kirk is talking out of his arse. Of course Nvidia would say something like this, because their future depends on it. They are fragging scared of what Intel is doing in its labs - you can see that from the way they're acquiring companies before Intel comes around. Nvidia is trapped in its mindset just as SGI and 3DFX were before, and they might drop out of the market just as those companies did.
    Besides, GPUs are becoming monsters: >200 W power-eating monsters with ridiculously huge dies, ridiculous cooling solutions, and ridiculously big cards. And their general-purpose processing power is a myth. GPUs are still completely shit compared to CPUs when it comes to handling out-of-order execution. Multi-GPU scaling is shit compared to CPU-core scaling as well, because rasterization simply does not scale that well: the execution and memory-access patterns are more complicated.
    And what are your qualifications? No, wait - I don't really care, because there is no way for me to verify them. Maybe you could be more precise in your criticism and not just spout off unfounded generalizations. What specifically about his statements is untrue?

  14. #14
    Joined
    May 2000
    Location
    Florence, KY, USA
    Age
    37
    Posts
    5,143

    Re: Dr. Kirk is right, IMO.

    We do have some interesting commentary on the debate from a star in the development community - hopefully up tomorrow.


  15. #15
    Joined
    Mar 2008
    Posts
    1

    Re: Dr. Kirk is right, IMO.

    Dr. Kirk says a lot of things that are very misleading. Consider his description of why raytracing is slow:

    First, you must trace rays for visibility (what objects the eye sees directly) and antialiasing. Then, for each object that is hit, you must trace shadow rays, to determine if the point on the surface can "see" the light or if it is in shadow. More modern film-rendering software goes a step beyond this and looks not only at light sources, but considers that every other surface in the environment can reflect light. So effectively everything is a light source.
    But of course, a raytracer doesn't need to trace shadow rays: that's only necessary if you want accurate shadows. It doesn't need to trace environment rays: that's only necessary if you want accurate global illumination. It doesn't need to trace reflection rays: that's only necessary if you want accurate reflections. And so on. Furthermore, the techniques needed to get these effects with a rasterizer (shadow volumes, reflection maps, etc.) are also expensive, sometimes much more expensive than tracing rays. So he's comparing apples to oranges, a raytracer with accurate shadows, lighting, reflections, etc. to a rasterizer with none of these things. If you don't need these effects, the raytracer becomes much faster. And if you do want them, the rasterizer becomes much, much slower.
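    To make that concrete, here's a toy Python sketch (my own illustration, not anything from the article; the sphere scene, values, and function names are all invented): shadow rays are a separate, optional pass per hit, so a raytracer without them does strictly less work.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None.
    direction is assumed normalized, so the quadratic's 'a' term is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None  # epsilon avoids self-intersection

def trace(origin, direction, spheres, light_pos, with_shadows=True):
    """One primary ray: find the nearest hit, then (optionally) fire one
    shadow ray toward the light. Spheres are (center, radius, albedo)."""
    nearest = min(
        ((t, s) for s in spheres
         if (t := intersect_sphere(origin, direction, s[0], s[1])) is not None),
        default=None)
    if nearest is None:
        return 0.0  # background
    t, (center, radius, albedo) = nearest
    hit = [o + t * d for o, d in zip(origin, direction)]
    if with_shadows:
        to_light = [l - h for l, h in zip(light_pos, hit)]
        dist = math.sqrt(sum(v * v for v in to_light))
        dir_l = [v / dist for v in to_light]
        # the optional extra cost: one more ray per light, per hit
        for s in spheres:
            ts = intersect_sphere(hit, dir_l, s[0], s[1])
            if ts is not None and ts < dist:
                return 0.0  # in shadow
    return albedo  # flat "shading" just for illustration
```

    With `with_shadows=False` the tracer only answers the visibility question; turning it on adds one ray per light per hit. That per-effect cost, not ray tracing itself, is what the article's numbers are counting.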

    He makes a similar apples to oranges comparison when talking about ambient occlusion:

    In order to do a good job of rendering these effects, you would have to shoot tens or hundreds of rays per pixel. This is far from real time. As a side note, these effects are "soft" and very well-approximated through rasterization and texturing techniques in real-time.
    Now, it's very easy to do accurate ambient occlusion with a raytracer: you just trace rays in lots of different directions to see how many are blocked. Yes, that's computationally expensive, but at least it's easy to implement. In contrast, it's impossible to do accurate ambient occlusion with a rasterizer. Instead, you have to use approximate methods, like precalculating occlusion factors for each vertex based on only the static scene geometry, then interpolating between them. But those approximate methods work just as well in a raytracer as in a rasterizer. Dr. Kirk uses doublespeak to imply that ambient occlusion is easier in a rasterizer than a raytracer. In fact, the opposite is true. If you're satisfied with approximate techniques, it's equally easy (and fast) in both types of renderer. If you're not satisfied with them, a raytracer is the only game in town.
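    Here's roughly what "trace rays in lots of different directions to see how many are blocked" looks like (a hypothetical Monte Carlo sketch of my own, not code from the article): expensive, yes, but trivially simple to implement.

```python
import math
import random

def ambient_occlusion(hit_point, normal, occluder_test, n_samples=64):
    """Monte Carlo AO: fire rays over the hemisphere above the surface and
    return the fraction that escape. occluder_test(origin, direction) is a
    placeholder for the scene's ray query and returns True when blocked."""
    unblocked = 0
    for _ in range(n_samples):
        # pick a random direction on the sphere, flip it into the
        # hemisphere around the surface normal
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d)) or 1.0
        d = [x / norm for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-x for x in d]
        if not occluder_test(hit_point, d):
            unblocked += 1
    return unblocked / n_samples  # 1.0 = fully open, 0.0 = fully occluded
```

    The cost is proportional to `n_samples`, which is exactly the "tens or hundreds of rays per pixel" Dr. Kirk cites, but the algorithm itself is a dozen lines.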

    Finally, he's very misleading in dismissing a raytracer's algorithmic advantage in dealing with large scenes (the speed scales logarithmically with the amount of scene geometry instead of linearly). He presents two arguments against this. First, he says that building the hierarchy in the first place requires visiting every triangle. But that's only true if you rebuild the hierarchy from scratch for every frame. Most games have a lot of static scene geometry that doesn't change from one frame to the next, so you only need to rebuild the parts of the hierarchy that contain moving objects.
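    A sketch of that partial-rebuild idea (again my own illustration; the node layout and the `dynamic` flag are invented for the example): subtrees containing only static geometry are flagged at build time and skipped outright, so per-frame work is proportional to the moving objects, not the whole scene.

```python
def refit(node, moved):
    """Recompute 1D bounding boxes only inside subtrees flagged 'dynamic'.
    node: dict with 'box' (lo, hi), and either 'leaf' (object id) or
    'left'/'right' children. moved: object id -> new (lo, hi) box.
    Returns True if this subtree's box changed."""
    if not node.get("dynamic"):
        return False  # static subtree: nothing in here ever moves, skip it
    if "leaf" in node:
        if node["leaf"] in moved:
            node["box"] = moved[node["leaf"]]
            return True
        return False
    # '|' (not 'or') so both children are refitted, no short-circuit
    changed = refit(node["left"], moved) | refit(node["right"], moved)
    if changed:
        node["box"] = (min(node["left"]["box"][0], node["right"]["box"][0]),
                       max(node["left"]["box"][1], node["right"]["box"][1]))
    return changed
```

    Only the boxes on paths to moved objects get recomputed; the static parts of the hierarchy are never even visited.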

    Second, he claims that hierarchical acceleration structures work just as well for rasterizers as for raytracers. That is simply false. The hierarchies used in rasterizers serve a very different purpose: to avoid drawing objects that aren't currently visible. The rendering time is still linear in the amount of visible scene geometry. So if you want to increase the size of the world without changing how much is visible at one time, rasterizers and raytracers both do fine. But if you want to make the world more detailed so that more geometric detail is visible at one time, the raytracer does much better than the rasterizer.
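    The logarithmic-versus-linear point is easy to demonstrate with a toy 1D "BVH" (my own sketch, not from the article): a point query against a hierarchy over n sorted, disjoint intervals touches on the order of log n nodes, where testing every interval would be linear in n.

```python
def build_bvh(intervals):
    """Build a binary hierarchy over sorted, disjoint (lo, hi) intervals."""
    if len(intervals) == 1:
        return {"box": intervals[0], "leaf": True}
    mid = len(intervals) // 2
    left, right = build_bvh(intervals[:mid]), build_bvh(intervals[mid:])
    # children are sorted, so the combined box spans left's lo to right's hi
    return {"box": (left["box"][0], right["box"][1]),
            "left": left, "right": right}

def query(node, x, visited):
    """Is x inside any interval? visited[0] counts nodes touched."""
    visited[0] += 1
    lo, hi = node["box"]
    if x < lo or x > hi:
        return False  # whole subtree culled with one box test
    if "leaf" in node:
        return True
    return query(node["left"], x, visited) or query(node["right"], x, visited)
```

    With 1024 intervals the tree has 2047 nodes, but a query touches only about 21 of them; doubling the scene adds roughly two more node visits instead of doubling the work.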

    It's true that right now, rasterization is clearly the fastest method for real time rendering. But I suspect this has less to do with the algorithm itself than with the huge amount of time GPU makers have spent developing specialized hardware to accelerate this particular algorithm. If every computer included an RPU (Ray Processing Unit) with specialized circuitry that could calculate 128 hardware accelerated ray/triangle intersections in parallel, then raytracing would be much faster than it is today.
