Revisiting the postcard pathtracer with CUDA and Optix



I main(){I w=960,h=540,s=16;V e(-22,5,25),g=!(V(-3,4,0)+e*-1),l=!V(g.z,0,-g.x)*(1./w),u(g.y*l.z-g.z*l.y,g.z*l.x-g.x*l.z,g.x*l.y-g.y*l.x);printf("P6 %d %d 255 ",w,h);for(I y=h;y--;)for(I x=w;x--;){V c;for(I p=s;p--;)c=c+T(e,!(g+l*(x-w/2+U())+u*(y-h/2+U())));c=c*(1./s)+14./241;V o=c+1;c=V(c.x/o.x,c.y/o.y,c.z/o.z)*255;printf("%c%c%c",(I)c.x,(I)c.y,(I)c.z);}}

// Andrew Kensler

Usual tricks: Vector class, Utils, Database, Ray marching, Sampling, main.

With a bit of renaming and clang-tidy, a.cpp gives a much clearer picture.

Establishing the baseline

To make testing easier, I modified the code to take the number of samples per pixel (spp) as a parameter. The one-minute budget was completely blown when establishing the baseline (with all optimizations disabled): even lowered to 1spp, it took 5m27s to generate a very noisy image.
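The modification itself is tiny; a sketch of it, assuming the sample count arrives as the first command-line argument (the helper name is mine, not from a.cpp):

```cpp
#include <cstdlib>

// Hypothetical helper: read samples per pixel from argv, falling back
// to the original hard-coded 16 when no argument is given.
int samples_per_pixel(int argc, char** argv) {
  return argc > 1 ? std::atoi(argv[1]) : 16;
}
```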

$ clang -O0 -lm -o a a.cpp
$ time ./a 1 > /dev/null

real    5m27.311s
user    5m27.094s
sys     0m0.078s

One core, no SIMD, -O0, and 1spp = 5m27s runtime.

Compiler optimizations

The first step is to enable compiler optimizations; some people would even argue that there is nothing below -O3 and it should have been the baseline. Performance improved roughly 30x, allowing 6spp in the same time budget.

$ clang -lm -O3 -o a a.cpp
$ time ./a 6 > /dev/null

real    0m58.100s
user    0m58.266s
sys     0m0.031s

With -O3 the pathtracer can be pushed to 6spp and renders in 57s.

-ffast-math

There is a compilation flag, -ffast-math, which allows the compiler to relax IEEE 754 compliance in favor of performance. It is automatically enabled when using -Ofast and shows another 2.6x performance improvement allowing 16spp.

$ clang -lm -Ofast -o a a.cpp
$ time ./a 16 > /dev/null

real    0m56.304s
user    0m56.266s
sys     0m0.031s

With -ffast-math we can cross into double-digit spp and reach 16. Trivia: opening the executable in BinaryNinja showed that clang was able to detect two calls, cosf(X) and sinf(X), separated by several lines of code, and optimize them into a single sincosf (which I had no idea existed). Surprisingly, this only happened with -Ofast and not with -O3.
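For the curious, the transformation looks like this. Note that sincosf is a glibc extension, not ISO C; its availability depends on the platform.

```cpp
#include <cmath>

// Two separate libm calls in the source,
//   *s = sinf(x);  /* ...unrelated code... */  *c = cosf(x);
// become one call that shares a single argument reduction:
void sin_and_cos(float x, float* s, float* c) {
  sincosf(x, s, c);  // glibc extension, declared with _GNU_SOURCE
}
```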

Going SIMD

Going SIMD is something I also did when I revisited the business card raytracer. Or at least I thought I had. Seeing XMM registers in use after opening the binary, I wrongly concluded that SIMD instructions were being used.

Just because the compiler outputs XMM instructions does not mean vectorization is taken care of. In the asm you can see that pretty much all mul/add instructions carry an ss or sd suffix, meaning they operate on a single data element. What you want are mulps/mulpd instructions.

While Clang is able to generate SIMD instructions via auto-vectorization, I found the feature capricious to trigger.

struct Vec {
  float p[3];

  // NOT SIMD: three floats do not fill an XMM register.
  Vec operator+(Vec o) {
    Vec v;
    for (int i = 0; i < 3; i++) {
      v.p[i] = p[i] + o.p[i];
    }
    return v;
  }
};

struct Vec {
  float p[4];

  // NOT SIMD: the loop still only touches three of the four elements.
  Vec operator+(Vec o) {
    Vec v;
    for (int i = 0; i < 3; i++) {
      v.p[i] = p[i] + o.p[i];
    }
    return v;
  }
};

struct Vec {
  float p[4];

  // SIMD: four floats, four iterations -> one addps.
  Vec operator+(Vec o) {
    Vec v;
    for (int i = 0; i < 4; i++) {
      v.p[i] = p[i] + o.p[i];
    }
    return v;
  }
};

In a professional environment I would probably not have felt confident relying on the compiler's goodwill. Using intrinsics via nmmintrin.h, __m128, _mm_set_ps, _mm_add_ps, and _mm_div_ps would have been safer. For research purposes, however, it was fine.
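A sketch of what the intrinsics route would look like, with __m128 holding the three components plus one padding lane (this is my illustration, not code from b.cpp):

```cpp
#include <xmmintrin.h>  // SSE: __m128, _mm_set_ps, _mm_add_ps, _mm_cvtss_f32

// Vec backed by explicit intrinsics: the addition is guaranteed to be
// a single addps, no auto-vectorizer goodwill required.
struct Vec4 {
  __m128 p;
  // _mm_set_ps takes arguments highest lane first, so x lands in lane 0.
  Vec4(float x, float y, float z) : p(_mm_set_ps(0.f, z, y, x)) {}
  explicit Vec4(__m128 v) : p(v) {}
  Vec4 operator+(Vec4 o) const { return Vec4(_mm_add_ps(p, o.p)); }
  float x() const { return _mm_cvtss_f32(p); }  // extract lane 0
};
```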

Modifying the Vector struct to operate on four items (b.cpp) secured the packed SIMD instructions mulps and addps.

$ clang -lm -Ofast -o b b.cpp
$ time ./b 17 > /dev/null

real    0m59.722s
user    0m59.719s
sys     0m0.000s

The speed increase was marginal (17spp) and the visual difference imperceptible.

Using all cores

At this point, it was time to use the 11 other cores of the Ryzen 5 2600. The task was considerably facilitated by the code from the previous exercise. A few pthread_create and pthread_join calls resulted in c.cpp, which spawns 540 threads, each rendering a line of 960 pixels.
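The structure of the change, reduced to its skeleton: one pthread per scanline, joined at the end. The atomic counter stands in for the real pixel work (the actual worker traces 960 pixels into a shared framebuffer), so this is a sketch, not c.cpp itself.

```cpp
#include <pthread.h>
#include <atomic>

static std::atomic<int> rows_done{0};

// One worker per scanline; the real version receives the row index
// and writes 960 pixels into a shared framebuffer.
static void* render_row(void*) {
  rows_done.fetch_add(1);
  return nullptr;
}

// Spawn h threads (540 for the postcard) and join them all.
int render_frame(int h) {
  pthread_t threads[540];
  for (int y = 0; y < h; y++)
    pthread_create(&threads[y], nullptr, render_row, nullptr);
  for (int y = 0; y < h; y++)
    pthread_join(threads[y], nullptr);
  return rows_done.load();
}
```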

$ clang -lm -lpthread -Ofast -o c c.cpp
$ time ./c 128 > /dev/null

real    0m55.203s
user    10m52.188s
sys     0m0.063s

As expected, the performance gain was linear with the number of cores. Twelve of them allowed pushing sampling to 128spp. However, the visual result had too much personality for my liking.

It is fast but also ugly. What is going on here? There is a visible repeating horizontal pattern. Could it be explained by the random generator, implemented by truncating libc's rand() and keeping only the linear congruential generator (LCG)?

Nope, the LCG is fine. The problem was that each thread's pseudo-random number generator was seeded with the same value, which means the same "random" series was used on every line. Changing the seed to use the thread id fixed the glitch (d.cpp).
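A minimal illustration of the bug and the fix. The LCG constants below are the classic glibc ones; the exact values in d.cpp may differ.

```cpp
#include <cstdint>

// Truncated-rand() style LCG, one instance per thread.
struct Lcg {
  uint32_t state;
  explicit Lcg(uint32_t seed) : state(seed) {}
  float next() {  // uniform-ish float in [0, 1)
    state = state * 1103515245u + 12345u;
    return ((state >> 16) & 0x7fff) / 32768.0f;
  }
};
// Bug: Lcg rng(42) in every thread -> identical sequence on every line.
// Fix: Lcg rng(thread_id)         -> distinct sequence per line.
```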

12 cores and -Ofast -> 128spp. Still noisy. Trivia: the business card raytracer optimization from last week prompted feedback from readers advising that 500+ threads would mess with the scheduler and likely degrade performance. I tried lower thread counts (12, 24, and even 36) but could not confirm this claim. An OS may have trouble dealing with 10,000 threads, but it looks like Linux was perfectly able to handle 500 of them.

GPGPU with CUDA

All optimizations used for the business card raytracer were applied in e.cu.

  • Maximize occupancy -> 0.2 :/.
  • Maximize branch efficiency -> 90% :).
  • Avoid float64.
  • Use intrinsics.
  • Use -use_fast_math flag.
C:\Users\fab>nvcc -O3 -o e -use_fast_math e.cu
C:\Users\fab>e.exe 2048 > NUL
Time: 59s

The speed gain (2048spp) was monstrous. Unfortunately, so was the generated image.


Probably a bug in the kernel.

That bug took an afternoon to track down. Ultimately, the culprit was determined to be the fast-math flag, which made the intrinsic __powf too inaccurate for ray marching. Replacing it with manual multiplications (f.cu) solved the problem but also reduced the budget to 600spp. There was also another issue.
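The workaround amounts to this kind of rewrite, shown here in plain C++. I am assuming a power of 8 for illustration; the actual exponent used in f.cu may differ.

```cpp
// Under -use_fast_math, __powf(x, n) goes through fast exp2/log2
// approximations. For a small integer exponent, unrolled
// multiplications are exact where the hardware allows and, more
// importantly, monotonic, which is what the distance estimator needs.
inline float pow8(float x) {
  float x2 = x * x;    // x^2
  float x4 = x2 * x2;  // x^4
  return x4 * x4;      // x^8
}
```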

Quite artistic but also quite wrong. The pseudo-random generator had been converted in a hurry, using the same uninitialized seed for a whole Streaming Multiprocessor. Using cuRAND with a seed per block/thread solved the issue (g.cu).

GPGPU rendering, 600spp. And still noisy.

Denoising

Starting with OptiX 5.0, NVIDIA provides an AI-based denoiser built on a pre-trained neural network. Even though NVIDIA claimed the model was "trained using tens of thousands of images rendered from one thousand 3D scenes", I had reservations about the result. I didn't even check the API and instead used Declan Russell's standalone executable.

The denoiser ran in 300ms and the result made me eat my words. It is so good it is mind-blowing. A denoised 600spp frame that takes one minute to render is equivalent to a 40,960spp frame that would take an hour.
