PBRT_V2 Summary Notes: CreateRadianceProbes

Date: 2019-01-17

// CreateRadianceProbes Declarations
class CreateRadianceProbes : public Renderer {
public:
    // CreateRadianceProbes Public Methods
    CreateRadianceProbes(SurfaceIntegrator *surf, VolumeIntegrator *vol,
        const Camera *camera, int lmax, float probeSpacing, const BBox &bbox,
        int nIndirSamples, bool includeDirect, bool includeIndirect,
        float time, const string &filename);
    ~CreateRadianceProbes();
    void Render(const Scene *scene);
    Spectrum Li(const Scene *scene, const RayDifferential &ray,
        const Sample *sample, RNG &rng, MemoryArena &arena, Intersection *isect,
        Spectrum *T) const;
    Spectrum Transmittance(const Scene *scene, const RayDifferential &ray,
        const Sample *sample, RNG &rng, MemoryArena &arena) const;
private:
    // CreateRadianceProbes Private Data
    SurfaceIntegrator *surfaceIntegrator;
    VolumeIntegrator *volumeIntegrator;
    const Camera *camera;
    int lmax, nIndirSamples;
    BBox bbox;
    bool includeDirectInProbes, includeIndirectInProbes;
    float time, probeSpacing;
    string filename;
};

Purpose:

(The main idea of CreateRadianceProbes:

1. Divide the whole scene into many blocks, the so-called grid cells; concretely, the scene's bounding box is split into many non-overlapping sub-bounding-boxes.

2. Starting from the camera, shoot rays in all directions, intersect them with the scene geometry, and record both the camera position and the resulting intersection points.

3. In each sub-bounding-box, choose a set of candidate points and test each one for indirect visibility against the camera and the recorded intersection points: as long as a candidate is mutually visible with the camera or with any one of those intersection points, the incident radiance (Li) at the candidate is computed and projected onto the SH basis, yielding SH coefficients. Since a sub-bounding-box contains a whole set of such candidate points, their SH coefficients are simply averaged, so each sub-bounding-box ends up with one set of averaged SH coefficients.

4. Iterate over all sub-bounding-boxes and write their averaged SH coefficients to a file.)

The CreateRadianceProbes Renderer computes incident radiance and projects it into
the spherical harmonics basis at a grid of points in the scene.
The computed SH coefficients include
the effect of all of the lights in the scene; thus, there is no incremental performance
cost for a large number of lights at final rendering time. Of course, a limited number
of SH coefficients can represent only a limited amount of lighting complexity. However,
for low-frequency (i.e., diffuse or only slightly glossy) BRDFs, the error from this
approximation is often acceptable.

This renderer then writes the SH coefficients out to a file. CreateRadianceProbes is thus
a renderer that doesn’t create an image as its output but instead computes a series of
measurements of the scene.

 

1. Constructor


// CreateRadianceProbes Method Definitions
CreateRadianceProbes::CreateRadianceProbes(SurfaceIntegrator *surf,
        VolumeIntegrator *vol, const Camera *cam, int lm, float ps, const BBox &b,
        int nindir, bool id, bool ii, float t, const string &fn) {
    lmax = lm;
    probeSpacing = ps;
    bbox = b;
    filename = fn;
    includeDirectInProbes = id;
    includeIndirectInProbes = ii;
    time = t;
    nIndirSamples = nindir;
    surfaceIntegrator = surf;
    volumeIntegrator = vol;
    camera = cam;
}

Purpose:

The parameters to the CreateRadianceProbes constructor manage the radiance probe
creation process. Radiance probes are created with no more than probeSpacing distance
between samples in each dimension, in a regular grid with extent given by the provided
bounding box. (If an explicit bounding box isn’t provided by the parameters in the scene
description file, then the scene bounds are used for the bounding box.) The incident radiance
functions are projected into spherical harmonics using lmax SH bands and stored
in the given file. The camera and integrators from the scene description file are passed
to the constructor, and the number of Monte Carlo samples to use when estimating SH
coefficients of indirect illumination is given by nIndirSamples.
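(One detail worth keeping in mind: the number of stored coefficients grows quadratically with lmax. pbrt's SHTerms() helper, used throughout the code below, is essentially the following, so lmax = 4 means 25 Spectrum coefficients per probe.)

// pbrt's SH utility: lmax bands of spherical harmonics comprise
// (lmax + 1)^2 basis functions in total.
inline int SHTerms(int lmax) {
    return (lmax + 1) * (lmax + 1);
}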

 

2. Render()


void CreateRadianceProbes::Render(const Scene *scene) {
    // Compute scene bounds and initialize probe integrators
    if (bbox.pMin.x > bbox.pMax.x)
        bbox = scene->WorldBound();
    surfaceIntegrator->Preprocess(scene, camera, this);
    volumeIntegrator->Preprocess(scene, camera, this);
    Sample *origSample = new Sample(NULL, surfaceIntegrator, volumeIntegrator,
                                    scene);

    // Compute sampling rate in each dimension
    Vector delta = bbox.pMax - bbox.pMin;
    int nProbes[3];
    for (int i = 0; i < 3; ++i)
        nProbes[i] = max(1, Ceil2Int(delta[i] / probeSpacing));

    // Allocate SH coefficient vector pointers for sample points
    int count = nProbes[0] * nProbes[1] * nProbes[2];
    Spectrum **c_in = new Spectrum *[count];
    for (int i = 0; i < count; ++i)
        c_in[i] = new Spectrum[SHTerms(lmax)];

    // Compute random points on surfaces of scene

    // Create scene bounding sphere to catch rays that leave the scene
    Point sceneCenter;
    float sceneRadius;
    scene->WorldBound().BoundingSphere(&sceneCenter, &sceneRadius);
    Transform ObjectToWorld(Translate(sceneCenter - Point(0,0,0)));
    Transform WorldToObject(Inverse(ObjectToWorld));
    Reference<Shape> sph = new Sphere(&ObjectToWorld, &WorldToObject,
        true, sceneRadius, -sceneRadius, sceneRadius, 360.f);
    Reference<Material> nullMaterial = Reference<Material>(NULL);
    GeometricPrimitive sphere(sph, nullMaterial, NULL);
    vector<Point> surfacePoints;
    uint32_t nPoints = 32768, maxDepth = 32;
    surfacePoints.reserve(nPoints + maxDepth);
    Point pCamera = camera->CameraToWorld(camera->shutterOpen,
                                          Point(0, 0, 0));
    surfacePoints.push_back(pCamera);
    RNG rng;
    while (surfacePoints.size() < nPoints) {
        // Generate random path from camera and deposit surface points
        Point pray = pCamera;
        Vector dir = UniformSampleSphere(rng.RandomFloat(), rng.RandomFloat());
        float rayEpsilon = 0.f;
        for (uint32_t i = 0; i < maxDepth; ++i) {
            Ray ray(pray, dir, rayEpsilon, INFINITY, time);
        
            Intersection isect;
            if (!scene->Intersect(ray, &isect) &&
                !sphere.Intersect(ray, &isect))
                break;
        
            surfacePoints.push_back(ray(ray.maxt));
        
            DifferentialGeometry &hitGeometry = isect.dg;
            pray = isect.dg.p;
            rayEpsilon = isect.rayEpsilon;
            hitGeometry.nn = Faceforward(hitGeometry.nn, -ray.d);
        
            dir = UniformSampleSphere(rng.RandomFloat(), rng.RandomFloat());
            dir = Faceforward(dir, hitGeometry.nn);
        }
    }

    // Launch tasks to compute radiance probes at sample points
    vector<Task *> tasks;
    ProgressReporter prog(count, "Radiance Probes");
    for (int i = 0; i < count; ++i)
        tasks.push_back(new CreateRadProbeTask(i, nProbes, time,
                                   bbox, lmax, includeDirectInProbes,
                                   includeIndirectInProbes, nIndirSamples,
                                   prog, origSample, surfacePoints,
                                   scene, this, c_in[i]));
    EnqueueTasks(tasks);
    WaitForAllTasks();
    for (uint32_t i = 0; i < tasks.size(); ++i)
        delete tasks[i];
    prog.Done();

    // Write radiance probe coefficients to file
    FILE *f = fopen(filename.c_str(), "w");
    if (f) {
        if (fprintf(f, "%d %d %d\n", lmax, includeDirectInProbes?1:0, includeIndirectInProbes?1:0) < 0 ||
            fprintf(f, "%d %d %d\n", nProbes[0], nProbes[1], nProbes[2]) < 0 ||
            fprintf(f, "%f %f %f %f %f %f\n", bbox.pMin.x, bbox.pMin.y, bbox.pMin.z,
                    bbox.pMax.x, bbox.pMax.y, bbox.pMax.z) < 0) {
            Error("Error writing radiance file \"%s\" (%s)", filename.c_str(),
                  strerror(errno));
            exit(1);
        }

        for (int i = 0; i < nProbes[0] * nProbes[1] * nProbes[2]; ++i) {
            for (int j = 0; j < SHTerms(lmax); ++j) {
                fprintf(f, "  ");
                if (c_in[i][j].Write(f) == false) {
                    Error("Error writing radiance file \"%s\" (%s)", filename.c_str(),
                          strerror(errno));
                    exit(1);
                }
                fprintf(f, "\n");
            }
            fprintf(f, "\n");
        }
        fclose(f);
    }
    for (int i = 0; i < nProbes[0] * nProbes[1] * nProbes[2]; ++i)
        delete[] c_in[i];
    delete[] c_in;
    delete origSample;
}

Purpose:

After some general preparation, the bulk of the work done by the Render() method is
farmed out to tasks that run in parallel; one task is launched for each of the points at
which a radiance probe is to be computed. After the tasks have all finished, the implementation
here writes the SH coefficients of the probes to a file.

 

Details

a.

    // Compute scene bounds and initialize probe integrators
    if (bbox.pMin.x > bbox.pMax.x)
        bbox = scene->WorldBound();
    surfaceIntegrator->Preprocess(scene, camera, this);
    volumeIntegrator->Preprocess(scene, camera, this);
    Sample *origSample = new Sample(NULL, surfaceIntegrator, volumeIntegrator,
                                    scene);

    // Compute sampling rate in each dimension
    Vector delta = bbox.pMax - bbox.pMin;
    int nProbes[3];
    for (int i = 0; i < 3; ++i)
        nProbes[i] = max(1, Ceil2Int(delta[i] / probeSpacing));

    // Allocate SH coefficient vector pointers for sample points
    int count = nProbes[0] * nProbes[1] * nProbes[2];
    Spectrum **c_in = new Spectrum *[count];
    for (int i = 0; i < count; ++i)
        c_in[i] = new Spectrum[SHTerms(lmax)];

Purpose:

(Using the scene bounding box and probeSpacing, we can work out how many probes to create along each dimension; the total number of probes for the whole scene is the product of the x, y, and z counts, and the probes are spaced probeSpacing apart in each dimension.)

Now that the bounding box is known, it is possible to determine how many probes to
compute. The number of probes in each dimension is set so that the probes are no farther
apart than the given probeSpacing.

Given the number of probes, it’s now possible to allocate space for the SH coefficients.
The tasks will initialize these values for each probe.
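(A quick worked example with made-up numbers: if the bounding box extent is delta = (10, 4, 2) and probeSpacing = 3, then)

nProbes[0] = max(1, Ceil2Int(10.f / 3.f)) = 4
nProbes[1] = max(1, Ceil2Int( 4.f / 3.f)) = 2
nProbes[2] = max(1, Ceil2Int( 2.f / 3.f)) = 1
count      = 4 * 2 * 1 = 8   // 8 probes, each with SHTerms(lmax) Spectrum coefficients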

 

b.

     // Create scene bounding sphere to catch rays that leave the scene
    Point sceneCenter;
    float sceneRadius;
    scene->WorldBound().BoundingSphere(&sceneCenter, &sceneRadius);
    Transform ObjectToWorld(Translate(sceneCenter - Point(0,0,0)));
    Transform WorldToObject(Inverse(ObjectToWorld));
    Reference<Shape> sph = new Sphere(&ObjectToWorld, &WorldToObject,
        true, sceneRadius, -sceneRadius, sceneRadius, 360.f);
    Reference<Material> nullMaterial = Reference<Material>(NULL);
    GeometricPrimitive sphere(sph, nullMaterial, NULL);
    vector<Point> surfacePoints;
    uint32_t nPoints = 32768, maxDepth = 32;
    surfacePoints.reserve(nPoints + maxDepth);
    Point pCamera = camera->CameraToWorld(camera->shutterOpen,
                                          Point(0, 0, 0));
    surfacePoints.push_back(pCamera);
    RNG rng;
    while (surfacePoints.size() < nPoints) {
        // Generate random path from camera and deposit surface points
        Point pray = pCamera;
        Vector dir = UniformSampleSphere(rng.RandomFloat(), rng.RandomFloat());
        float rayEpsilon = 0.f;
        for (uint32_t i = 0; i < maxDepth; ++i) {
            Ray ray(pray, dir, rayEpsilon, INFINITY, time);
        
            Intersection isect;
            if (!scene->Intersect(ray, &isect) &&
                !sphere.Intersect(ray, &isect))
                break;
        
            surfacePoints.push_back(ray(ray.maxt));
        
            DifferentialGeometry &hitGeometry = isect.dg;
            pray = isect.dg.p;
            rayEpsilon = isect.rayEpsilon;
            hitGeometry.nn = Faceforward(hitGeometry.nn, -ray.d);
        
            dir = UniformSampleSphere(rng.RandomFloat(), rng.RandomFloat());
            dir = Faceforward(dir, hitGeometry.nn);
        }
    }

Purpose:

(First, the whole scene is divided into grid cells. A candidate point inside a cell must be tested for indirect visibility against the camera; only if a point in a cell is indirectly visible from the camera can it be used to compute incident radiance.

The rule for whether a candidate point in a cell is mutually visible with the camera is: either the candidate point and the camera are directly visible to each other, or the candidate point is mutually visible with any one of the intersection points that camera rays hit in the scene.

So the code above shoots rays in all directions from the camera position and collects their intersection points with the scene.)

For each radiance probe to be computed, we first compute
the extent of the grid cell of the overall scene bounding box that it represents. We
then select a sequence of points inside the cell; for each one, we determine whether it is
indirectly visible from the camera position.
(This is the only requirement for the camera position for this renderer; we just
require that it be in the same part of the scene where final rendering will be done.)
We define two points as being indirectly visible if there is
a path with zero or more vertices on scene surfaces between them; see Figure 17.9. The
points in grid cells that are indirectly visible to the camera are assumed to thus not be
inside scene objects and can be used to compute the incident radiance function.

 

Figure 17.9: Indirect Visibility Between Two Points. (a) Two points (filled circles) are defined to be
indirectly visible to each other if there is a path of one or more unobstructed ray segments between
them. Here, the vertices of the path on other surfaces of the scene are shown with open circles.
(b) These two points aren’t indirectly visible to each other since there is no path between them that
doesn’t pass through the walls of the box that encloses the one on the right.

 

In order to compute indirect visibility from the camera position while computing probes,
a short precomputation is performed before the tasks are launched.
This computation
finds a number of points on surfaces in the scene by tracing random paths starting from
the camera position; these points are stored in the surfacePoints array. If any one of these
points has an unobstructed path between it and a candidate radiance probe point, then
we know by construction that there is an indirect visibility path from the camera to the
point and that the point is a safe one at which to compute a radiance probe (Figure 17.10).

Note that this method for computing indirect visibility isn’t perfect: there may be some
points that are in fact indirectly visible but that are not found by this sampling algorithm.
However, we have found this approach to work reasonably well in practice.
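(Distilled into a standalone helper, the test amounts to tracing shadow rays from each precomputed surface point to the candidate location. This is only a sketch for clarity; the actual test lives inline in CreateRadProbeTask::Run() below and additionally caches the index of the last surface point that worked.)

// Sketch: a candidate probe point p counts as indirectly visible from the
// camera if at least one of the surface points deposited by the camera-path
// preprocessing has an unoccluded segment to it.
static bool IsIndirectlyVisibleSketch(const Scene *scene, const Point &p,
        const vector<Point> &surfacePoints, float time) {
    for (uint32_t j = 0; j < surfacePoints.size(); ++j) {
        // Shadow ray between the stored surface point and the candidate;
        // the (1e-4, 1) parametric range restricts the test to the segment.
        Ray r(surfacePoints[j], p - surfacePoints[j], 1e-4f, 1.f, time);
        if (!scene->IntersectP(r))
            return true;   // found an unobstructed path
    }
    return false;          // occluded from every recorded surface point
}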

 

Figure 17.10: Efficiently Computing Indirect Visibility. (a) In a preprocessing step, random paths
are followed from the camera (filled circle), and intersection points on scene surfaces (open circles)
are recorded in an array. (b) Given a candidate probe location (filled circle), we can efficiently test to
see if it is indirectly visible to the camera by tracing shadow rays between it and all of the stored
intersection points. If any of the rays is not occluded, the point and the camera are indirectly visible.

 

 

c.

    // Launch tasks to compute radiance probes at sample points
    vector<Task *> tasks;
    ProgressReporter prog(count, "Radiance Probes");
    for (int i = 0; i < count; ++i)
        tasks.push_back(new CreateRadProbeTask(i, nProbes, time,
                                   bbox, lmax, includeDirectInProbes,
                                   includeIndirectInProbes, nIndirSamples,
                                   prog, origSample, surfacePoints,
                                   scene, this, c_in[i]));
    EnqueueTasks(tasks);
    WaitForAllTasks();
    for (uint32_t i = 0; i < tasks.size(); ++i)
        delete tasks[i];
    prog.Done();

Purpose:

(Here CreateRadProbeTask tasks are launched to compute the averaged SH coefficients of each cell.)

After these points are found, tasks are launched to compute the radiance probes. One
task is launched for each probe by the fragment Launch tasks to compute radiance probes
at sample points, which we won’t include in the text here.

 

 

d.

    // Write radiance probe coefficients to file
    FILE *f = fopen(filename.c_str(), "w");
    if (f) {
        if (fprintf(f, "%d %d %d\n", lmax, includeDirectInProbes?1:0, includeIndirectInProbes?1:0) < 0 ||
            fprintf(f, "%d %d %d\n", nProbes[0], nProbes[1], nProbes[2]) < 0 ||
            fprintf(f, "%f %f %f %f %f %f\n", bbox.pMin.x, bbox.pMin.y, bbox.pMin.z,
                    bbox.pMax.x, bbox.pMax.y, bbox.pMax.z) < 0) {
            Error("Error writing radiance file \"%s\" (%s)", filename.c_str(),
                  strerror(errno));
            exit(1);
        }

        for (int i = 0; i < nProbes[0] * nProbes[1] * nProbes[2]; ++i) {
            for (int j = 0; j < SHTerms(lmax); ++j) {
                fprintf(f, "  ");
                if (c_in[i][j].Write(f) == false) {
                    Error("Error writing radiance file \"%s\" (%s)", filename.c_str(),
                          strerror(errno));
                    exit(1);
                }
                fprintf(f, "\n");
            }
            fprintf(f, "\n");
        }
        fclose(f);
    }
    for (int i = 0; i < nProbes[0] * nProbes[1] * nProbes[2]; ++i)
        delete[] c_in[i];
    delete[] c_in;
    delete origSample;

Purpose:

(Write the overall bounding box, the number of SH bands, and the SH coefficients for each probe to the file.)

After the tasks complete, the
fragment Write radiance probe coefficients to file (also not included here) writes a text file
that stores the overall bounding box, the number of SH bands, and the SH coefficients for
each probe in turn. This file can be read by the UseRadianceProbes Integrator, defined
shortly.
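(From the fprintf() calls above we can read off the layout of the probe file. For example, with made-up values; the per-coefficient lines come from Spectrum::Write(), so their exact float formatting depends on that method:)

4 1 1                                                             (lmax, includeDirectInProbes, includeIndirectInProbes)
4 2 1                                                             (nProbes[0], nProbes[1], nProbes[2])
-10.000000 -10.000000 -10.000000 10.000000 10.000000 10.000000    (bbox.pMin, bbox.pMax)
  <SH coefficient 0 of probe 0>
  <SH coefficient 1 of probe 0>
  ...
  <SH coefficient SHTerms(lmax)-1 of probe 0>

  <SH coefficient 0 of probe 1>
  ...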

 

 

3. CreateRadProbeTask

Approach:

(CreateRadProbeTask first computes the probe's sub-bounding-box, then picks a number of candidate points inside it. For each candidate point that is indirectly visible from the camera, it computes the incident radiance at that point and projects it into SH coefficients; finally, the SH coefficients of all accepted candidate points are averaged.)

The task for each probe first computes the bounding box of the subregion of the scene
for which it is responsible, initializes a few common variables, and then selects a number
of candidate points within the bounding box of its region. For all of the points that
are indirectly visible from the given camera position, it samples the incident radiance
function at the point and projects it into SH coefficients. In the end, it returns the average
SH coefficients for the radiance function for all accepted candidate points. Because the
SH functions used here are a linear basis, averaging their coefficients for the probe points
used gives the average of the projected incident radiance functions at the points. No more
than 256 points within the cell are tested, but once 32 points that are indirectly visible
from the camera are found the task exits.
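(Spelled out, the linearity argument is just that reconstruction from SH coefficients commutes with averaging: if candidate point k has projected coefficients c_j^{(k)}, then

\frac{1}{N}\sum_{k=1}^{N}\sum_{j} c_j^{(k)}\, Y_j(\omega) \;=\; \sum_{j}\Big(\frac{1}{N}\sum_{k=1}^{N} c_j^{(k)}\Big) Y_j(\omega),

so dividing the accumulated c_in coefficients by nFound gives the SH projection of the average incident radiance over the accepted candidate points.)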

 

4. CreateRadProbeTask::Run()


void CreateRadProbeTask::Run() {
    // Compute region in which to compute incident radiance probes
    int sx = pointNum % nProbes[0];
    int sy = (pointNum / nProbes[0]) % nProbes[1];
    int sz = pointNum / (nProbes[0] * nProbes[1]);
    Assert(sx >= 0 && sx < nProbes[0]);
    Assert(sy >= 0 && sy < nProbes[1]);
    Assert(sz >= 0 && sz < nProbes[2]);
    float tx0 = float(sx) / nProbes[0], tx1 = float(sx+1) / nProbes[0];
    float ty0 = float(sy) / nProbes[1], ty1 = float(sy+1) / nProbes[1];
    float tz0 = float(sz) / nProbes[2], tz1 = float(sz+1) / nProbes[2];
    BBox b(bbox.Lerp(tx0, ty0, tz0), bbox.Lerp(tx1, ty1, tz1));

    // Initialize common variables for _CreateRadProbeTask::Run()_
    RNG rng(pointNum);
    Spectrum *c_probe = new Spectrum[SHTerms(lmax)];
    MemoryArena arena;
    uint32_t nFound = 0, lastVisibleOffset = 0;
    for (int i = 0; i < 256; ++i) {
        if (nFound == 32) break;
        // Try to compute radiance probe contribution at _i_th sample point

        // Compute _i_th candidate point _p_ in cell's bounding box
        float dx = RadicalInverse(i+1, 2);
        float dy = RadicalInverse(i+1, 3);
        float dz = RadicalInverse(i+1, 5);
        Point p = b.Lerp(dx, dy, dz);

        // Skip point _p_ if not indirectly visible from camera
        if (scene->IntersectP(Ray(surfacePoints[lastVisibleOffset],
                                  p - surfacePoints[lastVisibleOffset],
                                  1e-4f, 1.f, time))) {
            uint32_t j;
            // See if point is visible to any element of _surfacePoints_
            for (j = 0; j < surfacePoints.size(); ++j)
                if (!scene->IntersectP(Ray(surfacePoints[j], p - surfacePoints[j],
                                           1e-4f, 1.f, time))) {
                    lastVisibleOffset = j;
                    break;
                }
            if (j == surfacePoints.size())
                continue;
        }
        ++nFound;

        // Compute SH coefficients of incident radiance at point _p_
        if (includeDirectInProbes) {
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_probe[i] = 0.f;
            SHProjectIncidentDirectRadiance(p, 0.f, time, arena, scene,
                                            true, lmax, rng, c_probe);
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_in[i] += c_probe[i];
        }
        
        if (includeIndirectInProbes) {
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_probe[i] = 0.f;
            SHProjectIncidentIndirectRadiance(p, 0.f, time, renderer,
                origSample, scene, lmax, rng, nIndirSamples, c_probe);
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_in[i] += c_probe[i];
        }
        arena.FreeAll();
    }
    // Compute final average value for probe and cleanup
    if (nFound > 0)
        for (int i = 0; i < SHTerms(lmax); ++i)
            c_in[i] /= nFound;
    delete[] c_probe;
    prog.Update();
}

Details

a.

    // Compute region in which to compute incident radiance probes
    int sx = pointNum % nProbes[0];
    int sy = (pointNum / nProbes[0]) % nProbes[1];
    int sz = pointNum / (nProbes[0] * nProbes[1]);
    Assert(sx >= 0 && sx < nProbes[0]);
    Assert(sy >= 0 && sy < nProbes[1]);
    Assert(sz >= 0 && sz < nProbes[2]);
    float tx0 = float(sx) / nProbes[0], tx1 = float(sx+1) / nProbes[0];
    float ty0 = float(sy) / nProbes[1], ty1 = float(sy+1) / nProbes[1];
    float tz0 = float(sz) / nProbes[2], tz1 = float(sz+1) / nProbes[2];
    BBox b(bbox.Lerp(tx0, ty0, tz0), bbox.Lerp(tx1, ty1, tz1));




Point Lerp(float tx, float ty, float tz) const {
        return Point(::Lerp(tx, pMin.x, pMax.x), ::Lerp(ty, pMin.y, pMax.y),
                     ::Lerp(tz, pMin.z, pMax.z));
}

Purpose:

(This computes the bounding box of the cell corresponding to this probe.)

The bounding box for the cell that corresponds to this probe is found by first computing
the integer (sx, sy, sz) coordinates for the cell with respect to the overall probe sampling
rate and then linearly interpolating from the overall bounding box corners.
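(For instance, with made-up values nProbes = (4, 2, 1) and pointNum = 7:)

sx = 7 % 4       = 3
sy = (7 / 4) % 2 = 1
sz = 7 / (4 * 2) = 0
// so tx0 = 3/4, tx1 = 1, ty0 = 1/2, ty1 = 1, tz0 = 0, tz1 = 1:
// this task handles the last cell along x and y, spanning all of z.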

 

b.

        // Compute _i_th candidate point _p_ in cell's bounding box
        float dx = RadicalInverse(i+1, 2);
        float dy = RadicalInverse(i+1, 3);
        float dz = RadicalInverse(i+1, 5);
        Point p = b.Lerp(dx, dy, dz);

Purpose:

(This computes a quasi-random point p inside the sub-bounding-box; this p is the so-called candidate point.)

To choose candidate points, points from a 3D Halton sequence (Section 7.4.2) are used
to interpolate between the corners of the cell’s bounding box.
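(RadicalInverse(n, b) mirrors the base-b digits of n about the radix point, producing well-distributed values in [0, 1). A minimal sketch of the idea, not pbrt's exact source:)

// Sketch of the radical inverse: reflect the base-b digits of n about the
// radix point, e.g. n = 6 = 110 in base 2 maps to 0.011 in binary = 0.375.
float RadicalInverseSketch(int n, int base) {
    float val = 0.f;
    float invBase = 1.f / base, invBi = invBase;
    while (n > 0) {
        int d = n % base;   // next least-significant digit
        val += d * invBi;   // place it just after the radix point
        n /= base;
        invBi *= invBase;
    }
    return val;
}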

 

 

c.

        // Skip point _p_ if not indirectly visible from camera
        if (scene->IntersectP(Ray(surfacePoints[lastVisibleOffset],
                                  p - surfacePoints[lastVisibleOffset],
                                  1e-4f, 1.f, time))) {
            uint32_t j;
            // See if point is visible to any element of _surfacePoints_
            for (j = 0; j < surfacePoints.size(); ++j)
                if (!scene->IntersectP(Ray(surfacePoints[j], p - surfacePoints[j],
                                           1e-4f, 1.f, time))) {
                    lastVisibleOffset = j;
                    break;
                }
            if (j == surfacePoints.size())
                continue;
        }

Purpose:

(This checks whether the candidate point p is mutually visible with the camera or with any one of the recorded intersection points.)

For each candidate point p, we need to see if it’s indirectly visible from the camera: if any
of the points in the surfacePoints array and the candidate point are mutually visible,
then the candidate point can be used. The implementation here keeps track of the offset
to the last point from surfacePoints that had an unoccluded path to a previous candidate
point in this probe’s grid cell in lastVisibleOffset. It then tests visibility between that
point and the next candidate point first. It’s often the case that this point will also have
an unoccluded path to the next candidate point, so testing it first can substantially speed
up this part of the computation.

 

d.

        // Compute SH coefficients of incident radiance at point _p_
        if (includeDirectInProbes) {
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_probe[i] = 0.f;
            SHProjectIncidentDirectRadiance(p, 0.f, time, arena, scene,
                                            true, lmax, rng, c_probe);
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_in[i] += c_probe[i];
        }
        
        if (includeIndirectInProbes) {
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_probe[i] = 0.f;
            SHProjectIncidentIndirectRadiance(p, 0.f, time, renderer,
                origSample, scene, lmax, rng, nIndirSamples, c_probe);
            for (int i = 0; i < SHTerms(lmax); ++i)
                c_in[i] += c_probe[i];
        }





void SHProjectIncidentDirectRadiance(const Point &p, float pEpsilon,
        float time, MemoryArena &arena, const Scene *scene,
        bool computeLightVis, int lmax, RNG &rng, Spectrum *c_d) {
    // Loop over light sources and sum their SH coefficients
    Spectrum *c = arena.Alloc<Spectrum>(SHTerms(lmax));
    for (uint32_t i = 0; i < scene->lights.size(); ++i) {
        Light *light = scene->lights[i];
        light->SHProject(p, pEpsilon, lmax, scene, computeLightVis, time,
                         rng, c);
        for (int j = 0; j < SHTerms(lmax); ++j)
            c_d[j] += c[j];
    }
    SHReduceRinging(c_d, lmax);
}



void SHProjectIncidentIndirectRadiance(const Point &p, float pEpsilon,
        float time, const Renderer *renderer, Sample *origSample,
        const Scene *scene, int lmax, RNG &rng, int ns, Spectrum *c_i) {
    Sample *sample = origSample->Duplicate(1);
    MemoryArena arena;
    uint32_t scramble[2] = { rng.RandomUInt(), rng.RandomUInt() };
    int nSamples = RoundUpPow2(ns);
    float *Ylm = ALLOCA(float, SHTerms(lmax));
    for (int i = 0; i < nSamples; ++i) {
        // Sample incident direction for radiance probe
        float u[2];
        Sample02(i, scramble, u);
        Vector wi = UniformSampleSphere(u[0], u[1]);
        float pdf = UniformSpherePdf();

        // Compute incident radiance along direction for probe
        Spectrum Li = 0.f;
        RayDifferential ray(p, wi, pEpsilon, INFINITY, time);

        // Fill in values in _sample_ for radiance probe ray
        sample->time = time;
        for (uint32_t j = 0; j < sample->n1D.size(); ++j)
            for (uint32_t k = 0; k < sample->n1D[j]; ++k)
                sample->oneD[j][k] = rng.RandomFloat();
        for (uint32_t j = 0; j < sample->n2D.size(); ++j)
            for (uint32_t k = 0; k < 2 * sample->n2D[j]; ++k)
                sample->twoD[j][k] = rng.RandomFloat();
        Li = renderer->Li(scene, ray, sample, rng, arena);

        // Update SH coefficients for probe sample point
        SHEvaluate(wi, lmax, Ylm);
        for (int j = 0; j < SHTerms(lmax); ++j)
            c_i[j] += Ylm[j] * Li / (pdf * nSamples);
        arena.FreeAll();
    }
    delete[] sample;
}

Purpose:

(This computes the SH coefficients of one candidate point and accumulates them into c_in; c_in is averaged afterwards.

SHProjectIncidentDirectRadiance: the sum of the SH coefficients of the incident light from all of the lights.)

Given a valid point inside the probe’s grid cell, the utility functions defined in Section
17.2.3 are used to compute SH coefficients for the incident radiance function at the
point, accumulating the results into the c_in array.

 

SHProjectIncidentDirectRadiance() loops over all of the lights in the scene and accumulates
the sum of their SH coefficients into the c_d output array. Its only unusual feature is
the call to SHReduceRinging(); this function (which will be introduced shortly) reduces
some of the artifacts that can arise from projecting high-frequency functions into the
SH basis when, as is commonly the case, there aren’t enough SH coefficients to perfectly
reconstruct the original function.
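(The update c_i[j] += Ylm[j] * Li / (pdf * nSamples) in SHProjectIncidentIndirectRadiance() is the usual Monte Carlo estimate of the SH projection integral, with directions sampled uniformly over the sphere so the pdf is constant:

c_i[j] = \int_{S^2} L_i(p, \omega)\, Y_j(\omega)\, d\omega
       \;\approx\; \frac{1}{N}\sum_{k=1}^{N} \frac{L_i(p, \omega_k)\, Y_j(\omega_k)}{p(\omega_k)},
       \qquad p(\omega_k) = \frac{1}{4\pi}.

Here N is nSamples, i.e., ns rounded up to a power of two so that the Sample02() (0,2)-sequence sample points can be used.)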

 

e.

    // Compute final average value for probe and cleanup
    if (nFound > 0)
        for (int i = 0; i < SHTerms(lmax); ++i)
            c_in[i] /= nFound;
    delete[] c_probe;
    prog.Update();

Purpose:

(Compute the average of c_in, i.e., the averaged SH coefficients for the sub-bounding-box.)

The final coefficient values in c_in for this cell are found by dividing the summed coefficient
values by the number of points at which probes were computed.