Carl Foxmarten (carlfoxmarten) wrote, 2009-10-17 09:22 pm
The hazards of raytracing...
One of the courses I'm taking is called Image Synthesis, basically the creation of photo-realistic rendered images on a computer.
Most versions of this course require the students to write their own raytracer from scratch (or near scratch), but since you have to write everything (including the basic framework), you don't have time to fix any major mistakes you make.
Instead, I'm adding code to an existing raytracer called PBRT (the Physically-Based Rendering Toolkit) to understand how the major aspects of raytracing work, such as depth-of-field (as simulated by actual lens systems), space partitioning (for major speed boosts), and what are called "Bidirectional Reflectance Distribution Functions" (BRDFs), which dictate how a surface looks and are usually based on the underlying physics.
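For the curious, the simplest possible BRDF is the Lambertian (perfectly diffuse) one, which reflects incoming light equally in all directions. Here's a rough sketch of the idea in C++; the types and names are purely illustrative, not pbrt's actual classes:

```cpp
struct RGB { float r, g, b; };

// A Lambertian surface scatters light equally in every direction,
// so its BRDF f(wo, wi) is just a constant. The 1/pi factor keeps
// the surface from reflecting more energy than it receives.
struct LambertianBRDF {
    RGB albedo;  // fraction of light reflected, per colour channel

    RGB f(/* wo, wi are unused for a perfectly diffuse surface */) const {
        const float invPi = 1.0f / 3.14159265f;
        return { albedo.r * invPi, albedo.g * invPi, albedo.b * invPi };
    }
};
```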
My current assignment is writing a new camera system for PBRT that simulates an actual lens system (not just a single lens for simple depth-of-field, but multiple lens elements for many different effects, from telephoto to fisheye).
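The core of the job is marching each camera ray through the lens elements one at a time, refracting at every glass interface. Here's a hedged sketch of that idea in C++; this is my own simplified version (spherical elements on the z axis, Snell's law, no handling of curvature signs or aperture stops), not pbrt's actual camera code:

```cpp
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

struct Ray { Vec3 o, d; };  // origin and (unit-length) direction

struct LensElement {
    float curvatureRadius;  // radius of the spherical interface
    float zCenter;          // z position of the sphere's centre
    float eta;              // index of refraction behind the interface
};

// Bend direction d about normal n using Snell's law; returns false
// on total internal reflection, in which case the ray is lost.
static bool refract(Vec3 d, Vec3 n, float etaRatio, Vec3 *out) {
    float cosI = -dot(d, n);
    float sin2T = etaRatio * etaRatio * (1.0f - cosI * cosI);
    if (sin2T > 1.0f) return false;
    float cosT = std::sqrt(1.0f - sin2T);
    *out = d * etaRatio + n * (etaRatio * cosI - cosT);
    return true;
}

// March a ray through each element in turn. A real lens tracer must
// pick the sphere intersection based on the sign of the curvature;
// this sketch just takes the nearest hit in front of the ray.
static bool traceThroughLenses(const std::vector<LensElement> &lenses,
                               Ray *ray, float etaIncident = 1.0f) {
    for (const LensElement &el : lenses) {
        Vec3 oc = ray->o - Vec3{0.0f, 0.0f, el.zCenter};
        float b = dot(oc, ray->d);
        float c = dot(oc, oc) - el.curvatureRadius * el.curvatureRadius;
        float disc = b * b - c;
        if (disc < 0.0f) return false;        // ray misses this element
        float t = -b - std::sqrt(disc);
        if (t < 0.0f) t = -b + std::sqrt(disc);
        if (t < 0.0f) return false;
        ray->o = ray->o + ray->d * t;         // step to the interface
        Vec3 n = normalize(ray->o - Vec3{0.0f, 0.0f, el.zCenter});
        if (dot(n, ray->d) > 0.0f) n = n * -1.0f;  // face the incoming ray
        Vec3 newDir;
        if (!refract(ray->d, n, etaIncident / el.eta, &newDir)) return false;
        ray->d = normalize(newDir);
        etaIncident = el.eta;                 // we're now inside this glass
    }
    return true;
}
```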
It took me the first week to understand just what I'm supposed to modify, the second week to figure out how to modify it, and this week to figure out just what I'm doing wrong.
I'm quite certain that my code to bend the light rays through the lenses is correct, since all of the rays seem to be passing through every lens, but for some strange reason the weighting of the rays seems to be way off: the resulting image is always pure black.
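If it helps anyone hitting the same wall: the usual weighting term for this kind of camera falls off as cos⁴θ with the ray's angle to the film, scaled by the rear element's area over the squared film-to-lens distance, so any factor accidentally left at zero turns every pixel black. A rough sketch, with names of my own invention rather than pbrt's:

```cpp
#include <cmath>

// Hedged sketch of the usual ray weight for a realistic-lens camera:
// weight = cos^4(theta) * A / d^2, where theta is the angle between
// the ray and the film normal (the z axis here), A is the area of
// the rear lens element, and d is the film-to-lens distance.
float rayWeight(float dirZ,          // z component of the normalized ray direction
                float rearElementZ,  // z position of the rear lens element
                float filmZ,         // z position of the film plane
                float rearRadius) {  // aperture radius of the rear element
    float cos4 = dirZ * dirZ * dirZ * dirZ;  // dot(d, (0,0,1))^4 for unit d
    float dist = rearElementZ - filmZ;
    float area = 3.14159265f * rearRadius * rearRadius;
    // If any factor here evaluates to zero (say, an uninitialized
    // aperture radius), every sample gets weight 0 and the image
    // renders pure black -- exactly the symptom above.
    return cos4 * area / (dist * dist);
}
```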
Very confusing. >.<