How To > Develop a Vertex Renderer
The tutorial will demonstrate the basics of rendering - transforming objects from 3D space into 2D images. To make it simple and fast (realtime!), the script will draw only vertices. The image will be redrawn every time the viewport is updated - changing the view, panning, zooming and orbiting will be reflected by the vertex renderer instantly!
Related topics: Accessing Active Viewport Info, Type, and Transforms
We start by defining a MacroScript with the name VertexRender which will appear in the category "HowTo". This will be an advanced macroScript with on execute do and on isChecked do event handlers.
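A skeleton of such a definition could look like the following sketch. Note that the variable and function names used in this and all following sketches (VertexRenderEnabled, RenderVertices, theSizeX and so on) are placeholder names chosen for illustration, not required names:

macroScript VertexRender category:"HowTo"
(
	-- local variables and the rendering function will be defined here...
	on isChecked return VertexRenderEnabled
	on execute do
	(
		-- the code to toggle the renderer on and off will go here...
	)
)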
We define a local variable to hold a flag determining whether the renderer is active or not. In the beginning, the variable will be set to false.
We define a couple more local variables to hold the width and height of the image and the two bitmaps - the back buffer and the front buffer (see below for more info).
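For example:

local VertexRenderEnabled = false -- the renderer starts out disabled
local theSizeX, theSizeY -- width and height of the rendered image
local theBackBuffer, theFrontBuffer -- the two bitmaps used for drawing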
This is the function that will be executed when the renderer is enabled and the viewport is being updated.
To be able to output info on the rendering speed, we will use the timestamp function to get the current system time.
Then we will copy the back buffer into the front buffer, basically clearing the previous image. The back buffer could be changed to hold a background image, and the function would draw on top of it... For now, the back buffer image is set to black.
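The beginning of the function could look like this:

fn RenderVertices =
(
	local start_time = timestamp() -- remember the current system time in milliseconds
	copy theBackBuffer theFrontBuffer -- clear the front buffer by copying the black back buffer over it
	-- ...the rest of the function, developed step by step below, goes here...
)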
We loop through all scene geometry objects that are not target objects of lights, cameras etc. Note that we are not checking for hidden objects - you could add this to the code as an exercise! Right now, all objects will be drawn, even the hidden ones!
Then we snapshot the object as a mesh into memory. This includes all modifiers AND space warps applied to the object.
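A possible form of the object loop:

for o in Geometry where classof o != TargetObject do
(
	local theMesh = snapshotAsMesh o -- world-space mesh copy, with modifiers and space warps applied
	-- ...the per-object drawing code discussed below goes here...
)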
We define a color array to be used for painting the vertices, using the object's wireframe color as the drawing color.
To speed things up, we read the number of vertices in the mesh once. We will use this value below to limit the vertex loop:
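Continuing the sketch inside the object loop:

local theWireColor = #(o.wirecolor) -- one-element array of colors, as expected by setPixels below
local theVertCount = theMesh.numverts -- read the vertex count once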
The v variable will count from 1 to the number of vertices.
The position of the v-th vertex has to be multiplied by the transformation matrix of the viewport. (If the viewport is a Camera view, viewport.getTM() returns the inverse of the camera's transformation matrix. If the viewport is a Perspective or Ortho view, the matrix is already inverted.) By multiplying the vertex position by this matrix, we transform the 3D world space coordinates into camera space!
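The vertex loop could then start like this:

for v = 1 to theVertCount do
(
	local thePos = (getVert theMesh v) * viewport.getTM() -- vertex position transformed into camera space
	-- ...the projection code discussed below goes here...
)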
Now we need to know the size of the projection plane (with the same pixel size as the rendered image) when placed at the depth of the current vertex. To calculate this, we will need the two extreme points of the plane - the upper left and lower right corners - as 3D points in camera space.
Since thePos is already in camera coordinates, its .z coordinate is the Z-depth of the point! The return value of mapScreenToView is a 3D point in camera space corresponding to the upper left corner of the projection plane.
We do the same for the lower right corner of the projection plane.
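Both corners can be obtained with mapScreenToView, for example:

local screen_origin = mapScreenToView [0,0] thePos.z [theSizeX,theSizeY] -- upper left corner of the projection plane at the vertex depth
local end_screen = mapScreenToView [theSizeX,theSizeY] thePos.z [theSizeX,theSizeY] -- lower right corner of the projection plane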
Here is what we just did and why:
As you know, the camera has a cone (frustum) with its top at the position of the camera. If the projection plane were to "travel" along the Z axis of the camera, the farther it gets from the "eye", the larger it becomes in world units, but the number of pixels in the final image remains the same.
On the screenshot, you can see a camera and two projection planes (shown as red X). You can also see that the size of the projection plane is increasing as it gets farther from the camera.
The farther projection plane is exactly at the depth of the teapot's position in camera space.
The red sphere in the upper left corner of the projection plane corresponds to the screen_origin coordinate calculated above. To create the illustration, the screen_origin value was calculated, and then a single line of MAXScript code (see the sketch below)
was called to create a sphere in world coordinates representing that point. (screen_origin was originally in camera space; multiplying it by the camera's transformation matrix transforms it back into world coordinates!)
Same with the blue sphere - it represents the end_screen coordinate.
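The exact lines used for the illustration are not reproduced here, but they were probably similar to the following sketch; the radius is arbitrary, and inverse (viewport.getTM()) stands in for the camera's transformation matrix (see the note on viewport.getTM() above):

sphere radius:5.0 wirecolor:red pos:(screen_origin * (inverse (viewport.getTM()))) -- screen_origin back in world space
sphere radius:5.0 wirecolor:blue pos:(end_screen * (inverse (viewport.getTM()))) -- end_screen back in world space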
Here is what the camera sees when rendering the view - notice the red and blue spheres appearing exactly where expected:
Now we know the two extremes of the projection plane and can calculate the width and height of the projection plane in world units. We also know the size of the plane in pixels, so we can determine the relationship between pixels and world units!
Using the size of the plane, we can calculate the aspect values - basically the number of pixels corresponding to one world unit along X and Y.
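For example:

local world_size = end_screen - screen_origin -- size of the projection plane in world units
local x_aspect = theSizeX / (abs world_size.x) -- pixels per world unit along X
local y_aspect = theSizeY / (abs world_size.y) -- pixels per world unit along Y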
screen_coords = point2 (x_aspect*(thePos.x-screen_origin.x)) (-(y_aspect*(thePos.y-screen_origin.y)))
Using the aspect values, we can convert the X and Y values of the vertex position into actual pixel positions. Note that the Y coordinate is inverted, and both coordinates are offset by screen_origin, because the camera's axis (the center of the image) is located at coordinates [0,0] in camera space, while the actual origin of a bitmap in MAXScript is in the upper left corner...
Now we can draw a pixel in the front buffer at the location of the current vertex, using the color array defined above which holds the wireframe color.
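For example:

setPixels theFrontBuffer screen_coords theWireColor -- paint a single pixel with the object's wireframe color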
When all vertices of all objects have been processed, we can update the display.
Then we read the time again to see how long a single redraw took.
The result will be output to the status bar.
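The end of the function could look like this:

display theFrontBuffer -- show / update the rendered image in a bitmap display window
local end_time = timestamp()
displayTempPrompt ("Vertex Render Time: " + (((end_time - start_time) / 1000.0) as string) + " sec.") 2000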
When the macroScript is placed on a toolbar, menu or QuadMenu, its checked status will be determined by the boolean value stored in the local variable we defined in the beginning...
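With the placeholder variable from the sketches above, this could simply be:

on isChecked return VertexRenderEnabled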
When the user presses the button / selects the menu item...
...we flip the state of the variable - if it was true it becomes false and vice-versa.
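For example:

on execute do
(
	VertexRenderEnabled = not VertexRenderEnabled
	-- ...the code to enable or disable the renderer, discussed below, goes here...
)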
If the script was just enabled, then
we read the current size values of the Render Scene dialog. You could replace these values with your fixed values, for example 320 and 240 to have a fixed size regardless of the renderer settings...
Then we use these size values to define two bitmaps. The first is the back buffer used to clear the image before drawing. Once again, you could use an openBitmap call here to load a background image from disk to draw on top of it!
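These steps might look like this:

if VertexRenderEnabled then
(
	theSizeX = renderWidth -- current output width from the Render Scene dialog
	theSizeY = renderHeight -- current output height
	theBackBuffer = bitmap theSizeX theSizeY color:black -- used to clear the image (could be loaded via openBitmap instead)
	theFrontBuffer = bitmap theSizeX theSizeY -- the bitmap we actually draw into
	-- ...callback registration discussed below goes here...
)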
Here we register the function we defined above as a viewport redraw callback. Each time the 3ds Max viewport is redrawn, the function will be called and update the rendered image!
To initialize the display, we call the function once.
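For example:

registerRedrawViewsCallback RenderVertices -- call the function on every viewport redraw
RenderVertices() -- draw once right away to initialize the display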
If the renderer was already running, it has to be turned off now. We unregister the callback function to avoid further updates,