Wednesday, December 15, 2004

blogalog on QC (quantum computing) & CG (computer graphics)

Wook and I are going to do a blogalog on quantum computing and computer graphics. He asks stupid questions about the former, I ask stupid questions about the latter, maybe a person or two learns something. Sometimes they'll be low-level questions, but I hope that sometimes they will also be thought-provoking.

All right, first question:

I'm doing some graphics for my quantum computing stuff, and I need to pick the location of the camera and the direction it points. I want all of my objects to appear at reasonable sizes, the frame to be mostly full, and nothing to be cut off. My objects (just spheres, some pipes, and some floating text) are of a fixed size, but their number and locations vary with the size of the quantum algorithm I'm animating and the specific quantum computer topology. Often they are in a line, sometimes they are in a 2D grid, and later they will be in more complex arrangements.

How do I pick my camera location and pointing direction?

2 comments:

mr.wook said...

Well, negative Z is usually the direction into the image, so place the camera at 0,0,0 and aim it at 0,0,-1 (this doesn't assume a normalized world space). Place your objects centered around 0,0,-1, lay them out east/west, and adjust the objects' (negative) Z to keep fitting them in as the groups get bigger.

Think about a 2:1 (width:height) aspect ratio as your output design, elevate your text by 2 units (where 1 unit is the height of your standard object [Qubit?]), and if you need to put in a ground plane, put it at -1 units along Y.
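
In POV terms I'm picturing roughly this (off the top of my head, untested; the spacing, counts, and colors are placeholders, and the label assumes the stock timrom.ttf font that ships with POV-Ray):

// camera parked at the origin, aimed into -Z, 2:1 frame
camera {
  location <0, 0, 0>
  right 2*x              // 2:1 width:height (render at a matching resolution, e.g. 800x400)
  up y
  angle 60               // horizontal field of view
  look_at <0, 0, -1>
}

light_source { <10, 20, 10> color rgb <1, 1, 1> }

plane { y, -1 pigment { color rgb <0.8, 0.8, 0.8> } }   // ground plane 1 unit down along Y

// a row of unit-ish objects centered on the view axis, pushed back in -Z;
// push further back as the group grows
#declare Spacing = 2;
#declare N = 5;
#declare I = 0;
#while (I < N)
  sphere {
    <(I - (N - 1) / 2) * Spacing, 0, -12>, 0.5
    pigment { color rgb <0.2, 0.4, 1.0> }
  }
  #declare I = I + 1;
#end

// floating label, elevated 2 units above the row
text {
  ttf "timrom.ttf" "qubit 0" 0.1, 0
  pigment { color rgb <1, 1, 1> }
  translate <-4, 2, -12>
}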

rdv said...

What you've done is invert the problem. Now, instead of placing the camera to get everything in the frame, you have to place the objects so that they are in the FOV. I figured it was easier to do the former, but I could be wrong. At the moment my graphics are somewhat abstract, but they do represent physical objects, and as things progress, it will take on a more concrete form. I'm effectively constructing a large object out of many small pieces, and it seems easier to do that with fixed object size and placement than transforming the object size or placement to fit the viewing needs.

In either case, I need an algorithm, based on the POVray camera's FOV characteristics, that I can use to judge whether or not everything is in view. I can then tweak the position until it all is.

For example, if I have N qubits, and they are placed at (0,0,0), (d,0,0), (2d,0,0), ..., ((N-1)d,0,0), how far back do I have to pull the camera?
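
My best guess at the arithmetic, assuming POVray's angle keyword really is the horizontal field of view (the spacing, angle, and margin numbers here are just made up for illustration): the row spans (N-1)d, so half of that, plus a little margin, has to fit inside half the angle.

// N qubits in a row along X, spaced d apart, in the z = 0 plane;
// pull the camera straight back from the middle of the row until the
// half-width fits inside the half-angle
#declare N      = 8;
#declare D      = 2;                      // qubit spacing
#declare FOV    = 60;                     // camera angle, degrees (horizontal)
#declare Margin = 1;                      // slack on each side, world units
#declare Width  = (N - 1) * D;            // total extent of the row
#declare Back   = (Width / 2 + Margin) / tan(radians(FOV / 2));

camera {
  location <Width / 2, 0, Back>           // centered on the row, pulled back
  angle FOV
  look_at <Width / 2, 0, 0>
}

// the qubits themselves
#declare I = 0;
#while (I < N)
  sphere { <I * D, 0, 0>, 0.5 pigment { color rgb <0.2, 0.4, 1.0> } }
  #declare I = I + 1;
#end

light_source { <Width / 2, 10, Back> color rgb <1, 1, 1> }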

For the 2D layout, the qubits might be in an NxM array, so there are also qubits at (0,d,0), ..., (0,(M-1)d,0), ..., ((N-1)d,(M-1)d,0). I want to position the camera so that I get a decent perspective view from an angle above the plane.
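
And my guess for the grid case: fit the grid's bounding sphere into the vertical field of view (which, at the default 4:3 aspect, I believe is the horizontal angle squeezed by 3/4), then lift the camera out of the plane. The 0.6/0.8 split is just an elevation angle I picked for an oblique view; again untested.

// N x M grid of qubits in the z = 0 plane, spacing d
#declare N     = 6;
#declare M     = 4;
#declare D     = 2;
#declare FOV   = 60;                                      // horizontal angle, degrees
#declare HalfV = atan2(0.75 * tan(radians(FOV / 2)), 1);  // vertical half-angle, radians, at 4:3
#declare CX    = (N - 1) * D / 2;                         // grid center
#declare CY    = (M - 1) * D / 2;
#declare R     = 0.5 * sqrt(pow((N - 1) * D, 2) + pow((M - 1) * D, 2)) + 1;  // half-diagonal plus margin
#declare Dist  = R / sin(HalfV);                          // camera-to-center distance that keeps the sphere in view

camera {
  location <CX, CY - 0.6 * Dist, 0.8 * Dist>   // pulled back "south" of the grid and up out of the plane
  angle FOV
  look_at <CX, CY, 0>
}

#declare J = 0;
#while (J < M)
  #declare I = 0;
  #while (I < N)
    sphere { <I * D, J * D, 0>, 0.5 pigment { color rgb <0.2, 0.4, 1.0> } }
    #declare I = I + 1;
  #end
  #declare J = J + 1;
#end

light_source { <CX, CY - Dist, Dist> color rgb <1, 1, 1> }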

If you still think the camera ought to be at the origin, I will accept a tutorial on how to build my overall object layout, then group the sub-objects, and translate that group in the -z direction a suitable amount (and rotate, if necessary). POVray syntax, please.
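
For concreteness, here is roughly what I imagine the grouping version looking like, untested and probably wrong in the details:

// build the layout around the origin as one group, then shove the whole
// group back along the view axis with a single translate
#declare N = 5;
#declare D = 2;

#declare QubitRow = union {
  #declare I = 0;
  #while (I < N)
    sphere { <I * D, 0, 0>, 0.5 pigment { color rgb <1.0, 0.4, 0.2> } }
    #declare I = I + 1;
  #end
}

camera {
  location <0, 0, 0>
  angle 60
  look_at <0, 0, -1>
}

light_source { <0, 10, 0> color rgb <1, 1, 1> }

// center the row on the view axis and push it back far enough to fit
object { QubitRow translate <-(N - 1) * D / 2, 0, -12> }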

I know this isn't rocket science; I just haven't done it before...