Given the freedom to achieve any stereo base (as is the case when using a slide bar and a single camera), what is the correct stereo base for a given scene?
This has been the subject of some debate. My answer to this is that there is only one “correct” stereo base and this is Bv (B=Bv, equal to the spacing of the eyes). Anything else will result in an impression that alters reality, in which case there is no right or wrong.
If we decide, however, that we want to alter reality, then there are formulas and rules of thumb which guide us into producing stereo pairs with a decent amount of depth (not too little, not too much). There are two schools of thought. One advocates a constant/maximum on-film deviation: it takes the basic stereoscopic formula, plugs in the distances Inear and Imax, the focal length F, and the maximum on-film deviation (usually 1.2 mm for 35 mm film), and solves for B.
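As a minimal sketch of this constant-deviation approach, assuming the commonly used simplified parallax formula d = F · B · (1/Inear − 1/Ifar) (the text does not spell out the formula, so this particular form is an assumption, and the function and variable names are illustrative):

```python
def base_for_max_deviation(f_mm, i_near_mm, i_far_mm, d_max_mm=1.2):
    """Stereo base B that yields on-film deviation d_max between the nearest
    and farthest objects, assuming d = F * B * (1/Inear - 1/Ifar)."""
    return d_max_mm / (f_mm * (1.0 / i_near_mm - 1.0 / i_far_mm))

# 35 mm lens, nearest object at 2 m, background at infinity:
b = base_for_max_deviation(35.0, 2000.0, float("inf"))  # roughly 69 mm
# Move the nearest object to 1 m and B halves -- the base tracks the scene:
b2 = base_for_max_deviation(35.0, 1000.0, float("inf"))
```

Note how B changes whenever the near/far distances change, which is exactly the behavior criticized below.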
I find this approach very artificial. The stereo base changes every time the distances of the near/far objects change. Imagine the spacing of our eyes changing as we move around, just because the distances of near and far objects have changed. It is crazy!
The other school of thought advocates a constant convergence angle (expressed as the ratio B/I). One example is the well-known rule of thumb, the “1/30 rule,” which says that the stereo base should be equal to 1/30 of the distance of the nearest object (B/I = 1/30). I prefer this approach for my stereo photography, but I understand that the convergence angle can change depending on the subject. For example, close-ups and macro photography generally require a larger convergence (1/20 to 1/10). The reason I like the convergence approach is that it is easier to calculate (divide the stereo base by the distance of the near object to get the ratio, or multiply the ratio by the distance to get the stereo base) and easier to visualize.
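The constant-convergence approach really is a one-line calculation; a sketch (the 1/30 and 1/10 ratios are from the text, the function name is illustrative):

```python
def stereo_base(near_mm, ratio=1.0 / 30.0):
    """Stereo base from a constant convergence ratio B/I: B = I * ratio."""
    return near_mm * ratio

# Nearest object at 3 m with the 1/30 rule:
b = stereo_base(3000.0)                  # about 100 mm
# Macro work might use 1/10 instead:
b_macro = stereo_base(300.0, ratio=0.1)  # about 30 mm
```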
Is "More" Always "Better"?
Related to this topic is the frequently asked question: “Wider stereo base means more depth. Stereo photography is about depth. So a wider base (and therefore more depth) is always better, right?”
I hope it is clear that the answer is “not necessarily.” More is not always better; sometimes less is better. Putting more depth into the scene makes the scene appear smaller in size. This can lead to unusual and impressive images, like a “toy model” impression of a building or the Grand Canyon. But many times making an object appear larger is equally, or more, impressive. And many times simply reproducing a scene in near-ortho (as seen by the eyes) is best. It all depends on the subject, the application, and personal taste.
By all means experiment with different stereo bases, but it would be a mistake to assume that more is always better!
1 comment:
Hi George!
You wrote: “...maximum on-film deviation, usually 1.2 mm (for 35 mm film) and calculates B...”
My point: 1.2 mm of on-film deviation applies ONLY with a 35 mm lens on 35 mm film.
In other words: the Realist format.
Using the 1/30 rule, the convergence is 2 degrees. With a 35 mm lens this setup produces 1.2 mm of on-film deviation, but the same rule with a 50 mm lens produces 1.7 mm of on-film deviation. Both are 2 degrees of convergence and 2 degrees of deviation.
So the people who use the calculators must decide which lens to enter: 50 or 35 mm? With a 50 mm lens, 1.7 mm of on-film deviation must be used; with a 35 mm lens, only 1.2 mm.
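The figures above are easy to check: with the 1/30 rule, a near object set against an infinity background gives an on-film deviation of d = F × (B/I) = F/30, assuming the same simplified deviation formula as before (the function name is illustrative):

```python
def on_film_deviation(f_mm, ratio=1.0 / 30.0):
    """On-film deviation of the near object vs. infinity: d = F * (B/I)."""
    return f_mm * ratio

d35 = on_film_deviation(35.0)  # about 1.17 mm -> the 1.2 mm figure
d50 = on_film_deviation(50.0)  # about 1.67 mm -> the 1.7 mm figure
```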
My point about 2 degrees of deviation is this: 1 degree for BEHIND the convergence point, and another 1 degree for BEFORE the convergence point.
About the stereo base: a bigger stereo base ONLY produces better stereo visibility of the infinity point.
Best regards. Imre