An idea came to me when I asked myself: « What is the optimal resolution of a texture, given the distance of an object from the camera? », and I started to split the problem into smaller tasks.
The first answer was: « Assume that we are watching a square plane whose apparent dimensions fit the frame of the camera. And assume that the texture perfectly fits the borders of the plane (a common situation, obtained by executing the "Unwrap" command in Edit Mode). The condition required to obtain such a result is to place the plane at a "special" distance from the camera (a distance that I will call the Critical Distance). At that distance the plane texture can have exactly the same resolution as the final render frame. Beyond the Critical Distance, the texture resolution can be lower. At distances shorter than the CD, the texture must have a higher resolution than the render frame. ».
By default, Blender's render output resolution is set to 1920 × 1080 px. Therefore, the texture of our plane can be (for example) the nearest power-of-two square: 2048 × 2048 px.
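To get a feel for how the required resolution scales with distance, here is a minimal sketch (my own back-of-the-envelope helper, not part of the add-on below), assuming the apparent width of the plane shrinks in proportion to Critical Distance / distance:

import math

def required_texture_size(render_width_px, critical_distance, distance):
    # Hypothetical helper: the plane covers a fraction of the frame equal to
    # critical_distance / distance, so the texture only needs that fraction
    # of the render width.
    covered_px = render_width_px * critical_distance / distance
    # Round up to the next power of two, as with the 2048 px example above.
    return 2 ** math.ceil(math.log2(covered_px))

print(required_texture_size(1920, 1.0, 1.0))  # 2048, at the Critical Distance
print(required_texture_size(1920, 1.0, 2.0))  # 1024, at twice the Critical Distance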
So, my first challenge was to find the Critical Distance and (why not) to place the plane at that distance in front of the camera.
The Critical Distance is obtained by the formula:

d = (l / 2) / tan(alpha / 2)

where alpha is the FOV of the camera along the X direction and l is the width of the object.
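As a quick sanity check of the formula, assuming Blender's default camera (a 35 mm lens on a 32 mm wide sensor, values I am taking as the defaults here) and the default 2-unit plane:

import math

angle_x = 2.0 * math.atan((32.0 / 2.0) / 35.0)  # horizontal FOV, about 49.1 degrees
l = 2.0                                         # width of the default plane, in Blender units
d = (l / 2.0) / math.tan(angle_x / 2.0)
print(d)  # 2.1875

So a 2-unit wide plane should fit the frame at about 2.19 units in front of the default camera; the add-on below computes the same value from the scene data.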
So I coded a simple add-on for Blender (still in early alpha: to run it, open the script in the Text Editor and press ALT+P).
# Apparent Dimension To Camera Frame Add-On for Blender 2.70
# Author: Marco Frisan
# Author can be found at: http://endercomics.blogspot.it, http://ilearncocoa.blogspot.it, http://www.facebook.com/marcofrisan
import bpy
import mathutils
import math
def main(context):
    for ob in context.scene.objects:
        print(ob)


class ApparentDimToCamFrame(bpy.types.Operator):
    """Move or scale and rotate the selected object to fit the camera frame with its apparent dimensions."""
    bl_idname = "object.apparent_dim_to_cam_frame"
    bl_label = "Apparent Dimension To Camera Frame"

    @classmethod
    def poll(cls, context):
        return context.active_object is not None

    def execute(self, context):
        # Get the active object.
        obj = context.active_object
        # Get the active object dimensions.
        objDim = obj.dimensions
        print('Object name: ' + repr(obj.name) + ';\nObject dimensions: ' + repr(objDim))
        # Get the object transformation matrix in world coordinates.
        objWT = obj.matrix_world
        print('Object transformation matrix:\n' + repr(objWT))
        # Get the components of the object transformation.
        objLoc, objRot, objSca = objWT.decompose()
        # Get the active camera.
        # The Scene.camera attribute returns the camera
        # as an object of type Object. To get Camera type
        # properties you have to get the data block of this
        # object. See later.
        cam = context.scene.camera
        print('Camera type: ' + repr(cam.type) + ';\nCamera name: ' + repr(cam.name))
        # Get the camera transformation matrix in world coordinates.
        camWT = cam.matrix_world
        print('Camera transformation matrix:\n' + repr(camWT))
        # Get the components of the camera transformation.
        camLoc, camRot, camSca = camWT.decompose()
        # Create a translation matrix from the location vector of the camera.
        locM = mathutils.Matrix.Translation(camLoc)
        print('Camera location matrix:\n' + repr(locM))
        # Create a rotation matrix from the rotation quaternion of the camera.
        rotM = mathutils.Matrix.Rotation(camRot.angle, 4, camRot.axis)
        print('Camera rotation matrix:\n' + repr(rotM))
        #print(rotM)
        #print('Camera rotation matrix, pick values:\n')
        #print('[0][0]:', rotM[0][0])
        #print('[1][1]:', rotM[1][1])
        #print('[1][2]:', rotM[1][2])
        #print('[2][1]:', rotM[2][1])
        #print('[2][2]:', rotM[2][2])
        # The object must keep its original scale, so we write its scale
        # components into an identity matrix.
        scaM = mathutils.Matrix.Identity(4)
        scaM[0][0], scaM[1][1], scaM[2][2] = objSca[0], objSca[1], objSca[2]
        print('Scale matrix:\n' + repr(scaM))
        resultT = locM * rotM * scaM
        #print(cam.type)
        #print(cam.data.name)
        #print(bpy.data.cameras)
        #print(bpy.data.cameras.get(cam.data.name))
        # Get the camera data block.
        camData = bpy.data.cameras.get(cam.data.name)
        # Get the camera FOV angle in the X direction and divide it by 2.
        fovXHalf = camData.angle_x * 0.5
        # Get half of the X dimension of the object.
        dimXHalf = objDim[0] * 0.5
        # Get the distance at which the object (apparently) fits the camera frame.
        d = (math.cos(fovXHalf) * dimXHalf) / math.sin(fovXHalf)
        print('Critical distance: ' + repr(d))
        # Create a vector of magnitude equal to the critical distance.
        # Since the camera always faces -Z, the critical distance is the
        # third component of the vector.
        dV = mathutils.Vector((0.0, 0.0, -d))
        print('Critical distance vector: ' + repr(dV))
        # Orient the vector like the camera.
        dV.rotate(camRot)
        print('Critical distance vector aligned to camera: ' + repr(dV))
        # Create a translation matrix from the vector.
        # NOTE: I think we could use a matrix directly, setting [3][2] to -d.
        # But it works, for now.
        dT = mathutils.Matrix.Translation(dV)
        print('Critical distance translation matrix:\n' + repr(dT))
        # Apply the critical distance to the result transformation.
        resultT = dT * resultT
        print('Result transformation:\n' + repr(resultT))
        obj.matrix_world = resultT
        return {'FINISHED'}


def register():
    bpy.utils.register_class(ApparentDimToCamFrame)


def unregister():
    bpy.utils.unregister_class(ApparentDimToCamFrame)


if __name__ == "__main__":
    register()

    # test call
    bpy.ops.object.apparent_dim_to_cam_frame()
This is the Blender 3D file: texture_size_&_camera_distance.blend.