Project Details » History » Revision 3
Revision 3/4 (WONGKAI Briana Monika Luckyta, 12/25/2025 04:21 PM)
[[Wiki|Home]] | **[[Project Details|Project Details]]** | [[Group Members|Group Members]] | [[UML_Diagrams|UML Diagrams]] | [[Weekly Progress|Weekly Progress]] | [[Code|Code]] | [[Results|Results]]

---

h1. Project Details

---

h2. I. Project Overview

This project addresses the challenge of producing a unified visual output from multiple projectors by developing a software-driven image composition system. The system combines two or more projected images into a single coherent display while minimizing visible boundaries, luminance variation, and color imbalance across overlapping regions.

The implementation is based on *Python* and the *OpenCV* framework. Computational image-processing techniques such as luminance normalization, transparency-based blending, and spatial intensity control are applied to correct projection inconsistencies caused by illumination differences and surface variation.

Development is conducted using a role-based team structure covering implementation, architectural modeling, testing, and documentation. This organization supports parallel progress and ensures consistency across design and validation phases. Doxygen is used for automated code documentation, while Redmine supports task tracking and coordination. These tools enable a controlled workflow suitable for iterative development and long-term maintainability.

The final outcome is a reusable software framework capable of real-time image blending and correction, serving as a foundation for advanced projection and visualization systems.

---

h2. II. Motivation and Problem Definition

Multi-projector systems commonly exhibit discontinuities in overlapping regions. Since projectors emit light, the region where two projections meet receives double the light intensity (Left + Right), resulting in a visible "bright band" or seam.

* **The Artifacts**: Visible seams, uneven brightness, and color distortion.
* **The Solution**: These artifacts reduce display quality and visual coherence. Manual calibration techniques are time-consuming and highly sensitive to user accuracy, making them impractical as system complexity increases. This project proposes an automated, software-based alternative that performs alignment and blending algorithmically, eliminating reliance on specialized calibration hardware.

---

h2. III. Project Objectives

* Develop an automated system to merge multiple projected images into a single seamless output.
* Normalize luminance and color across overlap regions.
* Apply transparency-based blending for smooth transitions.
* Model system architecture using UML.
* Maintain full documentation and project coordination using Doxygen and Redmine.

---

h2. IV. System Capabilities

The system supports automatic projection blending, luminance normalization, real-time video processing, an interactive graphical interface, modular architecture, and integrated documentation and task management.

---

h2. V. Algorithms and Theoretical Framework

The system operates on a shared projection surface illuminated by synchronized projectors. To achieve a seamless blend, we implement two core mathematical adjustments: *Alpha Blending* and *Gamma Correction*.

h3. *A. Alpha Blending (Transparency Control)*

Alpha blending merges two visual layers based on a transparency coefficient alpha (α).
We generate a gradient mask where the transparency of the left image fades from 1.0 to 0.0, and the right image fades from 0.0 to 1.0.

p=. *+Linear Blending Formula:+* *@I_out = (1 - α) × I₁ + α × I₂@*

p=. *+Quadratic Blending Formula:+* *@I_out = (1 - α²) · I₁ + α² · I₂@*

p=. !Screenshot%202025-12-25%20at%2015.59.11.png!

h3. *B. Gamma Correction (Luminance Normalization)*

Standard linear blending fails because projectors are *non-linear devices*. A pixel value of 50% (128) does not result in 50% light output; due to the projector's gamma (approx. 2.2) it results in only ~22% light output. This causes the blended region to appear darker than the rest of the image (a "dark band"). To correct this, we apply an *Inverse Gamma* function to the blend mask before applying it to the image. This "boosts" the pixel values in the overlap region so that the final optical output is linear.

p=. *+Gamma Correction Formula:+* *@I_out = 255 × (I_in / 255)¹ᐟᵞ@*

p=. !{width: 500px}clipboard-202512251559-mywr2.png!

By setting gamma to match the projector (typically 2.2), we ensure that: *Software Correction × Projector Gamma = Linear Output*.

p=. !clipboard-202512251605-hqoyj.png!

p=. *Figure 1.* The relationship between input pixel intensity and corrected output. The +orange line+ shows the software correction (gamma=3) boosting values to counteract the projector's drop (gamma ~2.2). The dashed grey line represents a linear response (no correction), while the blue and green lines represent under-correction (gamma=0.5) and over-correction (gamma=50) respectively.

---

h2. VI. Experimental Validation: Gamma Analysis

To validate the necessity of Gamma Correction and verify our software's behavior, we performed a comparative analysis. First, we generated a computational plot of the quadratic blending mechanics, followed by physical projection tests using three distinct Gamma values to observe the real-world impact on luminance uniformity.

h3. *A. Blending Mechanics and Theoretical Boost*

To visualize how the inverse gamma correction modifies the standard linear fade, we generated a spatial intensity plot of the overlap region (Figure 2).

p=. !clipboard-202512251609-07rkw.png!

p=. *Figure 2.* Spatial intensity plot of the overlap region. The dashed lines represent a standard linear fade, which results in insufficient light output. The solid blue and red curves show the gamma-corrected output (boosting the mid-tones) applied by our software to compensate for projector non-linearity.

As illustrated in Figure 2, the solid curves bow upward across the overlap zone. This represents the mathematical "boost" applied to the pixel values. By increasing the intensity of the mid-tones before they reach the projector, we counteract the projector's physical tendency to dim those mid-tones, theoretically resulting in a linear, uniform light output.

h3. *B. Physical Projection Results*

We tested this theory physically by projecting a test image and varying the gamma parameter in the configuration.

*Case 1: Under-Correction (gamma = 0.5)*

Applying a gamma value below 1.0 results in a curve that bows downward, worsening the natural dimming effect of the projectors.

p=. !clipboard-202512251612-0ii60.jpeg! !clipboard-202512251613-szrss.png!

p=. *Figure 3.* Physical projection result using gamma = 0.5. A distinct "dark band" is visible in the center overlap region due to under-correction of luminance.

As seen in Figure 3, the overlap region is significantly darker than the non-overlapped areas. The software darkened the mid-tones too quickly, compounding the projector's natural light loss. This confirms that a concave (downward-bowing) blending curve is unsuitable for uniform projection blending.
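The mask construction behind these gamma cases can be sketched in a few lines of NumPy. This is a minimal illustration only, not the project's actual implementation: the function name and the @width@/@overlap@ parameters are assumptions. The inverse-gamma boost @α ** (1/γ)@ follows the Gamma Correction Formula; note how gamma = 3.0 raises the overlap mid-point from 0.5 to roughly 0.79 (upward bow), while gamma = 0.5 squares it down to 0.25 (the dark band of Case 1).

```python
import numpy as np

def make_blend_masks(width, overlap, gamma):
    """Build per-column alpha masks for two projectors sharing `overlap` px.

    Inside the overlap, the left image fades 1.0 -> 0.0 and the right
    image 0.0 -> 1.0; the inverse-gamma power (1/gamma) then boosts the
    mid-tones to compensate for the projector's non-linear light output:
    alpha_corrected = alpha ** (1 / gamma).
    """
    alpha = np.linspace(0.0, 1.0, overlap)   # linear fade across the overlap
    left = np.ones(width)                     # full intensity outside the overlap
    right = np.ones(width)
    left[width - overlap:] = (1.0 - alpha) ** (1.0 / gamma)
    right[:overlap] = alpha ** (1.0 / gamma)
    return left, right

# gamma = 3.0: mid-point boosted to 0.5 ** (1/3) ≈ 0.79 (Case 2 behaviour)
left_ok, right_ok = make_blend_masks(width=1920, overlap=201, gamma=3.0)

# gamma = 0.5: mid-point squared down to 0.25 (Case 1, "dark band")
left_dark, right_dark = make_blend_masks(width=1920, overlap=201, gamma=0.5)
```

Multiplying each source image column-wise by its mask and projecting both outputs sums the light in the overlap; with a well-chosen gamma the two boosted curves add up to a visually uniform intensity.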
*Case 2: Optimal Compensation (gamma = 3.0)*

Applying a gamma value near the industry standard for display devices (typically between 2.2 and 3.0) provides the necessary upward boost to the mid-tones.

p=. !clipboard-202512251615-u1uov.jpeg! !clipboard-202512251615-r2kcq.png!

p=. *Figure 4.* Physical projection result using gamma = 3.0. The blend is seamless, with uniform brightness achieved across the entire overlap zone.

Figure 4 demonstrates a successful blend. The luminance boost applied by the software effectively cancelled out the projector's physical gamma curve. The sum of the light intensities from both projectors produces a uniform brightness level, rendering the seam invisible to the naked eye.

*Case 3: Over-Correction (gamma = 50.0)*

Applying an extreme gamma value tests the limits of the algorithm. Mathematically, this creates a curve that jumps almost instantly from black to maximum brightness.

p=. !clipboard-202512251616-ks0i1.jpeg! !clipboard-202512251616-ym3xu.png!

p=. *Figure 5.* Physical projection result using gamma = 50.0. The gradient is destroyed, resulting in a hard, bright edge instead of a smooth transition.

As shown in Figure 5, extreme over-correction destroys the gradient necessary for a smooth transition. The overlap area becomes a uniform bright band with hard edges. This validates that while a luminance boost is necessary, the correction curve must be graduated to match the projector's response characteristics; an excessively steep curve eliminates the blending effect entirely.

---

h2. VII. Software Architecture

The system is structured into four primary classes to ensure modularity and separation of concerns.
* *ConfigReader* manages external configuration parameters (JSON), such as gamma values, screen side, and overlap width.
* *VideoProcessing* handles video input, frame acquisition, and resource management.
* *ProjectionSplit* performs the core mathematical blending operations: it generates the NumPy masks, applies the gamma power functions, and merges the alpha channels.
* *ImageDisplayApp* provides the graphical user interface (GUI) for runtime adjustments and full-screen output display.

---

h2. VIII. Functional Requirements

* **Input**: The system shall accept standard image formats (JPG, PNG) and video streams.
* **Configuration**: Users shall be able to adjust @gamma@ and @overlap_pixels@ at runtime via a configuration file or GUI, and the system shall support multiple blending strategies.
* **Output**: The system shall generate left- and right-specific images that, when projected physically, align to form a single continuous image, display them in real time, and handle errors without crashing.

---

h2. IX. Development Tools

* *Python* with *OpenCV* and *NumPy* is used for image rendering and matrix processing, including the per-pixel gamma power function.
* *Doxygen* generates structured code documentation.
* *Redmine* manages tasks and progress.
* *Astah* supports UML diagram creation.

---

h2. X. Practical Applications

The system enables cost-efficient multi-projector displays for exhibitions, education, public events, and simulation environments without specialized calibration hardware.

---

h2. XI. Limitations and Future Work

Current limitations include sensitivity to physical projector alignment, environmental lighting conditions, and performance constraints at high resolutions.
Future work will focus on geometric calibration, GPU-based optimization, and expanded compatibility with immersive display technologies.

---
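To make the configuration workflow concrete, the following is a minimal, hypothetical sketch of the JSON parameters managed by *ConfigReader*, with basic validation as required by the Functional Requirements. Only @gamma@ and @overlap_pixels@ are named on this page; the other keys, the exact file layout, and the checks are assumptions, not the project's actual format.

```python
import json

# Hypothetical configuration matching the parameters described on this page.
# Only "gamma" and "overlap_pixels" appear in the text; the remaining keys
# are illustrative assumptions.
EXAMPLE_CONFIG = """
{
    "gamma": 3.0,
    "overlap_pixels": 200,
    "screen_side": "left",
    "input_path": "test_image.png"
}
"""

def load_config(text):
    """Parse the blending configuration and reject clearly invalid values."""
    cfg = json.loads(text)
    if cfg["gamma"] <= 0:
        raise ValueError("gamma must be positive")
    if cfg["overlap_pixels"] < 0:
        raise ValueError("overlap_pixels must be non-negative")
    if cfg["screen_side"] not in ("left", "right"):
        raise ValueError("screen_side must be 'left' or 'right'")
    return cfg

cfg = load_config(EXAMPLE_CONFIG)
```

Rejecting invalid values at load time (rather than deep inside the blending pipeline) is one simple way to satisfy the "handle errors without crashing" requirement.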