Project Details » History » Version 2
WONGKAI Briana Monika Luckyta, 12/18/2025 02:14 PM
h1. Project Details

---

h2. I. Project Overview

This project addresses the challenge of producing a unified visual output from multiple projectors by developing a software-driven image composition system. The system combines two or more projected images into a single coherent display while minimizing visible boundaries, luminance variation, and color imbalance across overlapping regions.

The implementation is based on *Python* and the *OpenCV* framework. Computational image-processing techniques such as luminance normalization, transparency-based blending, and spatial intensity control are applied to correct projection inconsistencies caused by illumination differences and surface variation.

Development is conducted using a role-based team structure covering implementation, architectural modeling, testing, and documentation. This organization supports parallel progress and ensures consistency across design and validation phases.

Doxygen is used for automated code documentation, while Redmine supports task tracking and coordination. These tools enable a controlled workflow suitable for iterative development and long-term maintainability.

The final outcome is a reusable software framework capable of real-time image blending and correction, serving as a foundation for advanced projection and visualization systems.

---

h2. II. Motivation and Problem Definition

Multi-projector systems commonly exhibit discontinuities in overlapping regions, including visible seams, uneven brightness, and color distortion. These artifacts reduce display quality and visual coherence.

Manual calibration techniques are time-consuming and highly sensitive to operator accuracy, making them impractical as system complexity increases.

This project proposes an automated, software-based alternative that performs alignment and blending algorithmically, eliminating reliance on specialized calibration hardware.

---

h2. III. Project Objectives

* Develop an automated system to merge multiple projected images into a single seamless output.
* Normalize luminance and color across overlap regions.
* Apply transparency-based blending for smooth transitions.
* Model system architecture using UML.
* Maintain full documentation and project coordination using Doxygen and Redmine.

---

h2. IV. System Capabilities

The system supports:

* Automatic projection blending
* Luminance normalization
* Real-time video processing
* An interactive graphical interface
* A modular architecture
* Integrated documentation and task management

---

h2. V. Algorithms and Processing Methods

The system operates on a shared projection surface illuminated by synchronized projectors.

The following techniques are applied:

* Linear and quadratic blending models
* Gamma-based luminance correction
* Alpha-based transparency control
* Spatial intensity attenuation
* Frame synchronization for video input

*+Linear Blending Formula:+* *@I_out = (1 - α) · I₁ + α · I₂@*

*+Quadratic Blending Formula:+* *@I_out = (1 - α²) · I₁ + α² · I₂@*

*+Gamma Correction Formula:+* *@I_out = 255 · (I_in / 255)^(1/γ)@*

---

h2. VI. Software Architecture

* *ConfigReader* manages external configuration parameters.
* *VideoProcessing* handles video input and frame acquisition.
* *ProjectionSplit* performs the core blending operations.
* *ImageDisplayApp* provides the graphical interface and output display.

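One way to read this module breakdown is as the following minimal class skeleton. It is a sketch only: the method names, signatures, and the scalar stand-in for frames are assumptions for illustration, not the project's actual API, and the display step is stubbed out.

```python
class ConfigReader:
    """Manages external configuration parameters (e.g. blend mode, alpha)."""
    def __init__(self, params: dict):
        self.params = params

    def get(self, key, default=None):
        return self.params.get(key, default)


class VideoProcessing:
    """Handles frame acquisition; frames are supplied as a plain sequence here."""
    def __init__(self, frames):
        self.frames = list(frames)

    def next_frame(self):
        return self.frames.pop(0) if self.frames else None


class ProjectionSplit:
    """Core blending: combines two frames using the configured blend weight."""
    def __init__(self, config: ConfigReader):
        self.alpha = config.get("alpha", 0.5)

    def blend(self, f1, f2):
        return (1.0 - self.alpha) * f1 + self.alpha * f2


class ImageDisplayApp:
    """Wires the components together; a real app would render the result."""
    def __init__(self, config: ConfigReader,
                 left: VideoProcessing, right: VideoProcessing):
        self.splitter = ProjectionSplit(config)
        self.left, self.right = left, right

    def run_once(self):
        f1, f2 = self.left.next_frame(), self.right.next_frame()
        return None if f1 is None or f2 is None else self.splitter.blend(f1, f2)
```

The point of the split is that *ProjectionSplit* depends only on *ConfigReader*, so blending strategies can be swapped without touching acquisition or display code.
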
---

h2. VII. Functional Requirements

The system shall:

* Accept image and video inputs
* Support multiple blending strategies
* Allow runtime configuration through a GUI
* Display outputs in real time
* Handle errors without crashing

---

h2. VIII. Development Tools

* *Python* with *OpenCV* and *NumPy* is used for processing.
* *Doxygen* generates structured code documentation.
* *Redmine* manages tasks and progress.
* *Astah* supports UML diagram creation.

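Since Doxygen also parses Python sources, comment blocks starting with @##@ can carry the usual Doxygen tags. A hedged example of how a helper might be documented (the function itself is illustrative, not from the project):

```python
## @brief Clamp a pixel intensity to the displayable 8-bit range.
#  @param value Raw intensity after blending (may fall outside [0, 255]).
#  @return The intensity clamped to 0..255.
def clamp_intensity(value: float) -> float:
    return max(0.0, min(255.0, value))

print(clamp_intensity(300.0))  # 255.0
```
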
---

h2. IX. Practical Applications

The system enables cost-efficient multi-projector displays for exhibitions, education, public events, and simulation environments, without requiring specialized calibration hardware.

---

h2. X. Limitations and Future Work

Current limitations include sensitivity to physical projector alignment, environmental lighting conditions, and performance constraints at high resolutions.

Future work will focus on automated geometric calibration, GPU-based optimization, and expanded compatibility with immersive display technologies.

---