Project Details


I. Project Overview

This project addresses the challenge of producing a unified visual output from multiple projectors by developing a software-driven image composition system. The system combines two or more projected images into a single coherent display while minimizing visible boundaries, luminance variation, and color imbalance across overlapping regions.

The implementation is based on Python and the OpenCV library. Computational image-processing techniques such as luminance normalization, transparency-based blending, and spatial intensity control are applied to correct projection inconsistencies caused by illumination differences and surface variation.

Development is conducted using a role-based team structure covering implementation, architectural modeling, testing, and documentation. This organization supports parallel progress and ensures consistency across design and validation phases.

Doxygen is used for automated code documentation, while Redmine supports task tracking and coordination. These tools enable a controlled workflow suitable for iterative development and long-term maintainability.

The final outcome is a reusable software framework capable of real-time image blending and correction, serving as a foundation for advanced projection and visualization systems.


II. Motivation and Problem Definition

Multi-projector systems commonly exhibit discontinuities in overlapping regions, including visible seams, uneven brightness, and color distortion. These artifacts reduce display quality and visual coherence.

Manual calibration techniques are time-consuming and highly sensitive to user accuracy, making them impractical as system complexity increases.

This project proposes an automated, software-based alternative that performs alignment and blending algorithmically, eliminating reliance on specialized calibration hardware.


III. Project Objectives

  • Develop an automated system to merge multiple projected images into a single seamless output.
  • Normalize luminance and color across overlap regions.
  • Apply transparency-based blending for smooth transitions.
  • Model system architecture using UML.
  • Maintain full documentation and project coordination using Doxygen and Redmine.

IV. System Capabilities

The system supports:
  • Automatic projection blending
  • Luminance normalization
  • Real-time video processing
  • An interactive graphical interface
  • Modular architecture
  • Integrated documentation and task management


V. Algorithms and Processing Methods

The system operates on a shared projection surface illuminated by synchronized projectors.

The following techniques are applied:
  • Linear and quadratic blending models
  • Gamma-based luminance correction
  • Alpha-based transparency control
  • Spatial intensity attenuation
  • Frame synchronization for video input

Linear Blending Formula: I_out = (1 − α) × I₁ + α × I₂

Quadratic Blending Formula: I_out = (1 − α²) × I₁ + α² × I₂

Gamma Correction Formula: I_out = 255 × (I_in / 255)^(1/γ)
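
A minimal NumPy sketch of these formulas follows; the function names and the lookup-table implementation of gamma correction are illustrative choices, not the project's actual code.

    import numpy as np

    def overlap_ramp(height, width):
        """Horizontal alpha ramp across the overlap zone (spatial intensity
        attenuation): 0.0 at the left edge, 1.0 at the right."""
        ramp = np.linspace(0.0, 1.0, width, dtype=np.float32)
        return np.tile(ramp, (height, 1))[:, :, None]  # broadcasts over BGR

    def linear_blend(img1, img2, alpha):
        """I_out = (1 - α) × I₁ + α × I₂; alpha may be a scalar or a ramp."""
        out = (1.0 - alpha) * img1.astype(np.float32) + alpha * img2.astype(np.float32)
        return np.clip(out, 0, 255).astype(np.uint8)

    def quadratic_blend(img1, img2, alpha):
        """I_out = (1 - α²) × I₁ + α² × I₂: a softer onset than linear blending."""
        return linear_blend(img1, img2, np.square(alpha))

    def gamma_correct(img, gamma):
        """I_out = 255 × (I_in / 255)^(1/γ), applied via a 256-entry lookup table."""
        table = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
        return table[img]  # img must be uint8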


VI. Software Architecture

  • ConfigReader manages external configuration parameters.
  • VideoProcessing handles video input and frame acquisition.
  • ProjectionSplit performs core blending operations.
  • ImageDisplayApp provides the graphical interface and output display.
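
To make the module boundaries concrete, the sketch below assumes the responsibilities listed above; all method names, signatures, and the overlap_px parameter are hypothetical, not the project's actual interfaces.

    import cv2

    class ConfigReader:
        """Loads key=value parameters (e.g. overlap width, gamma) from a text file."""
        def __init__(self, path):
            self.params = {}
            with open(path) as f:
                for line in f:
                    if "=" in line:
                        key, value = line.split("=", 1)
                        self.params[key.strip()] = value.strip()

    class VideoProcessing:
        """Wraps cv2.VideoCapture and yields frames until the stream ends."""
        def __init__(self, source):
            self.cap = cv2.VideoCapture(source)

        def frames(self):
            while True:
                ok, frame = self.cap.read()
                if not ok:
                    break
                yield frame

    class ProjectionSplit:
        """Splits a frame into two halves that share an overlapping blend zone."""
        def __init__(self, overlap_px):
            self.overlap_px = overlap_px

        def split(self, frame):
            mid = frame.shape[1] // 2
            half = self.overlap_px // 2
            return frame[:, : mid + half], frame[:, mid - half :]

    class ImageDisplayApp:
        """Orchestrates the pipeline: configure, read, split, display."""
        def __init__(self, config_path, source):
            cfg = ConfigReader(config_path).params
            self.video = VideoProcessing(source)
            self.splitter = ProjectionSplit(int(cfg.get("overlap_px", 100)))

        def run(self):
            for frame in self.video.frames():
                left, right = self.splitter.split(frame)
                cv2.imshow("projector_left", left)
                cv2.imshow("projector_right", right)
                if cv2.waitKey(1) & 0xFF == ord("q"):
                    break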

VII. Functional Requirements

The system shall:
  • Accept image and video inputs
  • Support multiple blending strategies
  • Allow runtime configuration through a GUI
  • Display outputs in real time
  • Handle errors without crashing
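
As an illustration of the last two requirements, the loop below reads two input streams in lockstep, displays a blended result in real time, and exits cleanly when either read fails; the file names are placeholders, and both streams are assumed to share one resolution.

    import cv2

    def read_frame_pair(cap_a, cap_b):
        """Read one frame from each stream; return None on any failure so the
        caller can stop cleanly instead of crashing."""
        ok_a, frame_a = cap_a.read()
        ok_b, frame_b = cap_b.read()
        if not (ok_a and ok_b):
            return None
        return frame_a, frame_b

    cap_a = cv2.VideoCapture("left.mp4")   # placeholder input files
    cap_b = cv2.VideoCapture("right.mp4")
    while True:
        pair = read_frame_pair(cap_a, cap_b)
        if pair is None:
            break  # end of stream or device error: exit without raising
        frame_a, frame_b = pair
        cv2.imshow("output", cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0.0))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap_a.release()
    cap_b.release()
    cv2.destroyAllWindows()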


VIII. Development Tools

  • Python with OpenCV and NumPy is used for processing.
  • Doxygen generates structured code documentation.
  • Redmine manages tasks and progress.
  • Astah supports UML diagram creation.
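
For example, Doxygen can extract Python documentation from comment blocks that start with ##; the function below is an illustrative sketch, not taken from the project sources.

    import cv2

    ## @brief Blend two frames in their overlap region.
    #  @param left   Frame from the left projector (uint8 BGR).
    #  @param right  Frame from the right projector (uint8 BGR).
    #  @param alpha  Blend weight in [0, 1].
    #  @return Blended uint8 frame.
    def blend_overlap(left, right, alpha):
        return cv2.addWeighted(left, 1.0 - alpha, right, alpha, 0.0)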

IX. Practical Applications

The system enables cost-efficient multi-projector displays for exhibitions, education, public events, and simulation environments without specialized calibration hardware.


X. Limitations and Future Work

Current limitations include sensitivity to physical projector alignment, environmental lighting conditions, and performance constraints at high resolutions.
Future work will focus on automated geometric calibration, GPU-based optimization, and expanded compatibility with immersive display technologies.

