About Project » History » Revision 15
Sandeep GANESAN, 11/06/2025 09:34 AM
💡 About Project¶
I. Project Overview¶
Our project focuses on developing a high-quality image composition system capable of seamlessly merging two or more projected images into a single, visually uniform display. The goal is to ensure that the combined image appears continuous and free from visible seams, color shifts, or brightness inconsistencies.
Built using Python and OpenCV, the system applies a series of advanced image-processing techniques including gamma correction, alpha blending, and intensity adjustment to harmonize overlapping areas. These methods allow us to dynamically compensate for lighting variations and surface irregularities, resulting in a more accurate and visually pleasing projection output.
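As a minimal illustration of the gamma-correction step (a sketch, not the project's exact code; the gamma value of 1.5 is an arbitrary example), an 8-bit image can be corrected via a 256-entry lookup table:

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to an 8-bit image via a 256-entry lookup table.

    (With OpenCV available, cv2.LUT(image, table) performs the same mapping.)
    """
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255.0).astype(np.uint8)
    return table[image]  # NumPy fancy indexing applies the table per pixel

# Brighten a mid-gray test image: gamma > 1 lifts mid-tones.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
brightened = gamma_correct(img, 1.5)
```

Precomputing the table makes the per-pixel cost a single lookup, which is why this form is common for real-time correction.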
The project team is divided into multiple sub-groups, each focusing on specific responsibilities such as software development, UML design, testing, and wiki management. This structure encourages effective collaboration, clear communication, and consistent progress across all development phases.
To maintain transparency and ensure reproducibility, we integrate Doxygen for detailed source-code documentation and Redmine for structured task tracking and project coordination. Together, these tools support a development environment that prioritizes scalability, maintainability, and long-term usability.
Ultimately, the project aims to deliver a robust framework for real-time image correction and blending, serving as a foundation for future extensions in projection mapping, interactive displays, and multi-screen visualization systems.
II. Motivation & Problem Statement¶
When using multiple projectors to display a single image, visible seams or brightness inconsistencies often occur in overlapping regions. These inconsistencies degrade image quality and make the final projection appear uneven.
Manual calibration methods are time-consuming and prone to human error.
Our motivation is to develop a software-based approach that automates the alignment and blending process, ensuring seamless image projection.
By leveraging the OpenCV library, the system can detect overlapping areas, apply brightness corrections, and blend images smoothly — eliminating the need for costly hardware-based calibration systems.
III. Objectives¶
- To develop an automated image blending system capable of merging two or more projections into a single seamless image.
- To apply gamma correction and intensity modification techniques to balance color and brightness across overlapping regions.
- To implement alpha blending for smooth transitions between images.
- To design and visualize the system architecture using UML diagrams.
- To document the entire project using Doxygen and manage tasks via Redmine.
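The alpha blending named in the objectives can be sketched as follows; the linear ramp across the overlap is an illustrative choice, not necessarily the project's exact implementation:

```python
import numpy as np

def alpha_blend_overlap(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Blend two equally sized overlap strips with a linear alpha ramp.

    `left` fades out (alpha 1 -> 0) while `right` fades in (alpha 0 -> 1),
    so the two contributions always sum to full intensity.
    """
    width = left.shape[1]
    alpha = np.linspace(1.0, 0.0, width)[np.newaxis, :, np.newaxis]
    blended = left.astype(np.float64) * alpha + right.astype(np.float64) * (1.0 - alpha)
    return blended.astype(np.uint8)

# A white strip blended into a black strip fades smoothly across the overlap.
white = np.full((2, 5, 3), 255, dtype=np.uint8)
black = np.zeros((2, 5, 3), dtype=np.uint8)
ramp = alpha_blend_overlap(white, black)
```

Because the two weights sum to 1 at every column, total brightness stays constant across the seam, which is exactly what hides it.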
IV. Key Features¶
1. Automated Image Blending
Uses OpenCV and user-defined parameters to automatically blend two projected images, ensuring accurate overlap and alignment.
2. Gamma Correction and Intensity Adjustment
Employs advanced color and brightness correction algorithms to maintain consistent luminance across blended areas, effectively removing visible seams and mismatches.
3. Video Blending
Leverages GPU acceleration through PyTorch to calculate per-pixel brightness for video streams, enabling real-time blending and correction.
4. User-Friendly Graphical Interface
Provides an intuitive GUI that allows users to select interpolation modes, specify overlap pixels, and control blending parameters easily.
5. Modular System Architecture
Designed using UML-based class structures that divide the project into smaller, manageable components, improving scalability and ease of feature expansion.
6. Comprehensive Documentation and Project Management
Integrates Doxygen for automated code documentation and Redmine for task tracking, ensuring transparent collaboration and efficient workflow management.
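A rough sketch of feature 3 (the real system uses PyTorch on the GPU; plain NumPy is used here so the idea stays self-contained): the per-pixel weight map is computed once and then reused for every frame of the stream.

```python
import numpy as np

def make_weight_map(height: int, overlap: int) -> np.ndarray:
    """Precompute a (height, overlap, 1) linear fade-out map for the left frame."""
    return np.tile(np.linspace(1.0, 0.0, overlap), (height, 1))[:, :, np.newaxis]

def blend_frame(left: np.ndarray, right: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Blend the overlap strips of one video frame pair with a fixed weight map."""
    return (left * w + right * (1.0 - w)).astype(np.uint8)

# Simulate a 3-frame stream: the weight map is built once, applied per frame.
h, overlap = 2, 4
w = make_weight_map(h, overlap)
frames = [(np.full((h, overlap, 3), v, np.uint8),
           np.full((h, overlap, 3), 255 - v, np.uint8)) for v in (0, 100, 200)]
blended = [blend_frame(left, right, w) for left, right in frames]
```

Hoisting the weight map out of the per-frame loop is what makes real-time operation feasible; on the GPU the same structure applies, with the map held as a resident tensor.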
V. Algorithm and Theoretical Framework¶
(Add later)
VI. System Architecture¶
(Add later)
VII. Requirement Analysis¶
This section defines the functional requirements of the project, outlining what the system must accomplish.
- Image Input and Processing
  - The system must accept image files and video files as input.
  - The system must split a given image into two sub-images (left and right) with a specified overlap region.
  - The system must allow users to choose one of three blending modes (linear, quadratic, or Gaussian).
  - The system must apply blending algorithms using OpenCV, with PyTorch providing GPU-accelerated computation for videos.
  - The system must save the blended images (left.png, right.png) locally after processing.
- Video Frame Blending
  - The system must process individual video frames sequentially for real-time blending.
  - The system must output a smooth blended video stream without visible seams.
- Graphical User Interface (GUI)
  - The system must let the user select the overlap pixel value from the GUI.
  - The system must let the user choose the blending algorithm from the GUI.
  - The system must run both image and video blending modes from the GUI.
  - The system must let the user view the original, left, and right images in real time in the GUI.
  - The system must be able to display the blended outputs in fullscreen mode from the GUI.
- Error Handling and Feedback
  - The system must handle missing files and display appropriate warnings or error messages.
  - The GUI must handle invalid user input without crashing the program.
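A minimal sketch of the splitting and blending-mode requirements above (the function names and the exact quadratic/Gaussian weight formulas are illustrative assumptions, not the project's specified ones):

```python
import numpy as np

def split_with_overlap(image: np.ndarray, overlap: int):
    """Split an image into left/right sub-images sharing `overlap` columns."""
    mid = image.shape[1] // 2
    left = image[:, : mid + overlap // 2]
    right = image[:, mid - overlap // 2 :]
    return left, right

def blend_weights(overlap: int, mode: str) -> np.ndarray:
    """Fade-out weights for the left image across the overlap region."""
    t = np.linspace(0.0, 1.0, overlap)  # 0 at left edge of overlap, 1 at right
    if mode == "linear":
        return 1.0 - t
    if mode == "quadratic":
        return (1.0 - t) ** 2
    if mode == "gaussian":
        return np.exp(-((t / 0.5) ** 2))  # illustrative width parameter
    raise ValueError(f"unknown blending mode: {mode}")

img = np.arange(2 * 10 * 3, dtype=np.uint8).reshape(2, 10, 3)
left, right = split_with_overlap(img, overlap=4)
```

The shared columns (the last `overlap` of `left` and the first `overlap` of `right`) are identical by construction, which is the region where the chosen weight curve is applied.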
VIII. Technology Stack¶
- Python (OpenCV, NumPy, PyTorch)
- Doxygen
- Redmine
- Astah
IX. Application & Impact¶
(Add later)
X. Limitations & Future Enhancements¶
(Add later)
