h1. šŸ’” About Project

---

h2. I. Project Overview

Our project focuses on developing a high-quality image composition system capable of seamlessly merging two or more projected images into a single, visually uniform display. The goal is to ensure that the combined image appears continuous and free from visible seams, color shifts, or brightness inconsistencies.

Built using Python and OpenCV, the system applies a series of advanced image-processing techniques, including gamma correction, alpha blending, and intensity adjustment, to harmonize overlapping areas. These methods allow us to dynamically compensate for lighting variations and surface irregularities, resulting in a more accurate and visually pleasing projection output.

The project team is divided into multiple sub-groups, each focusing on specific responsibilities such as software development, UML design, testing, and wiki management. This structure encourages effective collaboration, clear communication, and consistent progress across all development phases.

To maintain transparency and ensure reproducibility, we integrate Doxygen for detailed source-code documentation and Redmine for structured task tracking and project coordination. Together, these tools support a development environment that prioritizes scalability, maintainability, and long-term usability.

Ultimately, the project aims to deliver a robust framework for real-time image correction and blending, serving as a foundation for future extensions in projection mapping, interactive displays, and multi-screen visualization systems.

---

h2. II. Motivation & Problem Statement

When using multiple projectors to display a single image, visible seams or brightness inconsistencies often occur in overlapping regions. These inconsistencies degrade image quality and make the final projection appear uneven.

Manual calibration methods are time-consuming and prone to human error.

Our motivation is to develop a software-based approach that automates the alignment and blending process, ensuring seamless image projection.

By leveraging the *OpenCV* library, the system can detect overlapping areas, apply brightness corrections, and blend images smoothly, eliminating the need for costly hardware-based calibration systems.

---

h2. III. Objectives

* To develop an automated image blending system capable of merging two or more projections into a single seamless image.
* To apply *gamma correction* and *intensity modification* techniques to balance color and brightness across overlapping regions.
* To implement *alpha blending* for smooth transitions between images.
* To design and visualize the system architecture using *UML diagrams*.
* To document the entire project using *Doxygen* and manage tasks via *Redmine*.

---

h2. IV. Key Features

*1. Automated Image Blending*
Uses OpenCV and user-defined parameters to automatically blend two projected images, ensuring accurate overlap and alignment.

*2. Gamma Correction and Intensity Adjustment*
Employs advanced color and brightness correction algorithms to maintain consistent luminance across blended areas, effectively removing visible seams and mismatches.

*3. Video Blending*
Leverages GPU acceleration through PyTorch to calculate per-pixel brightness for video streams, enabling real-time blending and correction.

*4. User-Friendly Graphical Interface*
Provides an intuitive GUI that allows users to select interpolation modes, specify overlap pixels, and control blending parameters easily.

*5. Modular System Architecture*
Designed using UML-based class structures that divide the project into smaller, manageable components, improving scalability and ease of feature expansion.

*6. Comprehensive Documentation and Project Management*
Integrates Doxygen for automated code documentation and Redmine for task tracking, ensuring transparent collaboration and efficient workflow management.

---

h2. V. Algorithm and Theoretical Framework

h3. Technologies to be used

We plan to use a single projection surface illuminated by two projectors, each connected to separate computers. Both projectors will display synchronized images or videos, which are combined into a single, seamless projection.

To accomplish this, we will apply the following theoretical techniques:

* *Linear and Quadratic Mixing*
* *Gamma Correction*
* *Alpha Blending*
* *Intensity Adjustment for Edge Blending*
* *Video Frame Synchronization*

---

h3. Linear and Quadratic Mixing

This project introduces two blending functions, linear and quadratic, which are used to control brightness and transition smoothness between overlapping projection zones.

*+Linear Mixing+*
Linear mixing provides a constant-rate interpolation between two projected frames:

p=. *@I(blended) = (1 āˆ’ α) Ɨ I₁ + α Ɨ Iā‚‚@*

This method creates a direct and proportionate blend, suitable for small overlaps and real-time applications with limited motion.

*+Quadratic Mixing+*
Quadratic mixing introduces a *non-linear weight curve* that reduces edge artifacts by applying a quadratic power to alpha:

p=. *@I(blended) = (1 āˆ’ α²) Ɨ I₁ + α² Ɨ Iā‚‚@*

This gives more emphasis to the central region and smoother gradients at the edges, producing results visually similar to gamma-based perceptual blending.
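
As a concrete illustration, here is a minimal NumPy sketch of both mixing modes across a horizontal overlap strip (function and variable names are our own, not taken from the project code):

<pre><code class="python">
import numpy as np

def mix_overlap(left, right, mode="linear"):
    """Blend two equally sized overlap strips (H x W x 3 uint8 arrays)."""
    w = left.shape[1]
    alpha = np.linspace(0.0, 1.0, w, dtype=np.float32)   # 0 at the left edge, 1 at the right
    if mode == "quadratic":
        alpha = alpha ** 2                               # non-linear weight curve
    alpha = alpha[np.newaxis, :, np.newaxis]             # broadcast over rows and channels
    blended = (1.0 - alpha) * left + alpha * right       # I = (1 - α)·I₁ + α·Iā‚‚
    return np.clip(blended, 0, 255).astype(np.uint8)
</code></pre>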

---

h3. Gamma Correction Method

Gamma correction modifies each pixel's luminance through a non-linear power-law transformation to align brightness with human perception:

p=. *@Iā‚’ = 255 Ɨ (Iā‚— / 255)^(1/γ)@*

* *γ > 1 → brightens the image (exponent 1/γ < 1)*
* *γ < 1 → darkens the image*

In this project, gamma correction ensures the luminance from both projectors matches across the surface, preventing brightness mismatches when mixing video frames.
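
A minimal sketch of this correction using an OpenCV lookup table (the helper name is ours; the formula is the one above):

<pre><code class="python">
import cv2
import numpy as np

def gamma_correct(img, gamma):
    """Apply Iā‚’ = 255 Ɨ (Iā‚— / 255)^(1/γ) to a uint8 image via a 256-entry LUT."""
    inv = 1.0 / gamma
    lut = (255.0 * (np.arange(256) / 255.0) ** inv).astype(np.uint8)
    return cv2.LUT(img, lut)   # remaps every pixel through the table
</code></pre>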

p=. !{width:600px}gamm_correction.png!

---

h3. Alpha Blending

Alpha blending merges two visual layers based on a defined transparency coefficient (alpha).

p=. !alpha_blend_equation.png!

By controlling alpha spatially, we can fade one projection into another.

In our system:

* Dynamic alpha masks are generated based on projector overlap geometry
* Linear and quadratic variations of alpha allow adaptive blending for different edge behaviors

p=. !Alpha_compositing..png!

p=. *Diagram showing α blending*
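
A short sketch of how such a spatial mask could be generated and applied, assuming a horizontal overlap of known pixel width (names are illustrative):

<pre><code class="python">
import numpy as np

def build_alpha_mask(width, overlap, side="left", quadratic=False):
    """Per-column alpha weights: 1 in the solid region, fading to 0 across the overlap."""
    alpha = np.ones(width, dtype=np.float32)
    ramp = np.linspace(1.0, 0.0, overlap, dtype=np.float32)
    if quadratic:
        ramp = ramp ** 2                      # quadratic variation of alpha
    if side == "left":                        # left projector fades out on its right edge
        alpha[width - overlap:] = ramp
    else:                                     # right projector fades in on its left edge
        alpha[:overlap] = ramp[::-1]
    return alpha[np.newaxis, :, np.newaxis]   # broadcastable over an H x W x 3 frame

# usage: faded = (frame * build_alpha_mask(frame.shape[1], 200, "left")).astype(np.uint8)
</code></pre>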

---

h3. Intensity Modification

To achieve seamless edge blending, intensity modification is applied using positional control and mixing curves.

* If @projector_side = 1@ → intensity decreases toward the left edge
* If @projector_side = 0@ → intensity decreases toward the right edge
* Quadratic falloff is used at boundary regions to mimic human perceptual smoothness

The blending intensity is dynamically modulated using:

p=. !intensity_equation.png!

where *f(x)* follows a linear or quadratic curve depending on the region and overlap type.

p=. !{width:500px}edge_intensity.png!

p=. *Intensity modification being applied*
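
Since *f(x)* is given above only as an image, here is a hedged sketch of a falloff profile consistent with the description (the parameterization is our own):

<pre><code class="python">
import numpy as np

def intensity_profile(width, overlap, projector_side=1, quadratic=True):
    """Per-column intensity weights f(x) for one projector's frame."""
    f = np.ones(width, dtype=np.float32)
    x = np.linspace(0.0, 1.0, overlap, dtype=np.float32)   # 0 → 1 across the overlap
    fall = x ** 2 if quadratic else x                      # quadratic or linear curve
    if projector_side == 1:          # intensity decreases toward the left edge
        f[:overlap] = fall
    else:                            # intensity decreases toward the right edge
        f[width - overlap:] = fall[::-1]
    return f[np.newaxis, :, np.newaxis]
</code></pre>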

---

h3. Video Support and Frame Synchronization

Unlike static blending, our system supports *real-time video* by:

* Reading synchronized frames from two video sources using OpenCV
* Applying blending and gamma correction to each frame in real time
* Displaying processed frames via a graphical interface (see the sketch below)
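
A minimal sketch of that loop, assuming two same-sized video files and reusing the @gamma_correct@ helper sketched earlier (file paths, weights, and the window title are placeholders):

<pre><code class="python">
import cv2

cap1 = cv2.VideoCapture("left_source.mp4")    # placeholder paths
cap2 = cv2.VideoCapture("right_source.mp4")

while True:
    ok1, frame1 = cap1.read()
    ok2, frame2 = cap2.read()
    if not (ok1 and ok2):                     # stop when either stream ends
        break
    frame2 = gamma_correct(frame2, 1.2)       # example: match the second projector's luminance
    blended = cv2.addWeighted(frame1, 0.5, frame2, 0.5, 0.0)   # frames must share one size
    cv2.imshow("Blended output", blended)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

cap1.release()
cap2.release()
cv2.destroyAllWindows()
</code></pre>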

---

h2. VI. System Architecture

!http://www.dh.is.ritsumei.ac.jp/redmine/attachments/download/1859/Class_Diagram.png!

The system is split into four classes: ConfigReader for configuration, VideoProcessing for video input and frames, ProjectionSplit for splitting and blending images, and ImageDisplayApp as the Tk GUI that coordinates everything.

Here are the descriptions of each class:

*ConfigReader*
Reads and writes @config.ini@ and holds parameters such as file paths, overlap, and blend mode. It provides simple getters so other classes can use the settings without editing code, keeping configuration separate from code for easy reuse.
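
As a rough illustration, the class might wrap Python's standard @configparser@ like this (section and key names are our guesses, not the actual @config.ini@ schema):

<pre><code class="python">
import configparser

class ConfigReader:
    """Loads config.ini and exposes settings through simple getters."""
    def __init__(self, path="config.ini"):
        self._cfg = configparser.ConfigParser()
        self._cfg.read(path)

    def get_overlap(self):
        # illustrative section/key names
        return self._cfg.getint("blend", "overlap", fallback=100)

    def get_blend_mode(self):
        return self._cfg.get("blend", "mode", fallback="linear")
</code></pre>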

*VideoProcessing*
Opens a video and reads frames with @cv2.VideoCapture@. When the stream ends, it returns @None@ and releases resources, keeping all video I/O inside this class so the GUI and processing code stay clean.
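
A compact sketch of that wrapper (method names are illustrative, not the project's actual API):

<pre><code class="python">
import cv2

class VideoProcessing:
    """Wraps cv2.VideoCapture so callers never touch video I/O directly."""
    def __init__(self, path):
        self._cap = cv2.VideoCapture(path)

    def next_frame(self):
        ok, frame = self._cap.read()
        if not ok:                 # end of stream: clean up and signal with None
            self._cap.release()
            return None
        return frame
</code></pre>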

*ProjectionSplit*
Performs the core image operations: it splits an input into main/left/right parts with the chosen overlap and blend, accepts still images or video frames, and returns NumPy arrays using settings from ConfigReader.
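
A condensed sketch of the splitting step, assuming a horizontal split where both halves share an overlap strip (our own naming; the real class also applies the blend):

<pre><code class="python">
import numpy as np

class ProjectionSplit:
    def __init__(self, config):
        self.overlap = config.get_overlap()

    def split(self, frame):
        """Split a frame into left/right halves that share an overlap strip."""
        h, w = frame.shape[:2]
        mid = w // 2
        left = frame[:, :mid + self.overlap // 2]
        right = frame[:, mid - self.overlap // 2:]
        return left, right
</code></pre>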

*ImageDisplayApp*
The Tk GUI that manages the ProjectionSplit and VideoProcessing classes. It lets users set the overlap and blend mode, runs the processing, and displays results (including fullscreen) while updating the on-screen labels.
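
A bare-bones sketch of how such a Tk application could wire the pieces together (widget layout and callbacks are illustrative only):

<pre><code class="python">
import tkinter as tk

class ImageDisplayApp:
    def __init__(self, root, splitter):
        self.splitter = splitter
        self.overlap = tk.IntVar(value=100)
        tk.Scale(root, from_=0, to=400, orient="horizontal",
                 variable=self.overlap, label="Overlap (px)").pack()
        tk.Button(root, text="Run", command=self.run).pack()
        self.status = tk.Label(root, text="ready")
        self.status.pack()

    def run(self):
        self.splitter.overlap = self.overlap.get()   # push the GUI setting into the splitter
        # ... split/blend the current frame and display the results here ...
        self.status.config(text="done")

# usage (with real instances):
#   root = tk.Tk()
#   ImageDisplayApp(root, ProjectionSplit(ConfigReader()))
#   root.mainloop()
</code></pre>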

---

h2. VII. Requirement Analysis

This section defines the functional requirements of the project, outlining what the system needs to accomplish.

*Image Input and Processing*

* The system must accept image files and video files as input.
* The system must split a given image into two sub-images (left and right) with a specified overlap region.
* The system must allow users to choose one of the three blending modes (linear, quadratic, or Gaussian).
* The system must apply blending algorithms using OpenCV, with PyTorch providing GPU-accelerated computation for videos.
* The system must save the blended images (@left.png@, @right.png@) locally after processing.

*Graphical User Interface (GUI)*

* The system must let the user select the overlap pixel value from the GUI.
* The system must let the user choose the blending algorithm from the GUI.
* The system must be able to run both image and video blending modes in the GUI.
* The system must let the user view the original, left, and right images in real time in the GUI.
* The system must be able to display the blended outputs in fullscreen mode from the GUI.

*Error Handling and Feedback*

* The system must handle missing files and display warnings or error messages appropriately.
* The GUI must handle invalid user input without crashing the program.

---

h2. VIII. Technology Stack

Our project combines a number of proven tools and technologies to make *image blending, documentation, and management* faster and more reliable. Each tool has a specific role in the development workflow and keeps the system working and easy to maintain.

h3. Python (OpenCV, NumPy)

The major programming language we use to build our image-processing system is *Python*. It is flexible, easy to read, and offers a large ecosystem of libraries suited to scientific computing and computer vision.

The *OpenCV* library is the heart of our image-blending technique, providing functions for filtering, color correction, and gamma adjustment. *NumPy* complements OpenCV with numerical and matrix-based operations that speed up calculations and simplify data handling.

Together, they let us combine several projected images, adjust brightness and contrast, and calibrate automatically with high accuracy.

h3. Doxygen

We use *Doxygen* to generate structured, easy-to-read documentation directly from the codebase. It ensures that our functions, variables, and logic are properly linked and defined, making it easy for future developers to understand and extend the system.

Doxygen makes our source code more transparent, easier to maintain, and more consistent.
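
For example, a Python function annotated in Doxygen's @##@ comment style, which Doxygen's Python support picks up (the function itself is just an illustration):

<pre><code class="python">
## @brief Apply gamma correction to an 8-bit image.
#  @param img    Input image as a NumPy uint8 array.
#  @param gamma  Gamma value used in the 1/γ exponent.
#  @return       Gamma-corrected uint8 image.
def gamma_correct(img, gamma):
    ...
</code></pre>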

h3. Redmine

*Redmine* is the main tool we use for project management and collaboration. It enables effective tracking of tasks, deadlines, and issues while maintaining clear communication among team members.

Redmine keeps the whole team aligned on goals, progress, and deliverables through Wiki pages, ticket tracking, and file sharing. It also visualizes project progress with *Gantt charts* and integrates with version control.

h3. Astah

We use *Astah* to create UML diagrams such as class diagrams, use-case diagrams, and sequence diagrams. These diagrams show the system's architecture, how components interact with each other, and how data flows across the system, making complicated software easier to grasp. Astah also helps the team work together by easing the move from conceptual design to actual implementation.

Together, these tools form a *complete ecosystem* that supports every step of development, from planning and building to writing documentation and managing the project. With this integrated technology stack, our team can create a robust, well-organized, and scalable system for combining images in real time.

---

h2. IX. Application & Impact

Our *image composition and blending system* opens the door to many real-world applications that make it both *useful and impactful*. By automating the alignment and blending of multiple projected images, our solution provides a *low-cost, flexible, and scalable* way to create large, seamless visual displays without the need for expensive hardware.

h3. Art & Exhibition Spaces

One of the most promising applications lies in *art galleries, museums, and exhibitions*. Our system can project one continuous, high-quality image across several surfaces, making it ideal for immersive art installations and digital exhibitions.

Traditionally, this requires costly multi-projector hardware and precise manual calibration. With our software, artists and curators can instead use *two or more affordable projectors* to achieve perfectly blended visuals with minimal setup time.

h3. Education & Learning Environments

In classrooms or lecture halls, the system allows teachers to combine multiple projectors to create *wide-format or high-resolution educational displays*. This enhances *student engagement* and helps visualize complex materials such as 3D models, scientific simulations, or large datasets, all at a fraction of the traditional cost.

h3. Business, Simulation & Public Use

Beyond art and education, our solution can enhance *business presentations, public events, gaming, and simulation systems*. It can be integrated into *VR/AR setups* for projection mapping, virtual exhibitions, or multi-screen visualization environments. Even small organizations or community centers can display wide, seamless visuals using simple, inexpensive equipment.

h3. Broader Impact

This project shows how combining *computer vision, automation, and open-source tools* can make professional-grade visual experiences accessible to everyone. It bridges creativity and affordability, empowering users to build immersive, large-scale projections using *smart algorithms* instead of expensive devices.

> *"Innovation isn’t always about new hardware — sometimes it’s about making what we already have work smarter."*

---

h2. X. Limitation & Future Enhancements

While our image blending system achieves reliable and high-quality results, it still faces several *technical and practical limitations* that can be addressed in future versions. Recognizing these challenges helps guide further development toward a more precise and efficient solution.

h3. Current Limitations

One key limitation lies in the *precision of projector calibration*. For accurate image blending, each projector must be positioned carefully with minimal angular error; even slight misalignment or lens distortion can lead to *visible seams or overlapping inconsistencies* in the final projection.

Lighting conditions also affect performance: excessive brightness, uneven wall surfaces, or reflective backgrounds may reduce blending accuracy. Additionally, our current system assumes a *fixed projection setup*; movement or vibration of a projector requires recalibration, which limits flexibility in dynamic environments.

Another limitation is *processing performance*. While Python and OpenCV provide excellent tools for image manipulation, real-time blending at higher resolutions may require more computational power or GPU acceleration for smooth performance.

h3. Future Enhancements

In future iterations, we aim to develop an *automatic projector calibration system* using camera feedback or sensor-based alignment. By detecting geometric distortions automatically, the system could self-correct in real time, reducing the need for manual setup.

We also plan to optimize the blending algorithm through *parallel processing and GPU acceleration*, enabling faster computation for high-resolution displays. Integrating a more *user-friendly interface* will further simplify configuration, making the system accessible even to non-technical users.

Finally, expanding compatibility with *different projection hardware and AR/VR systems* could open new possibilities for interactive and immersive visual environments.

> *ā€œPerfection is not the absence of flaws, but the pursuit of improvement.ā€*
> — *G12-2025 Team*

---

!https://media.tenor.com/Q14Y3rSxX5wAAAAM/plan-roadmap.gif!