Performing real-time stitching is indeed possible; see the discussion here: https://github.com/orgs/bonsai-rx/discussions/1090. Whether you can produce a seamless stitched video depends on your camera positioning. You can crop the incoming feeds so that the resulting stitched video has no overlap, but depending on camera placement the two sides of the stitched video may show different projections of the physical arena. If you can physically calibrate this projection, you can warp the videos so that they share the same projection in the resulting video, but again, this depends on your camera positioning and what you want to do with the final video.
Hello folks. I'm a PhD student trying to record real-time video of an aquatic animal in a long tank. One camera can't capture the entire length of the tank, so I've got two ELP USB cameras, a computer running Windows 10 with an i3-5005U CPU @ 2.00 GHz, and a dream. I'm wondering if it's possible to perform real-time stitching of two overlapping video feeds in Bonsai to produce a single, integrated output video that isn't just the two camera feeds side by side; in other words, I want the region of overlap to be... overlapped.
I know it's possible to correlate video from two cameras simultaneously (see Fig. 3 in this paper), but I'm looking to go beyond that and create a stitched video with seamless overlap (seamless is really important as I need to analyse data related to animal movement within the overlap zone).
The bottom of the experimental arena has three clearly defined zones: a grey area, a stimulus area, and another grey area. I'm thinking it may be possible to use these as fixation points or alignment targets. The cameras are fixed above the arena, and the only thing moving in the videos will be the animal.
Thank you in advance for reading and/or suggesting any solutions!