Forum: FPGA, VHDL & Verilog FPGA image fusion & stereo vision

From Karamazov (Guest)


I would like to build an embedded multispectral stereo-vision system 
using four sensors (4K @ 60 fps).

I use two sensors per eye (one RGB and one near-infrared (NIR) sensor 
behind a dichroic beam splitter) in order to capture RGB and NIR video 
for each eye.

I have to fuse the videos captured by the two sensors of each eye (RGB 
and NIR). Then I have to merge the two fused streams into one frame in 
order to display it on a VR headset (HDMI output).

Is this project feasible with a single FPGA connected to the four 
sensors and an HDMI VR headset, without any CPU/GPU (e.g. Tegra)? Which 
FPGA is most suitable in your experience?

Thank you.

From BK (Guest)

Which fusion algorithm are you going to use?


Note: the original post is older than 6 months. Please don't ask any new questions in this thread, but start a new one.