
Image processing, sensor fusion, deep learning, AI, and high-speed controls with Speedgoat low-latency vision I/O modules supporting common vision protocols

Do you need to process high-bandwidth vision data and want to offload these computationally expensive tasks from the CPU? Do you require a processor that supports typical vision protocols?

Speedgoat offers Simulink-programmable FPGA I/O modules, an essential part of a real-time test system, tailored for applications such as deep learning, AI, machine vision for robotics, aerospace, advanced driver-assistance systems/automated driving (ADAS/AD), and medical imaging.

The vision I/O modules can input image data from cameras, 3D simulation environments, or logged data using the USB-UVC, GigE Vision, SDI, or HDMI protocols, and output to monitors, perception units, or controllers via HDMI, SDI, or DisplayPort. Vision I/O modules can be combined with other I/O modules to build high-fidelity real-time controls, DSP, and vision applications.


Selection Guide

I/O module Description
IO352 Low-latency vision processing I/O module supporting USB-UVC, GigE Vision, SDI, HDMI, and DisplayPort

Flexible and Scalable Solution

  • Process large amounts of high-bandwidth signals
  • Work with various vision input and output protocols, all supported on the same I/O module
  • Transform vision data protocol and modify the video data stream
  • Interconnect multiple vision I/O modules for higher processing power and to support different vision protocols
  • Perform sensor fusion using acquired or emulated data from other I/O modules
  • Use the vision module for rapid control prototyping (RCP) of your demanding Simulink control designs requiring FPGA processing
  • Perform hardware-in-the-loop (HIL) testing of camera and processing units by emulating the camera or other vision sensors, and by generating the image stream on the vision module to test vision processing algorithms
  • Leverage video fault insertion to run verification and validation tests 

Integrated into the Simulink Environment

  • Simulink Real-Time provides high-speed data logging, seamless instrumentation, and test automation capabilities
  • HDL Coder generates HDL code automatically and programs the vision processing I/O module
  • Vision HDL Toolbox facilitates creating FPGA-optimized and ready-to-use designs of image processing, video, and computer vision algorithms
  • Deep Learning HDL Toolbox helps to implement your custom deep learning networks and to generate portable, synthesizable Verilog® and VHDL® code for deployment on any FPGA
  • Computer Vision Toolbox provides algorithms, functions, and apps for designing and testing computer vision, 3D vision, and video processing systems

 

Applications and Use-Cases

Automotive and Autonomous Systems

Lane Detection

Lane detection is an essential part of advanced driver-assistance systems (ADAS) and requires processing high-resolution video streams. Model the lane detection algorithm using Vision HDL Toolbox and use HDL Coder to deploy the algorithm onto the FPGA-based vision I/O module. Use emulated sensor data to optimize your sensor fusion and feature recognition algorithms.
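The front end of such a pipeline can be pictured with a small sketch. Below, a 3x3 Sobel operator extracts edges from a tiny synthetic grayscale frame in plain Python; this stands in for the streaming kernel that, in the actual workflow, would be assembled from Vision HDL Toolbox blocks and compiled to HDL. The frame data and threshold are illustrative assumptions.

```python
# Illustrative sketch only: gradient-based edge extraction, the typical
# first stage of a lane-detection pipeline. On the real system this stage
# would be modeled with Vision HDL Toolbox blocks and deployed to the
# FPGA vision I/O module via HDL Coder.

def sobel_edges(frame, threshold=128):
    """Return a binary edge map for a grayscale frame (list of rows)."""
    h, w = len(frame), len(frame[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            edges[y][x] = 1 if abs(gx) + abs(gy) >= threshold else 0
    return edges

# Synthetic 6x6 frame: dark road surface (20) with a bright stripe (200)
frame = [[200 if x >= 3 else 20 for x in range(6)] for _ in range(6)]
edges = sobel_edges(frame)  # edges light up along the stripe boundary
```

On the FPGA, the same kernel runs as a streaming pipeline over line buffers rather than nested loops; that restructuring is the kind of work Vision HDL Toolbox handles.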

Pothole Detection

Automated detection of potholes and other road hazards improves road safety. Images of the road can be scanned, and candidate regions assessed based on features such as area, brightness, or texture. You can process the resulting image stream using a vision I/O module.
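The region-assessment step can be sketched as a simple scoring rule. The thresholds, the variance-as-texture measure, and the sample data below are all illustrative assumptions, not a validated detector:

```python
# Hypothetical scoring rule for candidate road regions: large, dark,
# highly textured regions are flagged as potholes. All thresholds are
# illustrative placeholders.

def is_pothole(pixels, min_area=50, max_brightness=80.0, min_variance=100.0):
    """Classify a candidate region from its pixel intensities (0-255)."""
    area = len(pixels)
    if area < min_area:
        return False            # too small to assess reliably
    mean = sum(pixels) / area   # brightness feature
    variance = sum((p - mean) ** 2 for p in pixels) / area  # texture proxy
    return mean <= max_brightness and variance >= min_variance

dark_rough = [40 + (i % 2) * 25 for i in range(120)]  # dark, uneven patch
bright_flat = [150] * 120                             # smooth road surface
```

In practice, each feature would be computed on the FPGA as the pixel stream passes through, with only the per-region scores handed to the control logic.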

ECU Testing with Optical Display

Drivers usually interact with a vehicle's ECUs via electronic instruments such as the speedometer, tachometer, or other instrumentation panels. Instrument clusters are typically tested manually by visual inspection, which is error-prone. Using the vision I/O module, you can capture images of the instrumentation panel in real time and compare the measurements with expected values using image processing. Obtaining reference data such as the vehicle speed from the ECU requires synchronization with other I/O modules, such as a CAN interface, which is easily done within a real-time test system.
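The comparison step itself reduces to a few lines. In this sketch, `displayed` stands for speed values recognized in the captured cluster images and `reference` for values received over CAN; the tolerance and sample data are illustrative assumptions:

```python
# Sketch of the pass/fail check: speeds read off the instrument cluster
# by image processing are compared against reference values obtained
# from the ECU over CAN. Tolerance and sample data are illustrative.

def check_cluster(displayed_kmh, can_kmh, tolerance_kmh=2.0):
    """Return indices of samples where the display deviates too much."""
    return [i for i, (shown, ref) in enumerate(zip(displayed_kmh, can_kmh))
            if abs(shown - ref) > tolerance_kmh]

displayed = [0.0, 30.5, 53.0, 80.0]  # recognized in the camera images
reference = [0.0, 30.0, 50.0, 79.5]  # reported by the ECU over CAN
failures = check_cluster(displayed, reference)  # sample 2 is off by 3 km/h
```

The value of the real-time test system is that both streams arrive time-synchronized, so each displayed sample is compared against the CAN value that was actually current when the image was captured.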

Industrial Automation and Robotics

Vision Guidance

Assembly processes require individual components to be aligned and oriented within a specified tolerance. Machine vision can capture images of the components and automatically correct their orientation, which involves locating key features of each component. With the vision I/O module, you can acquire vision input from the assembly line in real time from a frame grabber or other video source, and an algorithm detects features such as edges, corners, or regions of interest. These features can then be used to prototype the controller for the assembly.
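One common way to turn detected features into a correction command is via image moments: the principal-axis angle of a part's binary mask gives the rotation the controller must apply. A minimal sketch, with illustrative mask data:

```python
import math

# Sketch: estimate a part's orientation from the second-order central
# moments of its binary mask. The resulting angle is what an assembly
# controller would correct. Mask data below is illustrative.

def part_orientation(mask):
    """Principal-axis angle (radians) of a binary mask (list of rows)."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n          # centroid
    cy = sum(y for _, y in pts) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n   # central moments
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

# A horizontal bar is already aligned: angle ~ 0 rad
bar = [[0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0]]
angle = part_orientation(bar)
```

A part rotated 45 degrees would instead return roughly pi/4, which is the correction the prototyped controller would command.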

Vision Inspection

Quality assurance during manufacturing involves visual inspection of individual components. Machine vision makes it possible to perform this inspection automatically and reduces the errors of a manual approach. You can leverage machine learning or deep learning techniques available in Deep Learning HDL Toolbox to train networks that inspect the components. The algorithms can then be deployed onto the vision I/O module for real-time processing.
With visual inspection, you can, for example, perform defect detection, quality inspection, or sorting.
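As a stand-in for the trained network, the pass/fail decision can be sketched with a simple rule: compare the captured image pixel-wise against a known-good reference and reject the part if too many pixels deviate. Tolerances and sample data below are illustrative assumptions:

```python
# Rule-based stand-in for a trained inspection network: compare against
# a golden reference image and count deviating pixels. Tolerances and
# sample data are illustrative.

def inspect(image, reference, pixel_tol=10, max_defect_pixels=3):
    """Pass/fail a flattened grayscale image against a golden reference."""
    defects = sum(1 for a, b in zip(image, reference)
                  if abs(a - b) > pixel_tol)
    return defects <= max_defect_pixels

golden = [100] * 16
good_part = [100, 105, 98] + [100] * 13               # sensor noise only
bad_part = [100] * 10 + [30, 25, 200, 190, 100, 100]  # visible defect
```

A trained network replaces this rule when defects cannot be captured by fixed thresholds, but the deployment path onto the vision I/O module is the same.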

Vision Identification

With machine vision, you can identify specific parts or unique patterns on components, for example characters printed on medical or manufacturing equipment, barcodes, or 2D data matrices. Machine vision helps detect these character strings and analyze the patterns. With MATLAB and Simulink, you can also perform optical character recognition and process the result further. These techniques enable error-proofing, process control monitoring, inventory control, and other quality control tasks.
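Error-proofing a code read is often just arithmetic. As one concrete example, an EAN-13 barcode carries a check digit that can be recomputed from the first twelve digits and compared against the digit that was scanned; the sample code below is illustrative:

```python
# Error-proofing sketch: recompute the EAN-13 check digit from the
# twelve payload digits of a scanned barcode. A read is accepted only
# if the recomputed digit matches the thirteenth digit.

def ean13_check_digit(digits12):
    """Check digit for the first 12 digits of an EAN-13 barcode."""
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def ean13_valid(digits13):
    """True if a full 13-digit code has a consistent check digit."""
    return ean13_check_digit(digits13[:12]) == digits13[12]

scanned = [4, 0, 0, 6, 3, 8, 1, 3, 3, 3, 9, 3, 1]  # sample scanned code
```

A failed check signals a misread, so the part can be rescanned or rejected rather than logged with a wrong identity.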

Medical Devices

Robot Assisted Surgery with Machine Vision

Advances in robotics enable developers to create control designs for robots that assist in medical procedures. Adding vision capabilities enables the robot to autonomously configure its controller based on features extracted from the image. You can use the vision I/O module to process the images from the connected camera. The system provides a magnified 3D view of the body, helping locate abnormalities and enabling the robot to make precise movements. You can use the Medical Imaging Toolbox™ for MATLAB and Simulink, which provides apps, functions, and workflows for designing and testing diagnostic imaging applications. This also helps add machine vision to already certified medical devices.

Aerospace

Autonomous Unmanned Aerial Vehicle (UAV)

Autonomous UAVs need to take off, land, and carry out missions without human intervention. Augment your UAV systems with vision capabilities for applications such as flying missions along a flight path, obstacle detection and avoidance, terrain mapping, and asset inspection. With UAV Toolbox, you can design autonomous flight algorithms, UAV missions, and flight controllers for autonomous UAVs. Perform hardware-in-the-loop (HIL) testing of your flight controller by modeling the UAV and different scenarios in a photorealistic 3D environment using the Unreal Engine.

 

Curious how to accelerate control design innovation with a modular controller hardware setup?


Free Workflow Demo

See how Speedgoat can help you in the development of your control design for your application.


Schedule now
 

Have Questions?

Talk to our experts about your application requirements.

 