Computer Vision for Robotics: Perception with OpenCV & Deep Learning Training Course
OpenCV is an open-source computer vision library that enables real-time image processing, while deep learning frameworks such as TensorFlow provide the tools for intelligent perception and decision-making in robotic systems.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers, computer vision practitioners, and machine learning engineers who wish to apply computer vision and deep learning techniques for robotic perception and autonomy.
By the end of this training, participants will be able to:
- Implement computer vision pipelines using OpenCV.
- Integrate deep learning models for object detection and recognition.
- Use vision-based data for robotic control and navigation.
- Combine classical vision algorithms with deep neural networks.
- Deploy computer vision systems on embedded and robotic platforms.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using OpenCV and TensorFlow.
- Live-lab implementation on simulated or physical robotic systems.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.
Course Outline
Introduction to Computer Vision for Robotics
- Overview of computer vision applications in robotics
- Key challenges in perception and visual understanding
- Setting up the development environment with OpenCV and Python
Image Processing Fundamentals
- Image representation and manipulation
- Filtering, edge detection, and feature extraction
- Color spaces and segmentation techniques
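The filtering and edge-detection topics above can be sketched in a few lines. OpenCV provides optimized versions of this operation (`cv2.Sobel`, `cv2.Canny`); the plain-NumPy version below, on a tiny synthetic step-edge image, is only meant to show the idea of convolving gradient kernels over an image.

```python
import numpy as np

def sobel_magnitude(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels.

    `img` is a 2-D float array (a grayscale image). This is a
    NumPy sketch of what cv2.Sobel computes far more efficiently.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = (patch * kx).sum()   # horizontal gradient
            gy = (patch * ky).sum()   # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

# A vertical step edge: gradient magnitude peaks along the boundary.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)
print(edges)
```

In practice the loop is replaced by a single `cv2.Sobel` or `cv2.filter2D` call; the point here is only how the kernels respond at the intensity step.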
Object Detection and Tracking with OpenCV
- Detecting objects using classical methods (Haar cascades, HOG)
- Tracking moving objects in video streams
- Integrating visual feedback into robotic systems
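Tracking moving objects across video frames rests on data association: deciding which detection in the new frame belongs to which existing track. Real systems layer a motion model (e.g. a Kalman filter) or use OpenCV's tracking API on top; this toy nearest-centroid tracker, with illustrative names and thresholds of our own choosing, shows only the association step.

```python
import numpy as np

class CentroidTracker:
    """Toy tracker: match each detection to the nearest existing track."""

    def __init__(self, max_dist=50.0):
        self.next_id = 0
        self.tracks = {}              # track id -> last known centroid
        self.max_dist = max_dist      # beyond this, start a new track

    def update(self, centroids):
        ids, taken = [], set()
        for c in centroids:
            c = np.asarray(c, dtype=float)
            best_id, best_d = None, self.max_dist
            for tid, pos in self.tracks.items():
                d = np.linalg.norm(c - pos)
                if d < best_d and tid not in taken:
                    best_id, best_d = tid, d
            if best_id is None:       # no close track: start a new one
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = c
            taken.add(best_id)
            ids.append(best_id)
        return ids

tracker = CentroidTracker()
print(tracker.update([(10, 10), (100, 100)]))  # → [0, 1] (new tracks)
print(tracker.update([(12, 11), (101, 99)]))   # → [0, 1] (re-matched)
```

The same identities are then what a robot uses to keep acting on "the same" object from frame to frame.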
Deep Learning for Visual Perception
- Overview of convolutional neural networks (CNNs)
- Training and deploying object detection models
- Applying pre-trained models (YOLO, SSD, Faster R-CNN)
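A building block that appears throughout work with detector outputs (YOLO, SSD, Faster R-CNN alike) is Intersection-over-Union, the overlap score used in non-maximum suppression and mAP evaluation. A minimal sketch, with boxes assumed to be in `(x1, y1, x2, y2)` corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # two half-overlapping boxes
```

Thresholding this score (commonly at 0.5) decides whether two boxes count as the same object.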
Sensor Fusion and Depth Perception
- Integrating camera data with LiDAR and ultrasonic sensors
- Depth estimation and 3D reconstruction
- Perception for obstacle avoidance and navigation
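Depth estimation from a calibrated stereo pair reduces, per pixel, to the pinhole stereo relation Z = f·B/d. The camera numbers below are hypothetical, chosen only to make the arithmetic concrete:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d.

    focal_px     - focal length in pixels (from camera calibration)
    baseline_m   - distance between the two cameras in metres
    disparity_px - horizontal pixel shift of a feature between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: f = 700 px, baseline = 0.12 m, disparity = 20 px.
print(depth_from_disparity(20.0, 700.0, 0.12))  # → 4.2 (metres)
```

Note the inverse relationship: small disparities mean distant points, so depth resolution degrades with range, which matters when planning obstacle avoidance.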
Vision-Based Control and Decision Making
- Applying computer vision to robotic manipulation
- Visual servoing and closed-loop control
- Autonomous decision-making based on visual input
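Visual servoing closes a control loop around an image measurement, for example steering until a detected object's pixel offset from the image centre reaches zero. A minimal PID sketch with illustrative, untuned gains:

```python
class PID:
    """Minimal PID controller; gains are illustrative, not tuned."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a 1-D "pixel offset" toward zero in a toy simulation where the
# control output directly reduces the offset each time step.
pid = PID(kp=0.5, ki=0.0, kd=0.01)
offset = 40.0                        # object starts 40 px right of centre
for _ in range(20):
    offset -= pid.step(offset, dt=0.1)
print(round(offset, 4))              # offset has been driven near zero
```

On a real robot the controller output would command a velocity, and the "plant" is the robot plus the camera, so gains must be tuned against that dynamics rather than this toy model.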
Deploying and Optimizing Vision Models
- Deploying models on embedded systems and edge devices
- Optimizing inference performance for real-time applications
- Troubleshooting and improving accuracy
Summary and Next Steps
Requirements
- An understanding of basic robotics concepts
- Experience with Python programming
- Familiarity with machine learning fundamentals
Audience
- Robotics engineers
- Computer vision practitioners
- Machine learning engineers
Custom Corporate Training
Training solutions designed exclusively for businesses.
- Customized Content: We adapt the syllabus and practical exercises to the real goals and needs of your project.
- Flexible Schedule: Dates and times adapted to your team's agenda.
- Format: Online (live), In-company (at your offices), or Hybrid.
Price per private group, online live training, starting from 3900 € + VAT*
Contact us for an exact quote and to hear about our latest promotions.
(*The final price may vary depending on the technical specialization of the course, the level of customization, the method of delivery and the number of learners)
Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core: why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Provisional Upcoming Courses (Contact Us For More Information)
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
Artificial Intelligence (AI) for Robotics integrates machine learning, control systems, and sensor fusion to develop intelligent machines capable of autonomous perception, reasoning, and action. By leveraging modern tools such as ROS 2, TensorFlow, and OpenCV, engineers can now create robots that intelligently navigate, plan, and interact with real-world environments.
This instructor-led live training, available both online and onsite, is designed for intermediate-level engineers looking to develop, train, and deploy AI-driven robotic systems using contemporary open-source technologies and frameworks.
Upon completion of this training, participants will be able to:
- Utilize Python and ROS 2 to construct and simulate robotic behaviours.
- Implement Kalman and Particle Filters for accurate localization and tracking.
- Apply computer vision techniques via OpenCV for perception and object detection.
- Employ TensorFlow for motion prediction and learning-based control.
- Integrate SLAM (Simultaneous Localization and Mapping) to enable autonomous navigation.
- Develop reinforcement learning models to enhance robotic decision-making capabilities.
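The Kalman filtering objective above boils down, in its simplest one-dimensional form, to blending a prediction with each new measurement according to their relative uncertainties. A sketch with illustrative noise variances (`q`, `r`), which a real robot would derive from its sensor specifications:

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """1-D Kalman filter with a constant-position model.

    q - process noise variance (how much the state can drift per step)
    r - measurement noise variance (how noisy the sensor is)
    Returns the filtered estimate after each measurement.
    """
    x, p = 0.0, 100.0        # initial estimate with large uncertainty
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows by q
        k = p / (p + r)       # Kalman gain: trust in the measurement
        x += k * (z - x)      # update with the measurement residual
        p *= (1 - k)          # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings of a value near 5: the estimate settles close to it.
est = kalman_1d([5.1, 4.9, 5.2, 5.0, 4.8])
print(est[-1])
```

Multi-dimensional trackers (position plus velocity, or full robot pose) follow the same predict/update cycle with matrices in place of the scalars.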
Course Format
- Interactive lectures and discussions.
- Practical implementation using ROS 2 and Python.
- Hands-on exercises involving simulated and real robotic environments.
Course Customisation Options
To request a customised training session for this course, please get in touch to arrange it.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led, live training in Portugal (online or onsite), participants will explore the technologies, frameworks, and techniques required to program various types of robots for applications in nuclear technology and environmental systems.
Spanning six weeks, the course runs five days a week. Each four-hour session combines lectures, discussions, and practical robot development within a live lab environment. Participants will undertake various real-world projects relevant to their roles to apply and reinforce their newly acquired knowledge.
The target hardware for this course will be simulated in 3D using simulation software. Programming the robots will involve the open-source ROS (Robot Operating System) framework, along with C++ and Python.
Upon completion of this training, participants will be capable of:
- Grasping the core concepts underlying robotic technologies.
- Understanding and managing the interaction between software and hardware in a robotic system.
- Understanding and implementing the software components that support robotics.
- Building and operating a simulated mechanical robot capable of seeing, sensing, processing, navigating, and interacting with humans via voice.
- Comprehending the essential elements of artificial intelligence (including machine learning, deep learning, etc.) needed to construct a smart robot.
- Implementing filters (Kalman and Particle) to allow the robot to locate moving objects within its environment.
- Implementing search algorithms and motion planning.
- Implementing PID controls to regulate a robot's movement within its environment.
- Implementing SLAM algorithms to enable a robot to map an unknown environment.
- Enhancing a robot's ability to perform complex tasks through Deep Learning.
- Testing and troubleshooting a robot in realistic scenarios.
AI and Robotics for Nuclear
80 Hours
In this instructor-led, live training in Portugal (online or onsite), participants will learn the different technologies, frameworks, and techniques for programming different types of robots to be used in the field of nuclear technology and environmental systems.
The 4-week course is held 5 days a week. Each day is 4-hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The code will then be loaded onto physical hardware (Arduino or other) for final deployment testing. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for autonomous navigation applications.
- Implement SLAM algorithms for mapping and localization.
- Integrate sensors such as LiDAR and cameras with ROS 2.
- Simulate and test autonomous navigation in Gazebo.
- Deploy navigation stacks on physical robots.
Course Format
- Interactive lecture and discussion.
- Hands-on practice using ROS 2 tools and simulation environments.
- Live-lab implementation and testing on virtual or physical robots.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service unites the capabilities of the Microsoft Bot Framework with Azure Functions, offering a robust platform for the rapid creation of intelligent bots.
During this instructor-led, live training, participants will discover how to efficiently develop intelligent bots using Microsoft Azure.
Upon completing the training, participants will be able to:
- Grasp the fundamental concepts underpinning intelligent bots.
- Construct intelligent bots using cloud-based applications.
- Acquire practical expertise in the Microsoft Bot Framework, the Bot Builder SDK, and Azure Bot Service.
- Implement established bot design patterns in real-world scenarios.
- Create and deploy their first intelligent bot utilizing Microsoft Azure.
Audience
This course is tailored for developers, hobbyists, engineers, and IT professionals with an interest in bot development.
Course Format
The training integrates lectures and discussions with exercises, placing a strong emphasis on hands-on practice.
Developing a Bot
14 Hours
Chatbots act as automated assistants, streamlining user interactions across various messaging channels and accelerating task completion without requiring human intervention.
During this instructor-led live training, attendees will gain the foundational knowledge needed to develop bots by practically building sample chatbots using industry-standard development tools and frameworks.
Upon completion of this training, participants will be able to:
- Identify the diverse use cases and applications of bots
- Grasp the end-to-end bot development lifecycle
- Evaluate the various tools and platforms available for bot construction
- Construct a prototype chatbot for Facebook Messenger
- Develop a prototype chatbot utilizing the Microsoft Bot Framework
Target Audience
- Software developers keen on creating their own bot solutions
Course Format
- A blend of lectures, interactive discussions, exercises, and extensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI allows artificial intelligence models to execute directly on embedded or resource-constrained devices, thereby reducing latency and power consumption while enhancing autonomy and privacy within robotic systems.
This instructor-led live training (available online or onsite) is designed for intermediate-level embedded developers and robotics engineers aiming to implement machine learning inference and optimization techniques directly on robotic hardware using TinyML and edge AI frameworks.
Upon completing this training, participants will be able to:
- Comprehend the core principles of TinyML and edge AI for robotics.
- Convert and deploy AI models for on-device inference.
- Optimize models for speed, size, and energy efficiency.
- Integrate edge AI systems into robotic control architectures.
- Evaluate performance and accuracy in real-world scenarios.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using TinyML and edge AI toolchains.
- Practical exercises on embedded and robotic hardware platforms.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led, live training in Portugal (online or onsite) is designed for intermediate-level participants wishing to explore the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
By the end of this training, participants will be able to:
- Understand the principles of Human-Centric Physical AI and its applications.
- Explore the role of collaborative robots in enhancing workplace productivity.
- Identify and address challenges in human-machine interactions.
- Design workflows that optimise collaboration between humans and AI-driven systems.
- Promote a culture of innovation and adaptability in AI-integrated workplaces.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical course aimed at introducing participants to the design and implementation of intuitive interfaces for human–robot communication. This training blends theory, design principles, and programming practice to help learners build natural and responsive interaction systems using speech, gestures, and shared control techniques. Participants will acquire skills in integrating perception modules, developing multimodal input systems, and designing robots that collaborate safely with humans.
This instructor-led live training (available online or onsite) is designed for beginner to intermediate-level participants who wish to design and implement human–robot interaction systems that enhance usability, safety, and user experience.
Upon completion of this training, participants will be able to:
- Understand the foundations and design principles of human–robot interaction.
- Develop voice-based control and response mechanisms for robots.
- Implement gesture recognition using computer vision techniques.
- Design collaborative control systems for safe and shared autonomy.
- Evaluate HRI systems based on usability, safety, and human factors.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on coding and design exercises.
- Practical experiments in simulation or real robotic environments.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins is a practical course designed to bridge the gap between industrial automation and contemporary robotics frameworks. Participants will acquire the skills to integrate ROS-based robotic systems with PLCs for synchronized operations, whilst exploring digital twin environments to simulate, monitor, and optimise production processes. The curriculum places a strong emphasis on interoperability, real-time control, and predictive analysis through the use of digital replicas of physical systems.
This instructor-led live training, available both online and onsite, targets intermediate-level professionals seeking to develop practical expertise in linking ROS-controlled robots with PLC environments and implementing digital twins to optimise automation and manufacturing outcomes.
Upon completion of this training, participants will be able to:
- Comprehend the communication protocols utilized between ROS and PLC systems.
- Implement real-time data exchange mechanisms between robots and industrial controllers.
- Develop digital twins for the purposes of monitoring, testing, and process simulation.
- Integrate sensors, actuators, and robotic manipulators within industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Course Format
- Interactive lectures and architecture walkthroughs.
- Practical exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led, live training in Portugal (online or onsite) is designed for engineers seeking to understand the application of artificial intelligence to mechatronic systems.
Upon completing this training, participants will be equipped to:
- Obtain a comprehensive overview of artificial intelligence, machine learning, and computational intelligence.
- Comprehend the fundamental concepts of neural networks and various learning methodologies.
- Select appropriate artificial intelligence strategies to address real-world challenges effectively.
- Develop and implement AI solutions within the field of mechatronic engineering.
Multi-Robot Systems and Swarm Intelligence
28 Hours
Multi-Robot Systems and Swarm Intelligence is an advanced training course that explores the design, coordination, and control of robotic teams inspired by biological swarm behaviors. Participants will learn how to model interactions, implement distributed decision-making, and optimize collaboration across multiple agents. The course combines theory with hands-on simulation to prepare learners for applications in logistics, defense, search and rescue, and autonomous exploration.
This instructor-led, live training (online or onsite) is aimed at advanced-level professionals who wish to design, simulate, and implement multi-robot and swarm-based systems using open-source frameworks and algorithms.
By the end of this training, participants will be able to:
- Understand the principles and dynamics of swarm intelligence and cooperative robotics.
- Design communication and coordination strategies for multi-robot systems.
- Implement distributed decision-making and consensus algorithms.
- Simulate collective behaviors such as formation control, flocking, and coverage.
- Apply swarm-based techniques to real-world scenarios and optimization problems.
Format of the Course
- Advanced lectures with algorithmic deep dives.
- Hands-on coding and simulation in ROS 2 and Gazebo.
- Collaborative project applying swarm intelligence principles.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.
Multimodal AI in Robotics
21 Hours
This instructor-led, live training in Portugal (online or onsite) is designed for advanced robotics engineers and AI researchers who wish to utilize multimodal AI for integrating various sensory data to create more autonomous and efficient robots that can see, hear, and touch.
Upon completion of this training, participants will be able to:
- Implement multimodal sensing within robotic systems.
- Develop AI algorithms for sensor fusion and decision-making.
- Build robots capable of executing complex tasks in dynamic environments.
- Tackle challenges related to real-time data processing and actuation.
Smart Robots for Developers
84 Hours
A Smart Robot represents an Artificial Intelligence (AI) system capable of learning from its environment and past experiences, thereby enhancing its capabilities based on that accumulated knowledge. These robots can collaborate with humans, working alongside them and learning from their behaviours. Beyond performing manual labour, Smart Robots are also equipped to handle cognitive tasks. In addition to physical hardware, Smart Robots can exist purely as software applications within a computer, operating without moving parts or direct physical interaction with the external world.
In this instructor-led live training, participants will explore the various technologies, frameworks, and techniques required to program different types of mechanical Smart Robots. Attendees will then apply this knowledge to complete their own Smart Robot projects.
The course is structured into four sections, each comprising three days of lectures, discussions, and hands-on robot development in a live lab environment. Each section concludes with a practical, hands-on project, allowing participants to practise and demonstrate their acquired skills.
The target hardware for this course will be simulated in 3D using simulation software. Programming will be conducted using the open-source ROS (Robot Operating System) framework, along with C++ and Python.
Upon completion of this training, participants will be able to:
- Grasp the key concepts underpinning robotic technologies
- Understand and manage the interaction between software and hardware in a robotic system
- Understand and implement the software components that form the foundation of Smart Robots
- Build and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans via voice
- Enhance a Smart Robot's ability to perform complex tasks through Deep Learning
- Test and troubleshoot a Smart Robot in realistic scenarios
Audience
- Developers
- Engineers
Format of the course
- A blend of lectures, discussions, exercises, and extensive hands-on practice
Note
- To customise any part of this course (programming language, robot model, etc.), please contact us to arrange it.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics represents the integration of artificial intelligence into robotic systems to enhance perception, decision-making, and autonomous control.
This instructor-led live training, available online or onsite, is designed for advanced robotics engineers, systems integrators, and automation leads who aim to implement AI-driven perception, planning, and control within smart manufacturing environments.
Upon completing this training, participants will be able to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms for collaborative and industrial robots.
- Deploy learning-based control strategies for real-time decision making.
- Integrate intelligent robotic systems into smart factory workflows.
Format of the Course
- Interactive lecture and discussion.
- Extensive exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.