Abstract
We propose a novel geometry-constrained learning-based method for camera-in-hand visual servoing systems that eliminates the need for camera intrinsic parameters, depth information, and the robot’s kinematic model. Our method uses a cerebellar model articulation controller (CMAC) to perform online Jacobian estimation within the control framework. Specifically, we introduce a fixed-dimension, uniform-magnitude error function based on the projective homography matrix. The fixed-dimension error function ensures a constant Jacobian size regardless of the number of feature points, thereby reducing computational complexity. Because the error does not depend on individual feature points, the approach remains robust even when some features are occluded. The uniform magnitude of the error vector elements simplifies neural network input normalization, which improves online training efficiency. Furthermore, we incorporate geometric constraints between feature points (such as collinearity preservation) into the network update process, ensuring that model predictions conform to the fundamental principles of projective geometry and eliminating physically impossible control outputs. Experimental and simulation results demonstrate that our approach achieves superior robustness and faster learning rates compared to other model-free image-based visual servoing methods.
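To make the abstract's central idea concrete, the following is a minimal NumPy sketch of a homography-based, fixed-dimension error vector. The DLT homography estimator, the choice of (H − I) as the error, and the collinearity check are illustrative assumptions for this sketch, not the paper's exact formulation; the paper additionally learns the Jacobian online with a CMAC, which is not reproduced here.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform (DLT) estimate of the 3x3 projective
    homography H mapping src -> dst; points are (N, 2) arrays, N >= 4,
    with no three points collinear."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # The null space of A (last right-singular vector) holds vec(H).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale so H[2, 2] = 1

def homography_error(src, dst):
    """Illustrative fixed-dimension (8-element) error vector: the
    deviation of the estimated homography from the identity.  It is zero
    when the current and desired views coincide, and its size does not
    depend on how many feature points were used."""
    H = estimate_homography(src, dst)
    return (H - np.eye(3)).ravel()[:8]  # H[2, 2] is pinned to 1

def apply_homography(H, pts):
    """Map (N, 2) image points through H with projective normalization."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

A true projective homography maps collinear points to collinear points, which is the kind of geometric constraint the abstract says the network update is required to respect: mapping three points on a line through `apply_homography` always yields three points on a line, whereas an unconstrained learned prediction need not.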
| Original language | English |
|---|---|
| Article number | 2514 |
| Journal | Sensors |
| Volume | 25 |
| Issue number | 8 |
| DOIs | |
| State | Published - Apr 2025 |
| Externally published | Yes |
Keywords
- cerebellar model articulation controller
- eye-in-hand configuration
- geometry constraints
- visual servoing