Localization Based Grasping for Robotic Grippers

Authors

  • Shangquan Li

DOI:

https://doi.org/10.54097/948twa81

Keywords:

Robotic grippers, grasping methods, sensors, localization.

Abstract

To meet the demands of grasping in complex environments, robotic grippers are moving from traditional open-loop control to perception-based closed-loop localization for grasping. Open-loop control lacks real-time feedback and struggles with the uncertainty of unstructured settings. In contrast, closed-loop grasping with multimodal sensing, such as vision and touch, adapts the grasp strategy online and increases success rates. This paper reviews three categories of grippers and their development: industrial, medical, and soft. Industrial grippers provide high stiffness, high payload capacity, and high precision; they meet heavy-load, high-accuracy requirements in production and use vision and control algorithms to raise grasp success. Medical grippers emphasize compliance and micro-manipulation so that surgery meets strict requirements on fine control and safety; they combine force sensing and vision to ensure safe, stable operation. Soft grippers use compliant structures and under-actuated designs to perform enveloping grasps on irregular objects, which improves tolerance to error and adaptation to shape. Finally, the paper summarizes the value and challenges of fusing vision, tactile sensing, and force sensing with localization strategies in closed-loop grasp control.
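The contrast the abstract draws between open-loop and closed-loop grasping can be illustrated with a minimal sketch. Everything here is a toy assumption, not a method from the paper: a one-dimensional positioning task, a hypothetical noisy vision sensor, and a simple proportional correction standing in for the "adapt the strategy online" step.

```python
import random

def closed_loop_grasp(object_pos, tolerance=0.01, gain=0.5, max_steps=100):
    """Toy 1-D closed-loop positioning loop (illustrative assumption).

    Each cycle, a simulated vision sensor returns a noisy estimate of the
    object position; the gripper corrects a fraction of the remaining
    error and grasps once it is within tolerance.
    """
    gripper = 0.0
    for _ in range(max_steps):
        # Hypothetical sensor reading with measurement noise.
        sensed = object_pos + random.uniform(-0.005, 0.005)
        error = sensed - gripper
        if abs(error) < tolerance:
            return True, gripper      # close enough: grasp attempt succeeds
        gripper += gain * error       # proportional online correction
    return False, gripper             # feedback never converged

random.seed(0)
ok, final = closed_loop_grasp(0.3)
```

An open-loop controller, by contrast, would command a single pre-planned motion and could not absorb the sensing noise above, which is the uncertainty argument the abstract makes for closed-loop localization.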


References

[1] Volodymyr Tonkonogyi, Vitalii Ivanov, Justyna Trojanowska, et al. Advanced Manufacturing Processes Selected Papers from the Grabchenko's International Conference on Advanced Manufacturing Processes (InterPartner-2019), September 10-13, 2019, Odessa, Ukraine.

[2] Y. Hao, H. Zhang, Z. Zhang, C. Hu and C. Shi. Development of Force Sensing Techniques for Robot-Assisted Laparoscopic Surgery: A Review, in IEEE Transactions on Medical Robotics and Bionics, vol. 6, no. 3, pp. 868-887, Aug. 2024.

[3] Kleeberger, K., Bormann, R., Kraus, W. et al. A Survey on Learning-Based Robotic Grasping. Curr Robot Rep 1, 239–249 (2020).

[4] Del Bianco E, Torielli D, Rollo F, Gasperini D, et al. A High-Force Gripper with Embedded Multimodal Sensing for Powerful and Perception-Driven Grasping. 2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids), Nancy, France, 2024, pp. 149-156.

[5] Tai, K., El-Sayed, A.-R., Shahriari, M., Biglarbegian, M., Mahmud, S. State of the Art Robotic Grippers and Applications. Robotics 2016, 5, 11.

[6] T. M. Huh et al. A Multi-Chamber Smart Suction Cup for Adaptive Gripping and Haptic Exploration, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 2021, pp. 1786-1793.

[7] J. Mahler, M. Matl, X. Liu, A. Li, D. Gealy and K. Goldberg. Dex-Net 3.0: Computing Robust Vacuum Suction Grasp Targets in Point Clouds Using a New Analytic Model and Deep Learning, 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 2018, pp. 5620-5627.

[8] Jonathan Tremblay, Thang To, Balakumar Sundaralingam, et al. Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects

[9] Douglas Morrison, Peter Corke, Jürgen Leitner. Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

[10] Shademan A, Decker RS, Opfermann JD, Leonard S, Krieger A, Kim PC. Supervised autonomous robotic soft tissue surgery. Sci Transl Med. 2016, 8(337):337ra64.

[11] Saeidi, H., Opfermann, J. D., Leonard, S., et al. Autonomous robotic laparoscopic surgery for intestinal anastomosis in pigs. Science Robotics, 2022.

[12] Uneri A, Balicki MA, Handa J, Gehlbach P, Taylor RH, Iordachita I. New Steady-Hand Eye Robot with Micro-Force Sensing for Vitreoretinal Surgery. Proc IEEE RAS EMBS Int Conf Biomed Robot Biomechatron. 2010, (26-29):814-819.

[13] Maclachlan RA, Becker BC, Tabarés JC, Podnar GW, Lobes LA Jr, Riviere CN. Micron: an Actively Stabilized Handheld Tool for Microsurgery. IEEE Trans Robot. 2012 Feb 1;28(1):195-212.

[14] Rus, D., Tolley, M. Design, fabrication and control of soft robots. Nature 521, 2015, 467–475.

[15] Shintake J, Cacucciolo V, Floreano D, Shea H. Soft Robotic Grippers. Adv Mater. 2018 May 7: e1707035.

[16] E. Brown, N. Rodenberg, J. Amend, A. Mozeika, E. Steltz, M.R. Zakin, H. Lipson, and H.M. Jaeger. Universal robotic gripper based on the jamming of granular material, Proc. Natl. Acad. Sci. U.S.A. 107 (44), 2010, 18809-18814.

[17] Catalano MG, Grioli G, Farnioli E, Serio A, Piazza C, Bicchi A. Adaptive synergies for the design and control of the Pisa/IIT SoftHand. The International Journal of Robotics Research.

[18] Deimel R, Brock O. A novel type of compliant and underactuated robotic hand for dexterous grasping. The International Journal of Robotics Research. 2015;35(1-3):161-185.

[19] J. Wang and E. Olson, "AprilTag 2: Efficient and robust fiducial detection," 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea (South), 2016, pp. 4193-4198.

[20] Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, Dieter Fox. PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes

[21] Labbé, Y., Carpentier, J., Aubry, M., Sivic, J., CosyPose: Consistent Multi-view Multi-object 6D Pose Estimation. Springer, 2020, 574–591.

[22] Cheng, H.K., Schwing, A.G. (2022). XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision - ECCV 2022. Lecture Notes in Computer Science, vol 13688. Springer, Cham.

[23] Ho Kei Cheng et al. Tracking Anything with Decoupled Video Segmentation. ICCV 2023.

[24] Elliott Donlon, Siyuan Dong, Melody Liu, et al. GelSlim: High-Resolution Tactile-sensing Finger.

[25] M. Lambeta et al., "DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor With Application to In-Hand Manipulation," in IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 3838-3845, July 2020.

[26] Roberto Calandra, Andrew Owens, Dinesh Jayaraman et al. More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch. RA-L, 2018.

[27] Shoujie Li, Haixin Yu, Wenbo Ding et al., "Visual–Tactile Fusion for Transparent Object Grasping in Complex Backgrounds," in IEEE Transactions on Robotics, vol. 39, no. 5, pp. 3838-3856, Oct. 2023.

[28] Maria Bauza et al. SimPLE, a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects. Sci. Robot. 9, eadi8808, 2024.

Published

30-03-2026

Issue

Section

Articles

How to Cite

Li, S. (2026). Localization Based Grasping for Robotic Grippers. Academic Journal of Science and Technology, 20(2), 351-360. https://doi.org/10.54097/948twa81