Diseño de un sistema automático de evasión de obstáculos basado en visión artificial para el robot colaborativo UR3

dc.contributor.advisor: González Acevedo, Hernando
dc.contributor.advisor: Arizmendi Pereira, Carlos Julio
dc.contributor.author: Blanco Vacca, Naifer David
dc.contributor.author: Buitrago Rangel, Alex Julian
dc.contributor.cvlac: González Acevedo, Hernando [0000544655]
dc.contributor.cvlac: Arizmendi Pereira, Carlos Julio [0001381550]
dc.contributor.googlescholar: González Acevedo, Hernando [V8tga0cAAAAJ]
dc.contributor.googlescholar: Arizmendi Pereira, Carlos Julio [JgT_je0AAAAJ]
dc.contributor.orcid: González Acevedo, Hernando [0000-0001-6242-3939]
dc.contributor.researchgate: González Acevedo, Hernando [Hernando_Gonzalez3]
dc.contributor.researchgate: Arizmendi Pereira, Carlos Julio [Carlos_Arizmendi2]
dc.contributor.scopus: González Acevedo, Hernando [55821231500]
dc.contributor.scopus: Arizmendi Pereira, Carlos Julio [16174088500]
dc.coverage.campus: UNAB Campus Bucaramanga
dc.coverage.spatial: Bucaramanga (Santander, Colombia)
dc.coverage.temporal: 2022
dc.date.accessioned: 2022-11-21T21:21:10Z
dc.date.available: 2022-11-21T21:21:10Z
dc.date.issued: 2022-08-20
dc.degree.name: Ingeniero Mecatrónico
dc.description.abstract: Los robots colaborativos están fabricados para realizar cada vez más tareas con los humanos, por lo que es más seguro que un robot perciba su entorno para poder hacer movimientos que no comprometan la integridad tanto del humano como del robot. Aquí se muestra el desarrollo y validación de un sistema de evasión de obstáculos basado en visión artificial implementado en el robot colaborativo UR3. Se implementa un algoritmo de visión artificial para que el robot tenga la capacidad de identificar los obstáculos que hay entre un punto inicial y uno final. Posteriormente se implementa un algoritmo de planeación de trayectorias, el cual permite al robot saber cuál es la ruta que debe seguir para llegar del punto inicial al punto final sin colisionar con los obstáculos o consigo mismo. Ambos algoritmos se desarrollaron en el software MATLAB.
dc.description.abstractenglish: Collaborative robots are made to perform more and more tasks alongside humans, which is why it is safer for a robot to perceive its environment in order to make movements that do not compromise the integrity of either the human or the robot. Here we show the development and validation of an obstacle avoidance system based on artificial vision, implemented on the UR3 collaborative robot. An artificial vision algorithm is implemented so that the robot can identify the obstacles that exist between an initial point and an end point. Subsequently, a trajectory planning algorithm is implemented, which allows the robot to determine the route it must follow to get from the start point to the end point without colliding with obstacles or with itself. Both algorithms were developed in MATLAB.
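The pipeline the abstract describes, first perceive the obstacles between a start and an end point, then plan a route that avoids them, can be sketched on a toy 2D occupancy grid. This is an illustrative Python sketch under stated assumptions, not the thesis's MATLAB implementation: the binary grid stands in for the vision output, and `plan_path` (a hypothetical helper using breadth-first search) stands in for the actual trajectory planner.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest collision-free path on a
    binary occupancy grid (1 = obstacle cell, 0 = free cell).
    Returns a list of (row, col) cells from start to goal, or None
    if the goal cannot be reached."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # predecessor links for path recovery
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking the predecessor links back.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# A 4x4 workspace with a wall of obstacles in the middle column:
# the planner must detour around it, just as the robot detours
# around obstacles detected by the vision system.
grid = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
path = plan_path(grid, (0, 0), (0, 3))
```

The thesis pairs a richer perception stage (semantic segmentation on RGB-D data) with a manipulator-specific planner in 3D joint space; the sketch only shows the shared idea of searching for a collision-free route through a perceived obstacle map.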
dc.description.degreelevel: Pregrado
dc.description.learningmodality: Modalidad Presencial
dc.description.tableofcontents:
INTRODUCCIÓN
1. OBJETIVOS
1.1. Objetivo General
1.2. Objetivos específicos
2. ESTADO DEL ARTE
3. VISIÓN ARTIFICIAL
3.1. Software
3.2. Hardware
3.3. Calibración de Kinect V2
3.4. Reconocimiento basado en RGB-D
3.5. Redes neuronales convolucionales
3.6. Residual Networks
3.7. Segmentación semántica
3.8. Implementación del algoritmo basado en segmentación semántica
4. PLANEACIÓN DE TRAYECTORIAS
4.1. Robot UR3
4.2. Cinemática
4.3. Planeación de trayectorias
4.4. Protocolo de comunicación
5. VALIDACIONES
6. CONCLUSIONES
7. BIBLIOGRAFÍA
8. ANEXOS
8.1. Anexo 1
8.2. Anexo 2
dc.format.mimetype: application/pdf
dc.identifier.instname: instname:Universidad Autónoma de Bucaramanga - UNAB
dc.identifier.reponame: reponame:Repositorio Institucional UNAB
dc.identifier.repourl: repourl:https://repository.unab.edu.co
dc.identifier.uri: http://hdl.handle.net/20.500.12749/18419
dc.language.iso: spa
dc.publisher.faculty: Facultad Ingeniería
dc.publisher.grantor: Universidad Autónoma de Bucaramanga UNAB
dc.publisher.program: Pregrado Ingeniería Mecatrónica
dc.relation.references: E. B. Kumar and V. Thiagarasu, "Color channel extraction in RGB images for segmentation," 2017 2nd International Conference on Communication and Electronics Systems (ICCES), 2017, pp. 234-239, doi: 10.1109/CESYS.2017.8321272.
dc.relation.references: M. Minos-Stensrud, O. H. Haakstad, O. Sakseid, B. Westby and A. Alcocer, "Towards Automated 3D reconstruction in SME factories and Digital Twin Model generation," 2018 18th International Conference on Control, Automation and Systems (ICCAS), 2018, pp. 1777-1781.
dc.relation.references: Z. Shan, X. Xu, Y. Tao and H. Xiong, "A Trajectory Planning and Simulation Method for Welding Robot," 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), 2017, pp. 510-515, doi: 10.1109/CYBER.2017.8446181.
dc.relation.references: E. Shelhamer, J. Long and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640-651, April 2017, doi: 10.1109/TPAMI.2016.2572683.
dc.relation.references: K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.
dc.relation.references: C. Lin and M. Li, "Motion planning with obstacle avoidance of an UR3 robot using charge system search," 2018 18th International Conference on Control, Automation and Systems (ICCAS), 2018, pp. 746-750.
dc.relation.references: A. Y. Lee, G. Jang and Y. Choi, "Infinitely differentiable and continuous trajectory planning for mobile robot control," 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2013, pp. 357-361, doi: 10.1109/URAI.2013.6677386.
dc.relation.references: L. S. Scimmi, M. Melchiorre, S. Mauro and S. P. Pastorelli, "Implementing a Vision Based Collision Avoidance Algorithm on a UR3 Robot," 2019 23rd International Conference on Mechatronics Technology (ICMT), 2019, pp. 1-6, doi: 10.1109/ICMECT.2019.8932105.
dc.relation.references: L. S. Scimmi, M. Melchiorre, S. Mauro and S. Pastorelli, "Experimental Real-Time Setup for Vision Driven Hand-Over with a Collaborative Robot," 2019 International Conference on Control, Automation and Diagnosis (ICCAD), 2019, pp. 1-5, doi: 10.1109/ICCAD46983.2019.9037961.
dc.relation.references: Intel RealSense D400 Series Product Family. [Online]. Available: https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/Intel-RealSense-D400-Series-Datasheet.pdf [Accessed: 2022].
dc.relation.references: Kinect for Windows SDK 2.0. [Online]. Available: http://www.todokinect.com/
dc.relation.references: L. Egorova and A. Lavrov, "Determination of workspace for motion capture using Kinect," 2015 56th International Scientific Conference on Power and Electrical Engineering of Riga Technical University (RTUCON), 2015, pp. 1-4, doi: 10.1109/RTUCON.2015.7343155.
dc.relation.references: Ibañez, R., Soria, Á., Teyseyre, A., & Campo, M. (2014). Easy gesture recognition for Kinect. Advances in Engineering Software, 76, 171-180. doi: 10.1016/j.advengsoft.2014
dc.relation.references: M. Shoryabi, A. Foroutannia and A. Rowhanimanesh, "A 3D Deep Learning Approach for Classification of Gait Abnormalities Using Microsoft Kinect V2 Sensor," 2021 26th International Computer Conference, Computer Society of Iran (CSICC), 2021, pp. 1-4, doi: 10.1109/CSICC52343.2021.9420611.
dc.relation.references: Dive into Deep Learning, "Residual Networks (ResNet)." [Online]. Available: https://classic.d2l.ai/chapter_convolutional-modern/resnet.html
dc.relation.references: MathWorks, "Segmentación semántica." [Online]. Available: https://la.mathworks.com/solutions/image-video-processing/semantic-segmentation.html
dc.relation.references: Universal Robots, "Universal Robot UR3."
dc.relation.references: Romero C., Juan, Paez R., David, Guarnizo M., José (2021). "UR3 Modelo Cinemático Inverso."
dc.relation.references: M. Ortiz-Salazar, A. Rodríguez-Liñán, L. M. Torres-Treviño and I. López-Juárez, "IMU Based Trajectory Generation and Modelling of 6-DOF Robot Manipulators," 2015 International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE), 2015, pp. 181-186, doi: 10.1109/ICMEAE.2015.27.
dc.relation.references: J.-D. Sun, G.-Z. Cao, W.-B. Li, Y.-X. Liang and S.-D. Huang, "Analytical inverse kinematic solution using the D-H method for a 6-DOF robot," 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2017, pp. 714-716, doi: 10.1109/URAI.2017.7992807.
dc.relation.references: Y. Ren, H. Sun, Y. Tang and S. Wang, "Vision Based Object Grasping of Robotic Manipulator," 2018 24th International Conference on Automation and Computing (ICAC), 2018, pp. 1-5, doi: 10.23919/IConAC.2018.8749001.
dc.relation.references: L. D. Hanh and C.-Y. Lin, "Combining stereo vision and fuzzy image based visual servoing for autonomous object grasping using a 6-DOF manipulator," 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2012, pp. 1703-1708, doi: 10.1109/ROBIO.2012.6491213.
dc.relation.references: MathWorks, "Bidirectional rapidly exploring random trees." [Online]. Available: https://la.mathworks.com/help/robotics/ref/manipulator_rrt_true.png
dc.relation.references: T. Li and H. Zhu, "Research on model control of binocular robot vision system," 2018 Chinese Automation Congress (CAC), 2018, pp. 1794-1797, doi: 10.1109/CAC.2018.8623756.
dc.relation.references: Iglesias García, M., & Lorenzo Prada, A. Sistema de Visión Artificial (Ingeniería). Universidad Carlos III.
dc.relation.references: MathWorks, "Computer Vision Toolbox." [Online]. Available: https://la.mathworks.com/products/computer-vision.html [Accessed: 2022].
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.rights.creativecommons: Atribución-NoComercial-SinDerivadas 2.5 Colombia
dc.rights.local: Abierto (Texto Completo)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/2.5/co/
dc.subject.keywords: Mechatronic
dc.subject.keywords: Robotics
dc.subject.keywords: Algorithm
dc.subject.keywords: Matlab
dc.subject.keywords: Handlers
dc.subject.keywords: Automatic machinery
dc.subject.keywords: Artificial vision
dc.subject.keywords: Automation
dc.subject.keywords: Automatic control
dc.subject.keywords: Numerical analysis
dc.subject.lemb: Mecatrónica
dc.subject.lemb: Robot
dc.subject.lemb: Manipuladores
dc.subject.lemb: Maquinaria automática
dc.subject.lemb: Automatización
dc.subject.lemb: Control automático
dc.subject.lemb: Análisis numérico
dc.subject.proposal: Robótica
dc.subject.proposal: Algoritmos
dc.subject.proposal: Visión artificial
dc.title: Diseño de un sistema automático de evasión de obstáculos basado en visión artificial para el robot colaborativo UR3
dc.title.translated: Design of an automatic obstacle avoidance system based on artificial vision for the collaborative robot UR3
dc.type.coar: http://purl.org/coar/resource_type/c_7a1f
dc.type.coarversion: http://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.driver: info:eu-repo/semantics/bachelorThesis
dc.type.hasversion: info:eu-repo/semantics/acceptedVersion
dc.type.local: Trabajo de Grado
dc.type.redcol: http://purl.org/redcol/resource_type/TP

Files

Original bundle (showing 2 of 2):
- 2022_Tesis_Blanco_Vacca_Naifer (1).pdf (1.91 MB, Adobe Portable Document Format): Thesis
- 2022_Licencia_Blanco_Vacca_Naifer.pdf (486.75 KB, Adobe Portable Document Format): License

License bundle (showing 1 of 1):
- license.txt (829 B, Item-specific license agreed upon to submission)