Vision recognition using shape context for autonomous underwater sampling (Record no. 50149)

MARC details
000 -LEADER
fixed length control field 02357nam a2200193Ia 4500
003 - CONTROL NUMBER IDENTIFIER
control field MX-MdCICY
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20250625160142.0
040 ## - CATALOGING SOURCE
Transcribing agency CICY
090 ## - LOCALLY ASSIGNED LC-TYPE CALL NUMBER (OCLC); LOCAL CALL NUMBER (RLIN)
Classification number (OCLC) (R) ; Classification number, CALL (RLIN) (NR) B-15973
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 250602s9999 xx |||||s2 |||| ||und|d
245 10 - TITLE STATEMENT
Title Vision recognition using shape context for autonomous underwater sampling
490 0# - SERIES STATEMENT
Volume/sequential designation IEEE/OES Autonomous Underwater Vehicles, p. 1-7, 2012
500 ## - GENERAL NOTE
General note Article
520 3# - SUMMARY, ETC.
Summary, etc. The ocean floor is one of the few remaining unexplored places on the planet. Underwater vehicles, both teleoperated and autonomous, have been built to take images of the ocean floor. The depth a teleoperated vehicle can reach is limited by its tether; autonomous vehicles can study the deepest parts of the ocean without a complex tether system. These vehicles, while excellent at mapping the ocean floor, cannot autonomously retrieve samples. To retrieve a sample, the vehicle must know what target objects look like, correctly identify new instances of the target object, estimate the object's pose so the manipulator can grab it, and recover its coordinates in 3D space. Color filtering, shape context, and stereovision have been used to autonomously locate, identify, and estimate the pose of objects. Color filtering removes everything from the image except objects of a color similar to the target, so extraneous information can be disregarded. Shape context matches the shape of each potential target, as defined by its edge pixels, against a known object, using a cost function to decide whether the potential target is a match. The cost function accounts for the amount of 'bending energy' required to deform the shape of the potential target into that of the known object. This yields a metric of how well the potential target matches the known object, computed for both the left and right cameras. Once the object has been identified in each image, calibration parameters can be used to recover its 3D position, allowing a manipulator on an underwater vehicle to autonomously sample targets.
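As an illustration of the shape-context idea summarized above (this is not the authors' code), a minimal NumPy sketch might compute a log-polar histogram of relative edge-pixel positions for each point and compare two shapes with a chi-square cost. The bin counts, the greedy nearest-histogram matching (standing in for the full bipartite assignment), and the omission of the thin-plate-spline bending-energy term are all simplifying assumptions:

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar shape-context histogram for each edge point.

    points: (N, 2) array of edge-pixel coordinates.
    Returns an (N, n_r * n_theta) array of normalized histograms.
    """
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]   # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    angle = np.arctan2(diff[..., 1], diff[..., 0])   # in [-pi, pi]

    # Log-spaced radius bins, scale-normalized by the mean pairwise distance.
    mean_d = dist[dist > 0].mean()
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d

    descriptors = np.zeros((n, n_r * n_theta))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r_bin = np.searchsorted(r_edges, dist[i, j]) - 1
            if r_bin < 0 or r_bin >= n_r:
                continue  # point falls outside the log-polar window
            t_bin = int((angle[i, j] + np.pi) / (2 * np.pi) * n_theta) % n_theta
            descriptors[i, r_bin * n_theta + t_bin] += 1

    sums = descriptors.sum(axis=1, keepdims=True)
    sums[sums == 0] = 1
    return descriptors / sums

def match_cost(h1, h2):
    """Mean chi-square cost between two sets of shape-context histograms.

    Greedy nearest-histogram matching per point; lower cost = better match.
    """
    c = 0.5 * ((h1[:, None] - h2[None, :]) ** 2 /
               (h1[:, None] + h2[None, :] + 1e-9)).sum(axis=2)
    return c.min(axis=1).mean()
```

Because the descriptor is built from relative offsets and scale-normalized distances, a translated copy of a shape scores a near-zero cost against the original, while a differently shaped contour scores higher; the paper's full method additionally penalizes the bending energy of the aligning transform.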
700 12 - ADDED ENTRY--PERSONAL NAME
Personal name McBryan, K.
700 12 - ADDED ENTRY--PERSONAL NAME
Personal name Akin, D. L.
856 40 - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier <a href="https://drive.google.com/file/d/1v8bww-71jf__bd_ggM0PSXwpmIXVMCDU/view?usp=drivesdk">https://drive.google.com/file/d/1v8bww-71jf__bd_ggM0PSXwpmIXVMCDU/view?usp=drivesdk</a>
Public note To view the document, sign in to Google with your account: @cicy.edu.mx
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme Local classification
Koha item type Requested documents
Holdings
Source of classification or shelving scheme: Local classification
Collection: Ref1
Home library: CICY
Current library: CICY
Shelving location: Interlibrary loan document
Date acquired: 25.06.2025
Full call number: B-15973
Date last seen: 25.06.2025
Price effective from: 25.06.2025
Koha item type: Requested documents