Ontology-based surgical subtask automation: automating blunt dissection

In a paper to be published in the Journal of Medical Robotics Research, a team of researchers has developed a method to automate blunt dissection with the da Vinci surgical robot, controlled through the da Vinci Research Kit (DVRK).

Blunt dissection is the surgical separation of tissue layers with a blunt instrument. In most surgical procedures it takes up far more time than sharp dissection, the practice of cutting through tissue with a sharp instrument, so any innovation that makes blunt dissection easier, safer, or faster could significantly benefit surgeons and patients alike.

A good way to make blunt dissection easier, safer, and faster is to automate it. Here is a general overview of how surgical procedures are automated. A surgical operation is first described as a series of tasks known as a Surgical Process (SP). An ontology, a structured data and knowledge representation built on an accurate description of the agents involved in surgery, is developed to represent the SP in a form that can be analysed automatically. A Surgical Process Model (SPM) is then created to simplify the SP into a pattern that can be executed with support from a workflow management system.
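To make this pipeline more concrete, the following is a minimal sketch of how an SPM could be represented as a task graph and queried by a workflow manager. The task names, the dictionary-style structure, and the runnable_tasks helper are illustrative assumptions, not the authors' ontology or software.

```python
# A minimal sketch, assuming a simple task-graph representation of a
# Surgical Process Model (SPM). Task names and structure are illustrative only.

from dataclasses import dataclass, field

@dataclass
class SurgicalTask:
    name: str                                           # human-readable task label
    preconditions: list = field(default_factory=list)   # tasks that must finish first
    completed: bool = False

def runnable_tasks(spm):
    """Return tasks whose preconditions are all satisfied, as a workflow
    management system might when deciding what to execute or monitor next."""
    done = {t.name for t in spm if t.completed}
    return [t for t in spm if not t.completed and all(p in done for p in t.preconditions)]

# Illustrative fragment of a cholecystectomy SPM; only the blunt-dissection
# subtask would be handed to the automation algorithm described below.
spm = [
    SurgicalTask("expose_operative_field"),
    SurgicalTask("blunt_dissection", preconditions=["expose_operative_field"]),
    SurgicalTask("clip_and_cut", preconditions=["blunt_dissection"]),
]

print([t.name for t in runnable_tasks(spm)])   # -> ['expose_operative_field']
```

A representation of this kind is what lets a workflow manager track which subtask is active, which is the hook the researchers exploit to hand a single, well-defined subtask over to the robot.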

The team, comprising Dénes Ákos Nagy, Tamás Dániel Nagy, Renáta Elek, Imre J. Rudas and Tamás Haidegger, chose to automate blunt dissection as it is performed during Laparoscopic Cholecystectomy (LC), the procedure in which the gall bladder is removed by keyhole surgery. They first reviewed the surgical literature on LC and watched videos of the procedure. Based on this information, an SPM was created, and the portion of the SPM involving blunt dissection was selected for further study. An algorithm was then written to control and monitor the execution of the process. The algorithm requires the surgeon to select a start and an end point for the dissection on an endoscopic image. The 3D field is then reconstructed and the dissection line between the boundary points is identified. The computer vision algorithm selects the point on this line with the least depth, and the surgical robot executes blunt dissection at that point. When the dissection motion is complete, the program checks whether the target anatomy is exposed; if not, the algorithm recomputes the dissection line and repeats the process.
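The point-selection step at the heart of this loop can be sketched in a few lines. The code below assumes the scene has already been reconstructed into a dense depth map; the synthetic depth map and the straight-line interpolation between the boundary points are illustrative assumptions, not the authors' computer-vision pipeline.

```python
# A minimal sketch of selecting the next dissection point, assuming a
# reconstructed depth map is available. Illustrative only.

import numpy as np

def dissection_line(start_px, end_px, n_samples=50):
    """Sample pixel coordinates (column, row) along the straight line between
    the two surgeon-selected boundary points."""
    return [
        (int(round(start_px[0] + t * (end_px[0] - start_px[0]))),
         int(round(start_px[1] + t * (end_px[1] - start_px[1]))))
        for t in np.linspace(0.0, 1.0, n_samples)
    ]

def least_depth_point(depth_map, line_pixels):
    """Return the point on the dissection line with the smallest depth,
    i.e. the tissue closest to the endoscope, where the next blunt
    dissection motion is executed."""
    depths = np.array([depth_map[v, u] for (u, v) in line_pixels])
    return line_pixels[int(np.argmin(depths))]

# Toy example: a 100x100 depth map with a shallow ridge around row 40.
depth_map = np.full((100, 100), 50.0)
depth_map[40, :] = 30.0

line = dissection_line(start_px=(10, 20), end_px=(90, 80))
print(least_depth_point(depth_map, line))
```

In the full control loop, this selection would be repeated after every dissection motion, with the scene re-imaged and the line recomputed, until the check for exposed target anatomy succeeds.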

To test the performance of this algorithm, the team created a phantom consisting of two outer layers of hard silicone and an inner layer of soft, foamy, dissectable silicone that could be penetrated with the laparoscopic tool. In all test cases, the dissection progressed along the intended dissection line. The team also tested the sensitivity of the method to texture and to rotation, and tried it on more realistic objects such as chicken breast, pork shoulder, and duck liver. The method proved highly sensitive to texture but not significantly sensitive to rotation, and its performance on realistic objects depended largely on their texture and on the lighting: it performed far better on feature-rich objects than on feature-poor ones.
