Pattern recognition in hyperspectral data acquired during surgical procedures: Differentiation between nerve and adipose tissue

Authors: Schols, R.M.; ter Laan, M.; Stassen, L.P.S.; Bouvy, N.D.; Wieringa, F.P.; Alic, L.
Publication year: 2016

Abstract:
Intraoperative nerve localization is extremely important during surgery, especially laparoscopy, and is particularly challenging when nerves visually resemble the surrounding tissue. An example of such a delicate procedure is thyroid and parathyroid surgery, where iatrogenic injury to the recurrent laryngeal nerve can result in transient or permanent vocal problems. A camera system enabling nerve-specific image enhancement would be useful in preventing such complications, and hyperspectral camera technology has the potential to provide such enhancement. As a first step towards such a dedicated camera system, we evaluated the availability of useful spectral tissue signatures by diffuse reflectance spectroscopy using silicon (Si) and indium gallium arsenide (InGaAs) sensors. Spectral signatures spanning the combined Si and InGaAs bandwidth of 350–1,830 nm (1 nm spectral resolution) were used to develop a classifier. To build the classifier, 36 heuristic features were extracted from spectral signatures collected during carpal tunnel release (CTR) surgery as well as thyroid and parathyroid (T&P) surgery. As the larger median nerve (exposed during CTR surgery) is less prone to the partial volume effect, these data (15 tissue spots) were used to train the classifier. For validation purposes, 40 tissue spots acquired during T&P surgery were used. The differentiation between nerve tissue and the visually quite similar adipose tissue yielded good results.
Using a single feature, we reached an accuracy of 93.3% on the training set and 85% on the independent validation set. Using two features, we reached 100% accuracy on the training set (26 feature pairs) and a maximum of 92.5% (11 feature pairs) on the independent validation set. Using three features, we reached 100% accuracy on the training set (410 feature triplets) and 100% on the independent validation set (37 feature triplets).

Subject: Nano Technology; OPT - Optics; TS - Technical Sciences; Biomedical Innovation; Electronics; Healthy Living; Diffuse reflectance spectroscopy; Tissue spectral analysis; Nerve classification; Recurrent laryngeal nerve; Median nerve; Adipose tissue
To reference this document use: http://resolver.tudelft.nl/uuid:b77bdb4c-aac8-4897-9a28-71e4838433d9
TNO identifier: 534791
Source: Proceedings MLDAS, Third Machine Learning and Data Analytics Symposium, 14-15 March 2016, Doha, Qatar
Document type: conference paper
Files: To receive the publication files, please send an e-mail request to TNO Library.
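The abstract's pipeline (heuristic features from reflectance spectra, then an exhaustive search over feature pairs for ones that separate nerve from adipose tissue) can be sketched as follows. This is a toy illustration only: the record does not reproduce the real spectra, the 36 feature definitions, or the classifier type, so the Gaussian toy spectra, the band-mean features, and the nearest-centroid rule below are all assumptions made for the sketch.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
# Wavelength grid matching the stated 350-1,830 nm range at 1 nm resolution.
wavelengths = np.arange(350, 1831)

def toy_spectrum(peak_nm):
    # Toy diffuse-reflectance signature: a broad Gaussian bump plus noise
    # (a stand-in for real tissue spectra, which are not available here).
    return (np.exp(-((wavelengths - peak_nm) / 300.0) ** 2)
            + rng.normal(0.0, 0.02, wavelengths.size))

def heuristic_features(spec):
    # Illustrative heuristic features: mean reflectance in fixed 100 nm bands.
    # The paper's 36 actual feature definitions are not given in this record.
    bands = [(400 + 100 * i, 500 + 100 * i) for i in range(14)]
    return np.array([spec[(wavelengths >= lo) & (wavelengths < hi)].mean()
                     for lo, hi in bands])

def make_spots(n_per_class):
    # Two synthetic tissue classes: 0 = nerve, 1 = adipose.
    nerve = [heuristic_features(toy_spectrum(900)) for _ in range(n_per_class)]
    fat = [heuristic_features(toy_spectrum(1100)) for _ in range(n_per_class)]
    return np.array(nerve + fat), np.array([0] * n_per_class + [1] * n_per_class)

X_train, y_train = make_spots(8)   # small training set (15 spots in the paper)
X_val, y_val = make_spots(20)      # larger validation set (40 spots in the paper)

def accuracy(idx, X, y):
    # Nearest-centroid rule in the chosen feature subspace; the abstract does
    # not name the classifier, so this simple rule is an assumption.
    cent = np.array([X_train[y_train == c][:, idx].mean(axis=0) for c in (0, 1)])
    dist = np.linalg.norm(X[:, idx][:, None, :] - cent[None], axis=2)
    return float((dist.argmin(axis=1) == y).mean())

# Exhaustive search over all feature pairs, mirroring the paper's
# 1-, 2-, and 3-feature evaluations.
n_feat = X_train.shape[1]
perfect_pairs = [p for p in itertools.combinations(range(n_feat), 2)
                 if accuracy(list(p), X_train, y_train) == 1.0]
best_val = max(accuracy(list(p), X_val, y_val) for p in perfect_pairs)
print(len(perfect_pairs), "pairs with perfect training accuracy;",
      "best validation accuracy:", best_val)
```

The same loop with `itertools.combinations(range(n_feat), 1)` or `(..., 3)` reproduces the single-feature and triplet searches; reporting how many subsets reach a given accuracy on both sets matches the counting style used in the abstract.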