Thursday, May 23 • 8:30am - 9:00am
(Research & Technical Studies) Advancing Conservation Techniques Through Deep Learning of Optical Coherence Tomography Images for Classifying Kozo-Fibered Papers

In this presentation, we will share a novel method of acquiring cross-sectional images of 35 sample papers using optical coherence tomography (OCT) and feeding the images through a deep convolutional neural network (AlexNet) to achieve highly accurate, non-destructive classification. Paper identification and the analysis of morphological characteristics related to plant cultivation and craft tradition have long relied on interpretive observation and/or destructive fiber-sampling techniques [1-3]. OCT is a non-invasive medical imaging technique that has been applied in art conservation to capture both surface and subsurface structural information from cultural heritage objects [4]. Thirty-five paper samples were sourced from a conservation vendor specializing in Japanese handmade papers. They were selected based on their known fiber content and production methods, as well as their use in book and paper conservation treatments for hinging, tear repairs, and loss compensation. Cross-sectional OCT images of the samples reveal how light scatters in the paper substrate. These scattering patterns appear arbitrary to the human eye; however, AlexNet, first introduced in 2012 as a convolutional neural network (CNN) for image classification [5], can learn to classify these papers. A total of 35,840 OCT cross-section images were generated, of which 3,500 (~10% of the dataset) were used for training, 8,960 (25%) for validation, and 23,380 (~65%) for testing. AlexNet achieved a test accuracy of 98.99%, with 23 of the 35 paper samples classified with 100% accuracy.
This presentation will cover the testing methodology and equipment, as well as a summary of the results, which demonstrate that combining OCT with AlexNet can provide conservators with a highly accurate tool for classifying papers used in treatment repairs.
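As a rough illustration of the dataset bookkeeping described above: 35,840 cross-section images from 35 samples works out to 1,024 images per sample on average, split into 3,500 training, 8,960 validation, and 23,380 test images. The sketch below (plain Python, hypothetical file names; the abstract does not specify how the split was performed, so a random shuffle is assumed here only for illustration) shows one common way such a split is produced:

```python
import random

# Counts as stated in the abstract: 35 paper samples,
# 35,840 OCT cross-section images in total.
TOTAL_IMAGES = 35 * 1024                              # 35,840
TRAIN_SIZE = 3_500                                    # ~10% for training
VAL_SIZE = 8_960                                      # 25% for validation
TEST_SIZE = TOTAL_IMAGES - TRAIN_SIZE - VAL_SIZE      # 23,380 (~65%) for testing

def split_dataset(paths, seed=0):
    """Shuffle image paths and slice them into (train, val, test) lists."""
    rng = random.Random(seed)
    shuffled = paths[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    train = shuffled[:TRAIN_SIZE]
    val = shuffled[TRAIN_SIZE:TRAIN_SIZE + VAL_SIZE]
    test = shuffled[TRAIN_SIZE + VAL_SIZE:]
    return train, val, test

# Hypothetical file names standing in for the real OCT image paths.
paths = [f"oct_{i:05d}.png" for i in range(TOTAL_IMAGES)]
train, val, test = split_dataset(paths)
```

The three slices are disjoint and together cover every image exactly once, which is the property any train/validation/test split must preserve.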

References

[1] B. Borghese, “‘Understanding Asian papers and their applications in paper conservation’: a workshop review by Laura Dellapiana,” The International Institute for Conservation of Historic and Artistic Works, 2017.

[2] P. Webber, “The use of Asian paper conservation techniques in Western collections.” [Online]. Available: https://icon.org.uk/node/4998

[3] H. Yonenobu, S. Tsuchikawa, and K. Sato, “Near-infrared spectroscopic analysis of aging degradation in antique washi paper using a deuterium exchange method,” Vib. Spectrosc., vol. 51, no. 1, pp. 100–104, Sep. 2009, doi: 10.1016/j.vibspec.2008.11.001.

[4] X. Zhou et al., “A Note on Macroscopic Optical Coherence Tomography Imaging Enabled 3D Scanning for Museum and Cultural Heritage Applications,” Journal of the American Institute for Conservation, pp. 1–10, Oct. 2022, doi: 10.1080/01971360.2022.2093537.

[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, vol. 60, no. 6, pp. 84–90, May 2017, doi: 10.1145/3065386.

Authors
Ayush Kale

Research Assistant, New Jersey Institute of Technology
Yuwei Liu

New Jersey Institute of Technology
Xuan Liu

Associate Professor, Electrical and Computer Engineering, New Jersey Institute of Technology
Sarah Reidell

Margy E. Meyerson Head of Conservation, University of Pennsylvania
Sarah Reidell is the Margy E. Meyerson Head of Conservation at the University of Pennsylvania Libraries. She is a peer-reviewed Fellow of AIC with a focus on book and paper conservation.
Yi Yang

Associate Professor, Electrical Engineering and Computer Science, Penn State Abington
Yi Yang is an Associate Professor in the Science and Engineering Division at Penn State Abington, where he directs the YYLab. Dr. Yang also serves as one of the team leads for the 3D imaging team under the American Institute for Conservation (AIC)'s imaging working group.

Speakers

Yi Yang

Associate Professor, Electrical Engineering and Computer Science, Penn State Abington


Thursday May 23, 2024 8:30am - 9:00am MDT
Room 355 EF (Salt Palace)