JOEL LAING LUYOH, Universiti Teknologi MARA
This innovation presents a convolutional neural network (CNN) based sign language recognition core architecture for application-specific integrated circuits (ASICs), prototyped on a field-programmable gate array (FPGA). Deaf and hard of hearing (DHH) individuals still face significant communication barriers because sign language proficiency in society is limited and an interpreter is often required. Leveraging the strength of CNNs in image classification and processing, this innovation develops a standard seven-layer CNN adapted for FPGA implementation to classify sign language. The FPGA serves as the primary functional prototyping platform to facilitate the ASIC design. The CNN model achieves a training accuracy of 93.97%, a validation accuracy of 92.40%, a precision of 93.98%, a recall of 93.99%, and an F1-score of 93.97%. Finally, the optimized CNN model is deployed on the FPGA for real-time execution and performs sign language classification effectively.
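The abstract does not specify the exact composition of the seven-layer CNN; the sketch below is a minimal, hypothetical Keras configuration of a seven-layer classifier of the kind described, with an assumed 64x64 grayscale input and 26 sign classes. It is illustrative only, not the authors' architecture.

```python
# Hypothetical sketch of a seven-layer CNN for static sign language
# classification. Input size, channel counts, and number of classes
# are assumptions, not values taken from the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # assumed: one class per alphabet sign
INPUT_SHAPE = (64, 64, 1)  # assumed: 64x64 grayscale hand images

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    # Layers 1-2: convolution + pooling extract low-level edge features
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    # Layers 3-4: deeper convolution + pooling capture hand-shape patterns
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    # Layer 5: flatten feature maps into a vector
    layers.Flatten(),
    # Layer 6: fully connected hidden layer
    layers.Dense(64, activation="relu"),
    # Layer 7: softmax output over the sign classes
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A small, fixed-point-friendly network of this kind keeps the parameter count low, which is the usual motivation when the target is an FPGA prototype on the path to an ASIC.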