Pocholo James M. Loresco
ECE Associate at FEU Institute of Technology
👨🏻‍🏫 Seminars and Trainings
Attendee
AI in the Workplace: Practical Applications for Educators and Associates to Improve Teaching and School Management
Awarded by Educational Innovation and Technology Hub on August 14, 2024
Attendee
Review of Complex Engineering Problems
Awarded by FEU Tech College of Engineering on August 12, 2024
Attendee
Data Privacy Act Awareness Seminar
Awarded by FEU Tech Human Resources Office on August 07, 2024
Attendee
Enhancing Physical and Mental Resilience in the Workplace
Awarded by FEU Tech Human Resources Office on August 05, 2024
Attendee
Nanolearning: Bite-Sized Content as the Next Big Trend in Contemporary Education
Awarded by Educational Innovation and Technology Hub on December 12, 2023
Research Publications
Conference Paper · 10.1109/TENCON66050.2025.11375053
A Machine Vision-Based FSL Tutor with Static and Dynamic Gesture Recognition and Real-Time User Feedback Using MediaPipe Frameworks
TENCON 2025 - 2025 IEEE Region 10 Conference (TENCON), (2026), pp. 1215-1219
Filipino Sign Language (FSL) is an invaluable tool for communication within the deaf and mute communities, yet there is a shortage of proficient special education teachers and accessible learning materials. Current research on FSL recognition is limited to basic detection, often invasive, and lacks comprehensive systems that provide feedback to users. Additionally, FSL incorporates distinctive static and dynamic gestures, including contractions, which set it apart from other sign languages. This study presents the development of a machine vision-based FSL tutor that leverages the MediaPipe framework: MediaPipe Hands for static gesture recognition and MediaPipe Holistic for full-body dynamic gesture tracking. LSTM networks were used to classify dynamic gestures based on sequential landmark data to capture temporal dependencies in sign execution. The system supports a desktop application platform enabling learners to engage in interactive modules with real-time feedback through visual prompts and audio cues. It utilizes 42 static hand feature landmarks and 1,662 key points derived from hand, pose, and facial data to ensure accurate recognition and feedback. A total of 50 essential FSL gestures, aligned with the kindergarten curriculum, were modeled, covering alphabet knowledge, vocabulary development, self-introduction, and polite expressions. Performance evaluation using computer vision metrics demonstrated high recognition accuracy for both gesture types. In addition, the System Usability Scale (SUS) and statistical comparisons with traditional instruction methods confirmed the platform's effectiveness and user acceptability. The results validate the system as a comprehensive and accessible solution for FSL education, particularly suited for early learners and self-guided instruction.
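As a rough illustration (not the authors' code), the 1,662 key points per frame cited above match MediaPipe Holistic's landmark counts: 33 pose landmarks with visibility, 468 face landmarks, and 21 landmarks per hand. A minimal NumPy sketch of how such per-frame feature vectors could be assembled into a sequence for an LSTM classifier:

```python
import numpy as np

# Feature layout per frame, as described in the abstract:
# 33 pose landmarks x (x, y, z, visibility) = 132
# 468 face landmarks x (x, y, z)            = 1404
# 21 landmarks per hand x (x, y, z) x 2     = 126
# Total: 1662 key points per frame.
POSE, FACE, HAND = 33 * 4, 468 * 3, 21 * 3

def frame_features(pose, face, left_hand, right_hand):
    """Flatten one frame's landmarks into a 1662-dim vector.
    Missing parts (e.g., an undetected hand) are zero-filled so the
    downstream LSTM always receives a fixed-length input."""
    parts = [
        pose if pose is not None else np.zeros(POSE),
        face if face is not None else np.zeros(FACE),
        left_hand if left_hand is not None else np.zeros(HAND),
        right_hand if right_hand is not None else np.zeros(HAND),
    ]
    return np.concatenate(parts)

# A 30-frame gesture clip becomes a (30, 1662) array -- the sequence
# shape an LSTM gesture classifier would consume.
sequence = np.stack([
    frame_features(np.random.rand(POSE), None, np.random.rand(HAND), None)
    for _ in range(30)
])
print(sequence.shape)
```

The sequence length (30 frames here) is an assumption for illustration; the paper does not state the window size used.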

Conference Paper · 10.1109/TENCON61640.2024.10902929
Indoor Navigation Glasses for the Visually Impaired with Deep Learning and Audio Guidance Using Google Coral Edge TPU
TENCON 2024 - 2024 IEEE Region 10 Conference (TENCON), (2024), pp. 842-845
Visual impairment continues to be a global health concern. People with visual impairment experience difficulty moving around indoors, especially in unfamiliar spaces. While existing assistive technologies like smart canes offer point-to-point navigation or rely on infrastructure like RFID tags or beacons, they lack the ability to provide comprehensive indoor navigation with obstacle detection and avoidance. This paper presents a novel indoor navigation system for visually impaired individuals using deep learning and audio guidance. The system utilizes 3D-printed glasses equipped with a Raspberry Pi v2 camera, an audio user interface, and a processing unit comprising a Raspberry Pi 4B and a Google Coral Edge tensor processing unit (TPU). As validated in a controlled indoor environment, the deep learning models for localization, navigation, obstacle detection, and obstacle avoidance achieve high results in terms of accuracy, precision, recall, and F1-score. Based on user tests using the System Usability Scale, this wearable assistive device appears to offer a promising solution for promoting independent navigation and spatial awareness among visually impaired individuals.
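To make the obstacle-avoidance idea concrete, here is a hypothetical sketch (not from the paper) of how a detected obstacle's bounding box could be mapped to a spoken cue for the audio user interface; the thresholds and phrases are illustrative assumptions:

```python
def audio_cue(box, frame_width=640):
    """Map a detected obstacle's horizontal bounding box (xmin, xmax
    in pixels) to an avoidance cue. The frame is split into thirds:
    left, center, and right. A text-to-speech engine would then
    voice the returned string."""
    center = (box[0] + box[1]) / 2
    if center < frame_width / 3:
        return "obstacle left, keep right"
    elif center > 2 * frame_width / 3:
        return "obstacle right, keep left"
    return "obstacle ahead, stop"
```

In practice, such a rule would run on each detection emitted by the Edge TPU model, with distance or box size gating how urgent the cue is.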

Conference Paper · 10.1109/TENCON61640.2024.10903009
An Adaptive Neuro-Fuzzy Framework for Monitoring Student Outcomes with Individualized Dashboard in Outcome-Based Education
TENCON 2024 - 2024 IEEE Region 10 Conference (TENCON), (2024), pp. 1286-1289
Outcome-Based Education (OBE) emphasizes the importance of defining and assessing specific learning outcomes. Effective monitoring of these outcomes is crucial for ensuring student success and program effectiveness. Previous research has explored various approaches to enhance program outcome monitoring; however, these approaches have not fully addressed the need for individualized and comprehensive progress tracking that goes beyond binary pass or fail measurements. This paper presents a novel approach to enhance program outcome monitoring through the development of individualized dashboards and the application of an adaptive neuro-fuzzy inference system (ANFIS) framework. Data were derived from CSV reports of students in a learning management system and Canvas New Analytics from a sample class in the pilot study. The ANFIS framework is based on formative and summative assessments, total and maximum page views and participation, and average weekly page views and participation. The ANFIS model and dashboard results demonstrate its effectiveness in providing students and educators with a deeper understanding of student progress in terms of program outcomes, enabling targeted interventions and personalized learning experiences. This comprehensive approach empowers educators with the tools and insights needed to optimize educational practices and ensure that all students achieve the desired learning outcomes.
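For readers unfamiliar with neuro-fuzzy inference, the following toy sketch shows the general shape of a zeroth-order Sugeno-style rule base like the one an ANFIS learns. The membership breakpoints, rule set, and consequent values here are illustrative assumptions, not the paper's fitted parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def outcome_level(assessment, participation):
    """Zeroth-order Sugeno inference on two 0-100 inputs: each rule's
    firing strength weights a constant consequent, and the output is
    the weighted average -- the structure an ANFIS tunes from data."""
    low_a, high_a = tri(assessment, -1, 0, 60), tri(assessment, 40, 100, 101)
    low_p, high_p = tri(participation, -1, 0, 60), tri(participation, 40, 100, 101)
    rules = [  # (firing strength, consequent outcome score)
        (min(high_a, high_p), 95.0),  # strong on both: on track
        (min(high_a, low_p), 75.0),   # strong scores, low engagement
        (min(low_a, high_p), 60.0),   # engaged but struggling
        (min(low_a, low_p), 40.0),    # at risk: intervene
    ]
    total = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / total if total else 0.0
```

An actual ANFIS would learn both the membership parameters and the consequents via backpropagation over the LMS-derived features rather than using hand-set values.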

Conference Paper · 10.1109/HNICEM60674.2023.10589022
Comparative Assessment of Off-shore Wind Converters and Wave Energy Converters in the Philippines
2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), (2023), pp. 1-6
The Republic of the Philippines is confronted with rebuilding its energy landscape, which currently depends heavily on imported fossil fuels for a substantial share of its supply. The Department of Energy (DOE) has established lofty objectives to enhance the nation's renewable energy (RE) capability; nevertheless, these objectives are still to be achieved. This research supports the DOE's goals by studying other possible renewable energy sources. In particular, the primary aim of this research is to examine the viability of Offshore Wind Converters (OWCs) and Wave Energy Converters (WECs) as sustainable energy options for the Philippines. Offshore wind converters (OWCs) provide inherent benefits in terms of dependability and have widespread societal acceptance. Conversely, wave energy converters (WECs) harness the vast energy potential contained within ocean waves. A comparative evaluation was undertaken to analyze the differences between these two potential renewable energy sources. The assessment concludes that OWCs possess a minor advantage over WECs regarding their economic viability and higher societal acceptability. It is recommended that the government adopt a diversified energy portfolio, which may include the incorporation of WECs, to effectively navigate the changing dynamics of the energy sector, enhance sustainability, and ensure the long-term security of the nation's energy supply.

Conference Paper · 10.1109/HNICEM60674.2023.10589034
Identifying Rust Infection and Estimating Severity on Coffee Leaves Using Vision-Based ANN-KNN-Thresholding Methods
2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), (2023), pp. 1-6
The coffee rust disease threatens coffee production in the Philippines with widespread defoliation and reduced yield. Identifying rust infection and its severity is critical for implementing effective mitigation strategies. As an alternative to recent methods that rely on deep learning approaches, our vision-based approach utilizes Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), and thresholding methods to identify rust infection on coffee leaves and estimate its severity, providing a computationally lightweight alternative for agricultural disease management. Twenty-four (24) color and texture features of a collected dataset of coffee leaf images were extracted as inputs for an ANN classifier. The percentage of damage on coffee leaves was determined by comparing the damaged pixels to the total area of the leaf using KNN and thresholding segmentation techniques. Through the use of a confusion matrix and RMSE, the decision support system has demonstrated promising results in identifying coffee leaf health and estimating the severity of coffee rust infection.
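The severity ratio described above (damaged pixels over total leaf area) can be sketched as follows; this is a minimal illustration assuming the KNN and thresholding stages have already produced boolean segmentation masks, not the authors' implementation:

```python
import numpy as np

def rust_severity(leaf_mask, damage_mask):
    """Estimate severity as the percentage of leaf pixels classified
    as rust-damaged, i.e. the ratio computed after segmentation.
    Both masks are boolean arrays of the same shape
    (True = pixel belongs to that class)."""
    leaf_pixels = np.count_nonzero(leaf_mask)
    if leaf_pixels == 0:
        return 0.0  # no leaf detected in the image
    damaged = np.count_nonzero(damage_mask & leaf_mask)
    return 100.0 * damaged / leaf_pixels

# Toy 4x4 example: a 12-pixel leaf with 3 rust-damaged pixels -> 25%.
leaf = np.array([[0, 1, 1, 0],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [0, 1, 1, 0]], dtype=bool)
rust = np.zeros_like(leaf)
rust[1, 1:4] = True
print(rust_severity(leaf, rust))
```

Intersecting the damage mask with the leaf mask keeps background pixels misclassified as rust from inflating the severity estimate.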