FEU Institute of Technology

Educational Innovation and Technology Hub

Shaneth C. Ambat

14 Publications
Web-Based Air Quality Monitoring and Mapping System using Fuzzy Logic Algorithm

Proceedings of the 13th International Conference on Information Technology: IoT and Smart City, (2026), pp. 151-158

Shaneth C. Ambat, Ace C. Lagman, ... Alejandro D. Magnaye

Conference Paper | Published: March 16, 2026

Abstract
Air quality monitoring has become increasingly critical in urban environments, particularly in densely populated megacities like Manila, Philippines. This research presents the design and conceptual framework for a comprehensive web-based air quality monitoring and mapping system that leverages fuzzy logic algorithms to provide intelligent, real-time assessment of atmospheric conditions across Metro Manila. The proposed system addresses the inherent uncertainties and complexities associated with environmental data by implementing a sophisticated fuzzy inference system specifically calibrated for Manila's unique atmospheric conditions, pollution sources, and regulatory requirements. The research encompasses a thorough analysis of Manila's current air quality challenges, including the identification of primary pollutants such as particulate matter (PM2.5 and PM10), carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ground level ozone (O3). The proposed system architecture integrates multiple technological components including a distributed sensor network, centralized data processing infrastructure, fuzzy logic engine, web-based visualization platform, and real-time mapping capabilities. The fuzzy inference system is specifically designed to accommodate Manila's tropical climate conditions, high population density, and diverse pollution sources ranging from vehicular emissions to industrial activities. The methodology incorporates adaptive membership functions that adjust to seasonal variations and local environmental patterns, ensuring accurate and contextually relevant air quality assessments. The system design emphasizes scalability, real-time processing capabilities, and user accessibility through responsive web interfaces optimized for both desktop and mobile platforms. 
The technical implementation framework encompasses comprehensive hardware specifications for sensor deployment, software architecture for data processing and visualization, database design for efficient time-series data management, and API development for system integration and third-party access. Expected outcomes of this research include improved public awareness of air quality conditions, enhanced decision-making capabilities for environmental authorities, and the establishment of a robust foundation for future environmental monitoring initiatives in Manila and similar urban environments. The fuzzy logic approach provides a more nuanced and human-interpretable assessment of air quality compared to traditional crisp methodologies, enabling better communication of environmental risks to diverse stakeholder groups. This comprehensive study contributes to the growing knowledge in environmental informatics and smart city technologies, demonstrating the practical application of artificial intelligence techniques in addressing real-world environmental challenges. The research provides a detailed roadmap for implementing intelligent air quality monitoring systems in developing urban environments, with particular emphasis on cost-effectiveness, technological accessibility, and community engagement.
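As a rough illustration of the fuzzy inference step described above, the sketch below fuzzifies a PM2.5 reading with triangular membership functions and defuzzifies by weighted average. All breakpoints, set names, and centroids here are invented placeholders, not the paper's Manila-calibrated values or its actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_pm25(x):
    """Map a PM2.5 reading (ug/m3) to membership grades. Hypothetical sets;
    a real system would calibrate these to local regulatory breakpoints."""
    return {
        "good": tri(x, -1, 0, 35),
        "moderate": tri(x, 25, 50, 75),
        "unhealthy": tri(x, 55, 100, 1e9),
    }

def defuzzify(memberships, centroids={"good": 25, "moderate": 75, "unhealthy": 150}):
    """Weighted-average defuzzification to a single crisp index value."""
    num = sum(memberships[k] * centroids[k] for k in memberships)
    den = sum(memberships.values())
    return num / den if den else 0.0
```

A full system would combine several pollutants through a rule base before defuzzifying; this shows only the single-input path.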
Factors Influencing C/C++ Intelligent Tutoring System Adoption: An Analysis of Modified Technology Acceptance Model Using Structural Equation Modeling

Proceedings of the 2025 9th International Conference on Education and Multimedia Technology, (2026), pp. 14-20

Conference Paper | Published: March 16, 2026

Abstract
This study extended a previous paper that focused on the acceptability, among selected Bachelor of Science in Computer Science (BSCS) and Information Technology (BSIT) students, of an Intelligent Tutoring System (ITS) as an educational technology tool for C/C++ programming. A one-shot case study research design was carried out in 5 programming classes taught by the author. A Slovin's formula computation on the population yielded a sample size of 35.54. A stratified sampling method with an interval of 4 between students was employed to mitigate bias. The study involved 39 participants, of whom 74.36% were male and 25.64% were female computer science and IT students. The Technology Acceptance Model (TAM) was administered online as an evaluation tool, and the dataset was imported into IBM SPSS for correlation and factor-loading calculations. A Cronbach's alpha of 0.947 was obtained, signifying high internal consistency. The seven (7) factors of TAM were analyzed to reveal coefficient values for comparison and to derive their relative implications. The results indicate that every factor significantly influences the acceptance of ITS among BSCS and BSIT students. Interestingly, PerUse→Att has the highest coefficient value (0.883); next in rank was SocNor→Att at 0.822, signifying their impact on attitude toward ITS (Att), while SocNor→PerEas ranked last among the relations with a 0.630 coefficient value. Finally, the results implied that CS and IT students are open to incorporating intelligent teaching tools into their laboratory sessions to supplement their programming activity and increase their efficiency when building console applications.
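The sample-size figure quoted above comes from Slovin's formula, n = N / (1 + N·e²). The sketch below shows the computation; the study's actual population size and margin of error are not stated here, so the example values are hypothetical:

```python
def slovin(N, e):
    """Slovin's formula: required sample size for a population of N
    at margin of error e (e.g. 0.05 for 5%)."""
    return N / (1 + N * e ** 2)

# Hypothetical example: slovin(100, 0.05) -> 80.0
```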
Utilizing Modified Viterbi Algorithm for Religious Text: A Cebuano Part-of-Speech Tagging

2024 International Conference on IT Innovation and Knowledge Discovery (ITIKD), (2025), pp. 1-6

Conference Paper | Published: January 1, 2025

Abstract
Part of speech tagging (POS) is crucial in natural language processing, identifying the grammatical categories of words in sentences. This research highlights the lack of focus on POS tagging for Asian languages, particularly Cebuano. Inadequate research on Cebuano religious text has hindered linguistic documentation and the understanding of its grammar and vocabulary. This study introduces a Parts-of-Speech tagger for Cebuano utilizing a Modified Viterbi Algorithm. The researchers also apply a method for handling unfamiliar words. Results indicate that the algorithm performs exceptionally well on a religious text corpus comprising 50,000 datasets, achieving an accuracy of 93%, precision of 90%, recall of 90.52%, and an F1-score of 92%. These results highlight the algorithm's effectiveness in tackling language challenges within specific genres. Furthermore, the research supports the Sustainable Development Goals (SDGs) by promoting linguistic diversity and advancing inclusive language technologies. The study also provides valuable insights into Cebuano's linguistic characteristics and grammatical structures, laying a solid foundation for future research in natural language processing.
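The tagger above builds on Viterbi decoding over a hidden Markov model. Below is a textbook, unmodified Viterbi sketch with toy probabilities; the paper's specific modification for unfamiliar words is only approximated here by a small fallback emission probability, and the tags and parameter values are illustrative, not the paper's:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding of the most likely tag sequence.
    Unknown words fall back to a tiny emission probability (1e-8)."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-8), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-8), p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(V[-1], key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Toy two-tag model over Cebuano words ("balay" = house, "kaon" = eat).
states = ["N", "V"]
start_p = {"N": 0.6, "V": 0.4}
trans_p = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit_p = {"N": {"balay": 0.5, "kaon": 0.1}, "V": {"balay": 0.1, "kaon": 0.6}}
tags = viterbi(["balay", "kaon"], states, start_p, trans_p, emit_p)  # ["N", "V"]
```

In practice the probabilities would be estimated from a tagged corpus, and probabilities would be kept in log space to avoid underflow on long sentences.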
Text Sentiment Analysis from University Stakeholders feedback: A Comparative Analysis of RNN architectures and Transformer based model

2024 International Conference on IT Innovation and Knowledge Discovery (ITIKD), (2025), pp. 1-6

Conference Paper | Published: January 1, 2025

Abstract
In this study, we use various RNN architectures, namely RNN, Bi-LSTM, and GRU, alongside BERT to analyze sentiment across university departments. Our aim is a comparative analysis of these models in sentiment classification within education. We collected and pre-processed textual data from multiple departments for balanced training and validation. Results showed that traditional RNNs achieved 90% accuracy, Bi-LSTM 93%, and GRU 89%. BERT, leveraging its Transformer architecture, outperformed them with 94% accuracy. These findings highlight the superiority of BERT in capturing complex language patterns for sentiment analysis. This study underscores the potential of advanced neural network architectures to gain insights into departmental sentiments, informing policy decisions and educational strategies. Aligning with sustainable development goals in education, we aim to use AI models to develop effective, inclusive, and responsive educational strategies, enhancing quality and accessibility.
A Cebuano Parts-of-Speech(POS) Tagger Using Hidden Markov Model(HMM) Applied to News Text Genre

TENCON 2024 - 2024 IEEE Region 10 Conference (TENCON), (2024), pp. 940-943

Conference Paper | Published: January 1, 2024

Abstract
Part of speech tagging (POS) is crucial in natural language processing, identifying the grammatical categories of words in sentences. This research highlights the lack of focus on POS tagging for Asian languages, particularly Cebuano. Limited research on Cebuano has hindered linguistic documentation and understanding of its grammar and vocabulary. This study introduces a Cebuano POS tagger using the Hidden Markov Model (HMM) to improve Cebuano text processing. The researchers also propose a method for handling unfamiliar words. Results show the algorithm performs well on a news text corpus of 25,000 datasets, with an accuracy of 84%, precision of 80%, recall of 81.52%, and F1-score of 82%. These outcomes demonstrate the algorithm's effectiveness in addressing language challenges in specific genres. Additionally, the research contributes to the Sustainable Development Goals (SDGs) by promoting linguistic diversity and fostering inclusive language technologies. The study provides insights into Cebuano's linguistic traits and grammatical structures, offering a foundation for further research in natural language processing.
Analyzing Machine Learning Algorithm Performance in Predicting Student Academic Performance in Data Structures and Algorithms Based on Lifestyles

2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), (2023), pp. 1-4

Conference Paper | Published: January 1, 2023

Abstract
This research study employed machine learning algorithms in predicting student academic performance in the Data Structures and Algorithms (DSA) course based on student lifestyle, to analyze the factors behind high or low performance results. A total of 251 Bachelor of Science in Computer Science (BSCS) students participated in the study, of whom 207 (82%) were male and 44 (18%) were female. A one-shot case study was conducted, with data collected through an online survey administered to former enrollees of the course. The dataset was extracted with 43 features and analyzed using Python on Jupyter Notebook. A randomly selected 70% of the observations (176) were used to train the classifier models; the remaining 30% (75) were used as test data. To classify students' academic performance, eight machine learning algorithms were applied: random forest (RF), decision tree (DT), support vector machines (SVM), K-nearest neighbors (KNN), logistic regression (LR), Gaussian Naive Bayes (GNB), stochastic gradient descent (SGD), and perceptron. Although the SGD and perceptron classifier models showed comparably low classification performance, both the random forest and decision tree classifiers provided the highest metric performance. The study indicated that students' lifestyles contributed to whether their grade performance was high or low.
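The 70/30 split-and-classify workflow described above can be sketched in plain Python. This is a generic illustration only: a toy k-nearest-neighbours classifier stands in for the eight models the study actually compared, and the feature vectors are synthetic, not the study's 43 lifestyle features:

```python
import random

def train_test_split(rows, labels, test_frac=0.3, seed=42):
    """Shuffle and split observations, 70% train / 30% test by default."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([rows[i] for i in tr], [labels[i] for i in tr],
            [rows[i] for i in te], [labels[i] for i in te])

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbours majority vote using Euclidean distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), y)
        for row, y in zip(train_X, train_y)
    )
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

# Synthetic "lifestyle feature" points with high/low performance labels.
train_X = [(0, 0), (1, 1), (0, 1), (10, 10), (9, 10), (10, 9)]
train_y = ["low", "low", "low", "high", "high", "high"]
```

In practice the study's eight classifiers would be trained on the same split and compared on held-out accuracy.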
Analysis of C Programming Performance: A Correlational Study of Novice Programmers’ Compiler Error Logs

2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), (2022), pp. 1-5

Conference Paper | Published: January 1, 2022

Abstract
Computer programming is now one of the most critical competencies taught in computing courses [1]. Students require any assistance they can get when learning programming in order to acquire the abilities necessary to excel in the field of computing [2]. This paper investigates the C compiler error logs of Computer Science freshman students. A prototype was developed and pilot-tested to obtain C source code snippets focusing on assignment statements. The dataset, consisting of 1013 logs, was extracted from the initial prototype and then pre-processed following the data science approach of [3]. A Pearson correlational analysis was conducted on eight features to investigate the relationships between all variables in the dataset. Results of the study show strong relationships between wrong expression and operator (0.806), wrong expression and numeric value (0.794), and operator and numeric value (0.663). The implications of this study can also help computing instructors improve the delivery of their teaching pedagogy.
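The Pearson coefficient used in the correlational analysis above can be computed directly; a minimal version for two equal-length samples (the error-log feature names are not reproduced here):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient r between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near 1 (such as the 0.806 reported for wrong expression vs. operator) indicate features that rise and fall together across the logs.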
License Plate Recognition for Stolen Vehicles Using Optical Character Recognition

Lecture Notes in Networks and Systems, (2022), pp. 575-583

Armand Christopher Luna, Christian Trajano, ... Shaneth C. Ambat

Book Chapter | Published: January 1, 2022

Abstract
Optical character recognition (OCR) is the process of extracting characters from a digital image. The concept behind OCR is to acquire text in video or image format, extract the characters from that image, and present them to the user in an editable format. In this study, a convolutional neural network (CNN), a mathematical representation of the functionality of the human brain, is applied using the back-propagation algorithm with test-case files of English letters and numbers. The purpose of this study is to test systems capable of recognizing English letters and numbers on vehicle plates in different fonts, and to become familiar with CNNs and digital image processing applied to character recognition. Scientific journals and reports were used to research the relevant information required for the thesis project. The chosen software was then trained and tested with both computer and video output files. The tests revealed that the OCR software can recognize both vehicle-plate and computer-rendered characters, and learns to do so better with each iteration. The study shows that although the system needs more training for vehicle-plate characters than for computerized fonts, the use of CNNs in OCR is of great benefit and allows for quicker and better character recognition.
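The core operation a CNN applies to a plate image is 2-D convolution (implemented in most frameworks as cross-correlation). A minimal valid-mode sketch on nested lists, purely illustrative of the feature-extraction step and not the study's network:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image
    and sum the element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out
```

A trained CNN stacks many such filters (with learned kernel weights, nonlinearities, and pooling) before a classifier layer scores each character class.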
E-Aid: Open Wound Identifier and Analyzer Using Smartphone Through Captured Image

Lecture Notes in Networks and Systems, (2022), pp. 691-697

Joie Ann W. Maghanoy, Daryl G. Guzman, ... Shaneth C. Ambat

Book Chapter | Published: January 1, 2022

Abstract
E-Aid is a study that aims to develop an application based on the convolutional neural network (CNN) algorithm. The central idea behind E-Aid is to provide a mobile application that offers more advanced capabilities and contributes to the strong emergence of medical health applications in the market. The reliability of CNN as an algorithm produces positive results, which is essential for this study. The researchers trained a CNN model, used later during execution of the CNN algorithm, that must be able to identify 4 types of open wounds (laceration, puncture, abrasion, and avulsion) and 4 types of skin burns (1st-, 2nd-, 3rd-, and 4th-degree burns), and must also classify whether a wound is infected or not. The researchers tested the accuracy of the CNN model before sending it to the respondents by taking random images of open wounds and skin burns from the Internet and running them through the E-Aid app. After testing the app's accuracy, the researchers distributed it to their respondents to further test its accuracy and reliability. The respondents comprised 6 medical professionals (doctors/nurses), 5 IT/CS professionals, and 14 students (in the fields of medicine and computer studies).
A Deep Learning Approach for Automatic Scoliosis Cobb Angle Identification

2022 IEEE World AI IoT Congress (AIIoT), (2022), pp. 111-117

Renato R. Maaliw, Julie Ann B. Susa, ... Ma. Corazon G. Fernando

Conference Paper | Published: January 1, 2022

Abstract
Efficient and reliable medical image analysis is indispensable in modern healthcare settings. Conventional approaches to diagnostics and evaluation from a single image are complex and often lead to subjectivity due to experts' varying experience and expertise. Using convolutional neural networks, we proposed an end-to-end pipeline for automatic Cobb angle measurement to pinpoint scoliosis severity. Our results show that the Residual U-Net architecture provides an average vertebrae segmentation accuracy of 92.95% based on Dice and Jaccard similarity coefficients. Furthermore, a comparative benchmark between physicians' measurements and our machine-driven approach produces an acceptable mean deviation of 1.57 degrees and a t-test p-value of 0.9028, indicating no significant difference. This study has the potential to help doctors make prompt scoliosis magnitude assessments.
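The Dice and Jaccard coefficients used above to score vertebra segmentation overlap can be sketched for flat binary masks (prediction vs. ground truth); the masks here are toy values, not the study's data:

```python
def dice(a, b):
    """Dice similarity for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard index (IoU) for binary masks: |A∩B| / |A∪B|."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), so they rank segmentations identically but weight partial overlap differently.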

A Time Capsule Where Research Rests, Legends Linger, and PDFs Live Forever

Repository is the home for every research paper and capstone project created across our institution. It’s where knowledge kicks back, ideas live on, and your hard work finds the spotlight it deserves.

© 2026 Educational Innovation and Technology Hub. All Rights Reserved.