The Science and Information (SAI) Organization

IJACSA Volume 16 Issue 9

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Cognitive Biases: Understanding and Designing Fair AI Systems for Software Development

Abstract: Artificial Intelligence (AI) systems increasingly influence decisions that affect people's lives, making fairness a core requirement. However, cognitive biases (systematic deviations in human judgment) can enter AI through data, modeling choices, and oversight, amplifying social inequities. This paper examines how three bias channels (data, algorithmic, and human) manifest across the software development lifecycle and synthesizes practical strategies for mitigation. Using a qualitative review of recent scholarship and real‑world case studies, we distill a lightweight diagnostic framework that helps practitioners identify bias sources, evaluate mitigation options against effectiveness, feasibility, transparency, and scalability, and institutionalize routine audits. We illustrate the framework with representative vignettes and summarize trade‑offs between fairness goals and model performance. Our analysis recommends diverse and well‑documented datasets, fairness‑aware learning and evaluation, third‑party audits, and cross‑functional collaboration as mutually reinforcing levers. The paper contributes a developer‑oriented map of cognitive bias risks across data, model, and human processes, a four‑criterion rubric for comparing mitigation techniques, and an actionable checklist that teams can embed in their pipelines. The results aim to support software and product teams in building AI systems that are both accurate and equitable.

Author 1: Sheriff Adepoju
Author 2: Mildred Adepoju

Keywords: Cognitive biases; fair AI systems; algorithmic bias; software development; bias mitigation; fairness; software engineering

PDF

Paper 2: Multispectral Image Analysis Using Deep Neural Networks

Abstract: Multispectral image classification plays a crucial role in remote sensing applications such as land cover mapping, agricultural monitoring, and environmental surveillance. Traditional classification techniques, including the Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM), Decision Tree (DT), and Multi-Layer Perceptron (MLP), often struggle with the complexity and high dimensionality of multispectral data. Recent advances in deep learning have revolutionized the field of remote sensing by enabling the extraction of high-level, abstract features from raw input data. In this paper, we explore the application of Deep Neural Networks (DNNs) for pixel-wise classification in multispectral imagery. DNNs are capable of learning informative and hierarchical representations, which have demonstrated significant success in a wide range of computer vision tasks. We propose and implement a simple DNN architecture consisting of six layers: an input layer (representing reflectance values across spectral bands), a fully connected layer, a batch normalization layer, a ReLU activation layer, another fully connected layer, and a final SoftMax output layer for classification. Each pixel is represented by a vector of spectral reflectance values. We evaluated our model using two Landsat scenes, one from the New Orleans area and the other from the Mississippi River bottomland area. The proposed DNN achieved classification accuracy of 97.44% and 95.74%, respectively, on these datasets, demonstrating the effectiveness of deep learning for multispectral image classification.
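
A minimal PyTorch sketch of the six-layer network as described; the band count, hidden width, and class count are assumed for illustration, not taken from the paper:

    import torch
    import torch.nn as nn

    N_BANDS = 7        # assumed: spectral bands per pixel (e.g., Landsat)
    N_CLASSES = 6      # assumed: number of land-cover classes
    HIDDEN = 64        # assumed hidden width

    # Six layers as described: input -> fully connected -> batch norm
    # -> ReLU -> fully connected -> softmax.
    model = nn.Sequential(
        nn.Linear(N_BANDS, HIDDEN),
        nn.BatchNorm1d(HIDDEN),
        nn.ReLU(),
        nn.Linear(HIDDEN, N_CLASSES),
        nn.Softmax(dim=1),   # for training, prefer raw logits + CrossEntropyLoss
    )

    pixels = torch.rand(32, N_BANDS)   # a batch of reflectance vectors
    class_probs = model(pixels)        # per-pixel class probabilities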

Author 1: Arun D. Kulkarni

Keywords: Remote sensing; classification; deep neural networks; Landsat scene

PDF

Paper 3: LegalSummNet: A Transformer-Based Model for Effective Legal Case Summarization

Abstract: The expanding volume of legal documents has made it increasingly difficult for legal professionals to extract relevant information efficiently. In this paper, a new two-stage hybrid summarization system, called LegalSummNet, is introduced. It is designed to handle the peculiarities of legal texts, such as their extreme length, complex syntax, and specialized vocabulary. LegalSummNet combines an extractive model with an attention-weighted filtering module and a transformer-based abstractive generation model, enabling it to identify significant elements and produce compact, coherent, and semantically faithful summaries. The proposed model is tested on a large-scale dataset of legal cases and shows significant improvements over strong baselines, such as BERTSumExt and LegalT5, as measured by ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore. The model also achieves greater compression efficiency, making it well suited to real-world systems for generating case briefs and contract summaries. The findings demonstrate that LegalSummNet is effective in enhancing the accessibility of legal documents and supporting informed decision-making.
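
A generic extract-then-abstract sketch in Python, not the authors' LegalSummNet; both model names are placeholder assumptions:

    from sentence_transformers import SentenceTransformer, util
    from transformers import pipeline

    # Stage 1 (extractive filter): keep sentences most similar to the
    # document centroid. Stage 2 (abstractive): summarize the filtered
    # text with a generic transformer. Model choices are placeholders.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    summarizer = pipeline("summarization", model="t5-small")

    def summarize_case(sentences, keep=10):
        emb = encoder.encode(sentences, convert_to_tensor=True)
        centroid = emb.mean(dim=0, keepdim=True)
        scores = util.cos_sim(emb, centroid).squeeze(1)
        top = scores.topk(min(keep, len(sentences))).indices.sort().values
        filtered = " ".join(sentences[int(i)] for i in top)
        return summarizer(filtered, max_length=128,
                          min_length=32)[0]["summary_text"]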

Author 1: Md Farhad Kabir
Author 2: Sohana Afrin Mitu
Author 3: Sharmin Sultana
Author 4: Belal Hossain
Author 5: Rakibul Islam
Author 6: Khandakar Rabbi Ahmed

Keywords: Legal document summarization; NLP; extractive and abstractive summarization; transformer; LegalSummNet; BERT; LegalT5; ROUGE-L

PDF

Paper 4: Real-Time Dynamic Pricing Using Machine Learning: Integrating Customer Sentiment and Predictive Models for E-Commerce

Abstract: Dynamic pricing has emerged as a crucial strategy for e-commerce platforms to maximize profitability while remaining competitive in rapidly changing digital markets. Traditional pricing methods often fail to capture the complexity of customer behavior and the rapid evolution of market trends. To address these limitations, this study introduces a machine learning-based framework that integrates transactional, behavioral, and contextual data with multilingual sentiment analysis from customer reviews. The framework employs multiple algorithms, including Random Forest, Gradient Boosting, Neural Networks, and XGBoost, with extensive feature engineering and model evaluation. Experimental results on a large-scale retail and e-commerce dataset show that the proposed XGBoost-based approach achieved superior performance, with a Mean Absolute Error (MAE) of 1.29, Root Mean Squared Error (RMSE) of 1.65, and an R² of 0.97, significantly outperforming baseline models. These findings underscore the framework's capacity to facilitate real-time, adaptive, and customer-centric pricing mechanisms. The study contributes by presenting 1) an end-to-end ML pipeline for dynamic pricing, 2) the novel incorporation of sentiment-based features into predictive models, and 3) a comparative evaluation that establishes XGBoost as the most effective model. The results demonstrate both practical and theoretical value, offering insights for e-commerce platforms seeking to optimize revenue and ensure pricing fairness in real-world scenarios.
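
A minimal sketch of the XGBoost regression and its evaluation on placeholder features (the sentiment columns and feature engineering are omitted; the synthetic data is illustrative only):

    import numpy as np
    import xgboost as xgb
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    # Placeholder feature matrix: transactional, behavioral, contextual,
    # and sentiment-derived columns would go here in the real pipeline.
    X = np.random.rand(5000, 12)
    y = X @ np.random.rand(12) + np.random.normal(0, 0.1, 5000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    model = xgb.XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=6)
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    print("MAE :", mean_absolute_error(y_te, pred))
    print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
    print("R2  :", r2_score(y_te, pred))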

Author 1: Areyfin Mohammed Yoshi
Author 2: Arafat Rohan
Author 3: Sohana Afrin Mitu
Author 4: Md Masud Karim Rabbi
Author 5: Shahanaj Akther
Author 6: Khandakar Rabbi Ahmed

Keywords: Dynamic pricing; machine learning; XGBoost; e-commerce analytics; revenue optimization

PDF

Paper 5: A Scalable Microservices Architecture for Real-Time Data Processing in Cloud-Based Applications

Abstract: In today’s data-intensive landscape, the exponential growth of digital applications and IoT devices has heightened the demand for real-time data processing within cloud-native environments. Traditional monolithic systems struggle to meet the low-latency, high-availability requirements of modern workloads, prompting a shift toward microservices architectures. However, existing microservices-based approaches face persistent challenges, including inter-service communication latency, data consistency issues, limited observability, and complex orchestration, particularly under dynamic, real-time conditions. Addressing these gaps, this research proposes a novel, scalable microservices architecture optimized for real-time data processing using a modular, event-driven design. The goal is a robust and flexible system that ingests real-time weather information from the OpenWeatherMap API with minimal latency and maximum scalability. The architecture incorporates Apache Kafka, Apache Flink, Redis, Kubernetes, and adaptive autoscaling via KEDA and HPA. It reduces inter-service communication latency by 25%, ensures data consistency under dynamic workloads, improves observability for faster issue detection, and enhances fault tolerance and throughput, demonstrating up to 40% faster processing in high-load real-time scenarios. The major building blocks are microservices packaged with Docker, orchestration on Kubernetes, an API gateway to route and secure traffic, a CI/CD pipeline for fast deployments, and a distributed-tracing observability stack of Prometheus, ELK, and Jaeger. Detailed analysis showed that under high load the system was markedly more responsive, more fault-tolerant, and higher-throughput. The proposed framework provides dynamic workload management, automatic fault healing, and intelligent scaling, thereby minimizing downtime risk and maintaining steady performance. In sum, this study offers a tenable microservices design that addresses present limitations in real-time data processing while providing a scalable, secure, and observable architecture for future cloud-native applications.
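
As a flavor of the event-driven ingest pattern this architecture describes, a minimal Python sketch of a Kafka consumer caching the latest reading in Redis; the topic name, broker address, and key layout are assumptions:

    import json
    from kafka import KafkaConsumer   # kafka-python
    import redis

    # Consume weather events (e.g., pushed by an OpenWeatherMap poller)
    # and cache the latest reading per city in Redis for low-latency reads.
    consumer = KafkaConsumer(
        "weather-events",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    cache = redis.Redis(host="localhost", port=6379)

    for event in consumer:
        reading = event.value
        cache.set(f"weather:{reading['city']}", json.dumps(reading))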

Author 1: Desidi Narsimha Reddy
Author 2: Rahul Suryodai
Author 3: Vinay Kumar S. B
Author 4: M. Ambika
Author 5: Elangovan Muniyandy
Author 6: V. Rama Krishna
Author 7: Bobonazarov Abdurasul

Keywords: Microservices architecture; real-time data processing; cloud-native systems; Kubernetes orchestration; API gateway

PDF

Paper 6: A Graph-Based Deep Reinforcement Learning and Econometric Framework for Interpretable and Uncertainty-Aware Stablecoin Stability Assessment

Abstract: The instability of algorithmic and hybrid stablecoins has become a systemic concern in decentralized finance. This paper proposes a unified, interpretable, and uncertainty-aware framework that integrates graph-based deep reinforcement learning, GARCH econometric modeling, and Bayesian inference. Multi-stage reinforcement learning agents simulate interactions between arbitrageurs and protocol mechanisms. GARCH models capture volatility dynamics, while Bayesian methods provide confidence intervals for peg deviation forecasts, enabling adaptive prediction and transparent risk interpretation. The framework is validated using over eight million on-chain and off-chain records across 120 scenarios involving USDT, USDC, and TerraUSD. It achieves 89% crisis prediction accuracy and 83% reflexivity modeling performance, significantly outperforming six benchmark models. Notably, the system issued early warnings up to 72 hours before the TerraUSD collapse. Ablation studies confirm the unique contribution of each module. In addition to technical improvements, the framework outputs a stability index and dynamic reserve recommendations to support policy response and supervisory planning. Compared to existing approaches, this is the first framework to combine dynamic simulation, interpretability, and probabilistic forecasting in a single architecture. It offers practical value for stablecoin monitoring and establishes a methodological foundation for future research in digital asset risk assessment.
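
A minimal sketch of the GARCH(1,1) component using the arch package on a synthetic stand-in for a peg-deviation return series (the paper's on-chain/off-chain data pipeline is out of scope here):

    import numpy as np
    from arch import arch_model

    # Synthetic stand-in for a stablecoin peg-deviation return series.
    returns = np.random.normal(0, 0.5, 2000)

    am = arch_model(returns, vol="Garch", p=1, q=1, mean="Constant")
    res = am.fit(disp="off")

    # One-step-ahead conditional variance forecast of peg deviations.
    forecast = res.forecast(horizon=1)
    print(forecast.variance.iloc[-1])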

Author 1: Yaozhong Zhang
Author 2: Quanrong Fang

Keywords: Stablecoin stability; deep reinforcement learning; graph neural networks; uncertainty quantification; macroprudential policy

PDF

Paper 7: Embedded System for ECG Signal Monitoring and Fatigue Detection in Elderly Individuals Using Machine Learning Models

Abstract: Ascertaining fatigue in elderly people is crucial both for preventing future health complications and for enhancing their quality of life. In this paper, we present an embedded system for real-time fatigue detection and monitoring based on electrocardiogram (ECG) signals, leveraging cost-effective sensors and advanced deep learning architectures. The proposed framework integrates an AD8232 ECG sensor with an ESP32/Raspberry Pi platform for continuous signal acquisition, followed by preprocessing through a 4th-order Butterworth bandpass filter, feature extraction, dimensionality reduction with PCA, and classification using recurrent neural network models. Unlike previous studies relying on multi-sensor or image-based approaches, our solution demonstrates high efficiency, scalability, and affordability by employing a single low-cost ECG sensor. Three neural architectures were evaluated: standard Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). Among them, the GRU model achieved the highest accuracy (98.86%), followed by LSTM (97.73%), whereas standard RNNs lagged behind (82.76%). Experimental results confirm the robustness of GRU in capturing temporal dependencies in ECG data, outperforming other models in both accuracy and computational efficiency. This study highlights the feasibility of deploying lightweight yet powerful AI models in embedded healthcare systems for elderly individuals. By enabling early detection of fatigue, a critical risk factor for falls, cardiovascular incidents, and reduced autonomy, our approach offers significant societal benefits, including preventive care, reduced hospitalization costs, and improved independence. Future work will extend the dataset and validate system robustness in real-world environments to enhance clinical applicability.
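
A minimal sketch of the filtering and GRU stages; the sampling rate and 0.5-40 Hz passband are typical ECG choices assumed here, not the paper's exact settings:

    import numpy as np
    import torch.nn as nn
    from scipy.signal import butter, filtfilt

    FS = 250.0   # assumed ECG sampling rate in Hz

    # 4th-order Butterworth bandpass, as described above.
    b, a = butter(4, [0.5, 40.0], btype="band", fs=FS)

    def preprocess(ecg_signal):
        return filtfilt(b, a, ecg_signal)

    clean = preprocess(np.random.randn(2500))   # 10 s toy signal

    # Minimal GRU classifier over windows of (PCA-reduced) ECG features:
    class FatigueGRU(nn.Module):
        def __init__(self, n_features=8, hidden=32, n_classes=2):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):          # x: (batch, time, n_features)
            _, h = self.gru(x)
            return self.head(h[-1])    # fatigue/no-fatigue logits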

Author 1: Chokri Baccouch
Author 2: Chaima Bahar

Keywords: Fatigue; ECG; AI; classification; GRU; LSTM; RNN

PDF

Paper 8: Deep Reinforcement Learning-Based Target Detection and Autonomous Obstacle Avoidance Control for UAV

Abstract: To address the challenges faced by distribution network monitoring systems—such as significant variations in anomaly scale, frequent missed and false detections of small-scale faults, and the need for real-time operational control—this paper proposes a lightweight multi-scale feature fusion detection network combined with a deep reinforcement learning-based autonomous control strategy, forming an end-to-end intelligent perception and decision-making system for distribution networks. To enhance detection accuracy and computational efficiency, a lightweight feature fusion network (Grid_RepGFPN) is designed, and a novel feature fusion module (DBB_GELAN) is proposed, which significantly reduces model parameters and computational cost while improving detection performance. Additionally, a feature extraction module (FTA_C2f) is constructed using partial convolution (PConv) and triplet attention mechanisms, combined with the ADown downsampling structure to improve the model’s capability to capture spatial and electrical measurement details. The programmable gradient information (PGI) strategy of YOLOv9 is further optimized by introducing a context-guided reversible architecture and a Grid_PGI method with additional detection heads, thereby enhancing deep supervision stability and reducing semantic information loss. Based on the detection model, a real-time operational control strategy is developed using deep reinforcement learning, enabling autonomous fault response, load adjustment, and network optimization through a state–action–feedback optimization loop. Experimental results on multiple distribution network simulation platforms demonstrate that the proposed LMGrid-YOLOv8 model outperforms YOLOv8s, with improvements of 4.2%, 3.9%, 5.1%, and 3.0% in precision, recall, mAP@0.5, and mAP@0.5:0.95, respectively, while reducing parameters by 63.9% and increasing computation by only 0.4 GFLOPs, achieving a favorable balance between performance and resource consumption. Inference experiments on edge computing platforms confirm that the proposed model maintains high detection accuracy under real-time constraints, demonstrating strong applicability to real-time distribution network monitoring. Furthermore, class activation map-based visual analysis reveals the model’s superior capabilities in detecting small-scale faults and processing high-resolution network measurement regions.

Author 1: Like Zhao
Author 2: Hao Liu
Author 3: Guangmin Gu
Author 4: Fei Wan
Author 5: Yanyang Feng

Keywords: Target detection; multi-scale; lightweight; YOLOv8; autonomous obstacle avoidance

PDF

Paper 9: Beyond Words: An Advanced Ensemble Framework for Unmasking AI-Generated Content Through Linguistic Fingerprinting

Abstract: AI-generated content detection is vital because it helps to uphold digital integrity in most fields of application, such as academic publishing and content verification. Identifying text authenticity and tracing a text's source depends on effective detection methods. The approach introduced in this paper is a novel ensemble method that combines machine learning and linguistic analysis for AI content detection. The ensemble approach uses a set of classification algorithms to identify the most important differences between human-authored and AI-generated text. To validate the proposed method, this study utilized an extensive collection of text samples (20,000) obtained from SQuAD 2.0, CNN/Daily Mail, GPT-3.5, and ChatGPT datasets. The proposed ensemble model achieved precision, accuracy, recall, and F1-score of 97.2%, 97.5%, 96.4%, and 97.3%, respectively, demonstrating superior performance compared to individual classifiers. The experimental results demonstrate that the ensemble approach offers efficient detection performance, which can be applied to various text types and lengths, and thus can be implemented in practical systems for content verification and academic integrity assessment.
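
A minimal sketch of a voting ensemble for this task, using TF-IDF as a stand-in for the paper's richer linguistic-fingerprinting features; classifier choices are assumptions:

    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    detector = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), max_features=20000),
        VotingClassifier(
            estimators=[
                ("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", LinearSVC()),
            ],
            voting="hard",  # LinearSVC lacks predict_proba, so vote on labels
        ),
    )

    texts = ["an example human-written passage", "an example model-written passage"]
    labels = [0, 1]        # 0 = human, 1 = AI-generated
    detector.fit(texts, labels)
    print(detector.predict(["a new passage to check"]))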

Author 1: Ghada Y. Elwan
Author 2: Doaa R. Fathy
Author 3: Nahed M. El Desouky
Author 4: Abeer S. Desuky

Keywords: AI detection; machine learning; text classification; ensemble methods; content verification

PDF

Paper 10: Reliability Risk Assessment Approaches in Software Engineering: A Review Structured by Software Development LifeCycle (SDLC) Phases and Reliable Sub-Characteristics

Abstract: Reliability risk is a critical concern in software development, as failures can result in system downtime, degraded performance, data integrity issues, financial losses, and loss of user trust. The increasing complexity of modern systems, driven by dynamic workloads, distributed architectures, and unpredictable interactions, amplifies these risks. In regulated industries such as healthcare, finance, and transportation, software reliability directly affects safety, compliance, and operational continuity, making robust risk assessment essential. Despite the recent development and improvement of numerous reliability risk assessment techniques, system failures persist, raising concerns about the scope, applicability, and limitations of these techniques. This paper evaluates recent methods, examining their advantages and disadvantages in application, while critically assessing the research gaps. The techniques are categorized across the software development lifecycle (SDLC) to connect methods to phase-specific reliability needs. The paper thus provides a methodological synthesis of recent practices, identifies areas where existing techniques fall short of expectations, and summarizes future research directions for achieving more robust and adaptive reliability risk assessment.

Author 1: Lehka Subramanium
Author 2: Saadah Hassan
Author 3: Mohd. Hafeez Osman
Author 4: Hazura Zulzalil

Keywords: Reliability; risk assessment; SDLC

PDF

Paper 11: WiTS: A Wi-Fi-Based Human Action Recognition via Spatio-Temporal Hybrid Neural Network

Abstract: Human action recognition has many applications in different scenarios. With the advancement of wireless sensing and the widespread deployment of Wi-Fi devices, perception technology based on Wi-Fi channel state information (CSI) has shown great potential. Related studies identify actions by capturing the specific attenuation and distortion features that human posture induces in CSI. These methods are less susceptible to the effects of lighting and object occlusion. However, they have yet to adequately extract the information within CSI; comprehensively exploiting feature information across different dimensions to enhance model performance remains an open challenge. To address this, a spatio-temporal hybrid neural network model named WiTS is proposed. It integrates the advantages of different neural networks, using a CNN to extract spatial features, combining a TCN and Bi-LSTM for dual temporal-dimension modeling, and incorporating a Transformer's global attention mechanism to achieve comprehensive extraction and multi-level fusion of spatio-temporal features. Additionally, this study further optimizes the original WiTS model in three respects. Experiments on the WiAR and CSIAR datasets show that the model achieves average accuracy rates of 95.75% and 96.71%, respectively, with F1-scores exceeding 96%. The model has only 2.19 million parameters and fewer than 560 million FLOPs, offering significant advantages in lightweight design and making it suitable for deployment on computing-limited edge terminals while meeting real-time requirements.
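
A heavily condensed PyTorch sketch of this kind of hybrid: a CNN for spatial features, a Bi-LSTM for temporal features, and a Transformer encoder for global attention. The TCN branch and WiTS's fusion details are omitted, and all sizes are assumed:

    import torch.nn as nn

    class CSIActionNet(nn.Module):
        def __init__(self, n_subcarriers=30, hidden=64, n_classes=7):
            super().__init__()
            self.cnn = nn.Sequential(                  # spatial features
                nn.Conv1d(n_subcarriers, hidden, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            self.bilstm = nn.LSTM(hidden, hidden, batch_first=True,
                                  bidirectional=True)  # temporal features
            layer = nn.TransformerEncoderLayer(d_model=2 * hidden, nhead=4,
                                               batch_first=True)
            self.attn = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):          # x: (batch, time, subcarriers)
            z = self.cnn(x.transpose(1, 2)).transpose(1, 2)
            z, _ = self.bilstm(z)
            z = self.attn(z)
            return self.head(z.mean(dim=1))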

Author 1: Pengcheng Gao

Keywords: Wi-Fi CSI; human action recognition; deep learning

PDF

Paper 12: Automated Scoliosis Diagnosis in Spinal Imaging: Laboratory Validation, Clinical Limitations, and Systematic Implementation Challenge Review

Abstract: Technological advances in automated medical imaging diagnosis have created translation gaps between laboratory achievements and clinical implementation, with traditional manual Cobb angle measurement requiring considerable time and incurring inevitable measurement errors. This review analyzes translation challenges in automated diagnosis systems using scoliosis assessment as a case study, examining 55 articles from 1948-2025 across three domains: Cobb angle measurement, classification, and segmentation. Despite research investment, fully automated approaches have not surpassed semi-automated performance in comparable validation studies. Within the 23 Cobb angle measurement studies, traditional methods outperform sophisticated deep learning systems with average error rates of 1.8° ± 0.4° MAD versus 4.2° ± 1.8° MAE, while validation degradation occurs with performance dropping from 95.28% to 85.9% when transitioning to real-world datasets. Non-standard classification achieves high accuracy but lacks clinical utility, while standard systems struggle with automation, revealing a translation paradox in which technical sophistication does not correlate with clinical adoptability. The main problems include gaps in testing methodology, performance degradation, divergent automation approaches, and cost issues. This review recommends standardized testing methods and step-by-step clinical implementation to help these systems work in real clinics.

Author 1: Ervin Gubin Moung
Author 2: Xie Aishu
Author 3: Ali Farzamnia

Keywords: Automated diagnosis; medical imaging; scoliosis; Cobb angle; clinical implementation; artificial intelligence

PDF

Paper 13: Adaptive Trust-Based Fault Tolerance for Multi-Drone Systems: Theory and Application in Agriculture

Abstract: This paper presents RobotTrust, an adaptive trust framework for fault-tolerant coordination in multi-drone systems for precision agriculture. The study aims to improve mission reliability under sensor/actuator faults and uncertain interactions by combining a structured fault taxonomy (behavioral, actuator, sensor) with team-based recovery and an adaptive trust model that integrates direct experience with filtered indirect recommendations. We formalize trust computation (direct, recommended, and global trust) and introduce safeguards such as a minimum-trust threshold and weighted fusion to curb misinformation propagation. The framework is evaluated in simulation using the AgriFleet drone team and is compared against the TReconf baseline across three metrics: (i) time-step efficiency for task completion, (ii) RMSD between predicted and true trustworthiness, and (iii) interaction quality (preference for reliable peers). Results show 20–40% faster task completion, lower RMSD (more accurate trust estimation), and selective interaction patterns that prioritize dependable agents while limiting exposure to unreliable ones. These findings indicate that RobotTrust enhances responsiveness and robustness in decentralized, fault-prone environments typical of agricultural deployments. The work contributes a practical, generalizable approach to trust-aware coordination in multi-robot systems and outlines directions for context-aware weighting, explainable trust signals, heterogeneous teams, adversarial robustness, and large-scale field trials.
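
A minimal Python sketch of the trust fusion described above, combining direct experience with threshold-filtered, trust-weighted recommendations; the weights and threshold value are illustrative assumptions:

    MIN_TRUST = 0.3   # assumed minimum-trust threshold for recommenders

    def global_trust(direct, recommendations, w_direct=0.7):
        """direct:          this drone's own trust estimate in [0, 1]
        recommendations: list of (recommender_trust, recommended_value)"""
        # Discard recommendations from peers we do not trust enough,
        # weighting the rest by how much we trust the recommender.
        trusted = [(t, v) for t, v in recommendations if t >= MIN_TRUST]
        if trusted:
            indirect = sum(t * v for t, v in trusted) / sum(t for t, _ in trusted)
        else:
            indirect = direct          # fall back to own experience
        return w_direct * direct + (1 - w_direct) * indirect

    print(global_trust(0.8, [(0.9, 0.7), (0.2, 0.1), (0.5, 0.6)]))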

Author 1: Atef GHARBI
Author 2: Faheed A. F. Alrslani

Keywords: Adaptive trust model; trust-aware robotics; multi-drone coordination; fault-tolerant systems; precision agriculture applications

PDF

Paper 14: Method for Person Re-Identification with 2D-to-3D Image (Image-to-Video) Conversion

Abstract: A method for person re-identification using image-to-video conversion tools is proposed. The method matches two images taken from different viewpoints: a reference image captured in advance and a current image captured in real time, identifying the person of concern by matching the current image against the reference. The 2D current image is converted into a 3D representation, from which synthetic images are generated at multiple viewpoints. By comparing the generated images with the reference image, person re-identification can be performed. Experiments have demonstrated that the proposed method significantly improves identification accuracy. By accounting for changes in appearance due to different viewpoints and utilizing advanced image conversion, one of the main challenges in person re-identification is addressed. This approach offers a promising solution for applications requiring high accuracy in identifying individuals across varying perspectives.

Author 1: Kohei Arai

Keywords: Person re-identification; identification performance; 2D-to-3D image conversion method; TRIPO; CSM; KLING

PDF

Paper 15: Design of a Modular Architecture Based on AI and Blockchain for Personalized Microcredits Using Open Finance

Abstract: This paper presents the design and validation of a modular architecture for smart microcredits, aimed at expanding credit access for populations excluded from the traditional financial system. The solution integrates three key technological components: data acquisition through Open Finance, automated risk assessment using Artificial Intelligence (AI) models, and the execution of smart contracts on blockchain. A functional prototype was developed to process applications manually submitted by users without prior financial history, utilizing a LightGBM model trained on real, anonymized data. The model was integrated into the system workflow to generate automatic credit conditions and register decisions on the blockchain without direct human intervention. During the validation phase, the model achieved an Area Under the Curve (AUC) of 0.94, supporting its discriminative power within the automated flow. The overall technical validation demonstrates the feasibility of offering personalized, traceable, and secure credit services through open and decentralized technologies. The use of alternative unstructured data, as well as the expansion into production environments, is proposed as a future line of development. In our system, Open Finance provides consented financial data off-chain; the ML model estimates default probability and outputs an eligibility decision; a rule engine maps the score to personalized loan terms; and blockchain smart contracts only record loan terms and execution events on-chain (no personal data). This separation ensures auditability (on-chain) and privacy (off-chain).
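
A minimal sketch of the score-to-terms flow, training a stand-in LightGBM model on synthetic data; the score bands, rates, and amounts in the rule engine are illustrative assumptions:

    import numpy as np
    from lightgbm import LGBMClassifier

    # Stand-in default-probability model on synthetic applicant data;
    # the real system uses anonymized Open Finance data.
    X = np.random.rand(2000, 10)
    y = (X[:, 0] + np.random.rand(2000) > 1.2).astype(int)   # 1 = default
    model = LGBMClassifier(n_estimators=200).fit(X, y)

    def loan_terms(applicant_features):
        """Rule engine mapping the model score to personalized terms."""
        p_default = model.predict_proba([applicant_features])[0, 1]
        if p_default > 0.5:
            return None                                  # not eligible
        if p_default > 0.2:
            return {"amount": 200, "rate": 0.09, "months": 6}
        return {"amount": 500, "rate": 0.05, "months": 12}

    print(loan_terms(np.random.rand(10)))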

Author 1: Pedro Hidalgo
Author 2: Ciro Rodríguez
Author 3: Luis Bravo
Author 4: Cesar Angulo

Keywords: Smart microcredits; artificial intelligence; open finance; blockchain; smart contracts; financial inclusion

PDF

Paper 16: Mathematical Representation of Netflow Analysis Decision Making Based on Production Logic

Abstract: In the sphere of NetFlow traffic analysis, accurate real-time detection of anomalous behavior remains a critical challenge. This study presents a mathematical representation for decision making in NetFlow analysis, using production logic to encode automated expert knowledge. A specialized software system is developed for collecting and processing NetFlow traffic events in real time. NetFlow data are accumulated in PCAP (Packet Capture) format and converted to .csv using a three-step algorithmic sequence: packet reading, key feature extraction, and output formatting. A total of 89 distinct packet features are identified, grouped into 10 categories including flow statistics, payload features, inter-arrival time, and TCP flags. Based on these features, 110 frequently occurring network processes are synthesized and organized into seven sets, such as classic cyber threats, application-level cyber threats, anomalies, and normal netflows. Each event is formally expressed using Boolean production rules in the IF...THEN format, linking subsets of feature vectors (F) to specific network events (A). These production rules form the knowledge base of the expert system, which allows for the efficient identification of cyber threats such as DDoS attacks, port scanning, spoofing, and covert channels. The architecture ensures systematic analysis for the early detection and identification of netflow anomalies, contributing to the robustness and protection of complex information systems. The proposed production-logic decision-making representation enables scalable and explainable synthesis and analysis, opening the way to intelligent structures for identifying cyber threats in cyberspace. The accuracy of the results obtained was verified using a confusion matrix.
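
A toy Python fragment illustrating the IF...THEN production-rule pattern linking a feature vector F to an event A; the feature names and thresholds are invented for illustration, not taken from the paper's 89-feature scheme:

    RULES = [
        ("DDoS attack",    lambda f: f["flows_per_src"] > 1000 and f["syn_ratio"] > 0.9),
        ("Port scanning",  lambda f: f["distinct_dst_ports"] > 100 and f["bytes_per_flow"] < 100),
        ("Covert channel", lambda f: f["payload_entropy"] > 7.5 and f["inter_arrival_var"] < 0.01),
    ]

    def classify_netflow(features):
        """Return the first network event A whose condition over the
        feature vector F holds; otherwise report a normal netflow."""
        for event, condition in RULES:
            if condition(features):
                return event
        return "Normal netflow"

    sample = {"flows_per_src": 4200, "syn_ratio": 0.97, "distinct_dst_ports": 3,
              "bytes_per_flow": 800, "payload_entropy": 4.1, "inter_arrival_var": 0.2}
    print(classify_netflow(sample))   # -> DDoS attack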

Author 1: Alimdzhan Babadzhanov
Author 2: Inomjon Yarashov
Author 3: Maruf Juraev
Author 4: Alisher Otakhonov
Author 5: Adilbay Kudaybergenov
Author 6: Rustam Utemuratov

Keywords: Production rules; packet features; PCAP; network events; netflow; production logic; confusion matrix

PDF

Paper 17: AI-Enabled Demand Forecasting, Technological Capability, and Supply Chain Performance: Empirical Evidence from the Global Logistics Sector

Abstract: This study advances understanding of artificial intelligence (AI) integration within supply chain management, with a particular emphasis on AI-enabled demand forecasting. The research examines 1) the extent of adoption of AI-driven forecasting practices, 2) the role of technological and organizational readiness, captured through data infrastructure, workforce skills, and management support, as antecedents, and 3) the mediating effect of AI adoption on the relationship between readiness and supply chain performance. Grounded in the resource-based view and technology adoption theory, a conceptual model was developed and empirically validated using data from global logistics firms, with structural equation modeling applied as the primary analytical technique. The findings confirm that readiness factors significantly foster AI adoption, which in turn exerts both a direct effect on supply chain performance and a mediating effect linking readiness to performance. By focusing on the global logistics sector and empirically validating this mediating mechanism, the study provides novel insights into how firms can translate technological readiness into superior operational outcomes, offering theoretical contributions to AI assimilation literature and practical guidance for managers.

Author 1: Mohamed Amine Frikha
Author 2: Mariem Mrad

Keywords: AI-enabled demand forecasting; technological capability; data infrastructure; workforce skills; management support; supply chain performance; artificial intelligence

PDF

Paper 18: A Starfish Optimization Algorithm-Based Federated Learning Approach for Financial Risk Prediction in Manufacturing Enterprises

Abstract: During digital transformation, manufacturing enterprises encounter challenges such as the high cost of smart devices, operational interruptions, and increased technology expenses, raising their financial risks. Addressing these challenges necessitates developing an intelligent financial risk prediction system leveraging AI technologies such as big data and deep learning, enabling enterprises to mitigate financial exposure. In addition, some manufacturing enterprises' data cannot be disclosed or shared because they involve trade secrets and shareholder interests. To address these challenges, this study proposes a federated learning (FL)-based framework for predicting financial risk in manufacturing enterprises. Without sharing data, each client (manufacturing enterprise) in the FL framework uses deep learning models to train financial risk prediction models through a central server federation. The proposed FL framework employs a deep learning model based on a neural Turing machine (NTM) with a long short-term memory (LSTM) controller. In addition, to improve the prediction accuracy of the hybrid NTM-FL model, an improved starfish optimization algorithm (ISFOA) is used to optimize the structure of the NTM model. Experimental results showed that the ISFOA-based NTM-FL (ISFOA-NTM-FL) model improved prediction accuracy by 26.32% compared to the other three financial risk prediction models.
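
A minimal sketch of the server-side federation step, using standard FedAvg-style weighted averaging as a generic stand-in; the paper's NTM-LSTM client model and ISFOA tuning are out of scope:

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """Average each weight tensor across clients, weighted by local
        dataset size, without any client sharing its raw data."""
        total = sum(client_sizes)
        return [
            sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))
        ]

    # Three enterprises, each holding two weight tensors locally:
    clients = [[np.ones((4, 4)) * k, np.ones(4) * k] for k in (1.0, 2.0, 3.0)]
    sizes = [100, 300, 600]
    global_weights = federated_average(clients, sizes)
    print(global_weights[1])   # weighted mean of the bias vectors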

Author 1: Bin Liu
Author 2: Liang Chen
Author 3: Haitong Jiang
Author 4: Rui Ma

Keywords: Deep learning; neural Turing machine; prediction; starfish optimization algorithm; federated learning

PDF

Paper 19: User Requirements of Adaptive Learning Through Digital Game-Based Learning: User-Centered Design Approach to Enhance the Language Literacy Development

Abstract: This study aims to elicit the user requirements of digital game-based learning among primary students through adaptive digital game-based learning, with a focus on enhancing language literacy. Hence, acquiring user requirements through a user-centered design approach is emphasized to identify the specifications and provide practical insights for language learning and digital literacy skills. Further, the requirement specifications are specifically aligned to promote the quality of education in designing digital game-based learning for language literacy, considering game elements such as the feedback mechanism, player profile, game rules, game genre, game environment, rewards, adaptive language-learning content, integration of a virtual tutor through artificial intelligence, activities, and challenges. This paper presents a qualitative analysis of the results of a controlled study that investigates the potential of digital game-based learning through adaptive learning for the enhancement of language proficiency. It thus contributes to the broader field of digital game-based learning by expanding the understanding of adaptive learning for language literacy as an optimal strategy through which primary school students can benefit from current developments in artificial intelligence technology.

Author 1: Nur Atiqah Zaini
Author 2: Tengku Siti Meriam Tengku Wook
Author 3: Mohd Nor Akmal Khalid
Author 4: Shahrul Azman Mohd Noah

Keywords: Digital game-based learning; adaptive learning; artificial intelligence; language literacy; optimization; primary students; user experience

PDF

Paper 20: Towards More Effective Automatic Question Generation: A Hybrid Approach for Extracting Informative Sentences

Abstract: Informative Sentence Extraction (ISE) is one of the crucial components in Automatic Question Generation (AQG) and directly influences the quality and relevancy of the generated questions. Instructional texts often contain not only informative but also irrelevant sentences, which results in poor-quality or distorted questions when irrelevant, non-informative sentences are used as input. The basic problem discussed in this paper is therefore how to provide a systematic method for filtering out such sentences and retaining those that are pedagogically valuable. The purpose of ISE is to filter out irrelevant, low-quality information and retain only sentences that are factually dense, express key concepts, and are contextually significant. This paper proposes a hybrid approach for extracting informative sentences that combines lexical, statistical, and semantic criteria to identify informative sentences suitable for generating educational questions. The proposed approach consists of two modules: the first module employs four techniques to evaluate sentence informativeness, namely keyword-based scoring, Named Entity Recognition (NER), information gain (IG), and Sentence-BERT (SBERT); the second module utilizes multiple fusion strategies to integrate the results derived from these techniques. The preprocessed sentences extracted from educational materials were ranked and filtered based on their informativeness coverage. The evaluation results indicate that the hybrid approach improves the extraction of informative sentences over individual methods. Such a contribution is important for enhancing the performance of downstream tasks in AQG systems, such as distractor generation and question formulation.
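
A minimal sketch of the fusion idea over two of the four signals (keyword coverage and SBERT similarity), assuming a small SBERT variant and equal weights; NER and information-gain scores would be fused the same way:

    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed SBERT variant

    def informativeness(sentences, keywords, weights=(0.5, 0.5)):
        """Weighted-sum fusion of keyword coverage and SBERT similarity
        to the document centroid, per sentence."""
        emb = encoder.encode(sentences, convert_to_tensor=True)
        centroid = emb.mean(dim=0, keepdim=True)
        sbert = util.cos_sim(emb, centroid).squeeze(1).tolist()
        kw = [sum(k in s.lower() for k in keywords) / len(keywords)
              for s in sentences]
        return [weights[0] * k + weights[1] * b for k, b in zip(kw, sbert)]

    sents = ["Photosynthesis converts light energy into chemical energy.",
             "See the diagram on the next page."]
    print(informativeness(sents, keywords=["photosynthesis", "energy", "light"]))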

Author 1: Engy Yehia
Author 2: Neama Hassan
Author 3: Sayed AbdelGaber

Keywords: Automatic Question Generation (AQG); informative sentence extraction; NER; SBERT; question answering; information gain; fusion strategies

PDF

Paper 21: An Integrated Evaluation Using Enhanced Panel Factor Model and Machine Learning: Assessing the Level and Structure of Regional Coordinated Development in the Guangdong-Hong Kong-Macao Greater Bay Area

Abstract: Regional sustainable and coordinated development has become a central issue against the backdrop of a reshaped global economic landscape, so it is particularly important to evaluate the level of regional coordinated development effectively. This study aimed to validate and assess the effectiveness of machine learning algorithms and the Enhanced Panel Factor Model for evaluating regional coordinated development. To this end, based on panel data from 11 cities in the Guangdong-Hong Kong-Macao Greater Bay Area for 2005-2023, we constructed a four-dimensional composite indicator system covering economic growth, structural optimization, innovation-driven development, and social development. First, we employ a factor model to achieve dimensionality reduction and extract latent factors. SPSS and the JiekeLi platform are used for visualization, and finally we combine LASSO regression with linear regression to build predictive models that verify the explanatory power of key factors for regional coordination. The findings indicate that the traditional factor model performs robustly in structural identification, whereas machine learning methods have advantages in variable selection and fitting accuracy. The empirical results show that the overall level of coordination in the Greater Bay Area has steadily improved; however, substantial disparities among cities remain. This study demonstrates a new pathway that integrates econometrics and machine learning for the comprehensive evaluation of regional development levels. It also conducts a comparative analysis of the applicability and effectiveness of these two methods, thereby offering significant theoretical and practical value.
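
A minimal sketch of the LASSO-selection-plus-linear-regression step on synthetic stand-in factors (the real inputs would be the latent factors extracted from the four-dimensional indicator system):

    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression

    X = np.random.rand(209, 8)          # 11 cities x 19 years = 209 rows
    y = X[:, :3] @ np.array([0.5, 0.3, 0.2]) + np.random.normal(0, 0.05, 209)

    # LASSO selects the informative factors...
    lasso = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(lasso.coef_)

    # ...and a plain linear regression on the selected factors gives an
    # interpretable fit, mirroring the LASSO-plus-linear-regression step.
    ols = LinearRegression().fit(X[:, selected], y)
    print("selected factors:", selected, "R^2:", ols.score(X[:, selected], y))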

Author 1: Li Shi
Author 2: Ting Nie

Keywords: Comprehensive evaluation; machine learning; regional coordinated development; comparative analysis

PDF

Paper 22: CrypTen-FL: A Secure Federated Learning Framework for Multi-Disease Prediction from MIMIC-IV Using Encrypted EHRs

Abstract: The increasing demand for privacy-preserving machine learning in healthcare has driven the need for federated approaches that ensure data confidentiality across institutions. In this work, we present CrypTen-FL, a secure federated learning framework for disease prediction using the MIMIC-IV electronic health record (EHR) dataset. CrypTen-FL enables collaborative model training across multiple hospitals without sharing raw patient data, thereby addressing critical privacy concerns through the integration of Secure Multi-Party Computation (SMPC) using CrypTen and differential privacy mechanisms. We adopt a Transformer-based neural architecture to effectively capture the temporal and high-dimensional nature of EHR data, enabling accurate prediction of multiple clinically significant conditions. The framework incorporates decentralized key generation, secure aggregation, and cross-institutional evaluation to assess generalization performance and robustness. Experimental results demonstrate that CrypTen-FL achieves competitive predictive performance while offering strong privacy guarantees, paving the way for secure and scalable AI applications in real-world healthcare settings.
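
A toy CrypTen fragment illustrating the secure-aggregation idea: two clients' updates are encrypted, combined as ciphertexts, and only the aggregate is decrypted. Party orchestration, decentralized key generation, and the differential-privacy noise described above are omitted:

    import crypten
    import torch

    crypten.init()

    # Two hospitals' local model updates, encrypted before aggregation.
    update_a = crypten.cryptensor(torch.tensor([0.10, -0.20, 0.05]))
    update_b = crypten.cryptensor(torch.tensor([0.30,  0.10, -0.15]))

    encrypted_mean = (update_a + update_b) / 2   # arithmetic on ciphertexts
    print(encrypted_mean.get_plain_text())       # decrypt only the aggregate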

Author 1: Himanshu
Author 2: Pushpendra Singh

Keywords: Federated learning; secure multi-party computation; electronic health records; disease prediction; MIMIC-IV

PDF

Paper 23: Is Metaverse Technology Ready to Welcome Online Banking Users?

Abstract: The advent of the Web 3.0 era has precipitated the gradual emergence of Metaverse technology as the frontier of next-generation interactive technology transformation in banking. Nevertheless, a considerable disparity exists between the user experience of Metaverse banking in virtual interactive environments and traditional online banks. Consequently, in the nascent stages of Metaverse banking development, examining how existing banking users adopt Metaverse technology and exploring related controversies from an information systems implementation perspective is imperative. The present study employs a systematic literature review methodology, screening 19 relevant articles published between 2020 and 2024 from two leading academic databases, Web of Science (WOS) and Scopus. The review identifies factors hindering current bank users' adoption of Metaverse banking, such as privacy concerns and lack of social norms, while elucidating motivational drivers in Metaverse contexts, including usability and perceived enjoyment. Moreover, from a banking application perspective, this study proposes implementation recommendations for the initial deployment of Metaverse banking, including lowering user adoption barriers and enhancing immersive experiences. The study draws upon contemporary customer needs to analyse key decision-making considerations when using Metaverse banking services. It highlights key adoption barriers and drivers for sustained usage, emphasising potential challenges in the initial development phase, such as privacy leaks and user behaviour monitoring in virtual environments. Furthermore, it identifies unexplored research gaps in the early implementation stage of Metaverse banking. This review synthesises current knowledge on Metaverse technology in banking and outlines practical considerations and strategic directions for its integration within the industry.

Author 1: Yantu Ma
Author 2: Dalbir Singh
Author 3: Siok Yee Tan
Author 4: Meng Chun Lam
Author 5: Ab Ghani Nur Laili
Author 6: Haodong Guan
Author 7: Fengjin Lei
Author 8: Ahmad Sufril Azlan Mohamed

Keywords: Banking; human-computer interaction; information systems; Metaverse; virtual reality

PDF

Paper 24: Optimizing Image Retrieval: A Two-Step Content-Based Image Retrieval System Using Bag of Visual Words and Color Coherence Vectors

Abstract: Content-Based Image Retrieval (CBIR) systems play a crucial role in efficiently managing and retrieving images from large datasets based on visual content. This paper presents a novel bi-layer CBIR system that integrates Bag of Visual Words (BoVW) and Color Coherence Vector (CCV) methods to enhance image retrieval accuracy and performance by leveraging the strengths of both feature extraction techniques. In the first layer, the BoVW approach extracts local features and represents images as histograms of visual word occurrences, facilitating efficient initial filtering. In the second layer, CCV features are extracted from the top retrieved images to capture the spatial coherence of color regions, providing a detailed color signature. By combining the merits of both layers, the proposed system achieves higher retrieval precision and recall compared to traditional single-layer approaches. Experimental results demonstrate the effectiveness of the bi-layer CBIR system in retrieving relevant images with improved accuracy, making it a valuable tool for applications in image databases, digital libraries, and multimedia content management.
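
A minimal sketch of the second-layer CCV feature, assuming a simple uniform color quantization and an illustrative region-size threshold; the BoVW first layer is omitted:

    import numpy as np
    from scipy import ndimage

    def color_coherence_vector(image, n_levels=4, tau=25):
        """Quantize each RGB channel to n_levels, then count per color the
        pixels in large connected regions (coherent) versus small ones
        (incoherent). tau is an assumed region-size threshold."""
        quant = (image // (256 // n_levels)).astype(np.int32)
        # Collapse the three quantized channels into one color index:
        index = quant[..., 0] * n_levels**2 + quant[..., 1] * n_levels + quant[..., 2]
        ccv = {}
        for color in np.unique(index):
            labels, n = ndimage.label(index == color)
            sizes = np.bincount(labels.ravel())[1:]    # skip background
            coherent = int(sizes[sizes >= tau].sum())
            ccv[int(color)] = (coherent, int(sizes.sum()) - coherent)
        return ccv   # color -> (coherent pixels, incoherent pixels)

    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    print(list(color_coherence_vector(img).items())[:3])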

Author 1: Muhammad Sauood
Author 2: Muhammad Suzuri Hitam
Author 3: Wan Nural Jawahir Hj Wan Yussof

Keywords: CBIR system; Bag of Visual Words; color coherence vectors; bi-layer CBIR; two-step CBIR feature fusion; feature extraction

PDF

Paper 25: An Aggregated Dataset of Agile User Stories and Use Case Taxonomy for AI-Driven Research

Abstract: Agile methodologies are considered revolutionary approaches in the development of systems and software. With the rapid advancement of artificial intelligence, natural language processing, and large language models, there is an increasing demand for high-quality datasets to support the design and development of intelligent, practical, and effective automation tools. However, researchers in Agile Requirements Engineering face significant challenges due to the limited availability of datasets, particularly those involving user stories. This paper presents a dataset of over 10K user stories collected from academic sources and publicly accessible online repositories. These stories represent requirements formulated in accordance with Agile principles. The process of collecting and classifying data, as well as its use in a prior research project focused on identifying non-functional requirements, is described. The dataset was validated with substantial inter-annotator agreement and has been successfully employed in prior experiments, where a fine-tuned pre-trained language model achieved F1 scores above 93% in classifying non-functional requirements. Additionally, a structured taxonomy of potential research and practical use cases for this dataset is proposed, aiming to support researchers and practitioners in areas such as requirements analysis, automated generative tasks using generative language models, model development, and educational purposes.

Author 1: Abdulrahim Alhaizaey
Author 2: Majed Al-Mashari

Keywords: Agile software development; requirements engineering; user stories; natural language processing; datasets; large language models; generative language models

PDF

Paper 26: Comparative Evaluation of Centrality Measures for Detecting Significant Nodes in Social Networks

Abstract: Social networks have become a crucial platform for bringing people together globally. Detecting significant nodes within a social network remains an open problem because of the broad variety of network sizes, and different centrality measures have been introduced to solve it. Detecting significant nodes is essential for speeding up or slowing down the spread of information, managing diseases and rumors, and more. This paper presents a comparative evaluation of 12 centrality measures to determine the most effective measure on the basis of accuracy, differentiation capability, and runtime. To validate performance, a series of experiments is conducted on four social networks using validation metrics such as monotonicity, the SIR model, and Kendall tau. The experimental outcomes indicate that the gravity-based measures have superior accuracy and differentiation capability compared to other measures. Finally, this paper outlines future research directions for enhancements based on centrality measures.
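
As a flavor of such a comparison, a minimal sketch using networkx and Kendall's tau on a stand-in graph; the paper instead uses twelve measures (including gravity-based ones) and validates rankings against SIR-simulated spreading:

    import networkx as nx
    from scipy.stats import kendalltau

    G = nx.karate_club_graph()   # stand-in for the four social networks

    measures = {
        "degree":      nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness":   nx.closeness_centrality(G),
        "pagerank":    nx.pagerank(G),
    }

    # Rank correlation between each measure and a reference ranking:
    nodes = list(G)
    ref = [measures["degree"][n] for n in nodes]
    for name, scores in measures.items():
        tau, _ = kendalltau(ref, [scores[n] for n in nodes])
        print(f"degree vs {name}: tau = {tau:.3f}")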

Author 1: Hardeep Singh
Author 2: Supreet Kaur
Author 3: Karman Singh Sethi
Author 4: Jai Sharma

Keywords: Social networks; centrality measures; significant nodes; validation metrics; extended gravity

PDF

Paper 27: Neural Networks for Pest Diagnosis in Agriculture: A Global Literature Review

Abstract: Agricultural pests severely reduce global crop yields. To mitigate these losses, pest identification systems based on artificial intelligence have gained importance. This review analyzes worldwide advances in the use of neural networks for agricultural pest diagnosis, covering studies from 2007 to February 2024 retrieved from the Scopus database. Data were processed in Minitab 19 and spreadsheets, and keywords were mapped with VOSviewer. Results show that India and China lead scientific output, with research focused on corn, tomato, rice, and wheat. The most common architectures are ResNet, YOLO, and VGG-16/19, achieving performance metrics of up to 99%. The review highlights the strong relationship between economic development and the adoption of neural networks. These findings provide researchers, agricultural engineers, and policymakers with a global perspective to guide future AI-based pest management strategies and support automation, especially in developing countries.

Author 1: Heling Kristtel Masgo Ventura
Author 2: Italo Maldonado Ramírez
Author 3: Roberto Carlos Santa Cruz Acosta
Author 4: Wilfredo Ruiz Camacho
Author 5: Juan Eduardo Suarez Rivadeneira
Author 6: José Celso Paredes Carranza
Author 7: Mayra Pamela Musayón Díaz
Author 8: Cesar R. Balcazar Zumaeta
Author 9: Carlos Luis Lobatón Arenas
Author 10: Juan Alberto Rojas Castillo
Author 11: Eli Morales-Rojas

Keywords: Neural networks; pests; agriculture; developing countries

PDF

Paper 28: Evaluating User Experience in a Public Sector Digital System Through Nielsen’s Heuristic Approach

Abstract: The digital transformation in public administration has encouraged the Indonesian National Police (Polri) to adopt a digital government application for managing official documents electronically. Despite its functional benefits, users have reported several usability issues such as non-intuitive navigation, inconsistent interface design, inadequate system feedback, and insufficient documentation. To systematically address these problems, this study combines Jakob Nielsen’s Heuristic Evaluation (HE) with Partial Least Squares Structural Equation Modeling (PLS-SEM), offering a hybrid methodological approach that is rarely applied in public sector UX evaluation. Data were collected through a structured questionnaire distributed to 156 active users of the application. The instrument measured ten heuristic principles and user experience dimensions using a four-point Likert scale. The results reveal that all heuristic dimensions scored within the “good” range with mean values between 3.19 and 3.38, classified as cosmetic issues under Nielsen’s severity scale. More importantly, the analysis shows that only three heuristics (Match Between System and the Real World; Help Users Recognize, Diagnose, and Recover from Errors; and Help and Documentation) have a significant positive impact on user experience perceptions. Together, these heuristics explain 90.9% of the variance in user experience, highlighting their critical role in shaping user-centered digital government systems. This study advances existing evaluation models by demonstrating the effectiveness of integrating heuristic evaluation with quantitative SEM-based analysis, bridging diagnostic insights with statistical rigor. The findings provide a prioritized roadmap for improving the application’s interface and emphasize the importance of user-centered design for enhancing the adoption and effectiveness of public sector digital systems.

Author 1: Nagyan Yosse Wibisono
Author 2: Viany Utami Tjhin

Keywords: Usability evaluation; heuristic evaluation; user experience; digital government; PLS-SEM; public sector technology; UX mining; usability modeling

PDF

Paper 29: NEBULA Framework: An Adaptive Framework for Unstructured Description to Solve Cold Start Problem

Abstract: The cold start problem is one of the main challenges in recommendation systems, especially when the system has to provide recommendations for new items that do not yet have a history of interaction. Although various approaches have been developed, most still use conventional interaction-based methods, which are not optimal in providing accurate recommendations for new items that only have minimal and unstructured descriptive information. This research aims to provide recommendations for new items that lack interaction history and have unstructured descriptive information by addressing the cold start problem more adaptively. The proposed model is based on Named Entity Recognition (NER) and metadata representation as an adaptive framework capable of adjusting recommendation methods based on the availability of initial information. For new items, the system utilizes basic attributes such as product type, materials, and origin, and employs an adaptive approach for rating prediction. Testing results demonstrate system performance with an Accuracy of 0.967, Precision of 0.838, Recall of 0.846, F1-score of 0.842, and an average Mean Absolute Error (MAE) of 0.159. This adaptive framework proved to be superior to conventional approaches, with improvements in Precision of 15.59%, Recall of 17.50%, F1-score of 16.54%, and a significant reduction in MAE. Additionally, the Kappa value of 0.69 indicates a high level of agreement (substantial agreement) among validators. These findings demonstrate that the system is not only more accurate in recommending new items but also more reliable under minimal data conditions, thereby enhancing user confidence. Overall, this NER and metadata-based framework can serve as an effective solution for addressing the cold start problem and improving recommendation quality during the initial stages.
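
A minimal sketch of the NER-to-metadata step for a new item, assuming a general-purpose spaCy model and an illustrative entity-to-attribute mapping (the adaptive rating prediction is out of scope):

    import spacy

    nlp = spacy.load("en_core_web_sm")   # assumed general-purpose NER model

    def extract_metadata(description):
        """Turn an unstructured new-item description into basic
        attributes the framework can match against user preferences."""
        doc = nlp(description)
        meta = {"origin": [], "organizations": [], "other": []}
        for ent in doc.ents:
            if ent.label_ == "GPE":
                meta["origin"].append(ent.text)
            elif ent.label_ == "ORG":
                meta["organizations"].append(ent.text)
            else:
                meta["other"].append((ent.label_, ent.text))
        return meta

    print(extract_metadata("Handwoven cotton scarf made by artisans in Bali."))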

Author 1: I Gusti Agung Gede Arya Kadyanan
Author 2: Ni Made Ary Esta Dewi Wirastuti
Author 3: Gede Sukadarmika
Author 4: Ngurah Agus Sanjaya ER

Keywords: Cold start; adaptive framework; recommender system; NER; unstructured description

PDF

Paper 30: Analysis of Factors Affecting Continuance Intention in Indonesian Digital Banks

Abstract: Indonesian digital banks are currently competing to attract customers through their mobile applications, digital finance ecosystems, and promotional methods. This research aims to identify the factors that influence customer satisfaction when using digital bank applications. The variables used in this study are System Quality, Service Quality, Information Quality, Perceived Advantage, Effort Expectancy, Digital Ecosystem, User Satisfaction, and Continuance Intention in the mobile application. The research model and questionnaire adapt the DeLone and McLean information systems success model, and the questionnaire results were analyzed using descriptive statistics and Structural Equation Modeling (SEM) with SmartPLS v4. The analysis found 11 significant direct effects on User Satisfaction or Continuance Intention and 6 direct effects that were not significant. There were also 8 significant indirect effects, mainly where User Satisfaction mediated the impact of other variables on Continuance Intention. These results are expected to provide a useful reference for Indonesian digital banks seeking to improve their mobile app services and maintain customer loyalty.

Author 1: Soros Lie
Author 2: Viany Utami Tjhin

Keywords: Digital banking; DeLone and McLean; structural equation model; continuance intention; user satisfaction

PDF

Paper 31: Generating a Trading Strategy Using Candlestick Patterns with Machine Learning

Abstract: This study examines the application of machine learning (ML) algorithms for multi-day stock price prediction on the Nigerian Stock Exchange (NSE) from 2013 to 2023, to inform trading strategies. Utilizing candlestick patterns and technical indicators, including Simple Moving Average (SMA), Exponential Moving Average (EMA), and Volume Rate of Change (VROC), as input features, the models were trained to capture historical price dynamics. Among the evaluated algorithms, Ridge Regression demonstrated superior performance, achieving a Mean Absolute Error (MAE) of 0.0366 over a three-day forecasting horizon, while effectively mitigating overfitting and handling market volatility. In contrast, Decision Tree, Lasso, Support Vector Regressor (SVR), and K-Nearest Neighbors (KNN) models exhibited limitations due to sensitivity to data noise and overfitting. A recursive multi-step forecasting approach further enhanced prediction accuracy by incorporating temporal dependencies. However, backtesting revealed that predictive accuracy alone did not guarantee profitable trading outcomes, emphasizing the need to integrate market conditions, risk management, and strategy design. The findings underscore the importance of robust feature engineering and data preprocessing in financial ML applications. While Ridge Regression shows promise for stock price forecasting, successful trading strategies require a holistic framework that accounts for broader market factors. Future research should explore hybrid modeling techniques and additional exogenous variables to improve robustness.
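
A minimal sketch of the recursive multi-step idea, assuming lagged closing prices as features; in the paper's setting, indicators such as SMA, EMA, and VROC would be appended to each feature vector, and the synthetic series below is purely illustrative.

```python
# Recursive multi-step forecasting with Ridge regression over lagged prices.
import numpy as np
from sklearn.linear_model import Ridge

def make_lagged(prices, n_lags=5):
    X = np.array([prices[i:i + n_lags] for i in range(len(prices) - n_lags)])
    y = prices[n_lags:]
    return X, y

prices = np.cumsum(np.random.randn(300)) + 100  # synthetic series for illustration
X, y = make_lagged(prices)
model = Ridge(alpha=1.0).fit(X, y)

# Recursive 3-day horizon: feed each prediction back in as the newest lag.
window = list(prices[-5:])
forecast = []
for _ in range(3):
    next_val = model.predict([window[-5:]])[0]
    forecast.append(next_val)
    window.append(next_val)
print(forecast)
```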

Author 1: Hussaina Bala Malami
Author 2: Badamasi Imam Ya’u
Author 3: Fatima Umar Zambuk
Author 4: Mohannad Alkanan
Author 5: Osman Elwasila
Author 6: Mohammad Shuaib Mir
Author 7: Mohammed Nasir Danmalam Bawa
Author 8: Yonis Gulzar

Keywords: Nigerian stock trading; machine learning; pattern recognition; candlestick

PDF

Paper 32: Advancing Speech Enhancement with Generative Adversarial Network-Autoencoder: A Robust Adversarial Autoencoder Approach

Abstract: In everyday use, speech signals are often degraded by background noise, making them unsuitable for direct use in audio-operated applications. The use of these noisy voice signals can degrade the performance of speech communication systems, and a large number of widely used applications take voice as input. Our study focuses on speech enhancement through a combination of Generative Adversarial Networks (GAN) and Autoencoders (AE). The required features are extracted using the MFCC algorithm from the MUSAN dataset, which contains noisy speech; the extracted features form paired samples of clean and noisy speech. The main architecture combines a GAN and an AE: the generator is trained to reconstruct clean speech features from noisy inputs, while the discriminator is trained to distinguish real clean samples from samples produced by the generator. The adversarial training approach continuously improves the generator's ability to produce high-quality, more intelligible speech. As a result, we note that the model performs very well and shows robustness across multiple types of noise samples. The AE is used for feature reconstruction and the GAN for adversarial sample generation, and this combination proved to be a good solution for speech enhancement in noisy and distorted environments.
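
A conceptual PyTorch sketch of the generator/discriminator pairing described above; the layer sizes, the 40-dimensional MFCC vectors, and the stand-in batch are assumptions rather than the paper's configuration.

```python
# GAN-autoencoder pairing on MFCC frames: an autoencoder-style generator maps
# noisy features to clean ones; a discriminator separates real from generated.
import torch
import torch.nn as nn

N_MFCC = 40  # assumed MFCC feature size

generator = nn.Sequential(          # encoder-decoder (autoencoder) generator
    nn.Linear(N_MFCC, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),  # bottleneck
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, N_MFCC),
)
discriminator = nn.Sequential(
    nn.Linear(N_MFCC, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
noisy, clean = torch.randn(8, N_MFCC), torch.randn(8, N_MFCC)  # stand-in batch

fake = generator(noisy)
d_loss = bce(discriminator(clean), torch.ones(8, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(8, 1))
g_loss = bce(discriminator(fake), torch.ones(8, 1))  # adversarial objective
```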

Author 1: Mandar Diwakar
Author 2: Brijedra Gupta

Keywords: Speech enhancement; Generative Adversarial Network (GAN); Autoencoder (AE); MFCC; noise robustness; adversarial training

PDF

Paper 33: Enhancing Smart City Safety: Deep Learning Approaches for Automatic Vehicle Accident Recognition

Abstract: Traffic accidents have significant societal impacts due to the substantial human and material losses they cause. Recently, numerous AI-based traffic surveillance technologies, such as Saher, have been implemented to improve traffic safety in Saudi Arabia. The prompt detection of vehicle accidents is crucial for enhancing the response time of accident management systems, thereby reducing the number of injuries resulting from collisions. This study evaluates various deep learning algorithms to determine the most effective method for detecting and classifying car accidents. Multiple deep-learning models were trained and tested using an extensive dataset of car accident images, allowing for the accurate identification and classification of different types of accidents. Among the six pre-trained models analyzed, ResNet-101 achieved the highest accuracy, with a classification rate of 93%. For accident detection, YOLOv5 attained a mean Average Precision (mAP) of 97.8%, indicating superior performance compared to YOLOv8 and YOLOv9, and highlighting its capability to effectively detect accidents in video footage. The research’s primary goal is to enhance urban safety by enabling rapid accident detection, which supports timely emergency responses, minimizes fatalities, and contributes to the development of safer and more resilient smart cities.

Author 1: Ahad AlNemari
Author 2: Shahad AlOtaibi
Author 3: Majd Jada
Author 4: Aeshah AlHarthi
Author 5: Sara AlThuwaybi
Author 6: Wojoud AlNemari
Author 7: Nadan Marran
Author 8: Abdulmajeed Alsufyani

Keywords: Accident detection; deep learning algorithms; ResNet-101; traffic safety; YOLOv5; YOLOv9

PDF

Paper 34: Prioritizing Non-Functional Requirements and Influencing Factors for API Quality Framework: An Industry Approach

Abstract: Application Programming Interface (API) management is currently a trending research area; however, APIs require careful attention to Non-Functional Requirements (NFRs) to ensure system performance, maintainability, security, and resiliency. The software industry struggles to maintain API quality, especially NFRs, due to a focus on functional aspects in standards like the OpenAPI Specification (OAS). Similarly, standards such as ISO/IEC 25010:2023 evaluate the quality of general software but offer limited guidance on addressing API challenges. Based on the industry perspective, this paper prioritizes the most critical quality attributes and their influencing factors for APIs, supporting the development of a Non-Functional Requirement Quality Framework for APIs (NFRQF-API). We adopted ISO/IEC 25010 as our reference standard and surveyed industry experts. Eleven NFRs were included in the survey: nine from ISO/IEC 25010 and two additional attributes, Observability and Resiliency, identified through the literature review. A structured survey tool was validated, pilot-tested, and distributed to 38 API practitioners, with data analyzed through the IBM Statistical Package for the Social Sciences (IBM SPSS). The analysis demonstrates strong internal consistency (α > 0.7) across items within each group. Additionally, Maintainability (mean 4.29) and Resiliency (4.20) were identified as core NFRs, while Interaction Capability (3.18), Flexibility (3.18), and Safety (2.93) scored lower. The remaining six NFRs are moderately significant, highlighting their ongoing importance. These findings, based on NFR classification, establish a solid foundation for developing a quality framework for APIs aligned with modern software engineering requirements. The article helps researchers and practitioners build a strong understanding of NFR prioritization, a crucial step for API quality management.
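
The reported α > 0.7 reliability check corresponds to Cronbach's alpha; a minimal NumPy version is sketched below on synthetic Likert responses (the data and item count are hypothetical stand-ins for the SPSS analysis).

```python
# Cronbach's alpha: internal-consistency reliability of a group of survey items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of survey scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(38, 5))  # 38 practitioners, 5 Likert items
print(round(cronbach_alpha(responses), 3))    # values > 0.7 indicate consistency
```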

Author 1: Aumir Shabbir
Author 2: Aziz Deraman
Author 3: Mohamad Nor Bin Hassan
Author 4: Kamal Uddin Sarker
Author 5: Shahid Kamal

Keywords: Non-Functional Requirements (NFRs); Application Programming Interface (API); software development practices; API quality; Non-Functional Requirement Quality Framework for APIs (NFRQF-API); ISO/IEC 25010

PDF

Paper 35: Perceived Usefulness and Perceived Ease of Use as Predictors of Attitude Toward IoT Adoption Among Rice Farmers

Abstract: This study investigates key drivers influencing rice farmers’ attitudes toward Internet of Things (IoT) adoption in Indonesia, using the Technology Acceptance Model (TAM) as an analytical lens. Specifically, it evaluates the predictive roles of Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), both of which are posited to shape users’ Attitude Toward Usage (ATT). Survey data were obtained from 62 smallholder farmers in Bandung Regency and examined through Partial Least Squares Structural Equation Modeling (PLS-SEM). The study confirms that both PU and PEOU significantly contribute to the formation of favorable farmer attitudes, with the model showing high explanatory strength (R² = 0.723). PU captures the perceived benefit of IoT for improving productivity and efficiency, while PEOU reflects user-friendly design and ease of integration into existing agricultural routines. These findings extend TAM’s validity in rural, low-tech farming contexts and offer actionable insights for technology developers, government agencies, and agricultural organizations seeking to foster digital transformation. By confirming the relevance of usability and perceived value, this study supports targeted design and communication strategies that align with farmers’ expectations. It also lays the groundwork for broader ASEAN agricultural resilience efforts by emphasizing inclusive technology pathways. Future research may incorporate sociocultural dimensions and systemic barriers to expand the model's applicability in diverse farming environments.

Author 1: Hermin Arrang
Author 2: Sek Yong Wee
Author 3: Nazrulazar Bin Bahaman
Author 4: Jack Febrian Rusdi

Keywords: IoT; attitude towards IoT; Perceived Usefulness; Perceived Ease of Use; technology adoption

PDF

Paper 36: An Analytical Review of Environmental and Machine Learning Approaches in Dengue Prediction

Abstract: In recent years, dengue has gained prominence as a priority public health challenge due to its increasing incidence and spread. The main objective of this systematic literature review (SLR) is to explore the use of environmental factors and machine learning (ML) techniques to combat dengue, based on studies published between 2020 and 2024. For this purpose, 56 studies were selected from a balanced distribution of PubMed, Web of Science, Scopus, and Springer Link, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. The results show that climatological variables, such as temperature variation, humidity, and rainfall volume, are conditioning factors in the spread of the dengue virus. As for ML models, Random Forest and Support Vector Machines proved to be more accurate than traditional methods in detecting risk areas. The highest scientific production corresponded to the year 2024, with 25% of the studies, while India, with 14.29%, and the United States, with 12.50%, stood out as the countries with the highest contribution. In conclusion, ML techniques have enormous potential for strengthening early detection systems and optimizing resources in high-risk areas, but further research is needed in this field due to limited data availability and model replicability.

Author 1: Orlando Iparraguirre-Villanueva
Author 2: Juan Chavez-Perez
Author 3: Eddier Flores-Idrugo
Author 4: Luis Chauca-Huete

Keywords: Public health analytics; machine learning models; disease prediction; environmental risk factors; dengue surveillance; health data analysis

PDF

Paper 37: Assessing the Effectiveness of MCR-KSM for Waiting Waste Reduction: An Empirical Study

Abstract: Modern Code Review (MCR) is a well-known and widely adopted quality assurance activity for developing quality software. Although it is a core activity for improving code quality, it generates various types of waste, including waiting waste, defect waste, and composite solution waste. Among these, waiting waste is the most critical, leading to mental distress, delayed code merges, and project delays. Researchers have sought to reduce the production of waiting waste by providing various automated code review tools, techniques, and models; one of these is the MCR Knowledge Sharing Model (MCR-KSM), which claims to support sustainable software engineering by minimizing waiting waste during MCR activities. This study aims to evaluate the effectiveness of MCR-KSM with respect to the reduction of waiting waste produced during MCR activities. An experimental methodology is employed for this purpose, and this paper presents the experimental investigation approach along with the results. The experiment was conducted in dual sessions with 28 graduate students having similar educational and industrial experience. Tools and techniques such as the SPSS paired t-test and value stream mapping were used for experimental data management and analysis. The results revealed that the model significantly helps reduce the production of waiting waste. The study has implications for investigators seeking to extend the research with different parameters and settings.
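
A minimal sketch of the paired-samples t-test used to compare waiting waste before and after applying MCR-KSM; the waiting-time values are hypothetical, and scipy.stats.ttest_rel stands in for the SPSS procedure.

```python
# Paired t-test: same participants measured before and after the intervention.
from scipy import stats

waiting_before = [12.5, 9.0, 15.2, 11.1, 13.4, 10.8]  # hours, per participant
waiting_after  = [ 7.1, 6.4,  9.8,  8.0,  7.5,  6.9]

t_stat, p_value = stats.ttest_rel(waiting_before, waiting_after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => significant reduction
```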

Author 1: Nargis Fatima
Author 2: Sumaira Nazir
Author 3: Suriayati Chuprat

Keywords: Modern code review; wastes; waiting waste; software quality; automated code review; sustainable software engineering

PDF

Paper 38: A Review of Attention-Enhanced GRU Models with STL Decomposition for Food Loss Forecasting

Abstract: Forecasting food loss with high accuracy is crucial for improving global food security, optimising supply chains, and supporting sustainability goals. However, conventional time series models and standard deep learning techniques, including recurrent neural networks (RNNs), often fall short in handling the irregularity, seasonality, and complexity inherent in food loss data. While Gated Recurrent Units (GRUs) offer advantages over traditional RNNs, such as mitigating vanishing gradients, they still face limitations in modelling long-range dependencies and noisy sequences. This paper reviews recent advancements aimed at overcoming these challenges by enhancing GRU-based models with attention mechanisms and seasonal-trend decomposition using Loess (STL). Evidence from related domains shows that attention mechanisms improve the capture of long-term dependencies and interpretability, while STL decomposition strengthens stability and accuracy by isolating seasonal and trend components. Hybrid GRU models that combine both approaches consistently outperform standalone methods, highlighting their promise for robust and interpretable forecasting. Though underexplored in the context of food loss, this paper identifies the research gap and advocates for domain-specific GRU–attention–STL architectures, offering a foundation for future empirical work to enable timely interventions and foster resilient, data-driven food systems.
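
A minimal sketch of the STL step on a synthetic monthly series using statsmodels; in a GRU-attention forecaster, the decomposed components would form the model's input channels. The series, period, and stacking choice are assumptions.

```python
# STL decomposition: split a series into trend, seasonal, and residual parts.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

idx = pd.date_range("2015-01-01", periods=96, freq="MS")
series = pd.Series(
    50 + 0.2 * np.arange(96)                          # upward trend
    + 10 * np.sin(2 * np.pi * np.arange(96) / 12)     # yearly seasonality
    + np.random.randn(96),                            # noise
    index=idx)

result = STL(series, period=12).fit()
# Stack components as channels for a downstream GRU: shape (time, 3).
gru_input = np.stack([result.trend, result.seasonal, result.resid], axis=1)
print(gru_input.shape)
```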

Author 1: Ru Poh Tan
Author 2: Siew Mooi Lim
Author 3: Kuan Yew Leong
Author 4: Shee Chia Lee
Author 5: Siaw Hong Liew
Author 6: Jun Kit Chaw

Keywords: GRU; food loss forecasting; attention mechanism; seasonal decomposition; STL; loess; time series; deep learning

PDF

Paper 39: An Incremental LSTM Ensemble for Online Intrusion Detection in Software-Defined Networks

Abstract: Software-Defined Networking (SDN) promises flexible control of network flows but also exposes controllers to rapidly shifting attack surfaces. Conventional intrusion-detection engines, trained once and deployed statically, falter when traffic patterns drift. We introduce an adaptive intrusion detection system that couples a mini-batch incremental learning scheme with a five-member ensemble of Long Short-Term Memory (LSTM) classifiers. Each model trains on successive data partitions drawn from the InSDN dataset, while a lightweight tracker monitors accuracy and “age.” A weighted-voting rule penalizing stale models in proportion to their lifetime lets the ensemble down-rank obsolete learners without full retraining. When the tracker flags slippage, only the most dated models are refreshed, limiting computational load and preserving service continuity. Across four streaming iterations, the system sustains a mean detection accuracy of 95.8% and a 3.2% false-positive rate, recovering quickly from concept drift that drives individual models to baseline performance. Comparative analysis against three recent SDN IDS baselines shows improvements of up to 14 percentage points in accuracy and 0.48 in F-score, without sacrificing latency (≈50 ms). These results indicate that modest, well-timed retraining rather than continual online updates can keep an SDN IDS both nimble and efficient. The approach offers a practical roadmap for securing programmable networks that evolve by the hour.
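
A small sketch of an age-penalized weighted-voting rule in the spirit described above; the exponential decay form and the numbers are illustrative, not the paper's exact scheme.

```python
# Weighted voting across five LSTM members: each vote is weighted by tracked
# accuracy, discounted by "age" (batches since the model was last refreshed).
import numpy as np

def ensemble_vote(probs, accuracies, ages, decay=0.1):
    """probs: (5, n_classes) per-model class probabilities."""
    weights = np.asarray(accuracies) * np.exp(-decay * np.asarray(ages))
    weights /= weights.sum()
    return weights @ np.asarray(probs)   # weighted average of class scores

probs = [[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.85, 0.15], [0.7, 0.3]]
accuracies = [0.96, 0.95, 0.80, 0.94, 0.90]
ages = [0, 1, 4, 1, 2]                   # stale models are down-ranked
print(ensemble_vote(probs, accuracies, ages))
```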

Author 1: Raed Basfar
Author 2: Mohamed Y. Dahab
Author 3: Abdullah Marish Ali
Author 4: Fathy Eassa
Author 5: Kholoud Bajunaied

Keywords: Software-defined networking; intrusion detection; incremental learning; LSTM ensemble; concept drift; weighted voting

PDF

Paper 40: Comprehensive Analysis of YOLOv8 + DeepSORT for Vehicle Tracking: HOTA and CLEAR-Based Evaluation

Abstract: This paper offers a thorough comparative investigation of the performance of a vehicle multi-object tracking system, incorporating various versions of the YOLOv8 detector (from ‘n’ to ‘x’) alongside the DeepSORT tracking algorithm. This study systematically assesses the impact of the trade-off between detector speed and accuracy on tracking metrics, utilising a real-world traffic video dataset from Bali. The assessment is performed utilising two fundamentally distinct metric frameworks: the traditional CLEAR metrics (which include MOTA) and the contemporary Higher Order Tracking Accuracy (HOTA) metric. The findings indicate that although the larger YOLOv8 models markedly enhance detection recall, particularly for smaller and more difficult objects such as motorcycles, tracking issues persist. The dual-metric study provides significant insights: the HOTA measure demonstrates that car tracking has greater associative stability (higher AssA scores) than motorcycle tracking, which frequently experiences track fragmentation. In contrast, the detection-biased MOTA metric produces somewhat paradoxical outcomes, as motorcycles receive elevated scores due to enhanced detection accuracy (fewer false positives), thereby obscuring deficiencies in tracking consistency. This study concludes that HOTA offers a more comprehensive evaluation by differentiating between detection and association performance, demonstrating that detection-biased metrics like MOTA can yield an imperfect representation of actual tracking ability. These findings underscore the necessity of matching detector architecture and evaluation criteria to specific application requirements, particularly in safety-critical systems where identity consistency is essential.
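
For reference, the two scores contrasted above follow standard definitions (not notation specific to this paper): MOTA penalizes false negatives, false positives, and identity switches against the ground-truth count, while HOTA at a localization threshold α is the geometric mean of detection accuracy (DetA) and association accuracy (AssA).

```latex
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{HOTA}_{\alpha} = \sqrt{\mathrm{DetA}_{\alpha} \cdot \mathrm{AssA}_{\alpha}}
```

This structure explains the paradox reported above: a class can score well on MOTA purely through clean detections while its AssA, and hence HOTA, remains low due to fragmented identities.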

Author 1: I Nyoman Eddy Indrayana
Author 2: Made Sudarma
Author 3: I Ketut Gede Darma Putra
Author 4: Anak Agung Kompiang Oka Sudana

Keywords: Multi-object tracking; higher order tracking accuracy metric; CLEAR Metric; YOLOv8

PDF

Paper 41: Sentence-Level Indonesian Sign Language (BISINDO) Recognition Using 3D CNN-LSTM and 3D CNN-BiLSTM Models

Abstract: Sign Language Recognition (SLR) has been an active area of research, but sentence-level SLR remains relatively underexplored. While most studies focus on recognizing individual signs, understanding full sentences presents greater challenges. This research proposes a sentence-level SLR approach that combines 3D Convolutional Neural Networks (3D CNN) for spatio-temporal feature extraction with sequential modeling using Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM) networks. Connectionist Temporal Classification (CTC) is also used to enable training without word-level annotations. In this study, we used the Indonesian Sign Language (BISINDO) dataset, specifically the DKI Jakarta version, consisting of 900 videos representing 30 sentences, which was expanded to 3,600 videos through data augmentation techniques such as speed variation and brightness adjustment. All videos underwent preprocessing to ensure data quality, and Bayesian Optimization was applied for hyperparameter tuning to obtain optimal configurations for each model. Both models were trained with CTC loss and evaluated using Word Error Rate (WER). The 3D CNN-LSTM model achieved a WER of 59.21%, while the 3D CNN-BiLSTM performed significantly better with a WER of 2.77%. Despite these promising results, the models’ ability to generalize across different signers may require further research, as the dataset used in this research involved only a single signer.
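
A minimal PyTorch sketch of CTC-based training, which is what allows sentence labels without word-level alignment; tensor shapes and the vocabulary size are illustrative.

```python
# CTC loss over unaligned sign-sequence labels (PyTorch).
import torch
import torch.nn as nn

T, N, C = 50, 4, 30 + 1          # time steps, batch, 30 sign classes + blank
# Stand-in for BiLSTM outputs; log-probabilities over classes per time step.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)

targets = torch.randint(1, C, (N, 8))             # unaligned sentence labels
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 8, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients flow without any frame-level word alignment
```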

Author 1: Katriel Larissa Wiguna
Author 2: Rojali

Keywords: Sign Language Recognition; BISINDO (Indonesian Sign Language); 3D Convolutional Neural Network (3D CNN); Long Short-Term Memory (LSTM) Network; Bidirectional Long Short-Term Memory (BiLSTM); Connectionist Temporal Classification (CTC)

PDF

Paper 42: Formal Verification of a Blockchain-Based Security Model for Personal Data Sharing Using the Dolev-Yao Model and ProVerif

Abstract: Secure personal data sharing remains a critical challenge in decentralized systems due to concerns over privacy, compliance, and trust. This paper presents the formal verification of a Blockchain-Based Security Model (BSM) designed to address these challenges through a multi-layered architecture. The proposed model integrates Chaincode-as-a-Service (CCaaS) on Hyperledger Fabric to ensure modular, maintainable, and scalable execution of smart contracts. A Flask-based API serves as the secure gateway for data operations and identity management. Sensitive data is stored off-chain using InterPlanetary File System (IPFS), preserving decentralization while minimizing on-chain bloat. Access control is enforced using efficient cryptographic techniques, while Intel SGX (or simulated enclaves) safeguards secure data processing and decryption within trusted execution environments. To further enhance privacy guarantees, Zero-Knowledge Proofs (ZKPs) are optionally integrated to enable verifiable claims without disclosing raw data. For assurance of correctness and security, the BSM is formally modeled using the Dolev-Yao attacker model and verified through ProVerif, focusing on key security properties such as confidentiality, integrity, authentication, and accountability. The findings confirm that the proposed model satisfies stringent security goals and is robust against symbolic adversaries. This work contributes a verifiable and extensible framework for privacy-preserving data sharing in sectors such as healthcare, finance, and government. To the best of our knowledge, this is among the first works to formally verify a blockchain-based security model that simultaneously integrates modular chaincode execution (CCaaS), trusted hardware enclaves (Intel SGX), decentralized off-chain storage (IPFS), and optional Zero-Knowledge Proofs (ZKPs) with a unified framework for personal data sharing.

Author 1: Godwin Mandinyenya
Author 2: Vusumuzi Malele

Keywords: Blockchain; security model; Chaincode-as-a-Service; InterPlanetary File System; Intel Software Guard Extensions; Zero-Knowledge Proofs; ProVerif; formal verification; Dolev-Yao

PDF

Paper 43: Enhancing Cybersecurity Programs in Small and Medium Enterprises (SMEs): A Systematic Literature Review

Abstract: Small and Medium Enterprises (SMEs) in Malaysia face increasing cybersecurity risks, yet their adoption of Security Education, Training, and Awareness (SETA) programs remains limited. Unlike prior reviews that focus broadly on SMEs, this study contributes novelty by systematically synthesizing empirical evidence within the Malaysian context. Guided by the PRISMA framework and supported by NVivo analysis, 57 studies published between 2019 and 2025 were examined to classify both the importance of SETA and the barriers to its implementation. The thematic analysis revealed six recurring domains of challenges: financial constraints, human resource limitations, management support, cultural resistance, technical infrastructure, and legal/data protection. Beyond consolidating fragmented insights, the study provides a taxonomy of challenges and practical recommendations such as modular training, role-specific awareness, and leveraging national initiatives. While this review offers structured guidance for policymakers and practitioners, its descriptive nature without empirical SME validation is a limitation, highlighting the need for future applied studies.

Author 1: Eliana Ludin
Author 2: Masnizah Mohd
Author 3: Fariza Fauzi

Keywords: Cybersecurity program; Security Education, Training, and Awareness (SETA); systematic evaluation; Malaysian SMEs; NVivo; PRISMA

PDF

Paper 44: Hybrid Fuzzy–PPO Control for Precision UAV Spraying

Abstract: Precision agriculture increasingly relies on autonomous UAVs for tasks such as crop monitoring and targeted pesticide spraying. However, maintaining stable flight and precise spray delivery under varying payloads and wind disturbances remains challenging. This paper proposes a hybrid control architecture that combines interpretable Mamdani fuzzy logic controllers with a deep reinforcement learning (DRL) agent (Proximal Policy Optimization, PPO). The fuzzy controllers encode expert-crafted rules for baseline altitude and attitude stabilization, while the PPO agent adaptively adjusts setpoints to optimize spray coverage and energy efficiency. We train the agent in a realistic PyBullet simulator with dynamic payload and wind conditions. In simulated precision-spraying trials, our hybrid controller outperformed both a conventional PID-based controller and a pure PPO controller. Specifically, it achieved roughly 2–3× faster disturbance rejection, near-zero overshoot, and ~30% faster settling than the baselines, resulting in more uniform coverage and reduced pesticide use. These results demonstrate that fusing fuzzy logic with deep PPO yields a UAV spray controller that is both high-performance and robust for precision agriculture applications.
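
An illustrative Mamdani controller for baseline altitude stabilization using scikit-fuzzy; the membership functions and three-rule base are simplified assumptions, with the PPO agent assumed to supply the altitude setpoint.

```python
# Mamdani fuzzy controller: altitude error (m) -> normalized thrust command.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

err = ctrl.Antecedent(np.arange(-5, 5.01, 0.1), 'altitude_error')   # altitude - setpoint
thrust = ctrl.Consequent(np.arange(0, 1.01, 0.01), 'thrust')

err['negative'] = fuzz.trimf(err.universe, [-5, -5, 0])
err['zero'] = fuzz.trimf(err.universe, [-1, 0, 1])
err['positive'] = fuzz.trimf(err.universe, [0, 5, 5])
thrust['low'] = fuzz.trimf(thrust.universe, [0, 0, 0.5])
thrust['mid'] = fuzz.trimf(thrust.universe, [0.3, 0.5, 0.7])
thrust['high'] = fuzz.trimf(thrust.universe, [0.5, 1, 1])

rules = [ctrl.Rule(err['negative'], thrust['high']),   # below setpoint -> climb
         ctrl.Rule(err['zero'], thrust['mid']),
         ctrl.Rule(err['positive'], thrust['low'])]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['altitude_error'] = -2.0   # UAV is 2 m below the PPO-chosen setpoint
sim.compute()
print(sim.output['thrust'])
```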

Author 1: Ahmad B. Alkhodre
Author 2: Adnan Ahmed Abi Sen
Author 3: Yazed Alsaawy
Author 4: Nour Mahmoud Bahbouh
Author 5: Moahmmad Benaida

Keywords: UAVs; precision agriculture; UAV spraying; fuzzy logic control; reinforcement learning; Proximal Policy Optimization (PPO); hybrid control

PDF

Paper 45: Dynamic Assessment and Optimization Strategy for Brand Tourism Competitiveness in the Yangtze River Delta City Cluster Based on Entropy Weight-TOPSIS

Abstract: In the context of the integrated, high-quality development of the Yangtze River Delta City Cluster (YRDCC), brand tourism competitiveness is a key indicator of cities’ attractiveness and regional synergy. However, most existing studies focus on static comparisons and fail to dynamically assess competitiveness trends among cities. This study uses 27 cities in the YRDCC from 2019 to 2023 as a sample and applies the entropy weight-TOPSIS method for dynamic analysis of brand tourism competitiveness. This method integrates objective weights and relative performance across multiple indicators, enabling a comprehensive identification of city differences in resource allocation, brand communication, and service capacity. The findings reveal that Shanghai and Hangzhou lead in brand tourism competitiveness due to their strong economic foundations, rich tourism resources, and continuous brand development, playing a regional demonstration role. Suzhou and Nanjing have solid foundations but require improvements in brand internationalization and tourism experience. In contrast, Chuzhou and Chizhou lag behind due to insufficient industrial support, weak infrastructure, and low brand recognition. The study recommends enhancing brand tourism competitiveness by strengthening regional cooperation, promoting differentiated development, cultivating local brand identities, and advocating for green tourism, thereby providing a sustainable development model and empirical support for tourism development in China’s city clusters.
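
A compact NumPy sketch of the entropy weight-TOPSIS procedure (entropy-derived objective weights, then relative closeness to the ideal solution); the small cities-by-indicators matrix is synthetic, and all criteria are treated as benefit-type for simplicity.

```python
# Entropy weight-TOPSIS: objective weights from indicator entropy, then
# ranking by closeness to the ideal and anti-ideal solutions.
import numpy as np

X = np.array([[8.2, 9.1, 7.5, 8.8],    # e.g., a leading city
              [7.9, 8.6, 7.2, 8.1],    # e.g., a strong second tier
              [5.1, 4.8, 5.5, 4.9]])   # e.g., a lagging city

P = X / X.sum(axis=0)                            # normalize per indicator
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)             # entropy of each indicator
w = (1 - E) / (1 - E).sum()                      # entropy weights

V = w * (X / np.linalg.norm(X, axis=0))          # weighted normalized matrix
d_pos = np.linalg.norm(V - V.max(axis=0), axis=1)
d_neg = np.linalg.norm(V - V.min(axis=0), axis=1)
closeness = d_neg / (d_pos + d_neg)              # higher = more competitive
print(closeness.round(3))
```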

Author 1: Dongmei Wang
Author 2: Daoyi Wu

Keywords: Yangtze River Delta City Cluster; brand tourism competitiveness; entropy weight-TOPSIS; dynamic assessment

PDF

Paper 46: Edge-Integrated IoT and Computer Vision Framework for Real-Time Urban Flood Monitoring and Prediction

Abstract: Urban flash floods pose a critical threat to rapidly growing cities in India, where unplanned development, climate variability, and inadequate drainage amplify risks. Guwahati, in Northeast India, experiences recurrent inundation during monsoons, disrupting livelihoods and damaging infrastructure. This study presents an integrated IoT and AI-enabled framework for urban flood monitoring and prediction. A LoRa-based IoT sensor network was deployed to capture localized hydrological and meteorological parameters, overcoming the limitations of coarse weather APIs. Rainfall forecasting was implemented at the edge layer using Random Forest, XGBoost, CatBoost, and K-Nearest Neighbors, fused through a fuzzy logic model that achieved 92.4% accuracy, surpassing individual classifiers. In parallel, a computer vision pipeline detected drainage blockages from geotagged user images, with EfficientNetB0-U-Net achieving ~91% accuracy, outperforming ResNet50, InceptionV3, and MobileNetV2. By combining rainfall prediction, IoT sensing, and blockage detection, the proposed framework delivers a holistic, low-cost, and scalable early warning system, marking a novel contribution toward resilient urban flood management in resource-constrained settings.

Author 1: Rupesh Mandal
Author 2: Bobby Sharma
Author 3: Dibyajyoti Chutia

Keywords: Urban flood prediction; IoT sensor networks; edge computing; fuzzy logic fusion; hydraulic blockage detection; computer vision

PDF

Paper 47: Privacy-Aware ML Framework for Dynamic Query Formation in Multi-Dimensional Data

Abstract: Interactive data exploration at scale remains constrained by 1) weak adaptability to shifting query workloads, 2) limited and post hoc error guarantees, 3) poor scalability under dynamic, high-dimensional data, 4) sparse user guidance during query formulation, and 5) non-trivial system overheads from learned or probabilistic components. We propose an end-to-end, privacy-aware framework that dynamically forms SQL queries for multi-dimensional data using randomized signals derived from personal web usage. The method integrates: 1) on-device user modeling that converts browsing interactions into preference embeddings under local differential privacy; 2) a constrained-randomization layer that enforces coverage and diversity to avoid filter bubbles while remaining responsive to user intent; 3) a contextual bandit policy (with optional deep reinforcement learning extension) that selects or completes query templates using signals from user profiles, session context, and data synopses; and 4) an error-aware AQP executor combining stratified/pilot sampling, synopsis reuse, and confidence-interval gating with automatic sample escalation. This design directly addresses the above limitations: the bandit adapts online to workload shifts; the AQP layer provides pre-execution feasibility checks and per-query error control; synopsis reuse and AB-tree–style random sampling maintain low latency under updates; and a guidance module (predictive autocompletion with information-gain scoring) reduces user effort while preserving exploration diversity. To evaluate effectiveness, we introduce a privacy-preserving training regimen (federated updates over DP-noised profiles) and a novel benchmark protocol measuring time-to-insight, error compliance under differential privacy, session diversity, and latency against strong baselines. The result is an ML-driven exploration loop that achieves error-bounded interactivity, robust personalization, and scalable performance on evolving, high-dimensional datasets, while providing evaluation metrics that capture both user experience and privacy-preserving guarantees.
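
A simplified, context-free epsilon-greedy stand-in for the template-selection bandit; the paper's contextual policy, features, and reward signal (e.g., time-to-insight or error compliance) are richer than this sketch assumes.

```python
# Epsilon-greedy bandit over SQL query templates: explore occasionally,
# otherwise exploit the template with the best running mean reward.
import random

class TemplateBandit:
    def __init__(self, n_templates, epsilon=0.1):
        self.eps = epsilon
        self.counts = [0] * n_templates
        self.values = [0.0] * n_templates   # running mean reward per template

    def select(self):
        if random.random() < self.eps:                    # explore
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm, reward):                        # online adaptation
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = TemplateBandit(n_templates=5)
arm = bandit.select()          # pick a query template for the current session
bandit.update(arm, reward=0.8) # hypothetical reward from the session outcome
```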

Author 1: B Bhavani
Author 2: Haritha Donavalli

Keywords: Dynamic query formation; Approximate Query Processing (AQP); local differential privacy; contextual bandits; reinforcement learning; constrained randomization; multi-dimensional data exploration

PDF

Paper 48: Performance Analysis of Spectrogram-Based Versus Raw Waveform-Based Deep Learning Models for Smoker Detection from Cough Audio

Abstract: The classification of cough sounds for smoker detection represents a challenging task in audio processing that compares different data representation methods. This study presents a performance analysis of two prominent deep learning approaches: a spectrogram-based model, the Audio Spectrogram Transformer (AST), and a raw waveform-based model, Wav2Vec2. We used 7,561 smoker and 7,561 non-smoker samples from the CODA TB DREAM Challenge dataset. Both models were trained with five-fold cross-validation and data augmentation (SpecAugment for AST; noise, pitch, and time shifts for Wav2Vec2). The raw waveform-based Wav2Vec2 model achieved the best performance, with an average accuracy of 86.5%, an F1-score of 0.862, and an Area Under the Curve (AUC) of 0.945, completing training in approximately 49 minutes per fold. In contrast, the spectrogram-based AST model reached around 76-77% accuracy and an AUC of 0.85 in approximately 78 minutes per fold. These findings indicate that the raw waveform-based approach is significantly more effective and computationally efficient than the spectrogram-based approach for this task, offering a robust method for non-invasive smoker classification through the analysis of vocal biomarkers.

Author 1: Widi Nugroho
Author 2: Alhadi Bustamam
Author 3: Rinaldi Anwar Buyung

Keywords: Smoker detection; cough audio classification; deep learning; Audio Spectrogram Transformer; Wav2Vec2; vocal biomarker

PDF

Paper 49: ProjectNavigator: A Software Project Management Approach Selection Assistant

Abstract: Project management approaches are crucial in software projects; selecting a suitable approach based on specific project characteristics is key to project success. However, software projects are becoming increasingly complex, and project managers tend to rely on subjective judgment when selecting project management approaches. At present, they lack a systematic method or tool that can help them quickly and accurately select the most suitable approach to reduce project risks and improve the success rate. The objective of this research is to propose a tool that assists project managers in selecting the most suitable project management approach based on specific project characteristics. The research collects and analyzes existing project management approaches and their applicable scenarios to extract relevant influencing factors. A recommendation tool is then developed to compare and recommend the most suitable project management approach. Finally, the usability and effectiveness of the tool are validated through expert evaluation and usability testing. With this tool, project managers can quickly analyze and compare the suitability of different management approaches and obtain specific guidance and suggestions, significantly improving project success rates.

Author 1: Lin Dongzhi
Author 2: Salfarina Abdullah

Keywords: Software projects; project management; recommendation tool; expert evaluation; usability testing

PDF

Paper 50: A Hybrid RoBERTa-BiGRU-Attention Model for Accurate and Context-Aware Figurative Language Detection

Abstract: Figurative language, especially sarcasm, poses significant challenges for Natural Language Processing (NLP) models because of its implicit, context-sensitive nature. Both traditional and transformer models tend to struggle to identify these subtle forms, particularly when dealing with imbalanced datasets or without mechanisms for targeted interpretability. To overcome these shortcomings, this study proposes a hybrid deep learning architecture that integrates RoBERTa for rich contextual embeddings, Bidirectional Gated Recurrent Units (BiGRU) to capture bidirectional sequential relations, and an attention mechanism that allows the model to focus on the most informative parts of the input text. This integration enhances semantic understanding and classification accuracy compared to current solutions. The model is trained and tested on the benchmark News Headlines Dataset for Sarcasm Detection using binary cross-entropy loss minimized with Adam, along with dropout and learning rate scheduling to avoid overfitting. Experimental results show strong performance, attaining an accuracy of 92.4%, a precision of 91.1%, a recall of 93.2%, and an F1-score of 92.1%. These results outperform baseline techniques such as BiLSTM with attention and fine-tuned BERT variants. The implementation uses PyTorch and Hugging Face Transformers, ensuring reproducibility and extensibility. While effective, the model faces challenges with figurative expressions requiring external world knowledge or cultural context beyond pretrained embeddings. Future work aims to integrate external knowledge graphs and extend the model to multilingual and cross-domain scenarios. This hybrid framework advances figurative language detection, contributing to the broader goal of enhancing AI’s nuanced understanding and interpretability of human language.
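
A skeleton of the hybrid architecture in PyTorch with Hugging Face Transformers; the hidden sizes and pooling head are illustrative choices, not the paper's tuned configuration.

```python
# RoBERTa embeddings -> BiGRU -> additive attention pooling -> binary head.
import torch
import torch.nn as nn
from transformers import RobertaModel

class SarcasmModel(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.bigru = nn.GRU(768, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # scores each token position
        self.classifier = nn.Linear(2 * hidden, 2)

    def forward(self, input_ids, attention_mask):
        emb = self.roberta(input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.bigru(emb)                              # (B, T, 2H)
        weights = torch.softmax(self.attn(seq).squeeze(-1), dim=1)
        context = (weights.unsqueeze(-1) * seq).sum(dim=1)    # attention pooling
        return self.classifier(context)
```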

Author 1: Sreeja Balakrishnan
Author 2: Rahul Suryodai
Author 3: S. Manochitra
Author 4: Jasgurpreet Singh Chohan
Author 5: Karaka Ramakrishna Reddy
Author 6: A. Smitha Kranthi
Author 7: Ritu Sharma

Keywords: Figurative language detection; sarcasm classification; RoBERTa-BiGRU-Attention model; contextual embeddings; Natural Language Processing

PDF

Paper 51: Enhancing Code Quality Through Automated Refactoring Using Transformer-Based Language Models

Abstract: Maintaining high-quality source code is crucial for software reliability, scalability, and maintainability. Traditional refactoring methods, which involve manual code improvement or rule-based automation, often fall short due to their inability to understand the contextual semantics of code. These approaches are rigid, language-specific, and prone to inconsistencies, especially in large and complex codebases. As a result, developers spend significant time and effort identifying code smells, restructuring poorly written segments, and ensuring behavior preservation. To address these limitations, this study proposes an automated code refactoring framework powered by Transformer-based language models. Leveraging models such as CodeT5, which are pre-trained on massive code corpora, this approach captures both syntactic and semantic patterns to suggest intelligent, context-aware code transformations. The model is fine-tuned using a curated dataset of original and refactored code pairs to learn efficient refactoring strategies. The methodology involves preprocessing raw source code, tokenizing it for model input, and generating improved versions of the code using the trained Transformer model. Output suggestions are validated using Abstract Syntax Tree (AST) analysis and unit testing to ensure behavioral equivalence, with the framework achieving an accuracy of 97%. Code quality improvements are quantified using metrics such as maintainability index, cyclomatic complexity, and duplication rate. Experimental results demonstrate that the proposed method significantly enhances code readability and maintainability while reducing developer effort, outperforming traditional rule-based refactoring tools.

Author 1: A. Sri Lakshmi
Author 2: E. S. Sharmila Sigamany
Author 3: Roopa Traisa
Author 4: Raman Kumar
Author 5: Karaka Ramakrishna Reddy
Author 6: Jasgurpreet Singh Chohan
Author 7: Aseel Smerat

Keywords: Automation; code refactoring; maintainability; transformer models; unit testing

PDF

Paper 52: Real-Time Biomechanical Squat and Deadlift Posture Analysis Using Google Machine Learning Kit

Abstract: This project presents the development of a mobile application for real-time posture analysis during squat and deadlift exercises, using Google Machine Learning (ML) Kit pose detection. Proper exercise form is critical in preventing injuries, underscoring the need for systems that provide immediate feedback, an aspect often missing in existing fitness applications. This study addresses that gap by designing an app that not only guides users through motion analysis but also incorporates a safety mechanism to detect sudden falls. The system employs algorithms to process landmarks, calculate joint angles, count repetitions, and trigger emergency alerts. Evaluations with two groups of bodybuilders confirmed the app's usability and accuracy in real-time biomechanical squat and deadlift posture analysis. These findings contribute to the field of AI-driven fitness by introducing a non-wearable, mobile-based solution for guided strength training. In addition, the app offers societal benefits as an AI-powered fitness coach that aims to promote public health.
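
A small sketch of the joint-angle step: given three pose landmarks (e.g., hip, knee, and ankle), the knee angle follows from the vector dot product. The coordinates and the depth-check comment are hypothetical.

```python
# Joint angle from three 2D pose landmarks via the dot product.
import numpy as np

def joint_angle(a, b, c):
    """Angle at landmark b (degrees) formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hip, knee, ankle = (0.42, 0.55), (0.45, 0.72), (0.44, 0.90)  # normalized coords
angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {angle:.1f} deg")  # e.g., used to assess squat depth and form
```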

Author 1: Liew Yee Jie
Author 2: Ting Tin Tin
Author 3: Chaw Jun Kit
Author 4: Ali Aitizaz
Author 5: Ayodeji Olalekan Salau
Author 6: Omolayo M. Ikumapayi
Author 7: Lim Siew Mooi

Keywords: Pose detection; squat and deadlift; Google ML Kit; fitness; posture analysis; emergency; public health

PDF

Paper 53: A Novel CNN-Based Feature Fusion Framework for Breast Cancer Ultrasound Image Classification

Abstract: Breast cancer remains a major global health concern and is among the leading causes of cancer-related deaths in women. Timely and precise diagnosis significantly improves treatment outcomes and patient survival rates. This paper presents a novel deep learning-based framework for breast cancer classification using ultrasound imagery, built upon the concatenation of two pre-trained Convolutional Neural Network (CNN) models: VGG19 and EfficientNetB0. By leveraging transfer learning and combining heterogeneous feature representations, the proposed method enhances the discriminative power of the extracted features. The model is evaluated on a publicly available benchmark ultrasound dataset and assessed through standard performance indicators, including accuracy, precision, recall, F1-score, and Area Under the Curve (AUC). In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to generate interpretability heatmaps, visually highlighting regions that contribute most to classification outcomes. The experimental findings reveal that the integrated architecture outperforms several existing approaches as well as individual CNN baselines. This study contributes to the growing field of AI-assisted medical diagnostics and demonstrates the effectiveness of model fusion in ultrasound-based breast cancer detection.
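
A Keras sketch of the fusion idea, assuming frozen ImageNet backbones, global average pooling, and a three-class head (normal/benign/malignant, as in common ultrasound benchmarks); the input size, head layout, and omitted per-backbone preprocessing are assumptions.

```python
# Concatenating VGG19 and EfficientNetB0 features behind a shared input.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, EfficientNetB0

inp = layers.Input(shape=(224, 224, 3))
vgg = VGG19(include_top=False, weights="imagenet")
eff = EfficientNetB0(include_top=False, weights="imagenet")
vgg.trainable = eff.trainable = False            # transfer learning: freeze backbones
# NB: each backbone expects its own preprocess_input; omitted here for brevity.

f1 = layers.GlobalAveragePooling2D()(vgg(inp))
f2 = layers.GlobalAveragePooling2D()(eff(inp))
fused = layers.Concatenate()([f1, f2])           # heterogeneous feature fusion
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.3)(x)
out = layers.Dense(3, activation="softmax")(x)   # assumed three-class head

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```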

Author 1: Mobarak Zourhri
Author 2: Bouchaib Cherradi
Author 3: Mohamed El Khaili

Keywords: Breast cancer classification; ultrasound imaging; Convolutional Neural Networks (CNN); transfer learning; model fusion; Grad-CAM; deep learning

PDF

Paper 54: Machine Learning for Recommender Systems Under Implicit Feedback and Class Imbalance

Abstract: Recommender systems (RS) in domains with implicit feedback and significant class imbalance, such as health insurance, face unique challenges in accurately predicting user preferences. This study proposes a machine learning framework leveraging tree-based ensemble methods to address these limitations. We conducted a comprehensive comparative analysis of algorithms, including Decision Trees, Random Forest, Gradient Boosting Machines, CatBoost, Extra Trees, HistGradient Boosting, and XGBoost, to identify the most effective approach for handling data skew and complex feature interactions. The models were trained on a real-world dataset from an international insurance broker, containing demographic profiles and purchase histories. After extensive preprocessing and class rebalancing, the models were optimized and evaluated on a separate test set. Among these, XGBoost demonstrated superior performance, achieving remarkable results with a precision of 97.23% and an accuracy of 97.51%. The model exhibited robust generalization capabilities and convergence stability, with no signs of overfitting. Concretely, these performances translate into an increased ability for insurers to reliably identify customer needs from limited behavioral data, thus improving the relevance of personalized offers. These findings highlight the efficacy of XGBoost on datasets with imbalanced implicit feedback and its potential as an effective solution for complex recommendation problems. This work contributes a practical and scalable framework for improving personalized recommendations in data-constrained environments.
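
A minimal XGBoost sketch for imbalanced implicit feedback; here scale_pos_weight stands in for the paper's class-rebalancing step, and the synthetic data and hyperparameters are illustrative.

```python
# XGBoost on imbalanced implicit feedback: weight the rare positive class.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))                  # demographic/behavioral features
y = (rng.random(5000) < 0.05).astype(int)        # ~5% positives (purchases)

ratio = (y == 0).sum() / (y == 1).sum()          # negative/positive ratio
model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                      scale_pos_weight=ratio, eval_metric="logloss")
model.fit(X, y)
scores = model.predict_proba(X[:5])[:, 1]        # ranking scores for offers
print(scores.round(3))
```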

Author 1: Younes KOULOU
Author 2: Norelislam EL HAMI

Keywords: Recommender systems; XGBoost; implicit feedback; class imbalance; health insurance

PDF

Paper 55: Mapping Elderly Residential Research During the Onset of Baby Boomer Aging: A Bibliometric Analysis

Abstract: As global population aging accelerates, housing for older adults has emerged as a critical interdisciplinary research topic. Understanding how academic attention has evolved in this field is essential for informing policy and guiding future research. This study conducted a bibliometric analysis of 2,141 English-language publications related to elderly residential research indexed in the Web of Science Core Collection from 2002 to 2021. It systematically examined publication trends, leading countries and institutions, key subject areas, collaboration networks, and journal co-citation patterns. The results show that the United States, Australia, and China are the top contributors in terms of publication volume, while countries like Sweden and New Zealand demonstrate high research intensity on a per capita basis. Over time, the research focus has shifted from clinical geriatrics and nursing toward environmental sciences, urban planning, and public health, indicating an increasing interdisciplinary integration. Collaboration network analysis highlights the central roles of institutions in Australia and Hong Kong in facilitating international research partnerships. This study maps the global knowledge landscape of elderly residential research and provides a foundation for future policy development, interdisciplinary collaboration, and scholarly inquiry.

Author 1: Keyi Xiao
Author 2: Han Wang
Author 3: Yan Ma

Keywords: Aging; housing for the elderly; long-term care; environment design; bibliometrics

PDF

Paper 56: Strategic Decision Support in Financial Management Using Deep Learning-Based Stock Price Prediction Models

Abstract: This study develops a robust and intelligent decision support system for financial management through accurate stock price forecasting with deep learning. Statistical models and shallow machine learning techniques tend to be ineffective in modeling the nonlinear relationships, sequential interdependencies, and time-dependent volatility typical of financial data, resulting in poor prediction quality and untrustworthy investment choices. To overcome these constraints, we introduce a new hybrid deep learning architecture based on the Temporal Fusion Transformer (TFT) combined with Bidirectional Long Short-Term Memory (BiLSTM) networks. The proposed hybrid model enhances time-series forecasting by simultaneously utilizing attention mechanisms for explainability and sequence memory for richer temporal understanding. The model is trained and tested on the publicly available Stock Market Dataset on Kaggle, which includes stock histories from various companies. The whole process is carried out on the Python platform using TensorFlow along with relevant libraries for data preprocessing, feature scaling, and model training. The TFT-BiLSTM model surpasses conventional models with an accuracy of 93.4% and an F1-score of 94.2%, demonstrating its precision and generalization power. The system provides strategic benefits in financial planning and risk management. Financial analysts, investors, fintech firms, and portfolio managers may take advantage of the prediction system to make rational buy/sell judgments, minimize risks, and optimize asset allocations. By combining state-of-the-art deep learning models and public financial data, the framework illustrates that accurate stock price prediction can be an effective mechanism for supporting decision-making in financial markets.

Author 1: Layth Almahadeen
Author 2: Chinnapareddy Venkata Krishna Reddy
Author 3: Roopa Traisa
Author 4: Mukhamadiev Sanjar Isoevich
Author 5: Lavanya Kongala
Author 6: Janvi Anand Rathi
Author 7: Revati Ramrao Rautrao

Keywords: Stock price forecasting; deep learning; temporal fusion transformer; financial decision support; BiLSTM

PDF

Paper 57: Advancements in Texture Analysis and Classification: A Bibliometric Review of Entropy-Based Approaches

Abstract: Entropy-based texture analysis has gained significant attention in medical imaging, computer vision, and material science. The purpose of this paper is to provide a bibliometric review that maps the evolution, key contributors, research trends, and emerging themes of entropy-based texture analysis from 1980 to 2025. Using the Scopus database, 1,482 articles were initially retrieved and refined to 1,226 documents for analysis. VOSviewer was employed for bibliometric mapping, examining publication trends, authorship networks, keyword co-occurrence, and citation patterns. Results indicate a notable increase in research activity between 2004 and 2021, followed by a decline in recent years. The analysis highlights leading contributors, with significant work focusing on medical imaging applications such as radiomics and tumor heterogeneity assessment. While Shannon entropy remains widely used, newer measures like sample entropy, permutation entropy, and dispersion entropy are gaining attention. The study also identifies major research clusters, demonstrating the interdisciplinary nature of entropy-based texture analysis across medicine, engineering, and artificial intelligence. Despite database and language limitations, this review provides valuable insights into the field’s evolution and future directions, encouraging further interdisciplinary collaborations and advancements.
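
For reference, the Shannon entropy that anchors this literature, computed over a gray-level histogram as a basic texture descriptor; the random patch below is a stand-in image.

```python
# Shannon entropy of a grayscale patch: high for complex/noisy texture,
# low for homogeneous regions.
import numpy as np

def shannon_entropy(image, bins=256):
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins
    return -(p * np.log2(p)).sum()     # bits per pixel

img = np.random.randint(0, 256, size=(64, 64))   # stand-in grayscale patch
print(round(shannon_entropy(img), 3))            # near 8 bits for uniform noise
```

Newer measures surveyed in this field, such as sample, permutation, and dispersion entropy, generalize this idea to ordinal and multiscale structure.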

Author 1: Muqaddas Abid
Author 2: Muhammad Suzuri Hitam
Author 3: Rozniza Ali
Author 4: Muhammad Hammad

Keywords: Artificial intelligence; bibliometric review; entropy; research trends; texture analysis

PDF

Paper 58: A Review of Visualization Techniques for Duplicate Detection in Cancer Datasets

Abstract: As clinical cancer research increasingly depends on large, diverse datasets, concerns about data duplication have grown. Duplicates can undermine data integrity, skew analytical results, and reduce the reproducibility of studies. This review explores how visualization can play a critical role in identifying and managing duplicates in non-image clinical cancer data. Drawing from literature in biomedical informatics, data quality, and visual analytics, it synthesizes current approaches and highlights key challenges. Using a scoping review methodology, we analyzed studies published over the past two decades, focusing on non-image clinical datasets. Studies were selected based on relevance to duplicate detection and visualization, excluding those centered on image or video data. Major datasets like The Cancer Genome Atlas (TCGA), The Cancer Imaging Archive (TCIA), and the North American Association of Central Cancer Registries (NAACCR) are examined to show how duplication occurs across genomic, clinical, and registry data. The review assesses existing visualization techniques based on their scalability, interactivity, integration with deduplication algorithms, and how well they address core data quality dimensions. While some tools offer scalable and interactive features, few provide clear visual representations of duplicates, especially those involving complex temporal and multidimensional patterns. Several methodological gaps are identified, including limited integration of data quality metrics, inadequate support for tracking changes over time, and a lack of standardized evaluation frameworks. To address these issues, the review advocates for the development of practical, user-friendly visualization tools that combine duplicate detection with key indicators of data quality. By offering a more complete and intuitive view of clinical datasets, such tools can help researchers and clinicians make better-informed decisions, ultimately improving the reliability and impact of cancer research. Bridging the gap between technical detection and visual understanding is essential for advancing data-driven healthcare and ensuring high-quality, reproducible outcomes.

Author 1: Nurul A. Emran
Author 2: Ruhaila Maskat

Keywords: Duplicate detection; data duplication; visualization; deduplication; TCGA; TCIA; NAACCR

PDF

Paper 59: Hybrid Real Time Facial Emotions Recognition on Autistic Individuals

Abstract: Communication and social interaction issues are frequently linked to autism, which can have an impact on quality of life, work, and education. Assistive technologies, especially those that facilitate individualized and encouraging engagement, present opportunities to lessen these difficulties. Facial expression recognition (FER) is essential to these systems, but current methods are still inadequate for autism-specific situations even though they achieve high accuracy on benchmark datasets like CK+. This limitation arises because autistic people often exhibit atypical, subtle, or ambiguous facial expressions that deviate from the common patterns used to train traditional FER models. In this work, we propose a hybrid model that combines an LSTM network for temporal modeling of video sequences with three pretrained convolutional neural networks (EfficientNetB0, ResNet50, and MobileNetV2) for spatial feature extraction. Although the model performs well on CK+, its applicability to autism is still limited by the lack of relevant datasets and the use of artificial intelligence (AI)-generated videos rather than authentic recordings. These findings highlight the critical need for more comprehensive data and adaptive model designs tailored to autistic populations.

Author 1: Fatima Ezzahrae El Rhatassi
Author 2: Btihal El Ghali
Author 3: Najima Daoudi

Keywords: Facial emotion recognition; video; frames; spatial and temporal features; pretrained models; LSTM; autistic individuals

PDF

Paper 60: Integrating YOLOv8 and IoT in a Computer Vision System for Child Detection in Smart Cities

Abstract: In an era marked by technological advancements aimed at establishing smart cities, technology increasingly focuses on enhancing crowd management. The widespread deployment of CCTV systems, combined with the integration of computer vision, has enabled accurate insights into crowd density estimation. Our research highlights the potential benefits of child detection across various domains that serve government and business decision-making. Leveraging Internet of Things (IoT) devices to collect real-time data and employing artificial intelligence (AI) based on deep learning through computer vision is a powerful combination in such domains. In this paper, we propose an IoT architecture that facilitates intelligence and decision-making in two phases: 1) a deep learning model with object detection and image segmentation capabilities using YOLOv8, and 2) a tracking/counting algorithm for estimating child density based on DeepSORT. Our implementation efficiently identified and classified children in extracted images with an accuracy rate of up to 98%. Our model also outperformed the two solutions proposed by previous studies in terms of mAP@50, Precision, and Recall metrics. The results provide valuable insights for businesses aiming to refine site selection and guide governments in improving urban planning and safety, thereby fostering sustainable and intelligent urban development.
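
A sketch of the detect-track-count loop using the ultralytics YOLOv8 API; note that the paper pairs YOLOv8 with DeepSORT, whereas this stand-in uses the library's built-in tracker, and the weights file, video path, and person-class proxy for children are all assumptions.

```python
# Count unique tracked individuals in a video with YOLOv8's tracking API.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # a deployed system would use weights fine-tuned on child data
seen_ids = set()

for result in model.track(source="crowd.mp4", stream=True, persist=True):
    if result.boxes.id is None:      # no confirmed tracks in this frame
        continue
    for track_id, cls in zip(result.boxes.id.int().tolist(),
                             result.boxes.cls.int().tolist()):
        if model.names[cls] == "person":   # a child class needs a custom model
            seen_ids.add(track_id)

print(f"unique tracked individuals: {len(seen_ids)}")  # density estimate input
```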

Author 1: Modhawi Alotaibi
Author 2: Atheer Alruwaythi
Author 3: Sara Alenazi
Author 4: Maisaa Alsaedi

Keywords: Computer vision; Internet of Things; deep learning; YOLOv8; DeepSORT

PDF

Paper 61: D.M.A.I.H.: Deepfake-Inspired Few-Shot Learning Approach with Stable Diffusion for Digital Mourning

Abstract: Digital mourning (deuil numérique) is the use of digital and AI-based technologies to preserve, recontextualize, and extend the memory of deceased loved ones through personalized and meaningful virtual representations. The digital mourning process requires innovative technologies capable of preserving the memory of deceased loved ones in meaningful and humanized ways. This paper proposes a novel generative approach, D.M.A.I.H. (Digital Mourning with Artificial Intelligence for Healing), for digital grief, with a focus on moral support and the mental health of bereaved relatives, using Stable Diffusion with a few-shot learning adaptation mechanism. The system takes as input a small set of personal references (e.g. a portrait, contextual images such as the person’s home, and a short descriptive script) and outputs high-quality, photorealistic images of the deceased in different meaningful contexts, a process closely related to deepfake generation but redirected here toward therapeutic and commemorative purposes. Unlike traditional generative models requiring large datasets, few-shot personalization is leveraged to adapt Stable Diffusion to each individual with minimal data, enabling the generation of personalized digital albums. Experimental results show that the model consistently preserves identity in the images it produces, and contextual control ensures emotional resonance. In particular, identity similarity scores for the generated images ranged from 0.88 to 0.93, with an average score of 0.91, testifying to strong identity preservation across all outputs. This study lays a foundation for AI-based memorialization that balances technological innovation with concerns over privacy, authenticity, cultural sensitivity, and psychological comfort.
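
A minimal sketch of the generation step, assuming the Hugging Face diffusers library and a DreamBooth/LoRA-style identity adapter already fine-tuned on a handful of reference photos; the adapter path and the placeholder token are hypothetical.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# Hypothetical few-shot identity adapter trained on a small set of portraits
pipe.load_lora_weights("./personalized-adapter")

# "<sks person>" is a hypothetical placeholder token bound during fine-tuning
prompt = "a photo of <sks person> in the family garden, warm evening light"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("memorial_album_01.png")
```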

Author 1: Btissam Acim
Author 2: Hamid Ouhnni
Author 3: Nassim Kharmoum
Author 4: Soumia Ziti

Keywords: Stable diffusion; few-shot learning; deepfake; Artificial Intelligence (AI); generative AI; digital mourning

PDF

Paper 62: Re-engineering Grid-Based Quorum Replication into Binary Vote Assignment on Cloud: A Scalable Approach for Strong Consistency in Cloud Databases

Abstract: The growth of cloud computing has heightened the demand for replication strategies that ensure strong consistency, high availability, and low communication cost across distributed infrastructures. Existing systems such as DynamoDB, FoundationDB, and GeoGauss illustrate different design trade-offs but face limitations in balancing latency, correctness, and resilience under dynamic workloads. This study proposes the Binary Vote Assignment in Cloud (BVAC), a cloud-native replication algorithm re-engineered from the Binary Vote Assignment on Grid Quorum (BVAGQ). BVAC organizes replicas in a logical grid structure and employs binary voting weights with a Commit Coordination (BCC) mechanism to enforce quorum-validated commits, representing a form of quorum-based replication. This design maintains serializable consistency, minimizes replication conflicts, and keeps communication cost low through fixed-size quorums of three to five replicas. Experimental results demonstrate that BVAC maintains fault tolerance, achieves efficient cloud database replication, and sustains high data availability via multiple valid quorum paths. By avoiding the heavy coordination cost and infrastructure footprint of current systems, BVAC provides a scalable and cost-efficient replication strategy tailored for modern cloud workloads. The study establishes BVAC as an advancement in distributed data management and a foundation for future adaptive and multi-cloud replication frameworks.
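
A toy illustration (not the authors' implementation) of the grid-quorum idea: each write commits only when a fixed quorum of three to five replicas, the primary plus its grid neighbours, returns a majority of positive binary votes.

```python
import random

GRID = [(r, c) for r in range(3) for c in range(3)]  # 9 logical replicas

def quorum(primary):
    """Primary plus its up/down/left/right grid neighbours: 3-5 replicas."""
    r, c = primary
    neigh = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [primary] + [n for n in neigh if n in GRID]

def try_commit(primary):
    """Commit iff a majority of the fixed quorum casts a positive binary vote."""
    q = quorum(primary)
    votes = [random.random() > 0.1 for _ in q]  # vote 1 unless replica is down
    return sum(votes) > len(q) // 2

print(quorum((1, 1)), try_commit((1, 1)))
```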

Author 1: Ainul Azila Che Fauzi
Author 2: Noor Ashafiqa
Author 3: Asiah Mat
Author 4: Syerina Azlin Md Nasir
Author 5: A. Noraziah

Keywords: Binary Vote Assignment in Cloud (BVAC); cloud database replication; fault tolerance; high availability; quorum-based replication; strong consistency

PDF

Paper 63: Optimized Random Forest for High-Accuracy Autism Spectrum Disorder Detection via Phenotypic Data

Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition marked by persistent deficits in social communication and interaction, possibly accompanied by repetitive motor behaviors or activities. Early diagnosis of this disorder is crucial for improving patients’ cognitive, emotional, and social development. Numerous studies on detecting autism exist; however, data limitations and imbalance affect their model generalization. This study proposes a new intelligent computational model for mental healthcare in individuals with ASD, utilizing machine learning (ML) to address these shortcomings. The proposed model enhances the random forest (RF) algorithm by setting optimal parameters and encompasses two key pipelines: 1) the data pipeline and 2) the learning pipeline. We first gathered a multi-source dataset and implemented integration and preprocessing via ML algorithms. The phenotypic data used were collected from 19 different sites and merged to ensure the diversity of the data used. The resulting dataset is subsequently fed into the learning pipeline, where a supervised ML algorithm is employed to create a trained computational model for detecting ASD. The model is based on tuning the RF algorithm by finding the optimal values for five key hyperparameters. After tuning the model, the accuracy of detecting ASD from phenotypic data reached 96.86%, with a sensitivity of 97.14% and a false positive rate of 3.39%. Comparing the tuned RF model with different ML models verified that tuning and optimizing RF achieves a preeminent classification accuracy for ASD detection using phenotypic data, as the accuracy of RF without tuning is 95.06%. In addition, to validate the tuned RF model’s real-world applicability, a separate qualitative study was conducted on five independent, narrative-based case studies, where the model accurately classified four of them by translating descriptive language into quantitative features.
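
A sketch of the RF tuning step, assuming scikit-learn; the grid mirrors the idea of searching five key hyperparameters, but the candidate values and the synthetic stand-in data are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the merged multi-site phenotypic dataset
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Five key hyperparameters searched over illustrative candidate values
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10],
    "min_samples_split": [2, 5],
    "min_samples_leaf": [1, 2],
    "max_features": ["sqrt", "log2"],
}
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```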

Author 1: Mohamed Gawish
Author 2: Nada S. El-Askary
Author 3: Mohamed Mabrouk Morsey
Author 4: Abeer M. Mahmoud
Author 5: Mostafa Aref
Author 6: Taha Ibrahim El-Arif

Keywords: Mental healthcare; ASD; phenotypic data; ABIDE-II; random forest; hyperparameter optimizations

PDF

Paper 64: Robust Control of Cyber-Physical Teleoperation Systems for Synchronized Healthcare Supply Chain Management

Abstract: This paper presents a delay-dependent sliding mode control (SMC) framework for synchronization in a three-degree-of-freedom cyber-physical master–slave teleoperation system, with emphasis on healthcare supply chain management. Communication delays pose a critical challenge, often leading to instability, desynchronization, and inaccurate inventory records. Such discrepancies compromise patient safety and hinder reliable forecasting of high-value medical supplies. The proposed approach integrates a decentralized synchronization scheme with a delay-dependent SMC method to ensure robustness against uncertainties and network-induced disruptions. System constraints, including variable communication delays up to 0.4 s and measurement errors of 20%, are explicitly addressed. A graph-theoretic coupling structure is employed to mitigate these challenges and improve multi-agent coordination. Simulation results demonstrate a 15–20% reduction in synchronization error relative to baseline controllers, while eliminating mismatches between physical supply usage and digital inventory records. The findings confirm the controller’s practical utility in enhancing both clinical precision and healthcare supply chain efficiency.
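
For context, a one-degree-of-freedom toy of a sliding mode synchronization law in plain NumPy (a generic SMC illustration, not the paper's delay-dependent controller): the sliding surface s = de + lam*e with a smoothed sign control drives a double-integrator slave toward the master trajectory.

```python
import numpy as np

lam, K, eps, dt = 5.0, 10.0, 0.05, 1e-3
xm = lambda t: np.sin(t)          # master trajectory (illustrative)
x, v = 0.0, 0.0                   # slave position and velocity
for k in range(5000):
    t = k * dt
    e, de = x - xm(t), v - np.cos(t)  # position and velocity errors
    s = de + lam * e                  # sliding surface
    u = -K * np.tanh(s / eps)         # boundary-layer smoothing reduces chatter
    v += u * dt                       # double-integrator slave dynamics
    x += v * dt
print(f"final tracking error: {abs(x - np.sin(5.0)):.4f}")
```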

Author 1: Mariem Mrad
Author 2: Mohamed Amine Frikha

Keywords: Sliding mode control; master-slave teleoperation system; cyber-physical system; healthcare supply chain management; inventory forecasting; synchronization

PDF

Paper 65: Sentiment Analysis Revisited: A Multi-Metric Comparative Study

Abstract: Sentiment analysis is a fundamental task in natural language processing with wide-ranging applications, from customer feedback monitoring to healthcare and social media analytics. While recent research has mainly emphasized predictive accuracy, computational efficiency has remained largely overlooked, despite its importance for large-scale and real-time deployment. This study addresses this gap by conducting a comparative evaluation of classical machine learning algorithms (Logistic Regression, Naïve Bayes, Random Forest) and deep learning architectures [Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM)]. Experiments were carried out on two benchmark datasets, IMDB and Yelp Polarity, with evaluation based on accuracy, precision, recall, F1-score, training time, and a novel Efficiency Score. Results on IMDB show that Logistic Regression and LSTM both achieved 88% accuracy, but with radically different costs: Logistic Regression trained in 0.25 seconds, whereas LSTM required more than 2600 seconds. On Yelp Polarity, Logistic Regression improved to 91.6% accuracy, outperforming LSTM (86.2%) while remaining over 300 times faster. By integrating both predictive metrics and efficiency measures, the Efficiency Score highlighted the practical advantages of Logistic Regression and Naïve Bayes in resource-constrained environments. This dual evaluation framework demonstrates that classical models remain highly competitive when both accuracy and efficiency are considered, providing a practical alternative to computationally expensive neural architectures and offering practitioners clear guidelines for model selection under real-world constraints.
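
The dual accuracy-and-time evaluation can be sketched as below, assuming scikit-learn with TF-IDF features; the two-document corpus is a placeholder, and the efficiency ratio shown is illustrative rather than the authors' exact Efficiency Score formula.

```python
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score

texts_train, y_train = ["great film", "awful plot"], [1, 0]  # placeholder data
texts_test, y_test = ["really great"], [1]

vec = TfidfVectorizer()
X_train = vec.fit_transform(texts_train)
X_test = vec.transform(texts_test)

for name, clf in [("LogReg", LogisticRegression(max_iter=1000)),
                  ("NaiveBayes", MultinomialNB())]:
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)
    train_time = time.perf_counter() - t0
    f1 = f1_score(y_test, clf.predict(X_test))
    # Illustrative efficiency ratio: predictive quality per second of training
    print(f"{name}: F1={f1:.3f}, time={train_time:.4f}s, "
          f"efficiency={f1 / max(train_time, 1e-6):.1f}")
```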

Author 1: Kamal Walji
Author 2: Allae Erraissi
Author 3: Abdelali ZAKRANI
Author 4: Mouad Banane

Keywords: Sentiment analysis; natural language processing; machine learning; deep learning; logistic regression; random forest; Naive Bayes; LSTM; CNN; Efficiency Score

PDF

Paper 66: Enhanced Crow Search Algorithm with Cooperative Island Strategy for Energy-Aware Routing in Wireless Sensor Networks

Abstract: Energy efficiency is a fundamental problem experienced by Wireless Sensor Networks (WSNs), as limited battery power affects network lifespan and reliability. This paper develops a novel energy-efficient routing protocol based on an Enhanced Crow Search Algorithm (ECSA) optimization approach to optimize cluster head selection. The proposed ECSA combines a cooperative island model and an adaptive tournament selection procedure to overcome traditional Crow Search Algorithm (CSA) disadvantages caused by low population diversity, a slow convergence rate, and undesirable exploration-exploitation tradeoffs. A multi-objective fitness function is constructed by analyzing residual energy and remaining battery life, distance to the base station, packet delivery rate, throughput, and path loss to achieve overall network design optimality. Sensor nodes are organized optimally to reduce power consumption and prolong the system's lifespan. The experimental results demonstrate that, for a network of 100 nodes, the proposed ECSA-based routing protocol significantly outperforms recent metaheuristic approaches. Specifically, ECSA achieved 22% lower optimization cost than CSA, 28.2% lower than Black Widow Optimization (BWO), 26.3% lower than the Grey Wolf Optimizer (GWO), and 30% lower than the Whale Optimization Algorithm (WOA). It further attained 4.8–10.8% higher throughput, 24.4–40.3% lower path loss, 4.5–13.7% higher packet delivery ratio, and 40.1–109.1% more alive nodes compared to these benchmarks. These results confirm that ECSA provides superior energy efficiency, reliability, and robustness for large-scale WSN deployments.
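
For context on what ECSA enhances, here is the classic Crow Search position update in plain NumPy (parameters and the unit-box search space are illustrative): each crow follows another crow's memorized best position unless that crow is "aware", in which case the follower relocates randomly.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n, fl, ap = 4, 10, 2.0, 0.1          # dims, crows, flight length, awareness prob.
X = rng.uniform(0, 1, (n, dim))           # crow positions (candidate CH selections)
M = X.copy()                              # memories: best positions found so far

for i in range(n):
    j = rng.integers(n)                   # crow i shadows crow j
    if rng.random() >= ap:
        # Follow crow j's memory, scaled by a random step and flight length
        X[i] = X[i] + rng.random() * fl * (M[j] - X[i])
    else:
        X[i] = rng.uniform(0, 1, dim)     # crow j was "aware": random relocation
```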

Author 1: Xiangqian LI
Author 2: Xuemei ZHOU

Keywords: Wireless sensor networks; energy efficiency; cluster head selection; Crow Search; island model; routing; optimization

PDF

Paper 67: From Review to Practice: A Comparative Study and Decision-Support Framework for Sentiment Classification Models

Abstract: Sentiment classification is a core task in natural language processing (NLP), enabling automated interpretation of opinionated text across domains, such as social media, e-commerce, and healthcare. While numerous models have been proposed—from classical machine learning algorithms to deep neural networks and transformer architectures—their adoption is often hindered by trade-offs in performance, interpretability, and computational cost. This paper presents a threefold contribution: 1) a structured review of over 30 peer-reviewed studies that compare sentiment classifiers across five analytical dimensions—accuracy, robustness, interpretability, efficiency, and context adaptability; 2) a lightweight empirical benchmark on the IMDb dataset, evaluating Naïve Bayes, linear SVM, and LSTM; and 3) a practitioner-oriented decision-support framework comprising a model selection flowchart and recommendation matrix. The experimental results show that SVM achieved the highest F1-score (0.8329), while Naïve Bayes provided strong performance with minimal training time, and LSTM underperformed under constrained conditions. We further highlight persistent challenges in benchmarking consistency, model explainability, and cross-lingual adaptability. The paper concludes with actionable future directions, including hybrid architectures, low-resource deployment strategies, and inclusive NLP systems for diverse user populations. To our knowledge, this is the first study that unifies systematic review, empirical validation, and practical decision tools in the field of sentiment classification.

Author 1: Kamal Walji
Author 2: Allae Erraissi
Author 3: Abdelali ZAKRANI
Author 4: Mouad Banane

Keywords: Sentiment analysis; text classification; machine learning; deep learning; transformer models; BERT; LSTM; random forest; hybrid approaches; model evaluation; interpretability; natural language processing

PDF

Paper 68: A Weighted Scoring Model of Heuristic-Based Workload Scheduling Approaches in Edge-Cloud Environments

Abstract: Hybrid edge–cloud computing has emerged as a promising paradigm to meet the demands of latency-sensitive and resource-aware applications by combining the low-latency benefits of edge nodes with the scalability of cloud infrastructure. Efficient workload scheduling in such environments remains a critical challenge due to the heterogeneity of resources, dynamic network conditions, and diverse application requirements. This paper presents a comprehensive survey and comparative analysis of heuristic and metaheuristic scheduling algorithms tailored for edge–cloud systems. Seven representative algorithms, including Greedy Resource-Aware Heuristics (GRAH), Heterogeneous Earliest Finish Time (HEFT), Min-Min/Max-Min, Genetic Algorithm, Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Tabu Search, are evaluated against seven key criteria: latency awareness, energy efficiency, scalability, scheduling accuracy, implementation complexity, resource utilization, and adaptability. The evaluation is literature-driven and structured through a Weighted Scoring Model (WSM), which synthesizes findings from prior simulation-based and experiment-based studies into a comparative framework. Results indicate that Greedy Resource-Aware Heuristics offer the best trade-off for real-time, dynamic scenarios, while optimization-based methods, like GA and Tabu Search, provide superior accuracy and resource balance at the cost of increased complexity. The findings highlight critical trade-offs and offer guidance on selecting appropriate scheduling strategies based on application-specific goals and system constraints.
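
The Weighted Scoring Model itself is simple arithmetic, as the sketch below shows; the criterion weights and 1-5 scores are illustrative placeholders, not the survey's actual values.

```python
# Weighted Scoring Model: composite = sum over criteria of weight * score.
criteria_weights = {"latency": 0.20, "energy": 0.15, "scalability": 0.15,
                    "accuracy": 0.15, "complexity": 0.10,
                    "utilization": 0.15, "adaptability": 0.10}
scores = {  # literature-derived 1-5 ratings (placeholders)
    "GRAH": {"latency": 5, "energy": 4, "scalability": 4, "accuracy": 3,
             "complexity": 5, "utilization": 4, "adaptability": 5},
    "GA":   {"latency": 3, "energy": 3, "scalability": 4, "accuracy": 5,
             "complexity": 2, "utilization": 5, "adaptability": 3},
}
for algo, s in scores.items():
    total = sum(criteria_weights[c] * s[c] for c in criteria_weights)
    print(f"{algo}: WSM score = {total:.2f}")
```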

Author 1: Hasnae NOUHAS
Author 2: Abdessamad BELANGOUR
Author 3: Mahmoud NASSAR

Keywords: Edge computing; cloud computing; workload scheduling; heuristic algorithms; metaheuristics; scheduling optimization; ACO; PSO; HEFT; Tabu Search; GA; Greedy Resource-Aware Heuristics; Min-Min; Max-Min; WSM; Weighted Scoring Model

PDF

Paper 69: Optimization of Convolutional Neural Network Algorithm for Indonesian Sign Language Classification

Abstract: Sign language serves as a primary mode of communication for individuals who are deaf or speech impaired, using hand gestures to convey meaning visually. While it facilitates communication among the deaf community, it presents challenges for interaction with those who rely on spoken language. This study aims to recognize hand signs representing the letters A to Y (excluding J and Z) in the Indonesian Sign Language (SIBI) using image-based input. A custom dataset was collected through personal photo shoots and used to train a Convolutional Neural Network (CNN) implemented in Python using the TensorFlow library. The study also focuses on optimizing the CNN architecture to achieve high classification accuracy. Evaluation using a confusion matrix on the test data resulted in an overall accuracy of 87.1%, while real-time testing achieved an accuracy of 90.25%. The number of convolutional filters and dropout rates was adjusted to prevent underfitting and overfitting during model training.
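
A minimal sketch of the kind of compact CNN the study tunes, assuming TensorFlow/Keras; the filter counts and dropout rate stand for the knobs adjusted against under- and overfitting, and 24 output classes cover A to Y without J and Z.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # illustrative input resolution
    layers.Conv2D(32, 3, activation="relu"),   # filter counts are tuning knobs
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.3),                       # tuned against overfitting
    layers.Dense(128, activation="relu"),
    layers.Dense(24, activation="softmax"),    # A-Y excluding J and Z
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```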

Author 1: Alvin Bintang Rebrastya
Author 2: Sumarni Adi
Author 3: Hanif Al Fatta
Author 4: Windha Mega Pradnya Dhuhita
Author 5: Ika Nur Fajri
Author 6: Muhammad Hanafi

Keywords: Indonesian Sign Language; hand sign recognition; image classification; Convolutional Neural Network

PDF

Paper 70: Control System of Ocean Wave Simulator Using PID-Salp Swarm Algorithm

Abstract: This paper presents control system optimization of an ocean wave simulator using meta-heuristic optimization. The proposed control system finds leg length trajectories by Inverse Kinematics (IK) to be used as references for a Proportional-Integral-Derivative (PID) controller. PID gains are tuned using a Salp Swarm Algorithm (SSA) with the Root Mean Square Error (RMSE) of leg position errors as the performance index. A Stewart platform dynamic model is built in Simscape Multibody and integrated with a trajectory generator, an IK module, and a control system block diagram in Simulink. The Simulink model of the Stewart platform dynamics is then called in the optimization procedure via programming code written in MATLAB. Results show that the SSA outperforms other meta-heuristic methods, namely a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), achieving the lowest fitness value, 16.8% and 8.7% lower than GA and PSO, respectively. Moreover, the SSA avoids the boundary-trapping issue encountered by PSO, which becomes stuck at its upper bound. The SSA also successfully enhanced a simplified version of the PID control system, with the simplified PID-SSA scenario achieving better tracking-error performance than the full PID-SSA configuration. The proposed approach contributes to the advancement of marine simulation technologies, supporting innovation in ocean engineering and sustainable maritime applications.
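
The fitness evaluation at the heart of the tuning loop can be sketched in Python as follows (the Simscape Multibody plant is replaced by a toy first-order actuator, and the gains and reference are placeholders); the SSA would repeatedly call such a function and keep the gains with the lowest RMSE.

```python
import numpy as np

def rmse_fitness(gains, ref, dt=1e-3):
    """Simulate PID leg tracking on a toy plant; return RMSE of position error."""
    kp, ki, kd = gains
    x, integ, prev_e, errs = 0.0, 0.0, 0.0, []
    for r in ref:
        e = r - x
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        x += (-x + u) * dt            # toy first-order leg-actuator dynamics
        prev_e = e
        errs.append(e)
    return np.sqrt(np.mean(np.square(errs)))

ref = np.sin(2 * np.pi * 0.5 * np.arange(0, 5, 1e-3))  # wave-like leg trajectory
print(rmse_fitness((50.0, 5.0, 0.5), ref))             # the SSA would minimize this
```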

Author 1: Affiani Machmudah
Author 2: Juchen Li
Author 3: Mahmud Iwan Solihin
Author 4: Chiong Meng Choung
Author 5: Wibowo Harso Nugroho
Author 6: Ahmad Syafiul Mujahid
Author 7: Sahlan
Author 8: Abdul Ghofur

Keywords: Marine simulation technologies; Stewart platform; ocean wave; control system; meta-heuristic optimization; Salp Swarm Algorithm

PDF

Paper 71: Enhanced IoT Security Using Machine Learning Technology

Abstract: This paper examines the enhancement of security measures for Internet of Things (IoT) systems through the application of Machine Learning (ML) techniques. As the number of IoT devices continues to rise, ensuring their security has become increasingly critical, given that conventional methods frequently struggle to identify advanced threats. This study explores the implementation of several ML algorithms, including Random Forest (RF), Decision Trees (DT), Support Vector Machines (SVM), and Convolutional Neural Networks (CNN), to identify anomalies and intrusions within IoT networks. By conducting a comprehensive review of existing research and experiments, it highlights the effectiveness of ML in enhancing IoT security, with high detection rates for various threats, including botnet attacks, Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) incidents, and intrusion attempts. DoS/DDoS attacks and many types of botnets are among the most devastating attacks; they have been spreading for a long time and are still branching out in new ways against IoT networks. They can damage IoT services and prevent these services from being used by legitimate users. Therefore, securing IoT networks becomes a significant concern. The proposed model is used to continuously monitor network traffic for any deviations from standard patterns in IoT networks. This paper also stresses the necessity of utilising suitable datasets and feature selection techniques to enhance the efficacy of ML models. To train our model, we utilized the IoT-23 dataset, one of the most recent datasets covering many IoT scenarios and anomalous activities. Furthermore, we utilised two feature selection algorithms, the Correlation-based Feature Selection (CFS) algorithm and the Genetic Algorithm (GA), and compared their results when training our model. The best performances were obtained with the DT and RF classifiers when trained with features selected by CFS. However, for training and testing time metrics, DT performance was superior across both feature selection methods.
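
A sketch of the select-then-classify step, assuming scikit-learn; synthetic data stands in for the preprocessed IoT-23 flows, and a mutual-information filter stands in for the paper's CFS (the GA branch is omitted).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for preprocessed IoT-23 network-flow features
X, y = make_classification(n_samples=1000, n_features=40, n_informative=12,
                           random_state=0)

# Filter-style feature selection (stand-in for CFS), then classify
X_sel = SelectKBest(mutual_info_classif, k=15).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```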

Author 1: Rawan Yousef Bukhowah
Author 2: Alanoud Khaled Bu Dookhi
Author 3: Mounir Frikha

Keywords: Internet of Things; Artificial Intelligence; machine learning; deep learning; security

PDF

Paper 72: DAE-IDS: A Domain-Aware Ensemble Intrusion Detection System with Explainable AI for Industrial IoT Networks

Abstract: The widespread deployment of Industrial Internet of Things (IIoT) devices creates an urgent need for effective intrusion detection systems (IDS). However, two critical challenges limit current approaches: severe class imbalance in network traffic data that hampers detection of rare attacks, and the “black-box” nature of machine learning models that undermines trust in security-critical applications. This study presents a Domain-Aware Ensemble Intrusion Detection System (DAE-IDS) equipped with explainable AI, addressing both challenges through frequency-aware ensemble learning and computationally efficient interpretability mechanisms. Using the Edge-IIoTset dataset containing 80 features across 12 classes, attacks were categorized into three frequency groups: majority attacks (5 classes), middle-frequency attacks (4 classes), and minority attacks (3 classes). Specialized Random Forest models (50 trees each, with class weighting) were tailored to each frequency group, and a domain-aware ensemble was then developed that routes traffic to the most appropriate specialized model based on attack frequency patterns. To enhance interpretability, SHAP explanations were added using an optimized approach that combines interventional TreeExplainer with instance subsampling (300 samples per model) and top-k feature prioritization. This optimization reduced SHAP computation time by 60% while maintaining full interpretability. The domain-aware ensemble achieved superior performance with a macro-F1 score of 1.00, demonstrating significant improvements in rare-attack detection compared to traditional approaches. SHAP analysis revealed attack-specific discriminative features, providing actionable insights for security analysts. This framework successfully bridges the accuracy-interpretability trade-off in IIoT security applications, enabling trustworthy intrusion detection suitable for resource-constrained edge environments. The attack-frequency specialization approach offers a practical solution for handling class imbalance while maintaining model transparency through efficient explainability mechanisms.
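
The efficiency trick described above (interventional TreeExplainer, 300-instance subsampling, top-k reporting) can be sketched as follows, assuming the shap and scikit-learn packages; the data and model are synthetic stand-ins for Edge-IIoTset and the specialized forests.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for Edge-IIoTset features and a specialized forest
X = pd.DataFrame(np.random.rand(2000, 20),
                 columns=[f"f{i}" for i in range(20)])
y = (X["f0"] + X["f1"] > 1).astype(int)
rf = RandomForestClassifier(n_estimators=50, class_weight="balanced").fit(X, y)

background = X.sample(100, random_state=0)      # small background set
explainer = shap.TreeExplainer(rf, background,
                               feature_perturbation="interventional")
subset = X.sample(300, random_state=1)          # 300 instances, as in the paper
sv = explainer.shap_values(subset)
# Classic shap returns a per-class list; newer versions return one array
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
top_k = np.abs(sv_pos).mean(axis=0).argsort()[::-1][:10]
print(subset.columns[top_k].tolist())           # top-10 discriminative features
```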

Author 1: Saifur Rahman

Keywords: Intrusion detection systems; IoT security; Explainable AI (XAI); class imbalance; frequency-aware ensemble; SHAP interpretability; domain-aware routing; confidence-based ensemble; Edge-IIoTset dataset; optimized random forest

PDF

Paper 73: Enhanced Fuzzy Clustering Approach for Overlapping Community Detection via Structural Neighborhood Similarity

Abstract: Complex networks can be observed in various real-world contexts, such as social, biological, and neurological networks. A critical analytical challenge in such networks is community detection, which entails detecting groupings of nodes with dense internal connectivity. Numerous studies have been conducted on overlapping communities, wherein nodes may concurrently belong to multiple communities. In this paper, we propose an enhanced fuzzy clustering method for overlapping community detection based on neighborhood similarity. The core idea is to treat community membership as a continuous feature, so that nodes can belong to more than one community with different levels of affiliation. Our method consists of four stages: first, we extract local structural features; then, we build a neighborhood similarity matrix based on common neighbors; next, we assign initial fuzzy memberships using an Enhanced Fuzzy C-Means approach; and finally, we refine memberships using a local optimization strategy. We evaluated our method on various real-world datasets of differing sizes and found that it outperforms multiple state-of-the-art techniques, as indicated by overlapping modularity, F-score, and statistical significance assessments. The proposed method is a useful and scalable solution that is easier to understand and more accurate.
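
A minimal sketch of the fuzzy-membership stage, assuming the scikit-fuzzy package; the feature matrix, the number of communities, and the 0.3 overlap threshold are illustrative.

```python
import numpy as np
import skfuzzy as fuzz

# Placeholder node features, shaped (n_features, n_nodes) as skfuzzy expects,
# e.g. rows of a neighborhood-similarity matrix
feats = np.random.rand(8, 40)
cntr, u, *_ = fuzz.cluster.cmeans(feats, c=3, m=2.0, error=1e-5, maxiter=200)

# A node joins every community whose membership passes the threshold: overlap
memberships = {n: np.where(u[:, n] > 0.3)[0].tolist() for n in range(u.shape[1])}
print(memberships[0])  # communities node 0 belongs to
```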

Author 1: Faiza Riaz Khawaja
Author 2: Zuping Zhang
Author 3: Abdul Hadi Riaz
Author 4: Abdolraheem Khader
Author 5: Ahmed Hamza Osman
Author 6: Hani Moetque Aljahdali
Author 7: Ali Ahmed

Keywords: Fuzzy clustering; neighborhood similarity; extended modularity; overlapping community; complex networks

PDF

Paper 74: Vision-Based Autonomous Localization of Fall Protection Anchor Points on Transmission Towers Using Multi-View Geometric Perception

Abstract: This paper presents the first systematic investigation into autonomous UAV-mounted fall protection lanyard (FPL) deployment for high-voltage transmission tower inspections, addressing a critical safety gap in the power industry where falls account for 34% of occupational fatalities. We propose a novel geometry-based solution to overcome three fundamental limitations of existing approaches: the isolated processing of UAV imagery without sensor fusion, unreliable 2D-to-3D spatial correspondence in anchor point detection, and the high annotation costs of supervised learning methods. Our technical contribution establishes a multi-view geometric perception framework that decomposes the FPL anchoring task into ridge line identification and optimal mounting point selection. The method first develops a spatial edge distance perception algorithm specifically for power inspection drones, which computes structural depth through plane-induced homography transformations of temporally matched line features. Subsequently, a mounting position planning algorithm integrates multi-view geometric constraints with practical operational requirements, including ladder proximity, diagonal steel avoidance, and temporal stability. Experimental validation on real-world power infrastructure data demonstrates superior performance compared to learning-based alternatives, achieving 10.98 MAE in positioning accuracy while maintaining 80 ms processing efficiency for real-time operation. The proposed approach eliminates dependency on manual climbing and expert annotations, offering both theoretical advancements in stereo-environment perception for complex structures and immediate field applicability for safer power grid maintenance. This work represents the first formal proposal and comprehensive solution for autonomous FPL deployment in transmission tower inspection scenarios.
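
The plane-induced homography step can be illustrated with OpenCV as below; the matched points from two temporally adjacent frames are placeholders, and large transfer residuals indicate structure off the dominant plane.

```python
import cv2
import numpy as np

# Placeholder line-feature endpoints matched across two temporally adjacent frames
pts_t0 = np.float32([[100, 120], [240, 118], [105, 300], [250, 302]])
pts_t1 = np.float32([[112, 124], [251, 121], [118, 306], [263, 309]])

# Homography induced by the dominant scene plane
H, mask = cv2.findHomography(pts_t0, pts_t1, cv2.RANSAC, 3.0)

# Transfer points through H; off-plane structure shows up as large residuals,
# which is the depth cue exploited by plane-induced parallax methods
warped = cv2.perspectiveTransform(pts_t0.reshape(-1, 1, 2), H)
residual = np.linalg.norm(warped.reshape(-1, 2) - pts_t1, axis=1)
print(residual)
```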

Author 1: Chunqing Yang
Author 2: Yu Peng
Author 3: Jian Yu
Author 4: Dongfeng Yu
Author 5: Rui Liu
Author 6: Jiahui Chen

Keywords: Fall protection lanyard; transmission tower inspection; anchor point localization; multi-view geometry; spatial edge distance perception; homography transformation

PDF

Paper 75: A New Hybrid Approach Based on Discrete Wavelet Transform and Deep Learning for Traffic Sign Recognition in Autonomous Vehicles

Abstract: The rapid advancement of autonomous vehicles has led to the widespread integration of advanced driver assistance systems, significantly improving vehicle control, safety, and compliance with traffic regulations. A crucial aspect of these systems is the reliable detection and recognition of traffic signs, which play a key role in managing urban traffic flow and ensuring road safety. However, traffic sign recognition remains a challenging task due to varying lighting conditions, occlusions, and diverse sign appearances. This paper presents a novel hybrid approach for efficient traffic sign recognition tailored to the needs of autonomous driving. The proposed method combines the Discrete Wavelet Transform (DWT) for robust feature extraction with the powerful classification capabilities of Convolutional Neural Networks (CNNs) within a Deep Learning framework. The DWT effectively captures essential image characteristics while reducing noise and irrelevant details, providing a compact yet informative feature set for the CNN classifier. Extensive experiments were conducted to evaluate the performance of the system in real-world conditions. The proposed approach achieved an impressive recognition precision of 98%, demonstrating its ability to interpret and respond to traffic signs with high reliability. The results confirm the method’s robustness, real-time efficiency, and suitability for deployment in intelligent transportation systems and autonomous vehicles. Overall, this study highlights the complementary strengths of DWT and CNN within the broader context of Deep Learning, offering a significant improvement over conventional traffic sign recognition techniques. The proposed system represents a promising step toward enhancing the perception capabilities of autonomous vehicles, contributing to safer and more reliable navigation in complex traffic environments.
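
A minimal sketch of the DWT front end, assuming the PyWavelets package; the Haar wavelet, single decomposition level, and use of the approximation subband as the CNN input are illustrative choices.

```python
import numpy as np
import pywt

img = np.random.rand(64, 64)               # placeholder grayscale traffic sign
# Single-level 2-D DWT: approximation + horizontal/vertical/diagonal details
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")
features = LL                               # compact 32x32 input for the CNN
print(features.shape)
```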

Author 1: Rim Trabelsi
Author 2: Khaled Nouri

Keywords: Safety; discrete wavelet transform; traffic sign recognition; autonomous vehicles; deep learning

PDF

Paper 76: A Computer-Aided Diagnosis System for Ulcerative Colitis Classification Using Vision Transformer

Abstract: Ulcerative colitis (UC) is an inflammatory condition of the colon and digestive tract whose severity can significantly impact a patient’s quality of life. Assessing disease severity from colonoscopy information is a laborious process that concentrates on the most severe anomalies. Current diagnostic methods, primarily colonoscopy, are subjective and prone to inter-observer variability, hindering accurate staging and personalized treatment. As such, automated and precise technology is required to deliver optimal outcomes. The current study introduces UC-visionNet, an automated approach that classifies ulcerative colitis severity from colonoscopy image analysis using vision transformer techniques. UC-visionNet makes use of vision transformers, pre-trained deep learning models that have proven highly successful in image analysis applications. To classify ulcerative colitis severity, these models are fine-tuned on the LIMUC (Labeled Images for Ulcerative Colitis) dataset. Compared to conventional colonoscopy procedures, using UC-visionNet for image analysis may be faster, enhancing patient satisfaction and healthcare effectiveness. The suggested model performs quantitatively better than state-of-the-art techniques on the LIMUC dataset, attaining a 96% training accuracy with the Vision Transformer (ViT). UC-visionNet offers a promising automated solution for accurate and efficient UC severity classification.
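
A minimal fine-tuning setup in the spirit of UC-visionNet, assuming the Hugging Face transformers library; the checkpoint name and the four Mayo-style severity labels are illustrative assumptions.

```python
from transformers import ViTForImageClassification, ViTImageProcessor

# Pre-trained ViT backbone with a fresh 4-class severity head
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=4)

# Per-image usage during fine-tuning/inference (pil_image is a placeholder):
# inputs = processor(images=pil_image, return_tensors="pt")
# logits = model(**inputs).logits          # argmax -> severity class 0-3
```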

Author 1: Dharmendra Gupta
Author 2: Jayesh Gangrade
Author 3: Yadvendra Pratap Singh
Author 4: Shweta Gangrade

Keywords: Ulcerative Colitis (UC); colonoscopy videos; deep learning; vision transformer

PDF

Paper 77: Graph Neural Networks with Shapley-Value Explanations for Hierarchical Recommendation Systems

Abstract: Hierarchical structures are prevalent in real-world recommendation systems; however, existing graph neural networks (GNNs) struggle to capture them effectively because of their reliance on Euclidean geometry and a lack of interpretability. This paper presents a novel architecture, Hyperbolic Graph Neural Networks with Shapley-Value Explanations (HGNN-SV), which simultaneously addresses both challenges in hierarchical recommendation tasks. Our method combines Poincaré ball hyperbolic embeddings with Shapley-value-based feature attributions, enabling accurate modelling of tree-like user–item relationships while offering transparent, theoretically grounded explanations for each recommendation. Experiments on the Amazon Product Reviews and MovieLens 1M datasets demonstrated strong performance across multiple evaluation metrics. On MovieLens-1M, HGNN-SV achieved a Precision@10 of 0.822, Recall@10 of 0.785, and F1-Score@10 of 0.803. For Amazon Product Reviews, the method attained a Precision@10 of 0.785, Recall@10 of 0.730, and F1-Score@10 of 0.756. A comparative evaluation against leading baselines, including LightGCN, Hyperbolic GCN, GNNShap, and MAGE, shows that our unified approach consistently outperforms existing methods across all metrics. Moreover, the generated Shapley attributions closely aligned with semantic item hierarchies, as validated through systematic evaluation. By bridging the gap between geometric expressiveness and interpretability, our approach establishes a new benchmark for trustworthy, high-fidelity hierarchical recommendation systems.
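
The geometric primitive behind such embeddings is the Poincaré-ball geodesic distance, sketched below in plain NumPy (unit curvature assumed); hierarchy depth maps naturally to distance from the origin.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball:
    d(u, v) = arccosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2)))."""
    sq = np.sum((u - v) ** 2)
    nu, nv = np.sum(u ** 2), np.sum(v ** 2)
    return np.arccosh(1.0 + 2.0 * sq / ((1.0 - nu) * (1.0 - nv) + eps))

root, leaf = np.array([0.0, 0.0]), np.array([0.9, 0.0])
print(poincare_distance(root, leaf))  # points near the boundary sit "deep" in the tree
```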

Author 1: Redwane Nesmaoui
Author 2: Mouad Louhichi
Author 3: Mohamed Lazaar

Keywords: Hyperbolic graph neural networks; Shapley value; explainable recommendation; hierarchical recommendation systems; interpretability; Explainable AI (XAI); Poincaré ball embeddings; graph neural networks; feature attribution; hyperbolic geometry; user-item; graph embeddings

PDF

Paper 78: Benchmarking Large Language Models for Hate Speech Detection in Arabic Dialects: Focus on the Saudi Dialects

Abstract: This study investigates the effectiveness of large language models (LLMs) in detecting Arabic hate speech, with a particular focus on prompt-based learning and the sociolinguistic challenges of Saudi dialects. We evaluate four LLMs, GPT-4o, LLaMA3, Gemma2, and ALLaM, using zero-shot, one-shot, and three-shot prompting strategies. The results show that all models benefit from in-context examples, with GPT-4o achieving the highest overall performance across all prompting settings. A detailed error analysis reveals persistent challenges, particularly in detecting implicit hate, handling dialectal variation, and interpreting culturally embedded expressions. We also highlight limitations related to topic bias and annotation ambiguity, which further complicate model evaluation. Overall, the findings offer key insights for evaluating LLMs in low-resource settings and addressing the unique linguistic complexities of Arabic dialects.
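
A sketch of the few-shot prompting setup, assuming an OpenAI-style chat-completion client; the system instruction, the three in-context examples, and the label set are illustrative placeholders.

```python
from openai import OpenAI  # assumption: an OpenAI-style chat-completion client

FEW_SHOT = [  # hypothetical three-shot examples
    ("example dialectal tweet with no hostile content", "NOT_HATE"),
    ("example tweet containing an explicit slur", "HATE"),
    ("example tweet with an implicit, culturally coded insult", "HATE"),
]

def build_messages(tweet):
    """Assemble a three-shot prompt: instruction, labeled examples, target."""
    msgs = [{"role": "system",
             "content": "Classify the Saudi-dialect tweet as HATE or NOT_HATE."}]
    for text, label in FEW_SHOT:
        msgs.append({"role": "user", "content": text})
        msgs.append({"role": "assistant", "content": label})
    msgs.append({"role": "user", "content": tweet})
    return msgs

client = OpenAI()  # reads the API key from the environment
resp = client.chat.completions.create(
    model="gpt-4o", messages=build_messages("tweet text to classify"))
print(resp.choices[0].message.content)
```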

Author 1: Omaima Fallatah

Keywords: Arabic hate speech detection; large language models (LLMs); in-context learning; Arabic NLP

PDF

Paper 79: Task-Oriented Evaluation of Assamese Tokenizers Using Sentiment Classification

Abstract: Tokenization is a foundational step in the NLP pipeline, and its design strongly influences the performance of transformer-based models, particularly for morphologically rich and low-resource languages such as Assamese. While most tokenizers are traditionally assessed using intrinsic metrics, their practical impact on downstream tasks has remained underexplored. This study systematically evaluates nine subword tokenizer configurations—spanning Byte-Pair Encoding (BPE), WordPiece, and Unigram algorithms with vocabulary sizes of 8K, 16K, and 32K—on sentiment classification in Assamese. Each tokenizer was integrated into a BERT-base-multilingual-cased model by replacing the default tokenizer and reinitializing the embedding layer. On a manually curated dataset, naïve fine-tuning proved unstable under class imbalance, but a class-weighted loss restored effective training and exposed clear performance differences across tokenizers. WordPiece consistently outperformed BPE and Unigram, with the wordpiece 16k configuration achieving a weighted F1-score of 0.4897 across 10 random seeds. This score was statistically comparable to mBERT (0.4919) and competitive with larger multilingual baselines such as XLM-R (0.4978), despite relying on a far smaller, Assamese-specific vocabulary. These findings underscore that tokenizer choice is not a neutral preprocessing step but a critical design decision, highlighting the importance of downstream evaluation when developing practical NLP pipelines for low-resource languages.
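
Training one of the nine configurations (WordPiece with a 16K vocabulary) can be sketched with the Hugging Face tokenizers library as below; the corpus path and special-token list are placeholders.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.WordPieceTrainer(
    vocab_size=16000,  # the 16K configuration; 8K and 32K trained the same way
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

# Placeholder path to a raw Assamese text corpus
tokenizer.train(files=["assamese_corpus.txt"], trainer=trainer)
tokenizer.save("wordpiece_16k.json")
```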

Author 1: Basab Nath
Author 2: Sagar Tamang
Author 3: Osman Elwasila
Author 4: Yonis Gulzar

Keywords: Assamese NLP; tokenization; subword tokenization; sentiment analysis; low-resource languages; BERT; class imbalance

PDF

Paper 80: Leveraging Distance-Optimized Transformers for High-Performance Arabic Short Answers Grading

Abstract: This study presents a comprehensive distance-optimized transformer architecture for Automated Arabic Short Answers Grading (AASAG) that systematically evaluates multiple semantic similarity measures. Short answer grading—assessment of responses typically 1-3 sentences long requiring conceptual understanding rather than factual recall—poses significant challenges in Arabic due to morphological complexity and limited computational resources. Our approach integrates pre-trained Arabic transformer models (AraBERT v02) with four distinct distance algorithms: cosine similarity, Manhattan distance, Euclidean distance, and dot-product calculations within a Siamese network architecture. Through systematic evaluation across three progressively enhanced datasets (original AR-ASAG, SemEval-augmented, and reference-integrated versions), our distance-optimized approach achieves state-of-the-art performance with correlation coefficients of 0.7998, representing a 5.5% improvement over existing methods. This advancement significantly outperforms traditional vector space models (0.7037 correlation), BERT-based approaches (0.7616), and hybrid semantic analysis methods (0.745), establishing new benchmarks for Arabic educational assessment technology.
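
The four similarity measures compared in the Siamese setup reduce to a few lines over pooled sentence embeddings, as the NumPy sketch below shows; the 768-dimensional random vectors stand in for AraBERT outputs.

```python
import numpy as np

# Placeholder pooled embeddings of a student answer (u) and reference answer (v)
u, v = np.random.rand(768), np.random.rand(768)

cosine    = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))  # angle-based
manhattan = np.sum(np.abs(u - v))                            # L1 distance
euclidean = np.linalg.norm(u - v)                            # L2 distance
dot       = u @ v                                            # unnormalized
print(cosine, manhattan, euclidean, dot)
```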

Author 1: Hatem M. Noaman
Author 2: Mohsen Rashwan
Author 3: Hazem Raafat

Keywords: Automatic Arabic Short Answers Grading; Arabic language processing; educational technology; pre-trained language models; semantic similarity

PDF

Paper 81: A FOREX Trading System Based on Semi-Supervised News Classification, Market Sentiment Analysis, and GRU-CNN Deep Learning Models

Abstract: Investors access the foreign exchange market (FOREX) not only to preserve their wealth but also to generate profits and achieve specific financial goals. It is one of the largest financial markets that investors rely on, in which profits are made from fluctuations in currency exchange rates over different time cycles: short-term, medium-term, and long-term. In this article, we propose an automated FOREX trading system that combines two artificial intelligence algorithms: the first, a semi-supervised classifier, sorts news by pertinence and then analyzes market sentiment. This algorithm plays a crucial role in replacing traditional fundamental analysis, which is based on macroeconomic factors, political events, and news headlines. The use of GAN-BERT helped improve performance in classification tasks with limited labeled data and reduced execution time. This algorithm demonstrates impressive results, achieving a high accuracy of 97.5%, which makes its output data more reliable for use in the second algorithm, a combination of two deep learning models: the Gated Recurrent Unit (GRU) and the Convolutional Neural Network (CNN). We enrich the dataset used in this phase with additional technical indicators and features that may help explain market fluctuations. We evaluated our final algorithm over multiple time frames and several windows; the results were impressive, and back-testing confirmed its potential profitability and acceptable risk.
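
A minimal sketch of a GRU-CNN hybrid over windows of rate, indicator, and sentiment features, assuming TensorFlow/Keras; the window length, feature count, and layer sizes are illustrative, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 60, 12  # e.g. OHLC rates + technical indicators + sentiment
model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(64, 3, activation="relu", padding="causal"),  # local patterns
    layers.MaxPooling1D(2),
    layers.GRU(64),             # longer-range temporal dependencies
    layers.Dense(1),            # next-step rate (sigmoid for direction instead)
])
model.compile(optimizer="adam", loss="mse")
```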

Author 1: Nabil MABROUK
Author 2: Marouane CHIHAB
Author 3: Younes CHIHAB

Keywords: FOREX; trading; semi-supervised classification; sentiment analysis; machine learning; deep learning; RNN; CNN; GRU

PDF

Paper 82: Scalable Formal Verification of Modular Concurrent Systems: A Survey of Techniques, Tools and Challenges

Abstract: The increasing complexity of distributed and concurrent systems raises pressing challenges for ensuring correctness and reliability. Formal verification, and in particular model checking, offers a rigorous foundation to validate system properties, yet suffers from the well-known state space explosion problem. This difficulty is especially acute in modular architectures, where local behaviors intertwine with synchronization across components. This paper provides a structured survey of the main techniques designed to overcome these challenges, including state space reduction, abstraction, compositional reasoning, symbolic approaches, and distributed verification. We also review representative tools such as SPIN, NuSMV, LTSmin, DiVinE, and STORM, assessing their capabilities and limitations in handling modular and concurrent models. Building on this landscape, we position the Reduced Distributed State Space (RDSS) as a novel framework that addresses key scalability limits. RDSS reduces global complexity into module-specific meta-graphs, ensures stuttering equivalence, and enables local model checking without exploring the full global state space. Comparative experiments demonstrate significant gains over existing approaches, particularly for systems where modules are not all synchronized on the same transitions. We conclude by identifying open challenges and future research directions, including distributed implementations, AI-driven heuristics, and hybrid reductions. Our survey underscores the importance of structural awareness in modern verification workflows and establishes RDSS as a promising foundation for scalable verification of modular concurrent systems.

Author 1: Sawsen Khlifa
Author 2: Chiheb Ameur Abid
Author 3: Asma ben Letaifa
Author 4: Belhassen Zouari

Keywords: Distributed systems; state space; Modular Petri Net; formal verification; state explosion problem; model checking; temporal logic; reduction techniques; RDSS; ROS2; scalability; modularity

PDF

Paper 83: New Explainable Overlapping Co-Clustering for Recommender Systems: Capturing Multifaceted Preferences with Enhanced Interpretability

Abstract: Recommender systems have become critical tools in reducing information overload by providing personalized recommendations across several application domains, including commerce, industry, education, and academic research. Clustering-based recommender systems, which use clustering to group similar users or items to generate suggestions, have shown high accuracy and efficiency. However, conventional clustering methods often fail to address several challenges, such as ignoring the possibility that a user may have different item preferences, limited interpretability of generated suggestions, and the inability to tailor recommendation list sizes to individual user needs. To address all these issues, we propose in this work a new recommender system based on Overlapping Co-clustering and Modularity Maximisation (OCCMM). The proposed method accounts for users having several item preferences by building overlapping clusters rather than the conventional non-overlapping model. It also clusters items and users simultaneously, using the co-clustering technique to facilitate the generation and interpretation of suggestions. Furthermore, OCCMM enables adjustment of recommendation list sizes through an easily tuned parameter δ. Experiments conducted on three real-world datasets demonstrated the effectiveness of OCCMM in achieving better performance in terms of accuracy and interpretability compared to conventional existing methods.

Author 1: Chiheb Eddine Ben Ncir
Author 2: Mohammed Ibrahim Alattas

Keywords: Clustering-based recommender systems; modularity maximization; overlapping co-clustering; multiple-user-preferences; recommendation interpretability

PDF

Paper 84: Applying a Lightweight Graphics Library to Visually Corroborate Learning in Programming Introduction Courses

Abstract: Learning to program in first-year courses is challenging because the link between source code and program behaviour is not immediately visible to novices. This paper reports on the deployment of UFramework, a lightweight graphics library developed in C++/Visual Studio, designed to help students visually corroborate their learning by observing the on-screen effects of their own algorithms. The experience was implemented in a Structured Programming module that combines lectures, labs, and project-based assignments. We 1) describe the design principles and architecture of the library, 2) present a portfolio of progressively scaffolded assignments (shooter prototype, grid map parsing, spatial quadrants), 3) outline the assessment rubric and its alignment with intended learning outcomes, and 4) report multi-year descriptive evidence that includes pass rates and qualitative reflections. Results show improved student engagement and higher pass rates in the most recent cohorts, together with qualitative evidence of increased motivation and clearer problem decomposition. While the findings are limited to a single institution and remain descriptive, they suggest that lightweight, visual-first workflows can lower barriers to learning programming and foster computational thinking competencies such as decomposition, abstraction, algorithmic design, and debugging. Future work should include controlled comparisons and broader validation to strengthen internal validity and explore the applicability of this approach to non-visual domains.

Author 1: Claudia De La Fuente
Author 2: Cristian Vidal-Silva
Author 3: Liza Jegó-Mendoza
Author 4: Patricia Pedrero-Valenzuela

Keywords: Structured programming; visual corroboration; self-regulated learning; metacognition; program visualization; project-based learning

PDF

Paper 85: Mental Health Monitoring in Neurodivergent Children Using NeuroSky TGAM1: Real-Time EEG Signal Processing for Cognitive and Emotional Assessment

Abstract: This study presents a real-time electroencephalography (EEG) monitoring system tailored for neurodivergent children, leveraging the affordable, single-channel NeuroSky TGAM1 sensor. We introduce a robust signal processing pipeline based on spectral power density analysis (from Delta to Gamma bands) to identify discrete cognitive-emotional states during therapy sessions. The system demonstrates 82.3% accuracy in classifying focused attention, emotional distress, and calm engagement. Crucially, our wearable implementation provides objective biomarkers for personalizing mental health interventions, effectively bridging biomedical engineering and child psychiatry. We illustrate the system’s adaptability across various therapeutic contexts; notably, our findings reveal compelling neural response patterns during dolphin-assisted therapy for children with Autism Spectrum Disorder (ASD). This low-cost, scalable solution shows significant potential for objectively evaluating therapeutic efficacy in populations with ADHD and ASD, moving beyond subjective assessments towards data-driven care.
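
The spectral pipeline can be sketched as below, assuming SciPy; the 512 Hz rate matches the TGAM's raw output, the band edges are the conventional ones, and the signal is a random placeholder.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 512                                   # TGAM raw sampling rate
eeg = np.random.randn(fs * 10)             # placeholder 10 s single-channel signal
f, psd = welch(eeg, fs=fs, nperseg=fs * 2) # Welch power spectral density

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}
# Integrate PSD over each band to get the per-band power features
powers = {name: trapezoid(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
          for name, (lo, hi) in bands.items()}
print(powers)  # feature vector for the cognitive-emotional state classifier
```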

Author 1: Erika Yolanda Aguilar Del Villar
Author 2: Jesús Jaime Moreno Escobar
Author 3: Claudia Hernández Aguilar

Keywords: EEG; neurodivergent children; wearable; spectral power density analysis; therapy sessions

PDF

Paper 86: A Weakly Supervised MIL Approach to Fake News Detection via Propagation Tree Analysis

Abstract: This paper presents a weakly supervised Multiple Instance Learning (MIL) framework for fake news detection in social media, leveraging propagation tree analysis to model the spread of misinformation across online networks. Unlike traditional text-based or graph-based methods, our approach captures fine-grained post-level stances (support, denial, question, comment) and aggregates them to infer news veracity using a novel hierarchical attention mechanism. The framework incorporates social network dynamics of information diffusion, offering deeper insights into how user interactions amplify or suppress misinformation. We evaluate our model on benchmark datasets, including PolitiFact and GossipCop from FakeNewsNet, comprising over 23,000 news articles and hundreds of thousands of user engagements, as well as on the SemEval-8 dataset for binary classification of true vs. fake news. Our method achieves up to 94.3% accuracy and 91.7% F1-score, outperforming state-of-the-art machine learning and deep learning baselines. Ablation studies further validate the contribution of stance aggregation and attention-based propagation modeling. These results highlight the effectiveness of integrating stance detection, propagation structures, and weakly supervised learning for scalable and interpretable fake news verification in online environments.
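
The aggregation step can be illustrated with a generic attention-based MIL pooling module in PyTorch (dimensions illustrative; this is a standard attention-pooling variant, not necessarily the paper's exact hierarchical mechanism): post-level stance embeddings are the instances, the news item is the bag.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Weight each instance by a learned attention score, then aggregate."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(),
                                   nn.Linear(32, 1))

    def forward(self, instances):                         # (n_posts, dim)
        a = torch.softmax(self.score(instances), dim=0)   # attention per post
        return (a * instances).sum(dim=0)                 # bag representation

posts = torch.randn(17, 64)         # 17 stance-encoded posts in a propagation tree
bag = AttentionMILPooling()(posts)  # fed to a downstream veracity classifier
print(bag.shape)
```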

Author 1: Shariq Bashir

Keywords: Identifying fake news; social network analysis; post stance detection; deep learning; information retrieval; multiple instance learning

PDF

Paper 87: Hierarchical Adaptive Gap-Run TID Compression for Large-Scale Frequent Itemset Mining

Abstract: Frequent itemset mining faces the prominent problems of high storage requirements and low efficiency in large-scale transaction data environments. The traditional Eclat algorithm usually uses a bitmap or a sparse array to represent a transaction identifier (TID) list, which makes it difficult to adapt to dense and sparse transaction data at the same time. Although existing hybrid representation schemes can partly alleviate this problem, the additional computational overhead caused by frequent data structure switching and the inherent space waste of the bitmap structure have not been fundamentally solved. Therefore, this article proposes the HiAGL-FIM algorithm, based on a Hierarchical Adaptive Gap-Run Transaction Identifier List (HAGL-TID). The algorithm adaptively selects a Gap List or a Run List for transaction identifier encoding according to the continuity ratio, and designs an efficient TID intersection operation, completely eliminating dependence on the bitmap structure and effectively reducing memory consumption and intersection computation overhead. The experimental results show that HiAGL-FIM has significant advantages in running time, memory usage, and data scalability compared to classical algorithms such as Eclat, FP-Growth, and dEclat. When the transaction data scale reaches millions, it shows an even more significant performance improvement, demonstrating the effectiveness and practical value of our method.
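
The adaptive encoding choice can be illustrated with a toy gap-versus-run encoder in plain Python; the 0.5 continuity cutoff is an illustrative threshold, not necessarily the paper's.

```python
def to_gaps(tids):
    """Sparse-friendly encoding: first TID, then successive deltas."""
    return [tids[0]] + [b - a for a, b in zip(tids, tids[1:])]

def to_runs(tids):
    """Dense-friendly encoding: (start, length) runs of consecutive TIDs."""
    runs, start, length = [], tids[0], 1
    for prev, cur in zip(tids, tids[1:]):
        if cur == prev + 1:
            length += 1
        else:
            runs.append((start, length))
            start, length = cur, 1
    runs.append((start, length))
    return runs

tids = [3, 4, 5, 6, 42, 43, 99]
# Continuity ratio: fraction of adjacent TID pairs that are consecutive
continuity = sum(b - a == 1 for a, b in zip(tids, tids[1:])) / (len(tids) - 1)
encoding = to_runs(tids) if continuity > 0.5 else to_gaps(tids)
print(continuity, encoding)   # 0.667 -> run encoding: [(3, 4), (42, 2), (99, 1)]
```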

Author 1: Xin Dai
Author 2: Chenjiao Liu
Author 3: Xue Hao
Author 4: Qichen Su

Keywords: Frequent itemset mining; pure Eclat; Hierarchical Adaptive Gap-Run List (HAGL-TID); large-scale transaction data

PDF

Paper 88: Optimizing Energy Efficiency and Increasing Scalability in 6G-IoT Networks Through SDN, Duty Cycling, and AI-Driven Slicing

Abstract: As sixth-generation (6G) and Internet of Things (IoT) networks expand rapidly, concerns are growing about their energy consumption and scalability. This is primarily because more devices are being connected, resulting in increased energy consumption. This study examines three primary strategies for optimizing energy efficiency and improving scalability in 6G-IoT networks, evaluated through three experimental setups: 1) using software-defined networking (SDN) with dynamic slicing to organize devices based on when they are most and least used, 2) duty cycling, which turns devices on and off to save energy, and 3) AI-optimized network slicing that uses both convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) models. In the first setup, SDN with dynamic slicing helped reduce unnecessary power consumption by matching device activity to peak times. As more devices were added, this method kept energy use low and improved the network’s ability to handle growth without requiring significantly more power, resulting in a 66.28 percent decrease in power usage. In the second setup, duty cycling allowed only some devices to be active at a time, which reduced power use by over 60 percent during slow periods. In the third setup, the CNN-BiLSTM model effectively classified service types and reduced power use by 60.14 percent. While these methods were not combined into a single solution, each utilized slicing techniques to allocate resources and manage power more effectively.
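
A minimal sketch of the third setup's CNN-BiLSTM service classifier, assuming TensorFlow/Keras; window length, feature count, and the three slice classes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(50, 8)),                           # traffic windows x features
    layers.Conv1D(32, 3, activation="relu", padding="same"),  # local traffic patterns
    layers.Bidirectional(layers.LSTM(32)),                 # forward+backward context
    layers.Dense(3, activation="softmax"),                 # e.g. eMBB / URLLC / mMTC
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```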

Author 1: Marwah Albeladi
Author 2: Kamal Jambi
Author 3: Fathy E. Eassa
Author 4: Maher Khemakhem

Keywords: 6G-IoT; energy efficiency; scalability; SDN; duty cycling; network slicing; CNN; BiLSTM; AI-driven optimization

PDF

Paper 89: Chaotic Compressed Sensing for Secure Image Transmission in LoRa IoT Systems

Abstract: Transmitting image data reliably over long distances with low cost and minimal storage consumption is critical for LoRa-enabled IoT devices. Conventional methods often rely on high power consumption or computationally intensive hardware, rendering them unsuitable for cost-sensitive and resource-limited IoT deployments. This paper presents a hybrid compressed sensing approach designed for efficient image transmission in LoRa-based IoT systems. The proposed method utilizes a chaotic map-based sensing matrix to enhance randomness and incoherence in the sampling process, which also serves as an encryption key to secure the transmitted data. On the reconstruction side, a wavelet transform is combined with Total Variation (TV) minimization to accurately recover high-quality images from the sparse measurements. The system is implemented on low-power development boards, with the ESP32-CAM used for image capture and initial compression, and the CubeCell-AB01 handling LoRa-based wireless transmission. Experimental results demonstrate significant reductions in data size and transmission cost, while preserving image fidelity and enhancing data security, making the proposed method well-suited for resource-constrained IoT applications.
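
The chaotic sensing step can be sketched in NumPy as below: a logistic-map sequence seeded by a secret initial value builds the measurement matrix, so the same matrix both compresses and encrypts. All parameters are illustrative.

```python
import numpy as np

def logistic_matrix(m, n, x0=0.7301, r=3.99):
    """Build an m x n sensing matrix from a logistic-map orbit; x0 is the key."""
    seq = np.empty(m * n)
    x = x0
    for i in range(m * n):
        x = r * x * (1 - x)          # logistic map iteration
        seq[i] = x
    return (seq.reshape(m, n) - 0.5) / np.sqrt(m)  # zero-mean, scaled

n, m = 256, 96                        # block length, measurements (m << n)
Phi = logistic_matrix(m, n)
x = np.zeros(n)
x[[5, 80, 200]] = [1.0, -0.5, 0.8]    # toy sparse signal (e.g. wavelet block)
y = Phi @ x                           # compressed, key-dependent measurements
print(y.shape)
```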

Author 1: Chatchai Wannaboon
Author 2: Shamsul Ammry Bin Shamsul Ridzwan
Author 3: Sorawit Fong-In

Keywords: Secure image transmission; compressed sensing; chaotic maps; long-range radio signals; LoRa

PDF

Paper 90: Predictive Models in Mental Health Based on Unsupervised Data Clustering

Abstract: In the university context, students’ mental health has been progressively affected over time. The objective of this research was to develop a predictive model of machine learning based on the K-Means algorithm, with the purpose of identifying and classifying mental health profiles among university students. For the construction of this model, the standard Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology was applied, which encompasses five stages: business understanding, data understanding, data preparation, modeling, and evaluation. The results obtained suggest that the generated clusters produce consistent groupings in key variables such as screen time, hours of sleep, and level of physical activity, allowing the characterization of different student profiles. This approach provides valuable information for designing academic support strategies and programs aimed at students’ well-being and mental health. The early identification of behavioral patterns and lifestyle habits enables educational institutions to implement preventive and personalized measures, fostering improved academic performance and university adaptation.
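
A minimal sketch of the modeling stage, assuming scikit-learn; the three lifestyle columns mirror the variables named above, and the values and cluster count are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder rows: screen time (h), hours of sleep, physical activity level
X = np.array([[6.0, 7.5, 2], [9.5, 5.0, 0], [3.0, 8.0, 4],
              [8.0, 6.0, 1], [4.5, 7.0, 3]])
X_std = StandardScaler().fit_transform(X)   # put variables on a common scale
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_std)
print(km.labels_)   # mental-health-profile cluster assigned to each student
```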

Author 1: Inoc Rubio Paucar
Author 2: Cesar Yactayo-Arias
Author 3: Laberiano Andrade-Arenas

Keywords: Behavioral patterns; clustering; machine learning; mental health; university students

PDF

Paper 91: TomDetLeaf: A Realistic Multi-Source Dataset for Real-Time Tomato Leaf Detection

Abstract: Plant diseases remain a major threat to crop productivity, especially where timely diagnosis is difficult. This paper introduces TomDetLeaf, a new annotated dataset designed for tomato leaf detection in diverse agricultural environments, supporting the development of generalizable deep learning models for edge AI deployment. Unlike existing datasets such as PlantVillage, which consist mainly of single-leaf images captured under controlled conditions, TomDetLeaf integrates heterogeneous sources including the Taiwan dataset, climate-controlled greenhouses, hydroponic systems, and farm environments. The dataset combines single-leaf and multi-leaf images, realistic backgrounds and varying illumination, addressing a key gap that limits the real-world robustness of current models. To demonstrate its utility, we trained and evaluated YOLOv8 on both the original Taiwan dataset and our proposed TomDetLeaf. Results show that YOLOv8 trained on TomDetLeaf achieved 88.3% mAP@0.5, 81.8% precision, and 82.7% recall, exceeding the Taiwan-subset baseline of 77.4% mAP@0.5, 81.6% precision, and 67.6% recall. This validates the contribution of TomDetLeaf in improving detection accuracy and generalization under realistic conditions. By providing a diverse, deployment-ready dataset, this work bridges the gap between theoretical benchmarks and practical real-time applications.
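
For reference, training and evaluating YOLOv8 on a detection dataset like TomDetLeaf follows the standard Ultralytics workflow sketched below; the dataset config path, epoch count, and image size are illustrative assumptions rather than the paper's settings.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # pretrained nano checkpoint
model.train(
    data="tomdetleaf/data.yaml",            # hypothetical dataset config path
    epochs=100,
    imgsz=640,
)
metrics = model.val()                       # reports mAP@0.5, precision, recall
results = model.predict("field_image.jpg")  # single-image inference for deployment tests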

Author 1: Yassmine Ben Dhiab
Author 2: Mohamed Ould-Elhassen Aoueileyine
Author 3: Abdallah Namoun
Author 4: Ridha Bouallegue

Keywords: Tomato leaf detection; smart agriculture; dataset; tomato leaf dataset; real-time inference; Edge AI; object detection

PDF

Paper 92: Comparative Analysis of Statistical, Machine Learning, and Deep Learning Approaches for Frost Prediction in the Peruvian Altiplano

Abstract: Frost events represent a critical climatic hazard for agricultural systems in the Peruvian highlands, impacting approximately 74% of rural communities in the Puno region. This research addresses the question of whether machine learning (ML) and deep learning (DL) approaches can significantly outperform traditional statistical methods for frost prediction in extreme high-altitude tropical conditions, achieving sufficient accuracy for operational early warning systems. We present a comprehensive evaluation of twelve forecasting models for predicting daily minimum temperatures, utilizing NASA POWER satellite data (2000-2025) from thirteen meteorological stations across the Altiplano plateau (121,056 observations). The study implements and compares traditional statistical approaches (SARIMAX, Holt-Winters, Prophet, STL+ARIMA), machine learning algorithms (Random Forest, Support Vector Machines, XGBoost), deep neural network architectures (Multilayer Perceptron, LSTM, 1D-CNN), a hybrid SARIMA+ANN model, and an optimized ensemble approach. The ensemble model, integrating XGBoost, LSTM, and Random Forest through weighted averaging, demonstrated superior performance with RMSE=1.65°C and TSS=0.87, representing a 35% improvement over the best-performing statistical method. Individual analysis revealed XGBoost achieved RMSE=1.78°C with exceptional feature interaction modeling, while LSTM networks exhibited remarkable temporal pattern recognition with recall=0.88 for frost event detection. These findings validate the effectiveness of nonlinear approaches for operational forecasting under extreme climatic conditions and offer a robust framework for early warning systems that could substantially mitigate agricultural losses in vulnerable high-altitude communities.
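
The ensemble's core mechanism, a weighted average of member forecasts, can be sketched in a few lines of Python. The weights below are placeholders (the paper optimizes them), and to keep the sketch dependency-light the XGBoost and LSTM members are stood in for by scikit-learn models.

import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = 3 * X[:, 0] - X[:, 1] + rng.normal(0, 0.5, 500)      # synthetic temperature-like target
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

members = [RandomForestRegressor(random_state=0),        # Random Forest member
           GradientBoostingRegressor(random_state=0),    # stand-in for XGBoost
           Ridge()]                                      # stand-in for the LSTM member
weights = np.array([0.4, 0.4, 0.2])                      # assumed; normally tuned on validation data
preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in members])
ensemble = weights @ preds                               # weighted average of member forecasts
print(f"ensemble RMSE: {mean_squared_error(y_te, ensemble) ** 0.5:.2f}")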

Author 1: Fred Torres-Cruz
Author 2: Dina Maribel Yana-Yucra
Author 3: Richar Andre Vilca-Solorzano

Keywords: Frost prediction; machine learning; deep learning; ensemble methods; Altiplano; agricultural early warning systems

PDF

Paper 93: Towards the Hybrid Approach for Predicting Stroke Risk: A Feature Augmented Model

Abstract: This project addresses the critical challenge of stroke prediction by developing a hybrid model that integrates the strengths of the Random Forest (RF) and Support Vector Machine (SVM) algorithms. Stroke risk is highly influenced by lifestyle-related factors such as smoking, hypertension, heart disease, and elevated body mass index (BMI). Although existing models, such as standalone Random Forest classifiers, offer moderate predictive performance, achieving an accuracy of approximately 74.53%, they often fall short in clinical reliability. The proposed hybrid model improves prediction accuracy by leveraging Random Forest to capture complex, nonlinear relationships and determine feature importance, while SVM enhances performance in high-dimensional spaces by establishing precise decision boundaries. This study also includes a comprehensive literature review that evaluates existing algorithms, their implementation in current systems, and cross-domain insights, ultimately informing the development of a novel conceptual framework. The anticipated outcome is a robust, data-driven predictive tool that enhances clinical decision-making and supports early intervention strategies. By combining complementary machine learning techniques, this hybrid approach aims to set a new benchmark in stroke risk assessment and contribute meaningfully to patient care and sustainable public health in modern healthcare environments.
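
The hybrid pattern the abstract describes (Random Forest for feature importance, SVM for the final decision boundary) can be sketched as a two-stage scikit-learn pipeline; the synthetic dataset, the top-5 feature cutoff, and the hyperparameters are assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=12, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: Random Forest ranks the features
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[-5:]            # keep the 5 most important features

# Stage 2: SVM classifies on the reduced feature set
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm.fit(X_tr[:, top], y_tr)
print(f"hybrid accuracy: {svm.score(X_te[:, top], y_te):.3f}")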

Author 1: Ting Tin Tin
Author 2: Wong Jia Qian
Author 3: Ali Aitizaz
Author 4: Ayodeji Olalekan Salau
Author 5: Omolayo M. Ikumapayi
Author 6: Sunday A. Afolalu

Keywords: Public health; Random Forest; Support Vector Machine; hybrid model; stroke prediction

PDF

Paper 94: Socio-Technical Factors Influencing Business Intelligence Adoption in SMEs

Abstract: This study explores the major challenges that Small and Medium-sized Enterprises (SMEs) encounter when adopting Business Intelligence Systems (BIS), particularly in complex socio-political environments, such as Libya. It aims to understand how internal constraints, such as limited financial capacity, resistance to change among management, and weak knowledge-sharing practices, combine with external socio-political factors to influence BIS adoption in developing economies. A cross-sectional survey approach was employed, targeting 297 SME owners and managers in Libya. Data were collected using a structured questionnaire and analyzed with SmartPLS to examine the relationships among key variables: facilitating conditions, information quality, perceived ease of adoption, perceived usefulness, and social influence. The findings highlight that social influence, especially from peers and industry experts, plays a crucial role in shaping SMEs’ adoption behavior. Moreover, the quality of information emerged as a significant determinant in the successful adoption of BIS. The study offers both practical and policy-level insights, suggesting that with the right support, BIS adoption can significantly enhance SMEs’ competitiveness, decision-making capabilities, and operational efficiency.
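
As a rough, simplified stand-in for the SmartPLS analysis, the sketch below forms composite scores by averaging each construct's survey items and regresses adoption on the other constructs with ordinary least squares. This deliberately simplifies the PLS-SEM algorithm, and every column name and the synthetic Likert data are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 297                                       # sample size reported in the abstract
constructs = ["social_influence", "info_quality", "ease", "usefulness", "adoption"]
df = pd.DataFrame({f"{c}_{i}": rng.integers(1, 6, n)   # 5-point Likert responses
                   for c in constructs for i in (1, 2, 3)})

# Composite score per construct = mean of its three items
scores = pd.DataFrame({c: df[[f"{c}_{i}" for i in (1, 2, 3)]].mean(axis=1)
                       for c in constructs})
X = sm.add_constant(scores.drop(columns="adoption"))
print(sm.OLS(scores["adoption"], X).fit().summary())   # path-like coefficients per construct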

Author 1: Ibrahim Abdusalam Abubaker Alsibhawi
Author 2: Hazura Binti Mohamed
Author 3: Jamaiah Binti Yahaya

Keywords: Information quality; social influences; perceived usefulness of Business Intelligence Adoption; perceived ease of adoption of Business Intelligence System; Business Intelligence System adoption

PDF

Paper 95: An Improved BFT Algorithm in Traceability Data for Supply Chain

Abstract: BFT (Byzantine Fault Tolerance) is a family of fault-tolerance techniques used in distributed computing. In traditional supply chain traceability processes, centralized databases are vulnerable to errors and data tampering. Integrating a BFT consensus algorithm with a consortium blockchain can address security challenges such as data deletion, misuse, and application attacks, as well as inefficiency in storing supply chain traceability data. This approach points toward the secure and efficient storage and management of supply chain traceability information.
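
The safety rule at the heart of BFT consensus protocols such as Tendermint (named in the keywords) is the quorum condition: with n = 3f + 1 nodes, a record commits once 2f + 1 matching votes arrive, tolerating up to f Byzantine nodes. A toy Python illustration, with hypothetical record hashes:

from collections import Counter

def committed(votes: list[str], n: int, f: int) -> str | None:
    """Return the traceability record that reached a 2f+1 quorum, if any."""
    assert n >= 3 * f + 1, "BFT safety requires n >= 3f + 1"
    value, count = Counter(votes).most_common(1)[0]
    return value if count >= 2 * f + 1 else None

# Four supply-chain nodes tolerate f = 1 Byzantine node: three honest votes
# for the same batch record commit it even if one node equivocates.
votes = ["batch42:a1b2", "batch42:a1b2", "batch42:a1b2", "bogus"]
print(committed(votes, n=4, f=1))   # -> batch42:a1b2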

Author 1: Zhiyong Liang
Author 2: Rongwang Jiang
Author 3: Ming Yang
Author 4: Boxiong Yang

Keywords: BFT; Consortium Blockchain; consensus; Tendermint; traceability data; supply chain

PDF
