Welcome to ITCSE 2025

14th International Conference on Information Technology Convergence and Services (ITCSE 2025)

September 20 ~ 21, 2025, Copenhagen, Denmark



Accepted Papers
Analysis of Efficiency and Security of Existing BFT Paxos-based Algorithms

Illia Melnyk1,2, Oleksandr Kurbatov2, Oleg Fomenko2, Volodymyr Dubinin2, Yaroslav Panasenko2.
1National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2Distributed Lab, Kyiv, Ukraine.

ABSTRACT

Byzantine fault tolerant consensus plays a critical role in maintaining the reliability of distributed systems. This paper surveys and evaluates five Paxos-based algorithms – Byzantine classic Paxos consensus, Castro-Liskov algorithm, Byzantine generalized Paxos consensus, Byzantine vertical Paxos, and Optimistic Byzantine Agreement – comparing their efficiency in terms of process requirements, communication rounds, and message complexity, as well as their resilience against Byzantine behaviors. Through detailed examination of protocol structures and performance trade-offs, we identify the strengths and limitations of each approach under typical and adversarial conditions. Our analysis reveals that the two-phase Byzantine classic Paxos consensus protocol achieves an optimal balance of simplicity, low process overhead, and robust security guarantees, making it a compelling choice for practical Byzantine fault tolerant deployments. We conclude with recommendations for selecting an appropriate consensus algorithm based on system constraints.

Keywords

Byzantine fault tolerance, Paxos-based consensus, Communication complexity, Process requirements, Distributed system security.


Privacy-aware White and Black List Searching for Fraud Analysis

William J Buchanan1, Hisham Ali1, Jamie Gilchrist2, Zakwan Jaroucheh2, Dmitri Timosenko2 and Nanik Ramchandani2.
1Blockpass ID Lab, Edinburgh Napier University, Edinburgh, 2LastingAsset, Edinburgh Napier University, Edinburgh.

ABSTRACT

In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An Internet Protocol (IP) address is an identifier assigned to a networked device to enable it to communicate over networks that use IP. Thus, in privacy-aware applications, we may aim to hide the IP address while still determining whether it appears on a blacklist. One solution is to use homomorphic encryption: we encrypt the IP address and match it against an encrypted version of a blacklisted network list. In this paper, we use the OpenFHE library [1] to encrypt network addresses with the BFV homomorphic encryption scheme. In order to assess the performance overhead of BFV, we implement a matching method using the OpenFHE library and compare it against partial homomorphic schemes, including Paillier, Damgard-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main finding is that the BFV method compares favourably against the partial homomorphic methods in most cases.
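The encrypted-matching idea can be sketched with a toy additively homomorphic scheme. The snippet below is a minimal pure-Python Paillier implementation (tiny primes, illustrative only and not secure; the paper itself uses OpenFHE's BFV, not this code): the IP address is encrypted, each blacklist entry is homomorphically subtracted, and a decryption of zero signals a match without the matcher ever seeing the plaintext address.

```python
import math
import random
import ipaddress

# Toy Paillier keypair -- tiny primes, illustrative only, NOT secure.
p, q = 104729, 104723            # n must exceed 2^32 so any IPv4 integer fits
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # modular inverse of L(g^lam)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def matches_blacklist(enc_ip, blacklist_ints):
    """Homomorphically compute ip - entry; a decryption of 0 means a match."""
    for b in blacklist_ints:
        diff = enc_ip * encrypt(n - b) % n2    # Enc(ip) * Enc(-b) = Enc(ip - b)
        if decrypt(diff) == 0:
            return True
    return False

blacklist = [int(ipaddress.IPv4Address(a)) for a in ("203.0.113.7", "198.51.100.9")]
print(matches_blacklist(encrypt(int(ipaddress.IPv4Address("203.0.113.7"))), blacklist))  # True
print(matches_blacklist(encrypt(int(ipaddress.IPv4Address("192.0.2.1"))), blacklist))    # False
```

A real deployment would use a lattice-based scheme such as BFV, which additionally supports batching many addresses into a single ciphertext.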


Smart Distributed UAV-based Forest Fire Monitoring: A Secure IoT Approach to Real-time Data Analytics

Luigi La Spada1, Nida Zeeshan1, Makhabbat Bakyt2, Kazybek bi Zhanibek3 and Saya Santeyeva3, 1School of Computing, Engineering and the Built Environment, Edinburgh Napier University, 10 Colinton Road, Edinburgh, EH10 5DT, United Kingdom, 2Department of Information Security, Faculty of Information Technology, L.N. Gumilyov Eurasian National University, Astana, 010000, Kazakhstan, 3Department of IT Engineering and Artificial Intelligence, Almaty University of Power Engineering and Telecommunications named after Gumarbek Daukeyev, Almaty, 050000, Kazakhstan.

ABSTRACT

Presented is an advanced geoinformation system for monitoring and forecasting forest fires, utilizing unmanned aerial vehicles (UAVs) and a novel lightweight neural network-based encryption technique. The system incorporates an innovative aerospace data processing algorithm that achieves a fire detection accuracy of 98.7% and forecasts fire spread with an average prediction error of 12.5 m and a maximum error of 28.5 m. Notably, the proposed encryption method secures data transmission from the UAV to the ground station and operates 20% faster than the conventional AES-128 standard. Experimental results validate the system's capability to accurately detect fire incidents, efficiently predict their spread, and reliably safeguard transmitted information. Although the system is effective in monitoring extensive forest areas and facilitating prompt emergency responses, its accuracy is somewhat constrained by factors such as UAV altitude and image resolution. Future research will aim to develop adaptive UAV control strategies and incorporate multi-sensor fusion techniques to further enhance performance.

Keywords

Forest Fires, UAV, Geographical Information System, Neural Network, Data Encryption, Aerospace Data, Intelligent Processing.


Anamorphic Cryptography using Baby-Step Giant-Step Recovery

William J. Buchanan and Jamie Gilchrist, Blockpass ID Lab, Edinburgh Napier University, Edinburgh.

ABSTRACT

In 2022, Persiano, Phan and Yung outlined the creation of Anamorphic Cryptography. With this, we can create a public key to encrypt data, and then have two secret keys. These secret keys are used to decrypt the cipher into different messages. So, one secret key is given to the Dictator (who must be able to decrypt all the messages), and the other is given to Alice. Alice can then decrypt the ciphertext to a secret message that the Dictator cannot see. This paper outlines the implementation of Anamorphic Cryptography using ECC (Elliptic Curve Cryptography), such as with the secp256k1 curve. This gives considerable performance improvements over discrete logarithm-based methods for equivalent security at a given bit length. Overall, it outlines how the secret message sent to Alice is hidden within the random nonce value, which is used within the encryption process, and which is cancelled out when the Dictator decrypts the ciphertext. It also shows that the BSGS (Baby-step Giant-step) variant significantly outperforms unoptimised elliptic curve methods.
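The baby-step giant-step recovery at the heart of the paper can be illustrated in any cyclic group. The sketch below works in a multiplicative group modulo a small prime rather than over secp256k1 (a simplification for brevity, not the paper's implementation): it recovers a bounded exponent x from g^x in O(√bound) steps, which mirrors how a short message embedded in a nonce can be recovered once the rest of the ciphertext has been cancelled out.

```python
import math

def bsgs(g, h, p, bound):
    """Solve g^x = h (mod p) for 0 <= x < bound in O(sqrt(bound)) time."""
    m = math.isqrt(bound) + 1
    # Baby steps: table mapping g^j -> j for j in [0, m)
    baby = {pow(g, j, p): j for j in range(m)}
    # Giant steps: h * (g^-m)^i; a collision with a baby step gives x = i*m + j
    inv_gm = pow(g, -m, p)           # modular inverse via 3-arg pow (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * inv_gm % p
    return None

p = 1_000_000_007                    # small prime field, illustrative only
g = 5
secret = 123_456                     # "hidden message" embedded as an exponent
h = pow(g, secret, p)
print(bsgs(g, h, p, bound=1_000_000))   # 123456
```

With bound N, BSGS needs roughly 2√N group operations and a table of √N entries, versus N operations for a naive scan, which is the source of the speed-up reported over unoptimised elliptic-curve recovery.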

Social-aware Self-organizing Networks for Aging Well: A Distributed Model for Human-centric Support

Carlotta Conversi1 and Vittorianna Perrotta2, 1Department of Social Science, University of Urbino Carlo Bo, Urbino, Italy, 2Department of Economics, University Tor Vergata, Rome, Italy

ABSTRACT

This paper explores the potential of Social-Aware Self-Organizing Networks (SA-SONs) as an adaptive model to support psychosocial well-being in aging populations. By connecting young volunteers, smart nodes, and local environments, SA-SONs dynamically match relational needs and social opportunities through lightweight, decentralized mechanisms. This approach enables responsive and human-centered coordination of low-intensity care and community engagement. The paper introduces a conceptual architecture, discusses key challenges such as trust, privacy, and variability of human nodes, and suggests future directions for research and pilot implementation in socially diverse environments.

Keywords

Human Nodes, Social Awareness, Network Protocols, Well-Being.


No Masks Needed: Explainable AI for Deriving Segmentation From Classification

Mosong Ma1, Tania Stathaki1 and Michalis Lazarou2, 1Department of Electrical and Electronic Engineering, Imperial College London, London, UK. 2Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK.

ABSTRACT

Medical image segmentation is vital for modern healthcare and is a key element of computer-aided diagnosis. While recent advancements in computer vision have explored unsupervised segmentation using pre-trained models, these methods have not translated well to the medical imaging domain. In this work, we introduce a novel approach that fine-tunes pre-trained models specifically for medical images, achieving accurate segmentation with extensive processing. Our method integrates Explainable AI to generate relevance scores, enhancing the segmentation process. Unlike traditional methods that excel in standard benchmarks but falter in medical applications, our approach achieves improved results on datasets like CBIS-DDSM, NuInsSeg and Kvasir-SEG.

Keywords

Medical Image Segmentation, Explainable AI, Transfer Learning.


A Conceptual Recommendation Approach and Experimental Evaluation for Search and Reuse of Object-oriented Design Patterns

Tarek Sboui1,2 and Saida Aissi3,4, 1Department of Geology, Faculty of Science of Tunis, University of Tunis El-Manar, Tunis 1068, Tunisia, 2GREEN-TEAM Laboratory, INAT, Tunis 1082, Tunisia, 3ESPRIT School of Business, Ariana, Tunisia, 4SMART Lab laboratory, University of Tunis, Higher Institute of Management, Tunis 2000, Tunisia

ABSTRACT

A design pattern is a well-known solution to a recurring design problem in a given context. Reusing design patterns helps information system developers save time and gain quality when developing object-oriented systems. However, it may be difficult to find the relevant pattern and to reuse it, because design patterns are scattered over various sources and have a high level of abstraction. In this paper, we present a novel approach for finding and recommending relevant design patterns for a particular design problem. The proposed approach is based on a new strategy for representing and indexing design patterns using conceptual graphs, together with a semantic similarity measure between the concepts of these graphs. We also develop a new tool, Design Pattern Retrieval and Reuse (DePaRR), which implements the approach and automates the search and reuse of design patterns by recommending relevant patterns to information system developers. The presented approach is described theoretically and validated by experiments.

Keywords

Design pattern, Retrieval and reuse, Conceptual graph, Semantic similarity.


Enhancing Student Performance Classification using Weighted Cost-effective Random Forest (WCERF)

Shoukath TK and Midhunchakkaravarthy, Faculty of AI Computing & Multimedia, Lincoln University College, Malaysia

ABSTRACT

Classifying student performance is an essential component of educational data mining, assisting educators in recognizing at-risk students and enhancing learning interventions. Imbalanced datasets can be challenging for conventional machine learning algorithms like Random Forest, which can result in low classification performance for underrepresented groups. This paper offers a Weighted Cost-Effective Random Forest (WCERF) model to solve this issue; it combines cost-sensitive learning with an optimal weighting technique to improve classification performance. The main goal is to create a stronger predictive model that precisely categorizes students depending on several academic and non-academic criteria, thereby enabling early interventions for academic improvement. The approach consists of applying WCERF with customized class weight changes to reduce class imbalance after pre-processing an educational dataset including student demographic information, academic records, and socio-economic factors. Performance evaluation measures like accuracy, precision, recall, and F1-score offer insights into the model's effectiveness. WCERF's accuracy was 0.5729, precision score was 0.4732, recall score was 0.4117, and F1-score was 0.3669 without cross-validation. Although these findings show small increases in managing class imbalance, more changes are required to maximize classification output. This paper emphasizes WCERF's capacity to deliver fairer educational insights, balance misclassification costs, and enhance minority class projections. The study emphasizes WCERF's potential in improving student performance classification and stresses the importance of future work on hyperparameter tuning, feature selection, and cross-validation techniques to increase its predictive power and relevance in various educational settings.

Keywords

Student Performance; Classification; Weighted Cost-Effective Random Forest Algorithm; Accuracy.


Stream Processing in Decentralized Architectures: Challenges and Adaptive Solutions Across Cloud, Fog, and Edge

Alireza Faghihi Moghaddam, Department of Computer Science, Uppsala University, Sweden

ABSTRACT

In recent years, the rapid development of data-driven applications has posed significant challenges for data computation in different domains. Handling and processing continuous data streams have become essential for building data-driven organizations, which places a high burden on traditional computing. As a traditional centralized method, cloud computing often struggles with application latency, mainly because of geographic distance and network bandwidth. The increasing scale and complexity of data, characterized by high volume, velocity, and variety, demand computational infrastructures that are powerful, adaptive, and efficient in terms of processing. Fog and edge computing are two decentralized network solutions that move computation closer to the data source, lowering network traffic while improving response time. Edge computing performs computations within IoT devices, enabling real-time data processing and subsequently transferring less time-critical data to the cloud. In contrast, fog computing utilizes fog nodes with high computational power for data processing and storage. These nodes reside within the same local network, making fog computing a better choice when a large number of IoT devices, local computational power, and storage are required. Both fog and edge computing rely on cloud infrastructure for long-term data storage and larger computations. This study provides a comprehensive comparative analysis of the Fog, Edge, and Cloud computing paradigms, with a particular focus on their applicability to real-time data stream processing, tabulating their strengths and ideal use cases and showcasing their respective advantages and disadvantages in stream processing.

Keywords

Stream processing, Edge computing, Fog computing, Cloud computing.


Comparison of Shopee and Tokopedia Sentiment Analysis: Random Forest

Theresia Vania Davita Suyana and Sfenrianto, Department of Information System, Bina Nusantara University, Kemanggisan, Indonesia

ABSTRACT

This study investigates sentiment analysis on user reviews of Shopee and Tokopedia—two major e-commerce platforms in Indonesia—using the Random Forest algorithm. Data collected from the Google Play Store in April 2025 were evenly sampled and processed using the CRISP-DM methodology, with TF-IDF for feature extraction. The Random Forest model achieved 91% accuracy on Shopee and 84% on Tokopedia, showing stronger performance on larger or more diverse datasets. It was particularly effective in detecting negative sentiment, with fewer false positives, though it had a slightly higher false negative rate. Overall, Random Forest offers a stable, interpretable, and reliable baseline for sentiment classification in e-commerce contexts.

Keywords

Sentiment Analysis, Random Forest, Shopee, Tokopedia, E-Commerce, Machine Learning.


Structuring Prompting Workflows for Weak Signal Detection: A Use Case in the E-commerce Sector

Nikolay Khlopov and Olga Shaeva, Algorithm Trend Intelligence, Lille, France

ABSTRACT

This paper introduces a structured methodology for leveraging large language models (LLMs) to detect and analyze weak signals of change within the e-commerce sector. Focusing on the design of targeted prompting workflows, we demonstrate how generative AI can support trend research by retrieving early indicators of innovation across diverse markets and geographies. The proposed framework, GenAI+TW, outlines a multi-stage prompting process that combines contextual constraints, verification rules, and semantic refinement to surface non-obvious use cases of emerging practices such as Video Commerce. Through a real-world scenario involving online retail and marketplace platforms, we show how precise prompting can reduce noise, enhance interpretability, and support strategic foresight in knowledge-intensive environments. Rather than positioning LLMs as generators of insight, we argue for their role as structured tools for extracting weak signals with business relevance. The methodology is adaptable and scalable, offering practical value for foresight teams, researchers, and innovation managers working at the intersection of AI and e-commerce transformation, and addressing specific corporate client requests such as finding market differentiators. The methodology presented in this paper is built upon practical experience from the Algorithm Trend Intelligence Trendwatching project for enterprise clients. This application of LLMs in trend research is a developing area that is not yet fully codified, with this paper contributing a novel, structured approach for tasks like LLM-assisted data retrieval and qualitative data filtering and verification.

Keywords

LLM prompting, weak signals, copiloting, e-commerce, strategic foresight.