International Journal of Advanced Network, Monitoring and Controls
The International Journal of Advanced Network, Monitoring and Controls (IJANMC) aims to provide a platform for researchers, engineers, scientists, and educators to publish their original research results, exchange new ideas, and disseminate information on innovative designs, engineering experiences, and technological skills, especially in the fields of advanced networks, future networks, monitoring, sensors, and controls. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal are blind peer-reviewed; only original articles are published. [Aims & Scope]
The Journal is open to all international universities and research institutes to report the newest achievements of computer networks, internet of things, inspection and control technologies.
Before December 2025, the IJANMC journal was published by Paradigm Publishing Services. All papers, including the latest issue, can be found on this website.
Aiming at the issues of high computational cost and limited generalization ability of ResNet50 in image classification, this study proposes an optimization strategy based on transfer learning. The model is initialized with pretrained weights via transfer learning to reduce the computational burden, and data augmentation techniques are employed to enhance generalization ability. Additionally, label smoothing is introduced into the cross-entropy loss to reduce sensitivity to noisy labels, and the training process is further optimized with cosine annealing learning rate decay. Experimental results show that the optimized ResNet50 model achieves a 6.25% improvement in classification accuracy on the CIFAR-10 dataset, confirming the effectiveness of the proposed methods.
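The two training refinements named in the abstract, label smoothing and cosine annealing, can be sketched in NumPy; the hyperparameters `eps`, `lr_max`, and `lr_min` below are illustrative, not taken from the paper:

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Label smoothing: replace each one-hot target with a mixture of the
    one-hot vector and the uniform distribution over classes."""
    onehot = np.eye(num_classes)[y]
    return (1.0 - eps) * onehot + eps / num_classes

def cosine_annealing_lr(t, T, lr_max=0.1, lr_min=0.0):
    """Cosine annealing: decay the learning rate from lr_max to lr_min
    over T steps following half a cosine period."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + np.cos(np.pi * t / T))
```

With `eps=0.1` and 10 classes, the true class receives probability 0.91 and every other class 0.01, so the loss no longer pushes logits toward extremes on possibly mislabeled samples.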
In the field of crop target detection, traditional detection algorithms often struggle to achieve satisfactory accuracy due to factors such as dense species distribution and poor imaging quality, which poses many challenges in practical agricultural production. To address this, the study introduces an enhanced YOLOv7 algorithm that incorporates an attention mechanism, with the objective of substantially improving overall performance in crop target detection tasks. By incorporating the attention mechanism, the improved algorithm focuses more accurately on the key features of crops and effectively filters out interference from complex backgrounds and noise, achieving more accurate recognition of various crops. Extensive experimental validation shows that the improved algorithm attains an average recognition accuracy of 80% across a variety of crops, with an average accuracy of 75%, and recognition efficiency as high as 91% for some specific crops. Compared with other prominent crop target detection algorithms, the refined algorithm presented in this paper exhibits notable performance benefits, enabling swift and precise identification of crop species.
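The abstract does not name the specific attention mechanism used. As one common choice for this role, a squeeze-and-excitation (SE) style channel attention can be sketched as follows; the weight matrices `w1` and `w2` stand in for learned parameters and are purely illustrative:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention on a feature map x of shape (C, H, W):
    squeeze by global average pooling, excite through a two-layer
    bottleneck, then rescale each channel by its learned importance."""
    s = x.mean(axis=(1, 2))                  # squeeze: (C,)
    h = np.maximum(w1 @ s, 0.0)              # bottleneck + ReLU: (C//r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # sigmoid gate in (0, 1): (C,)
    return x * a[:, None, None]              # reweight channels
```

Channels carrying background clutter receive gates near zero, which is one way an attention block can suppress complex-background interference.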
This study addresses intelligent problem-solving in elementary math competitions by proposing an AORBCO model-based system. It integrates knowledge graphs, rule-based reasoning, and cognitive optimization to simulate human problem-solving processes. The framework systematically analyzes competition problem types, constructs a structured knowledge base, and implements dual-solving modules: rule-template matching and knowledge graph reasoning, supplemented by question bank similarity retrieval. Experimental results demonstrate 15% higher accuracy and 30% faster solving speed compared to conventional methods, with enhanced interpretability. Key innovations include the first application of AORBCO in educational AI, novel knowledge representation methods, and specialized cognitive optimization algorithms. The research provides technical support for personalized math education and advances intelligent tutoring systems. Future work will focus on improving model generalization and exploring multimodal learning integration.
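The rule-template matching module described above pairs problem patterns with solving procedures. As a purely hypothetical illustration of that idea (these templates are not taken from the paper's knowledge base):

```python
import re

# Hypothetical rule templates: each pairs a problem pattern with a solver
# applied to the slots the pattern captures.
RULES = [
    (re.compile(r"What is (\d+) plus (\d+)\?"), lambda a, b: int(a) + int(b)),
    (re.compile(r"What is (\d+) times (\d+)\?"), lambda a, b: int(a) * int(b)),
]

def solve_by_template(question):
    """Try each rule template in turn; return the first match's answer,
    or None so the system can fall back to knowledge-graph reasoning
    or question-bank similarity retrieval."""
    for pattern, solver in RULES:
        m = pattern.fullmatch(question)
        if m:
            return solver(*m.groups())
    return None
```

Because each answer traces back to a named template, this style of solver is inherently interpretable, which matches the interpretability claim in the abstract.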
Lung cancer is a leading cause of cancer-related mortality worldwide. Traditional imaging techniques suffer from subjectivity limitations. Meanwhile, convolutional neural networks (CNNs) within deep learning, though highly effective in image classification, still have limitations when dealing with complex and data-scarce medical images. To address this challenge, this paper proposes a data-efficient image Transformer (DeiT) model based on the Transformer architecture with a self-attention mechanism, enhanced through knowledge distillation. This model can capture global information in images and improve the classification accuracy of lung cancer images under small-sample conditions by leveraging a teacher model. Through model training and evaluation, results demonstrate that the DeiT model achieves a prediction accuracy of 99.96% under small-sample medical imaging conditions. This highlights the advantages of the Transformer architecture in medical image analysis. The findings provide a new perspective for early lung cancer detection and underscore the strong performance of the DeiT model on complex small-sample data.
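The teacher-student transfer at the heart of knowledge distillation can be sketched as a weighted sum of hard-label cross-entropy and a temperature-softened KL term. The temperature `T` and mixing weight `alpha` below are illustrative, and DeiT additionally introduces a dedicated distillation token, which this sketch omits:

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = z / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """Knowledge distillation: mix hard-label cross-entropy with the KL
    divergence between temperature-softened teacher and student outputs."""
    hard = -np.log(softmax(student_logits)[label] + 1e-12)
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)))
    return alpha * hard + (1.0 - alpha) * (T ** 2) * soft
```

When the student already matches the teacher, the KL term vanishes and only the hard-label loss remains, so the teacher signal mainly matters early in training on scarce data.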
In domains such as medical diagnostics, surveillance technology, and geospatial imaging, the escalating need for ultra-high-definition imagery has exposed the limitations of conventional super-resolution methods. These legacy algorithms often fail to deliver the precision and clarity demanded by modern applications. Therefore, this article proposes an optimization algorithm based on the AWSRN network model, aiming to achieve efficient image super-resolution reconstruction, reduce computational costs, and enhance image realism. First, the internal structure of the network is optimized to strengthen its feature extraction and fusion capabilities. Second, to enhance feature extraction precision, a novel module integrating depthwise separable convolution with an attention mechanism is proposed. Additionally, a hybrid loss function that merges perceptual quality metrics with adversarial training objectives is employed to rigorously evaluate the disparity between generated and ground-truth images. The MPTS training strategy further improves convergence efficiency. Empirical evaluations demonstrate that the enhanced AWSRN model achieves substantial improvements over its baseline counterpart across multiple upscaling factors, particularly at 4x magnification. Specifically, on the Urban100 benchmark, the proposed method elevates PSNR by 1.06 dB and SSIM by 0.0239, while maintaining computational efficiency. These advancements offer valuable insights for high-fidelity image upscaling methodologies.
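The efficiency of the depthwise separable convolution used in the proposed module comes from its parameter count: one spatial filter per input channel plus a 1x1 pointwise mix, instead of a full dense filter bank. A minimal parameter-count comparison (the channel and kernel sizes below are illustrative):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel, followed by a 1 x 1
    pointwise convolution that mixes channels (biases ignored)."""
    return c_in * k * k + c_in * c_out
```

For a 64-to-64-channel 3x3 layer this is 36,864 versus 4,672 parameters, roughly an 8x reduction, which is how such modules cut computational cost without shrinking the receptive field.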
To overcome the sensitivity of the traditional K-means algorithm to initial cluster centers and its susceptibility to noise points, this study proposes an enhanced K-means hybrid clustering algorithm that integrates improved principal component analysis with density optimization. By combining a distance optimization strategy with a density assessment mechanism, a data density evaluation model based on spatial distribution characteristics is established. The algorithm prioritizes widely spaced samples in high-density regions as the initial cluster-center candidate set, intelligently filtering out abnormal data points while improving clustering quality. Characteristic parameters with higher principal component contribution rates are then selected to reconstruct driving conditions, and the fuel consumption characteristics are finally verified. Experimental data show that the driving conditions constructed by this method differ by only 1.17% in the speed-acceleration joint probability distribution, and the mean relative error of key characteristic parameters remains low. The research confirms that the constructed driving conditions are statistically consistent with actual road operation characteristics and can accurately characterize the essential traffic flow features of a specific area.
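The initialization idea, preferring dense samples that are far from centers already chosen, can be sketched as follows. The fixed neighborhood radius and the greedy selection rule here are illustrative simplifications, not the paper's exact density model:

```python
import numpy as np

def density_init_centers(X, k, radius):
    """Pick up to k initial centers: rank samples by local density
    (neighbor count within `radius`), then greedily keep dense points
    that are at least `radius` apart, so noise points and near-duplicate
    centers are skipped."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    density = (d < radius).sum(axis=1)          # neighbor counts
    order = np.argsort(-density)                # densest first
    centers = [order[0]]
    for i in order[1:]:
        if len(centers) == k:
            break
        if all(d[i, c] >= radius for c in centers):
            centers.append(i)
    return X[centers]
```

On two well-separated blobs this places one center in each blob, whereas random initialization can put both centers in the same blob.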
Deep learning has emerged as a vital approach for identifying and addressing vulnerabilities in software systems. A key challenge in this process lies in effectively representing code and leveraging AI techniques to capture and interpret its semantics and other intrinsic information. This paper employs bidirectional slicing techniques to extract code slices containing control and data dependencies from program dependency graphs, targeting key points of different vulnerabilities. To represent node features within the slices, code tokens are mapped to integers and transformed into fixed-length vectors, leveraging Word2vec and BERT models to embed the code nodes and extract structural graph features. The embedded feature matrix is then fed into a Gated Graph Neural Network (GGNN), which aggregates information from nodes and their neighbors to enhance long-term memory of graph-structured data. The network iterates through several time steps within GRU units to generate the final node features. Additionally, edge relationships are used to propagate and aggregate information, further improving the accuracy of vulnerability detection. Experimental results demonstrate that the proposed model achieves an F1-score of 93.25% on the BigVul dataset, showcasing strong detection performance.
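A single GGNN propagation step, neighbor aggregation through the adjacency matrix followed by a GRU-style gated update, can be sketched in NumPy; the weight matrices stand in for learned parameters and edge types are collapsed into one adjacency matrix for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(H, A, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GGNN time step on node states H (N, d) with adjacency A (N, N):
    aggregate neighbor features, then apply a GRU-style gated update so
    each node blends its old state with the incoming message."""
    M = A @ H                                   # message passing: (N, d)
    z = sigmoid(M @ Wz + H @ Uz)                # update gate
    r = sigmoid(M @ Wr + H @ Ur)                # reset gate
    Ht = np.tanh(M @ Wh + (r * H) @ Uh)         # candidate state
    return (1.0 - z) * H + z * Ht               # gated update
```

Running this step for several iterations is what lets information travel along multi-hop control and data dependencies before the final node features are read out.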
Road surface disease detection is a vital component of road maintenance. Traditional deep learning-based detection methods face challenges such as low detection accuracy, high false alarm rates in complex scenarios, and significant missed detection rates for small targets like potholes. To address these limitations, this paper proposes an improved pavement disease detection algorithm based on RT-DETR. First, a lightweight backbone network named LMBANet is constructed by integrating DRB and ADown modules. This network enhances feature extraction capabilities without increasing computational overhead during inference, preserving local details of low-level features while expanding the receptive field to capture long-range semantic information and reduce false detection of diverse defects in complex scenes. Second, a small-target enhanced feature pyramid network is designed using SPDConv and OmniKernel. By feeding large-scale feature maps extracted by the backbone into a feature fusion layer and enhancing multi-scale feature representation through EFKM, this network resolves the high missed detection rate of small targets in the original model. Experimental results demonstrate that on the RDD2020 dataset, the improved network achieves an mAP of 69.2%, a 2.1 percentage point improvement over the original network, while simultaneously reducing parameters and computational costs.
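SPDConv builds on a space-to-depth rearrangement that downsamples without discarding pixels, which is why it helps with small targets like potholes. A minimal sketch of that rearrangement, assuming a 2x2 block size:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Space-to-depth rearrangement used by SPD-style convolutions: move
    each block x block spatial patch into the channel dimension, so the
    spatial resolution halves while no pixel information is lost."""
    c, h, w = x.shape
    x = x.reshape(c, h // block, block, w // block, block)
    x = x.transpose(2, 4, 0, 1, 3)              # (block, block, c, h/b, w/b)
    return x.reshape(c * block * block, h // block, w // block)
```

A stride-2 convolution, by contrast, samples only one position per 2x2 patch, which is where fine detail on small defects is typically lost.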
This paper proposes a vehicle and pedestrian detection model based on an improved RT-DETR to address the issues of high redundancy in feature extraction and insufficient accuracy for small targets in existing real-time detection models, especially in complex traffic scenarios. The core of the improved model is a parameter-free SimAM (Simple Attention Module) attention mechanism embedded in the backbone network. The SimAM mechanism dynamically generates three-dimensional attention weights through energy functions, effectively enhancing the expression of fine-grained features of pedestrians and vehicles. This improvement not only reduces redundant information in the feature extraction process, but also improves detection accuracy for small targets, enabling the model to more accurately identify and locate them in complex traffic scenes. Experimental results show that on the BDD100K dataset, the improved model achieves an average precision of 73.6%, which is 3.7 percentage points higher than the original RT-DETR, effectively enhancing the model's capability to detect vehicles and pedestrians in intricate environments.
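The SimAM weighting admits a closed-form solution of its energy function, which is why it needs no learnable parameters. A NumPy sketch following the published formulation (`lam` is the energy regularization coefficient):

```python
import numpy as np

def simam(x, lam=1e-4):
    """SimAM attention on a feature map x of shape (C, H, W): the closed-
    form inverse energy of each activation's deviation from its channel
    mean yields a full 3-D weight, applied through a sigmoid gate."""
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2                            # squared deviation
    v = d.sum(axis=(1, 2), keepdims=True) / n    # channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5          # closed-form inverse energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))    # sigmoid gating
```

Activations that stand out from their channel mean get higher inverse energy and thus a stronger gate, which is the sense in which SimAM highlights distinctive (often small-target) features without adding parameters.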
To address the problems of data sparsity and cold start in collaborative filtering algorithms, this paper proposes an improved course recommendation method that integrates knowledge graphs and collaborative filtering. First, the RippleNet model is used to construct a knowledge graph based on course-attribute-relation triples and generate a recommendation list. Then, an item-based collaborative filtering algorithm utilizes users’ historical interaction behavior to produce another recommendation list. Finally, a weighted linear method is employed to fuse the recommendation list generated by the RippleNet-based course knowledge graph and the one generated by collaborative filtering, resulting in the final course recommendation list. Experiments conducted on the public dataset MOOCCube demonstrate that the RippleNet-CF method improves precision, recall, and F1-score, while also effectively mitigating the issue of data sparsity.
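The final weighted linear fusion step can be sketched as follows; the weight `alpha`, the score dictionaries, and the item names are illustrative, not values from the paper:

```python
def fuse_recommendations(kg_scores, cf_scores, alpha=0.5, top_n=10):
    """Weighted linear fusion: combine per-course scores from the
    RippleNet-style knowledge-graph model and from item-based
    collaborative filtering, then rank to form the final list. Items
    missing from one list default to a score of 0, which is how the
    knowledge-graph side covers cold-start items CF cannot score."""
    items = set(kg_scores) | set(cf_scores)
    fused = {i: alpha * kg_scores.get(i, 0.0)
                + (1.0 - alpha) * cf_scores.get(i, 0.0)
             for i in items}
    return sorted(fused, key=fused.get, reverse=True)[:top_n]
```

Tuning `alpha` trades off the two sources: a higher value leans on the knowledge graph when interaction data is sparse.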