Information

Journal of Jilin University (Information Science Edition)
ISSN 1671-5896
CN 22-1344/TN
Director: TIAN Hongzhi
Editors: ZHANG Jie, LIU Dongliang, LIU Qiaoliang, ZHAO Haoyu
Tel: 0431-5152552
E-mail: nhxb@jlu.edu.cn
Address: No. 5377 Dongnanhu Road, Changchun (130012)
WeChat

WeChat: JLDXXBXXB
Check manuscript status at any time
Get the latest academic updates
Current Issue
21 October 2024, Volume 42 Issue 5
Design of Miniaturized Frequency Selective Surfaces in Microwave Frequency Band
HUO Jiayu, YAO Zongshan, ZHANG Wenzun, LIU Lie, GAO Bo
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  775-780. 
In order to enhance the performance of FSS (Frequency Selective Surfaces) and precisely control the propagation characteristics of electromagnetic waves in the microwave band, achieving reflection, transmission, or absorption of electromagnetic waves, a miniaturized FSS for the microwave frequency band is proposed. The unit cell size of the FSS is 0.024λ × 0.024λ, demonstrating excellent miniaturization performance. Within the 1~10 GHz range, the FSS exhibits three passbands with exceptional polarization stability and angular stability, maintaining consistent operating frequencies and bandwidths while showing good transmission performance. This study of the miniaturized FSS serves as a basis for FSS analysis and provides insights for the design of miniaturized frequency selective surfaces.
Research on Partial Shading of Photovoltaic MPPT Based on PSO-GWO Algorithm
XU Aihua, WANG Zhiyu, JIA Haotian, YUAN Wenjun
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  781-789. 
Under local shading conditions, the power-voltage characteristic curves of photovoltaic arrays show multiple peaks, and traditional swarm intelligence optimization suffers from slow convergence, large oscillation amplitude and a tendency to fall into local optima. To address these problems, an MPPT (Maximum Power Point Tracking) control method based on the PSO-GWO (Particle Swarm Optimization-Grey Wolf Optimization) algorithm is proposed. The algorithm introduces a convergence factor that varies according to a cosine law to balance the global and local search abilities of the GWO algorithm, and the PSO algorithm is introduced to improve the exchange of information between individual grey wolves and their own experience. Simulation results show that the proposed PSO-GWO algorithm not only converges quickly under local shading conditions, but also has a smaller power output oscillation amplitude, effectively improving the maximum power tracking efficiency and accuracy of the PV (Photovoltaic) array under local shading conditions.
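The abstract does not give the exact hybrid update, so the sketch below is only one plausible reading: a GWO step whose convergence factor decays by a cosine law, combined with a PSO-style velocity term that pulls each wolf toward its personal best. All function names, weights and the specific cosine form are illustrative assumptions, not the paper's formulation.

import numpy as np

def pso_gwo_step(wolves, velocities, pbest, alpha, beta, delta, t, T,
                 c1=0.5, c2=0.5, w=0.6, rng=np.random.default_rng(0)):
    """One illustrative hybrid update: GWO leader guidance plus a PSO-style
    velocity term. alpha, beta, delta are the three best wolves; pbest stores
    each wolf's own best position (personal experience)."""
    # Convergence factor varying by a cosine law: decays from 2 to 0 over T steps,
    # emphasising global search early and local search late (assumed form).
    a = 1.0 + np.cos(np.pi * t / T)

    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        # Standard GWO encircling behaviour guided by the three leaders.
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            guided.append(leader - A * np.abs(C * leader - x))
        x_gwo = np.mean(guided, axis=0)

        # PSO-style term: pull towards the wolf's own best position so that
        # individual experience is exchanged with the leader guidance.
        velocities[i] = (w * velocities[i]
                         + c1 * rng.random() * (pbest[i] - x)
                         + c2 * rng.random() * (alpha - x))
        new_wolves[i] = x_gwo + velocities[i]
    return new_wolves, velocities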
Autonomous Driving Decision-Making at Signal-Free Intersections Based on MAPPO
XU Manchen, YU Di, ZHAO Li, GUO Chendong
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  790-798. 
Due to dense traffic flow and the stochastic uncertainty of vehicle behaviors, unsignalized intersections pose significant challenges for autonomous driving. An innovative approach for autonomous driving decision-making at unsignalized intersections is proposed based on the MAPPO (Multi-Agent Proximal Policy Optimization) algorithm. Using the MetaDrive simulation platform to construct a multi-agent simulation environment, a reward function is designed that comprehensively considers traffic regulations, safety (whether the vehicle arrives safely or a collision occurs), and traffic efficiency (the maximum and minimum speeds of vehicles at the intersection), aiming to achieve safe and efficient autonomous driving decisions. Simulation experiments demonstrate that the proposed decision-making approach exhibits superior stability and convergence during training compared with other algorithms, showing higher success rates and safety levels across varying traffic densities. These findings underscore the significant potential of the proposed decision-making solution for addressing the challenges of unsignalized intersection environments, thereby advancing research on autonomous driving decision-making under complex road conditions. Keywords: autonomous driving; intelligent decision-making; unsignalized intersection; MAPPO algorithm
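A minimal sketch of a per-step reward combining the three ingredients named in the abstract (rule compliance, safety, speed-based efficiency); the weights, speed bounds and exact terms below are assumptions for illustration, not the values used in the paper.

def intersection_reward(arrived, collided, broke_rule, speed,
                        v_min=2.0, v_max=10.0,
                        w_rule=-1.0, w_arrive=10.0, w_collide=-10.0, w_eff=0.1):
    """Illustrative reward for an unsignalized-intersection agent: rule
    violations and collisions are penalised, safe arrival is rewarded, and
    speeds are encouraged to stay within [v_min, v_max]."""
    r = 0.0
    if broke_rule:
        r += w_rule
    if collided:
        r += w_collide
    if arrived:
        r += w_arrive
    # Efficiency term: positive when the vehicle keeps a reasonable speed,
    # clipped so that exceeding v_max is not rewarded further.
    r += w_eff * min(max(speed, 0.0), v_max) / v_max
    # Penalty for crawling below v_min (blocking the intersection).
    if speed < v_min:
        r -= w_eff
    return r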
Method for Predicting Oilfield Development Indicators Based on Informer Fusion Model
ZHANG Qiang, XUE Chenbin, PENG Gu, LU Qing
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  799-807. 
A fusion model based on the material balance equation and Informer is proposed to solve the prediction problem of oilfield development indicators. Firstly, mechanism models for oilfield production before and after the onset of decline are established using domain knowledge of the material balance equation. Secondly, the established mechanism model is fused into the loss function of the Informer model as a constraint, yielding an indicator prediction model that conforms to the physical laws of oilfield development. Finally, actual oilfield production data are used for experimental analysis. The results indicate that, compared with several purely data-driven recurrent-structure prediction models, the fusion model has better prediction performance under the same data conditions. The mechanism constraints guide the training process, so that the model converges faster and its predictions at peaks and troughs are more accurate. The fusion model has better predictive and generalization ability, as well as a certain degree of physical interpretability.
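One common way to fuse a mechanism model into network training, consistent with the description above, is to add its residual to the data loss as a soft constraint. The sketch below assumes a PyTorch setting and a user-supplied mechanism_residual function standing in for the material balance equation; it is an illustration, not the paper's implementation.

import torch

def constrained_loss(pred, target, mechanism_residual, lam=0.1):
    """Data loss plus a penalty on how far predictions deviate from the
    mechanism (material-balance) model; mechanism_residual should return a
    tensor of residuals evaluated at the predictions."""
    data_loss = torch.nn.functional.mse_loss(pred, target)
    # Soft physics constraint: residuals of the mechanism model at the predictions.
    physics_loss = torch.mean(mechanism_residual(pred) ** 2)
    return data_loss + lam * physics_loss

The weight lam trades off fidelity to the data against fidelity to the physical model; larger values enforce the mechanism more strongly at the cost of fitting flexibility.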
Unmanned Vehicle Path Planning Based on Improved JPS Algorithm 
HE Jingwu, LI Weidong
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  808-816. 
To address issues such as excessive turning points and suboptimal paths in the traditional JPS (Jump Point Search) algorithm, an improved jump point search algorithm is proposed. First, based on the traversability of the map, obstacles are adaptively expanded to ensure a safe distance. Then, an improved heuristic function based on a directional factor is integrated, and a key-point extraction strategy is proposed to optimize the initially planned path, significantly reducing the number of expanded nodes and turning points while ensuring the shortest path. The experimental results show that, compared with the traditional JPS algorithm, the proposed algorithm yields shorter paths with fewer turning points, while reducing the number of expanded nodes by an average of 19% and improving search speed by an average of 21.8%.
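A hedged sketch of how a directional factor can be folded into an octile-distance heuristic so that moves deviating from the goal direction are slightly penalized; the abstract does not give the exact formula, so the weighting and form below are illustrative.

import math

def directional_heuristic(node, goal, parent, turn_weight=0.2):
    """Octile distance scaled by a directional factor: moves whose travel
    direction deviates from the straight line towards the goal are penalised
    slightly, which tends to reduce turning points (illustrative form)."""
    dx, dy = abs(goal[0] - node[0]), abs(goal[1] - node[1])
    octile = (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)

    if parent is None:
        return octile
    # Cosine of the angle between the current move direction and the goal direction.
    move = (node[0] - parent[0], node[1] - parent[1])
    to_goal = (goal[0] - node[0], goal[1] - node[1])
    norm = math.hypot(*move) * math.hypot(*to_goal)
    cos_angle = 1.0 if norm == 0 else (move[0] * to_goal[0] + move[1] * to_goal[1]) / norm
    return octile * (1.0 + turn_weight * (1.0 - cos_angle))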
Application of Composite Neural Network Based on CNN-LSTM in Fault Diagnosis of Oilfield Wastewater System 
ZHONG Yan
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  817-828. 
This study aims to improve the intelligence and accuracy of fault diagnosis in oilfield wastewater systems. A composite neural network is constructed from convolutional neural networks and long short-term memory networks, and its structure is optimized using the Adam and stochastic gradient descent methods to improve the convergence speed and fault-diagnosis accuracy of the model. The study is validated through experiments, and the results show that the optimization algorithm improves the accuracy of the model to around 0.87 and reduces the diagnostic loss rate to around 0.032. The average detection accuracy of the composite neural network reaches 0.888, with an accuracy of 0.883 and a recall rate of 0.789. Applied to fault diagnosis of oilfield wastewater systems, the composite neural network enables intelligent fault detection, reduces economic costs, and supports the construction of smart oilfields.
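A minimal PyTorch sketch of a CNN-LSTM classifier of the kind described above: 1-D convolutions extract local waveform features, an LSTM models their temporal dependence, and a linear head outputs fault classes. Layer sizes, channel counts and the number of classes are placeholders, not the configuration used in the paper; in practice it would be trained with torch.optim.Adam or SGD as the abstract indicates.

import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM for multichannel time-series fault diagnosis."""
    def __init__(self, in_channels=8, n_classes=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, channels, time)
        feats = self.cnn(x)             # (batch, 64, time/2)
        feats = feats.transpose(1, 2)   # (batch, time/2, 64) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])    # classify from the last time step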
 Research on Dung Beetle Optimization Algorithm Based on Mixed Strategy
QIN Xiwen, LENG Chunxiao, DONG Xiaogang
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  829-839. 
The dung beetle optimization algorithm suffers from easily falling into local optima and an imbalance between global exploration and local exploitation. To improve its search ability, a mixed-strategy dung beetle optimization algorithm is proposed. The Sobol sequence is used to initialize the population so that the dung beetle population traverses the whole solution space more thoroughly. The golden sine algorithm is added to the ball-rolling dung beetle position-updating stage to improve convergence speed and search accuracy, and a hybrid mutation operator is introduced for perturbation to improve the algorithm's ability to jump out of local optima. The improved algorithm is tested on eight benchmark functions and compared with the grey wolf optimization algorithm, the whale optimization algorithm and the original dung beetle optimization algorithm to verify the effectiveness of the three improvement strategies. The results show that the mixed-strategy dung beetle optimization algorithm achieves significant improvements in convergence speed, robustness and search accuracy.
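Two of the three strategies can be sketched compactly: Sobol-sequence initialization (via scipy.stats.qmc) and the golden-sine position update. The coefficients follow the standard Gold-SA operator and are assumptions about the paper's exact formulation.

import numpy as np
from scipy.stats import qmc

def sobol_init(n, dim, lb, ub, seed=0):
    """Initialise the population with a Sobol sequence so that it covers
    the solution space more evenly than uniform random sampling."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    return qmc.scale(sampler.random(n), lb, ub)

def golden_sine_update(x, best, rng=np.random.default_rng(0)):
    """Golden-sine position update applied to the ball-rolling dung beetles
    (illustrative form of the standard Gold-SA operator)."""
    tau = (np.sqrt(5) - 1) / 2                  # golden ratio coefficient
    x1 = -np.pi + (1 - tau) * 2 * np.pi
    x2 = -np.pi + tau * 2 * np.pi
    r1 = rng.uniform(0, 2 * np.pi, size=x.shape)
    r2 = rng.uniform(0, np.pi, size=x.shape)
    return x * np.abs(np.sin(r1)) - r2 * np.sin(r1) * np.abs(x1 * best - x2 * x)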
Design of Dynamic Feature Enhancement Algorithm in 3D Virtual Images
XUE Feng, TAO Haifeng
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  840-846. 
To effectively solve the problem of uneven brightness in 3D virtual images, a dynamic feature enhancement algorithm for 3D virtual images is proposed. A median filtering algorithm and a wavelet soft-thresholding algorithm are combined to denoise the 3D virtual image. New structural elements are set according to visual selection characteristics, connected-particle attributes are constructed, and a hierarchical statistical model is used to perform color conversion and structural-element matching on the image, from which the corresponding mapping subgraphs are obtained and dynamic features are extracted. The 3D virtual image is then input into an improved U-Net++ network, where dense connections across layers enhance the correlation of image features at different levels, and all dynamic features are fused for detail reconstruction, achieving dynamic feature enhancement of the 3D virtual image. The experimental results show that the proposed algorithm achieves satisfactory dynamic feature enhancement of 3D virtual images.
Method for Recognizing Anomalous Data from Bridge Cable Force Sensors Based on Deep Learning
LIU Yu, WU Honglin, YAN Zeyi, WEN Shiji, ZHANG Lianzhen
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  847-855. 
Bridge sensor anomaly detection monitors the state of a bridge structure in real time based on sensor technology; its purpose is to discover and identify structural anomalies in time so as to prevent accidents. An abnormal-signal detection and identification method for bridge sensors based on deep learning is proposed. An abnormal-data detection algorithm for bridge cable force sensors based on the LSTM (Long Short-Term Memory) network model effectively locates abnormal data, with a detection precision of 99.8% and a recall of 95.3%. Combining deep learning networks with the actual working conditions of bridge sensors, an anomaly classification algorithm for bridge cable force sensors based on the CNN (Convolutional Neural Network) model realizes intelligent identification of 7 types of signals in cable force sensor data, with precision and recall above 90% for multiple abnormal data types. Compared with current methods for detecting and classifying bridge sensor anomalies, the proposed method achieves accurate detection of abnormal data and intelligent identification of anomaly types, guaranteeing the accuracy of bridge sensor monitoring data and the effectiveness of subsequent performance-index identification.
Employment Position Recommendation Algorithm for University Students Based on User Profile and Bipartite Graph 
HE Jianping, XU Shengchao, HE Minwei
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  856-865. 
To improve the employment matching and human-resource utilization efficiency of college students, many researchers have worked on effective job recommendation algorithms. However, existing algorithms often rely on a single information source or simple user classification, which cannot fully capture the multidimensional features and personalized needs of college students, resulting in poor recommendation performance. Therefore, a job recommendation algorithm for college students based on user profiles and bipartite graphs is proposed. With the aid of a conditional random field model integrated with a long short-term memory neural network, basic user information is extracted from the archives management system of the university library, from which user profiles of college students are generated. The distances between different user-profile features are calculated, and the k-means clustering algorithm is used to cluster the profiles. A bipartite graph network is used to build the basic job recommendation structure, and a preliminary recommendation scheme is designed based on energy distribution. Finally, a weighted random forest model classifies college students' employment positions by considering users' preferences for position features, and the scores of the initial recommendation list are revised to obtain accurate recommendation results for college students' full employment positions. The experimental results show that the proposed method produces a recommendation list of 120 full employment positions with a hit rate of 0.94, indicating that it can accurately recommend employment positions and thus improve employment matching and human-resource utilization efficiency.
Research on Scoring Method of Skiing Action Based on Human Key Points
MEI Jian, SUN Jiayue, ZOU Qingyu
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  866-873. 
The training actions of skiing athletes directly reflect their level, but traditional methods for identifying and evaluating actions suffer from subjectivity and low accuracy. To achieve accurate analysis of skiing posture, a motion analysis algorithm based on improved OpenPose and YOLOv5 (You Only Look Once version 5) is proposed to analyze athletes' movements. There are two main improvements. First, CSP-Darknet53 (Cross Stage Partial Network 53) is used as the external network for OpenPose to reduce the dimension of the input image and extract the feature map. Then, the YOLOv5 algorithm is fused to optimize it. The key points of the human skeleton are extracted to form the skeleton and compared with the standard action; according to the angle information, a loss function is added to the model to quantify the error between the detected action and the standard action. The model achieves accurate, real-time evaluation of athletes' actions in training scenarios and can complete preliminary action assessment. The experimental results show that the detection and recognition accuracy reaches 95%, which meets the needs of daily skiing training.
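A plausible way to turn extracted keypoints into an angle-based comparison, in the spirit of the abstract: compute joint angles from keypoint triples and measure their deviation from the standard action. The specific angle set and error measure below are assumptions for illustration.

import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b formed by keypoints a-b-c,
    e.g. hip-knee-ankle for a skiing squat posture."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def angle_error(detected_angles, standard_angles):
    """Mean absolute angle error between the detected skeleton and the
    standard action; smaller values mean the action is closer to standard."""
    d, s = np.asarray(detected_angles), np.asarray(standard_angles)
    return float(np.mean(np.abs(d - s)))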
Research on Decoding Algorithm of Target LED Array for OCC System 
SUN Tiegang, CAI Wen, LI Zhijun
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  874-880. 
In an OCC (Optical Camera Communication) system, a camera is used to capture images of a target LED (Light Emitting Diode), and outdoor ambient light interference degrades system performance. Strong sunlight makes decoding at the receiving end of the OCC system very difficult; to solve this problem, a Gradient-Harris decoding algorithm based on piecewise linear gray transformation is proposed. An OCC experimental system is built, original images are captured by the camera at the receiving end, and the target LED array region is extracted by a standard correlation coefficient matching method. The image of the target LED array region is enhanced by the piecewise linear gray transformation, and the Gradient-Harris decoding algorithm is used for shape extraction and state recognition of the target LED array. The experimental results show that the proposed Gradient-Harris decoding algorithm based on piecewise linear gray transformation is effective for the OCC experimental system in a strong-sunlight environment: the average decoding rate is 128.08 bit/s, the average bit error rate is 4.38 × 10⁻⁴, and the maximum communication distance is 55 m.
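The piecewise linear gray transformation can be sketched as a three-segment intensity mapping; the breakpoint values below are illustrative, not those tuned for the strong-sunlight scenes in the paper.

import numpy as np

def piecewise_linear_gray(img, r1=80, s1=30, r2=180, s2=230):
    """Three-segment linear gray-level stretch: pixels between the
    breakpoints (r1, r2) are mapped to (s1, s2), raising the contrast of the
    LED region relative to the bright background (illustrative breakpoints)."""
    img = img.astype(np.float64)
    out = np.interp(img, [0, r1, r2, 255], [0, s1, s2, 255])
    return out.astype(np.uint8)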
Improved Method of Medical Image Classification Based on Contrast Learning 
LIU Shifeng, WANG Xin
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  881-888. 
Medical image classification is an important way to determine a patient's illness and give corresponding treatment advice. Because medical image labeling requires specialized expertise, large-scale classification labels are difficult to obtain, which limits the development of deep-learning-based medical image classification. To address this problem, self-supervised contrastive learning is applied to medical image classification tasks in this paper: contrastive learning is used in the pre-training stage, where features are learned from unlabeled medical images to provide prior knowledge for subsequent classification. Experimental results show that the proposed improvement based on self-supervised contrastive learning enhances the classification performance of ResNet.
Algorithm for Identifying Abnormal Data in Communication Networks Based on Multidimensional Features 
JIANG Ning
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  889-893. 
To solve the problem of low accuracy in identifying abnormal data with existing methods, an abnormal-data recognition algorithm for communication networks based on multidimensional features is proposed. The current speed and position of particles in the particle swarm optimization algorithm are adjusted to obtain multidimensional data samples of the communication network. Data features are extracted through clustering analysis in data mining, density indicators are determined, and the multidimensional features of the data are obtained. The extracted multidimensional features are fed into a deep belief network for recognition, and anomalies in communication network data are identified based on changes in feature-spectrum amplitude. The experimental results show that the algorithm can effectively identify abnormal data features in communication networks with high recognition accuracy.
Optimization Method for Unstructured Big Data Classification Based on Improved ID3 Algorithm
TANG Kailing, ZHENG Hao
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  894-900. 
During the classification of unstructured big data, large amounts of redundant data reduce classification accuracy if they are not cleaned in time. To improve the effectiveness of data classification, an unstructured big data classification optimization method based on the improved ID3 (Iterative Dichotomiser 3) algorithm is proposed. To address the problems of excessive redundant data and complex data dimensions in unstructured big data sets, the method cleans the data and combines supervised identification matrices to achieve dimensionality reduction. Based on the dimensionality-reduced data, the improved ID3 algorithm is used to build a decision tree classification model, through which unstructured big data is classified and processed to achieve accurate classification. The experimental results show that the method classifies unstructured big data well and with high accuracy.
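For reference, the attribute-selection core that ID3 builds on is the information gain computed from Shannon entropy; the sketch below shows the standard (unimproved) criterion, not the paper's modification.

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Information gain of splitting rows on the attribute at attr_index;
    ID3 chooses the attribute with the largest gain at each node."""
    base = entropy(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(sub) / len(labels) * entropy(sub) for sub in subsets.values())
    return base - remainder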
Research on Method of Engine Fault Diagnosis Based on Improved Minimum Entropy Deconvolution 
LI Jing
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  901-907. 
In the process of 3D reconstruction of digital images, noise and distortion in the original data lead to low efficiency and accuracy of feature matching. To address this issue, a 3D digital image reconstruction method based on the SIFT (Scale-Invariant Feature Transform) feature point extraction algorithm is proposed. A bilateral filtering algorithm is used to eliminate environmental noise in the digital image while retaining edge information, improving the accuracy of feature point extraction. The SIFT algorithm is used to obtain matched feature point pairs, which are used as initial patches in a dense matching method for multi-view images of spatial objects to achieve 3D reconstruction of digital images. The experimental results show that the proposed method has high feature matching efficiency and accuracy and strong noise reduction ability, with an average generation time of 26.74 ms for 3D reconstructed images.
Hospital Network Abnormal Information Intrusion Detection Algorithm Based on Deep Generative Models 
WU Fenglang, LI Xiaoliang
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  908-913. 
To ensure the security management of hospital information networks and avoid medical information leakage, an intrusion detection algorithm for abnormal information in hospital networks based on a deep generative model is proposed. The binary wavelet transform is used to perform multi-scale decomposition of hospital network operation data, combined with adaptive soft-threshold denoising to extract effective data. The Wasserstein distance and MMD (Maximum Mean Discrepancy) distance from optimal transport theory are used to reduce the dimensionality of the hospital network data in the deep generative model; the dimension-reduced samples of normal network operation data are input into the anomaly detection model, and sample characteristics are extracted. The Adam algorithm from deep learning is then used to generate an anomaly-information discrimination function, and the characteristics of the tested network operation data are compared with those of normal operation data to achieve intrusion detection of abnormal information in the hospital network. The experimental results show that the algorithm efficiently detects intrusions of abnormal information in hospital networks, accurately identifies multiple types of network intrusion behaviors, and provides security guarantees for the network operation of medical institutions.
Software Reliability Testing Method Based on Improved G-O Model
LIU Zao, GAO Qinxu, DENG Abei, XIN Shijie, YU Biao
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  914-920. 
To overcome the oversimplified treatment of the defect discovery rate in the traditional G-O (Goel-Okumoto) software reliability model, an improved model that more accurately describes the actual change of the defect discovery rate over time is proposed. Unlike the conventional assumption that the rate is a constant or a monotonic function, the improved model considers the growth of testers' learning and debugging capabilities together with the inherent tendency of the defect discovery rate to decrease over time, and therefore assumes that the defect discovery rate first increases and then declines. The model's effectiveness is verified by applying it to two public software defect detection datasets and comparing it with a variety of classic models. Experimental results confirm that the improved G-O model demonstrates excellent fitting and prediction capability, proving its applicability and superiority in software reliability assessment.
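For context, the classic G-O mean value function, together with one illustrative non-monotonic defect discovery rate (rising to a peak and then decaying), can be written as follows; the specific b(t) is an assumed example, not necessarily the improved form adopted in the paper.

% Classic G-O mean value function with a constant fault detection rate b:
\[ m(t) = a\left(1 - e^{-bt}\right) \]
% One illustrative discovery rate that first increases and then decreases,
% peaking at t = t_p (an assumed form for illustration only):
\[ b(t) = b_{\max}\,\frac{t}{t_p}\,e^{\,1 - t/t_p}, \qquad
   m(t) = a\left(1 - e^{-\int_0^{t} b(\tau)\,\mathrm{d}\tau}\right) \]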
Research on Construction of Knowledge Graph for Electrical Construction Based on Multi-Source Data Fusion
CHEN Zhengfei, XI Xiao, LI Zhiyong, YANG Hang, ZHANG Xiaocheng, ZHANG Yonggang
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  921-929. 
To solve the problem of multi-source heterogeneous data encountered by electric power construction companies in transitioning to whole-process consulting business, as well as the challenge of design managers needing to collaborate remotely under unforeseen circumstances, a knowledge graph is constructed using the enterprise's private cloud as the basic environment and multi-source heterogeneous data fusion technology. The system optimizes the data management process and ensures data consistency and availability by integrating diverse data sources from provinces and cities across the country. It ultimately realizes distributed collaborative management of the whole consulting process and significantly improves the core competitiveness of the enterprise. At the same time, it effectively solves the problems of varied data, wide-ranging sources and diverse, inconsistent protocols, improves data quality and accuracy, and unifies the storage architecture to enhance overall data management efficiency and decision-support capability.
Research on AI Modeling Approaches of Financial Transactional Fraud Detection
QIAN Lianghong, WANG Fude, SONG Hailong
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  930-936. 
To detect transactional fraud in the financial services industry and maintain financial security, an end-to-end modeling framework, methodology, and model architecture are proposed for financial transaction data with imbalanced and discrete classes. The framework covers data preprocessing, model training, and model prediction. The performance and efficiency of different models with different numbers of features are compared and validated on a real-world dataset. The results demonstrate that the proposed approach can effectively improve the accuracy and efficiency of financial transaction fraud detection, providing a reference for financial institutions to select models with different types and numbers of features according to their own optimization goals and resource constraints: tree-based models excel with over 200 features in resource-rich settings, neural networks are optimal for medium-sized feature sets (100~200), and decision trees or logistic regression are suitable for small feature sets in resource-constrained, long-tail scenarios.
Strongly Robust Data Security Algorithms for Edge Computing 
LIU Yangyang, LIU Miao, NIE Zhongwen
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  937-942. 
Distributed deployment of sensors leads to single, imbalanced data distributions on edge servers, and model training under edge computing can also suffer serious privacy leakage due to data-set pollution caused by gradient anomalies. RDSEC (Strongly Robust Data Security Algorithms for Edge Computing) is therefore proposed: an encryption algorithm is used to encrypt the parameters of the edge server to protect privacy, and if the gradient anomaly detection at an edge node finds an anomaly, the edge node uploads the gradient together with a signal telling the cloud center whether the parameters currently uploaded by that node are usable. Experimental results on the CIFAR10 and Fashion data sets show that the algorithm efficiently aggregates the parameters of edge servers and improves the computing power and accuracy of edge nodes. While ensuring data privacy, the robustness, accuracy and training speed of the model are greatly improved, and high accuracy is achieved at the edge nodes.
Research on Construction and Recommendation of Learner Model Integrating Cognitive Load 
YUAN Man, LU Wenwen
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  943-951. 
Current learner models lack exploration of cognitive load, a load generated by the cognitive system during learning that has a significant impact on learners' learning state. Based on the CELTS-11 (China E-Learning Technology Standardization-11) specification proposed by the China E-Learning Technology Standardization Committee, cognitive load is integrated into the learner model as one dimension, and an LMICL (Learner Model Incorporating Cognitive Load) combining static and dynamic information is constructed. Relying on an adaptive learning system, learning resources are then recommended on the basis of both a learner model without cognitive load and the LMICL, yielding two different recommendation results. Two classes of learners were randomly selected to use the system, and their academic performance, cognitive load and satisfaction were used to validate the effectiveness of LMICL. The results show that recommendation based on LMICL achieves a better learning effect than that based on the learner model without cognitive load.
Deep Interactive Image Segmentation Algorithm for Digital Media Based on Edge Detection 
HE Jing, QIU Xinxin, WEN Qiang
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  952-958. 
Deep interactive images in digital media are affected by noise, which degrades edge detection and thus segmentation accuracy. Therefore, a deep interactive image segmentation algorithm for digital media based on edge detection is proposed. Firstly, the wavelet transform is used to denoise digital media images and improve segmentation accuracy. Secondly, a Gaussian function and a low-pass filter are used to enhance the denoised image and improve its definition, facilitating segmentation. Finally, edge detection is performed on the digital media image using an adaptive threshold algorithm: upper and lower thresholds are computed over the pixel set, and edge pixels lying between the two thresholds are connected to complete the digital media image segmentation. The experimental results show that the proposed method has a good denoising effect, high segmentation accuracy, and high segmentation efficiency.
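A compact sketch of the dual-threshold edge-detection step using OpenCV, with the two thresholds derived adaptively from the median intensity (a common heuristic, used here as an assumption); the wavelet-denoising stage is replaced by a Gaussian blur for brevity, so this is illustrative rather than the paper's exact pipeline.

import cv2
import numpy as np

def segment_edges(gray):
    """Smooth, then detect edges with high/low thresholds derived from the
    median intensity; cv2.Canny links edge pixels between the thresholds."""
    denoised = cv2.GaussianBlur(gray, (5, 5), 1.0)
    med = np.median(denoised)
    low = int(max(0, 0.66 * med))
    high = int(min(255, 1.33 * med))
    return cv2.Canny(denoised, low, high)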
 Improved Decision Tree Algorithm for Big Data Classification Optimization 
TANG Lingyi, TANG Yiwen, LI Beibei
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  959-965. 
Because of the complex structure and features of current massive data, big data exhibits unstructured and small-sample characteristics, making it difficult to classify with high accuracy and efficiency. Therefore, a big data classification optimization method based on an improved decision tree algorithm is proposed. A fuzzy decision function is constructed to detect the sequence features of big data, and these features are input into a decision tree model to mine and train classification rules. The decision tree model is improved using the grey wolf optimization algorithm, the big data is classified using the improved model, and a classifier-accuracy objective function is then established to achieve accurate classification. The experimental results show that the proposed method achieves the highest classification accuracy and the lowest false-positive rate among the compared methods, while maintaining high overall throughput and improving classification efficiency.
Mobile Terminal Access Control Technology Based on EVM Measurement Algorithm
CAO Luhua
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  966-971. 
Because the characteristics of users and data requesting access from mobile terminals are difficult to determine, access control is difficult. An access control technology based on the EVM (Error Vector Magnitude) measurement algorithm is proposed to solve this problem effectively. Considering the impact of noise and other interference in the environment, a QoS (Quality of Service) condition is used as the initial access condition for users. The characteristics of users or data that meet this condition are calculated, and the feature values are converted into weight factors used as references for access control. The EVM measurement algorithm is used to calculate the difference between signals inside and outside the terminal channel, and the user weight factor is used to derive the access threshold of the mobile terminal. Increasing and decreasing functions between different user thresholds and control values are solved, and precise control of mobile terminal access is achieved according to the priority order of these functions. The experimental data show that the proposed method has high access-control accuracy; after control, terminal transmission delay and blocking rate are significantly improved, and the data arrival rate also increases significantly.
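For reference, the EVM metric referred to above has a standard RMS definition, sketched below; how it is mapped to the access threshold is specific to the paper and not reproduced here.

import numpy as np

def evm_percent(measured, reference):
    """Root-mean-square error vector magnitude of a block of complex symbols,
    expressed as a percentage of the reference RMS power."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    err_power = np.mean(np.abs(measured - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)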
 Integration Framework of Library Resourcing and Runtime Deployment for Logging Software
ZHAO Dong, XIAO Chengwen, GUO Yuqing, JI Jie, HU Yougang
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  972-978. 
The traditional library integration method for desktop applications has limitations in practice, such as expansion of the standard OS directories, complexity of making distribution packages, the need to modify intermediate-layer libraries when multi-level library calls are involved, and inconsistency between development and deployment environments. To solve these problems, an integration framework is proposed. Its core ideas are to manage libraries in the same way as resources such as images, and to deploy libraries dynamically at runtime based on the detected constraints and dependencies between libraries. Through the design and collaboration of four components, library resource management, runtime dynamic deployment, runtime dynamic loading and the resource manager, the integration framework combines these two core ideas for the first time. Its practical application to method-module integration in the CIFLog integrated logging platform shows that the framework solves the problems of traditional library integration. The framework can be applied to library integration for all desktop applications, providing a new idea for desktop application library integration.
Optimization of Internet of Things Identity Authentication Based on Improved RSA Algorithm
WANG Dezhong
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  979-984. 
In view of the low accuracy and efficiency of Internet of Things identity authentication under the influence of noise, a new optimization method for Internet of Things identity authentication based on an improved RSA (Rivest-Shamir-Adleman) algorithm is proposed. A transmission channel model is constructed to obtain user identity information, and a noise reduction model is constructed to preprocess the user identity data. Based on the processed data, user identity characteristics are extracted to build the Internet of Things identity authentication algorithm. On this basis, the RSA algorithm is introduced to encrypt user identity information, realizing the optimization of Internet of Things identity authentication. The proposed method is not easily affected by noise: under noisy conditions, the maximum error between the achieved authentication rate and the ideal authentication rate is only 3.7%, showing that the method is feasible and effective.
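A toy sketch of the RSA key-generation/encrypt/decrypt flow referenced above, using small textbook parameters; real IoT deployments use vetted cryptographic libraries, large keys and padding, and the paper's specific improvement to RSA is not reflected here.

def rsa_keys(p=61, q=53, e=17):
    """Toy RSA key generation; real systems use large random primes."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                 # modular inverse of e (Python 3.8+)
    return (e, n), (d, n)

def rsa_encrypt(m, key):
    e, n = key
    return pow(m, e, n)

def rsa_decrypt(c, key):
    d, n = key
    return pow(c, d, n)

public, private = rsa_keys()
assert rsa_decrypt(rsa_encrypt(42, public), private) == 42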
Sorting Algorithm of Web Search Based on Softmax Regression Classification Model
DANG Mihua
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  985-990. 
Web search results suffer from domain drift, where returned pages are unrelated to the domain of the search keywords, so that users cannot find the information they need. Therefore, a web search sorting algorithm based on a Softmax regression classification model is proposed. Feature selection is performed on web search text to obtain the corresponding feature items. Using a vector representation model, the selected feature items are converted into formatted data, and the web search text data is balanced to obtain a web search text data set. The Softmax regression classification model is used to classify this data set and predict the types of web search texts, and the Okapi BM25 algorithm is then used to rank the texts, completing the web search sorting. The experimental results show that the proposed algorithm performs well, effectively improving the accuracy of web search sorting and avoiding domain drift during the sorting process.
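For reference, the Okapi BM25 ranking function used in the final sorting step has the standard form sketched below (with a +1 inside the IDF logarithm to keep it non-negative); the parameters k1 and b are the usual defaults, not necessarily those used in the paper.

import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.5, b=0.75):
    """Okapi BM25 relevance of one document to a query. doc_freq maps a term
    to the number of documents containing it; avg_len is the average document
    length in the collection."""
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        idf = math.log((n_docs - doc_freq.get(term, 0) + 0.5) /
                       (doc_freq.get(term, 0) + 0.5) + 1)
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avg_len))
    return score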
Challenges and Countermeasures of Information Security in Digital Transformation of Libraries 
ZHANG Shiyue
Journal of Jilin University (Information Science Edition). 2024, 42 (5):  991-996. 
With the advancement of information technology, the digital transformation of libraries has become a key avenue for enhancing service efficiency. During this transformation, however, information security issues have become increasingly prominent, threatening the protection of library resources and the security of user data. This study addresses these challenges by proposing a comprehensive information security protection system. By analyzing the main information security risks currently faced by libraries, including cyber attacks, copyright disputes, insufficient security awareness among management personnel, and low levels of resource sharing, a four-tier protection system is constructed consisting of the user application layer, service platform layer, data center layer, and infrastructure layer. This system can effectively enhance the security and access control of library information resources and strengthen the security of digital resource access. In the process of digital transformation, libraries must treat information security as a core factor and build a comprehensive protection system through the collaboration of technology, management, and organization to ensure the secure and efficient use of digital resources.