AUTOMATION AND PROCESS MANAGEMENT IN TRANSPORTATION
In this paper, we analyse various technical solutions for autonomous driving. Depending on the role of an autonomous system, different safety integrity levels may be required. We examine the three primary architectures. The first is a driver support system that requires only basic integrity rather than a safety integrity level (SIL). The second is a simple replacement of the driver, which corresponds to SIL 1 or SIL 2. The third architecture integrates ATO (automatic train operation) into a safe train protection system, which corresponds to SIL 4.
Improving the management of the technical state of equipment, devices, and installations whose service life exceeds the respective standard value is a most important problem of national security, as their proportion already exceeds 60%. The paper presents the findings of an analysis of literature data in the respective subject area that confirm its relevance and significance. Importantly, the findings apply not only to electrical power systems, but to many other industrial systems as well. The primary difficulties associated with solving the problem at hand are, first and foremost, the small quantity of statistical data characterising operational dependability, as well as their multidimensional and random nature. The authors suggest solving the problem by abandoning average annual dependability indicators in favour of average monthly operational dependability indicators. The authors provide a brief description of the individual solutions, as part of the overall problem, concerning overhead power lines, which together represent a new methodology for managing the technical state of distributed facilities. The research-intensive, cumbersome, and time-consuming calculation algorithms call for the use of intelligent systems. The management of the electrical power system and its individual business units is to receive monthly specialised forms with recommendations as to ways of improving the dependability of overhead power lines by remedying wear and tear.
SYSTEM ANALYSIS IN DEPENDABILITY AND SAFETY
The growing urban development and expanding metropolitan areas emphasise the importance of sustainable, efficient, and environmentally friendly transportation systems that provide convenience and accessibility. In particular, the construction of subways plays a key role in improving accessibility and stimulating economic progress. Given the high complexity and financial costs that such large projects entail, innovative planning methods are needed to reduce risks and use resources efficiently. This study aims to develop an advanced graph-based planning algorithm able to manage resources as efficiently as possible. Methods. The paper presents the results of a comprehensive study of topical publications on the improvement of civil engineering processes and the integration of graph theory to enhance the controllability of complex systems, e.g., as part of subway construction. The study relied on the monographic method of analysis, which examines each element of the problems at hand, and on the reflexive method, used to comprehend the obtained information and draw substantiated conclusions. The combination of these methods made it possible not only to evaluate, but also to confirm the advantages of the proposed optimisation strategy focused on improving efficiency and adaptability in construction projects, especially subway construction, taking into account their specificity and real-world requirements. Results. The paper notes the significant role of graph theory in improving the efficiency of subway construction. The application of this mathematical method allows improving the project management system while taking into account the limited resources. Graph theory acts as the key element that brings structure and order to complex processes and ensures rapid adaptation to any changes in the course of construction activities.
The developed algorithm is distinguished by enabling highly efficient resource allocation, reducing downtime, and eliminating delays at various stages of a project. Thus, a higher accuracy of planning and overall economic efficiency of construction activities are achieved. The paper also delves into multiobjective optimisation. This method enables a balance of construction time, budget, and quality, which is important for such large-scale and resource-intensive projects as subway construction. Conclusions. Efficient use of the proposed algorithm will significantly improve the quality of management and reduce the costs and time required for the delivery of construction projects, which makes the method not only relevant, but also vital to their success. This opens up new opportunities for research and application of the above methods in urban planning and the deployment of major infrastructures. The paper will be of particular interest to urban planners, civil engineers, process optimisation researchers, as well as project managers responsible for the planning and successful delivery of large-scale civil engineering projects.
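The graph-based scheduling idea summarised above can be sketched as a minimal critical-path computation over a task dependency graph. This is only an illustrative sketch: the task names, durations, and dependencies below are invented and do not come from the paper.

```python
# Minimal critical-path sketch: construction tasks as nodes of a DAG,
# dependencies as edges, earliest finish times found in topological order.
# All task names, durations (months), and edges are invented for illustration.
from graphlib import TopologicalSorter

duration = {"survey": 2, "tunnel": 10, "stations": 6, "track": 4, "signalling": 3}
# task -> set of prerequisite tasks
deps = {
    "tunnel": {"survey"},
    "stations": {"survey"},
    "track": {"tunnel"},
    "signalling": {"tunnel", "stations"},
}

def critical_path_length(duration, deps):
    """Earliest possible finish time of the whole project."""
    finish = {}
    for task in TopologicalSorter(deps).static_order():
        start = max((finish[p] for p in deps.get(task, ())), default=0)
        finish[task] = start + duration[task]
    return max(finish.values())

print(critical_path_length(duration, deps))  # -> 16
```

A planner can re-run such a computation after any change in task durations or dependencies, which is the kind of rapid adaptation the abstract attributes to the graph-based approach.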
The relevance of the research topic is due to the need for an objective assessment of the technical state of power grids as one of the effective tools used in drafting emergency response measures. The Aim of the paper is to analyse the operational condition of one of the branches of Rosseti Volga, PJSC, the Ulyanovsk Distribution Networks, in the process of transmission of electrical power throughout its business units. Given the task at hand, an objective description of the structure and balance of the company’s power supply network was produced. Methods. As the main methods of research, the author used the general scientific methods of statistical and numerical analysis, as well as circuit theory and prediction theory. Excel, MATLAB, and proprietary software packages were used as calculation tools. The author analysed emergency situations in the company’s networks over a long period of observation and defined the criteria for assessing failures depending on the amount of undersupplied electrical power. The research analysed the main causes of damage to the examined power networks for the period between 2018 and 2023 and identified their share in the total number of failures over that period. The short-term occurrence of emergency outages was preventively assessed with allowance for seasonality, i.e., the possible fluctuation of failures due to the particular combination of climate conditions of the territories where the electrical networks are situated. The calculated data was visualised using Excel and MATLAB. Conclusions. The paper’s findings may be of interest to the management of the Ulyanovsk Distribution Networks as the foundation of a potential set of emergency measures. The article may also be of interest to the engineering services of other power grid companies and to researchers involved with improving the reliability of power supply.
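The seasonality assessment mentioned above amounts to estimating each month’s share of annual failures from several years of observations. A minimal sketch under that reading follows; all monthly counts are invented and are not the company’s data.

```python
# Sketch of a seasonal failure profile: for each calendar month, average
# its share of the annual failure count over the observation years.
# All failure counts below are invented for illustration.
import statistics

monthly_failures = {
    2021: [12, 9, 7, 5, 4, 6, 8, 7, 5, 6, 10, 14],
    2022: [10, 8, 6, 5, 5, 7, 9, 8, 6, 7, 11, 13],
    2023: [11, 9, 8, 6, 4, 6, 10, 9, 5, 6, 9, 12],
}

def seasonal_profile(data):
    """Average share of annual failures falling on each month (Jan..Dec)."""
    return [
        statistics.mean(year[m] / sum(year) for year in data.values())
        for m in range(12)
    ]

profile = seasonal_profile(monthly_failures)
peak_month = max(range(12), key=profile.__getitem__)  # index of the riskiest month
```

In this toy data the winter months dominate, which is the kind of climate-driven fluctuation the abstract says the preventive assessment must account for.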
DISCUSSION ON DEPENDABILITY AND SAFETY ISSUES
Aim. To identify and analyse the shortcomings that are typical of publications on dependability in order to warn the authors of future publications against them. Methods. When writing the paper, the author critically analysed a large number of publications and compared them with the key provisions of the basic Russian and international dependability-related standards. The analysis covered Russian standards that are not part of the Dependability in Technics system, as well as course books. This choice is due to the fact that the nature of such publications leads to the replication of errors; in addition, correct and unambiguous presentation of information is especially important for them. Results. The following typical shortcomings found in many publications were identified and analysed. 1. Confusion in the basic concepts of dependability causing incorrect use of some basic terms. The most common errors of this kind are as follows: the use of the term “dependability” instead of the term “reliability” and the term “accessibility” instead of the term “availability”, which is due to the incorrect (in this context) translation of the English terms “reliability” and “availability”, respectively; the use of the term “failure” to denote a state of an item. 2. Errors associated with dependability measures, i.e., incorrect choice of the set of indicators and the use of incorrect names for indicators. 3. Use, in the general case, of simple formulas that are valid only for the exponential distribution of time to failure. 4. Undefined criterion of failure when setting quantitative dependability requirements. Conclusion. The paper’s findings will help the authors of future publications on dependability to improve their quality by avoiding the above shortcomings. The following measures are proposed for improving the situation.
Thorough and independent review of forthcoming course books should be ensured, along with a broad and impartial discussion by the professional community of draft standards and already published materials, with publication of the results of such discussions. The Technical Committee for Standardization 119 “Dependability in Engineering” should put in order the corresponding system of standards, which is to become a coherent and consistent basis for other publications and documents, and should undertake the expert assessment of all dependability-related technical standards.
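Shortcoming 3 above can be illustrated numerically: the simple formula P(t) = exp(−t/MTTF) holds only for an exponential time to failure, and applying it to an item whose time to failure is, for example, Weibull-distributed with the same MTTF gives a noticeably different result. The parameter values in this sketch are arbitrary.

```python
# Illustration of shortcoming 3: the exponential formula vs. a Weibull
# distribution with the same MTTF. Parameter values are arbitrary.
import math

mttf = 1000.0  # hours
t = 500.0      # mission time, hours

# Exponential assumption: P(t) = exp(-t / MTTF)
p_exp = math.exp(-t / mttf)

# Weibull with shape k = 2 (wear-out); scale chosen so the MTTF matches,
# since MTTF = scale * Gamma(1 + 1/k)
k = 2.0
scale = mttf / math.gamma(1 + 1 / k)
p_weibull = math.exp(-((t / scale) ** k))

# Two items with the same MTTF, but different probabilities of no failure:
# here the exponential formula understates the wear-out item's reliability.
```

The gap (roughly 0.61 versus 0.82 in this example) shows why an exponential-only formula should not be presented as general-purpose.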
APPLICATION OF ARTIFICIAL INTELLIGENCE IN DEPENDABILITY AND SAFETY PROBLEMS
The Aim is to specify the concepts of “artificial intelligence” and “complex problem”, as well as to examine the state of the art in the application of artificial intelligence in solving complex problems. Methods. The author used contextual search, system analysis, and generalisation of information. Results. The paper identifies the key obstacle preventing the application of artificial intelligence in solving complex problems: the lack of a conceptual and technical solution for presenting interdisciplinary knowledge in a form that could be processed and synthesised using artificial intelligence. Machine training that uses a variety of data sets, but does not involve an understanding of the synthesis process that the human brain handles so easily, prevents artificial intelligence from discovering something new and fundamentally unknown, which is imperative for solving complex problems. A common language is required that would simulate the processes of human thinking. Conclusion. The analysis and recommendations presented in this paper allow looking at the problem of applying artificial intelligence to complex problems from a point of view that differs from the currently common focus on the use of fast search algorithms (the so-called large language models). The creation of a translator language between different fields of knowledge should contribute to interdisciplinary exchange, the development of creative thinking, and the emergence of new ideas and innovative solutions in various fields of human activity. An elaborate language of this kind would allow solving complex problems by combining various disciplines.
FUNCTIONAL DEPENDABILITY OF CONTROL SYSTEMS
Aim. To analyse the dependability terminology as regards embedded software and hardware systems, to develop a methodology for assessing the functional dependability of the components of embedded software and hardware computer-based control systems, and to conduct a practical assessment of the dependability of modern software and hardware components of embedded computers and microcontrollers for the purpose of selecting the optimal control system architecture. A prototype medical robot intended for holding surgical instruments, Farabeuf retractors, etc. is used as the controllable object. The robotics system includes a microprocessor unit based on a common single-board computer that implements high-level control and voice command recognition functions, an additional microprocessor unit for controlling servo drives and receiving input signals, as well as the actuating modules, i.e., drives. Methods. The paper uses reference source analysis covering non-peer-reviewed collections of documents, previously restricted foreign standards, and publications. Results. The author presents a method for assessing the functional dependability of the components of an embedded software and hardware control system. The probability of no failure of the software and hardware components of the examined system was calculated both from statistical estimates and from the amount of code. Despite the different calculation methods and reference data, the results are generally close. The paper also estimated the probability of no software failure for an alternative control system architecture, in which a part of the important functions is delegated to an additional software and hardware unit with a higher level of dependability. In this case, that unit is an ATmega32 microcontroller that directly controls the drives.
A comparative analysis of the results shows that the additional level with partially parallelised functions and partial control channel redundancy significantly improves the estimated probability of no failure of the system under the predefined conditions. Based on the calculated data, the paper defines a two-level control system architecture with high values of the probability of no failure. Conclusion. Given the trend of growing numbers of functions being integrated within a single microprocessor-based system, improved functional dependability should be achieved through a two-level functional architectural solution, in which the key tasks of direct interaction with the hardware environment are redistributed in favour of a separate hardware module. Additionally, as regards embedded systems, such an approach often allows defining a lower, real-time system layer and an upper system layer that is responsible for high-level functions such as speech recognition, data communication via interfaces, and artificial intelligence. The matter of practical evaluation of embedded software dependability is not yet completely resolved. Such software is characterised by the lack of virtualisation and of a hardware abstraction layer, which, in turn, causes a close relationship with the hardware and peripherals. Obviously, simply repeating the required tests is not enough. Test combinations should include external hardware effects (signal level anomalies) and software effects on the peripherals of a microcontroller.
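The single-level versus two-level comparison described above can be sketched with a textbook series/parallel reliability model. This is not the authors’ actual calculation method, and all component probabilities of no failure (PNF) below are invented.

```python
# Textbook series/parallel PNF sketch for a single-level vs. a two-level
# control architecture with a redundant drive-control channel.
# All component PNF values are invented for illustration.

def series(*p):
    """PNF of components in series: every component must work."""
    out = 1.0
    for x in p:
        out *= x
    return out

def parallel(*p):
    """PNF of redundant channels: at least one channel must work."""
    fail = 1.0
    for x in p:
        fail *= (1.0 - x)
    return 1.0 - fail

p_sbc, p_sw, p_drive_sw = 0.995, 0.990, 0.999   # invented component PNFs
single_level = series(p_sbc, p_sw, p_drive_sw)

p_mcu = 0.9995                                   # invented PNF of an MCU channel
two_level = series(p_sbc, p_sw, parallel(p_drive_sw, p_mcu))
```

Under these invented numbers `two_level` exceeds `single_level`, mirroring the abstract’s conclusion that partial channel redundancy raises the estimated system PNF.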
INFORMATION SECURITY
Aim. The problem of image steganalysis is especially relevant given the use of steganographic concealment in graphic files for delivering malicious code and information as part of cyber attacks. That requires improvements to the existing methods of detecting steganographically embedded information. One approach is a comprehensive steganalysis technique, in which the conclusion on the presence of embedded information is based on the findings of a group of steganalysis methods, as well as auxiliary calculations. Methods. It is proposed to improve the accuracy of hidden information detection by using qualitative image characteristics. The paper demonstrates the relationship between these estimates and an increased rate of steganalysis errors. The method of comprehensive steganalysis that takes into account the qualitative characteristics of images allows improving the accuracy of detection by reducing the rate of false positives. The paper uses statistical methods for calculating the qualitative characteristics of images, Spearman correlation, and machine learning. Results. A software package has been developed that integrates the elements of the comprehensive steganalysis method described in the paper, including both a group of steganalysis methods and a set of evaluated qualitative characteristics of an image. The author evaluates the relationship between the qualitative characteristics of an image and steganalysis errors in the case of empty containers. Test samples have been defined and machine learning models have been built that generate a conclusion as regards the detection of hidden information in an image. Conclusion. The proposed method enables improved accuracy of hidden information detection by taking into account the estimates of the qualitative characteristics of an image as part of steganalysis, which is confirmed experimentally.
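One step of the described approach, relating a qualitative image characteristic to steganalysis false positives via Spearman correlation, can be sketched as follows. The scores and rates are invented, and the rank computation assumes no tied values for brevity.

```python
# Sketch: Spearman rank correlation between an invented image quality
# metric (e.g. a noisiness score) and the false-positive rate of a
# steganalysis method on empty containers. Assumes no tied values.

def spearman(x, y):
    """Spearman rank correlation for tie-free samples of equal length."""
    n = len(x)
    rank_x = {v: i for i, v in enumerate(sorted(x))}
    rank_y = {v: i for i, v in enumerate(sorted(y))}
    d2 = sum((rank_x[a] - rank_y[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

noise_score = [0.12, 0.35, 0.50, 0.71, 0.90]  # invented quality metric values
fp_rate     = [0.01, 0.04, 0.06, 0.09, 0.15]  # invented false-positive rates

rho = spearman(noise_score, fp_rate)  # -> 1.0 (perfectly monotone toy data)
```

A strong positive rank correlation of this kind is what would justify feeding the quality characteristic into the final machine-learning decision stage to suppress false positives.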