
Dependability

Vol 23, No 2 (2023)

RISK MANAGEMENT

3-11
Abstract

Aim. Accidents at oil and gas facilities are among the main causes of environmental pollution and loss of life. In order to address the problem of ensuring the safety of oil and gas facilities, the state of such facilities in the Republic of Yemen was reviewed and statistical data on accidents were analysed. Using the study of an accident at the Aden refinery, the events that initiate fires in hazardous situations were defined and scenarios of fire occurrence and development were constructed. The mass of flammable substances released into the surrounding space as a result of fire-hazardous situations was estimated, fields of fire hazards were constructed for various scenarios of fire development, the consequences of the exposure of people to fire hazards were assessed, and a basis for risk assessment and management was developed.
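A minimal sketch of how fire scenario frequencies are typically composed in such assessments (an event tree over ignition outcomes); the leak frequency and branch probabilities below are invented for illustration and are not taken from the Aden refinery study.

```python
# Purely illustrative event-tree sketch of fire scenario frequencies;
# all numbers are assumptions, not data from the paper.
leak_frequency = 1.0e-3            # releases per year (assumed)
p_immediate_ignition = 0.05        # -> jet fire / pool fire
p_delayed_ignition = 0.10          # -> flash fire or vapour cloud explosion
p_explosion_given_delayed = 0.4

scenarios = {
    "jet / pool fire": leak_frequency * p_immediate_ignition,
    "vapour cloud explosion": leak_frequency * (1 - p_immediate_ignition)
        * p_delayed_ignition * p_explosion_given_delayed,
    "flash fire": leak_frequency * (1 - p_immediate_ignition)
        * p_delayed_ignition * (1 - p_explosion_given_delayed),
    "dispersion without ignition": leak_frequency * (1 - p_immediate_ignition)
        * (1 - p_delayed_ignition),
}
for name, freq in scenarios.items():
    print(f"{name}: {freq:.2e} per year")
```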

INTELLIGENT MANAGEMENT SYSTEMS

12-18
Abstract

Random signal prediction is useful in intelligent management and predictive diagnostics systems. Aim. The paper aims to analyse the error of random signal prediction and to develop recommendations for the selection of random signal extrapolator parameters. Methods. The paper uses the mathematics of the theory of random functions, the formalisation adopted in the theory of pulse systems, and a mathematical description of extrapolators based on Chebyshev polynomials orthogonal over a set of equally spaced points. The coefficients of the predicting polynomial are selected according to the least squares criterion. Results. The paper describes the mathematical model of the extrapolator. Calculation relations were obtained for estimating the prediction error. The maximum and prediction interval-averaged relative mean square errors of extrapolation were defined. The authors analyse the error of extrapolation of random processes defined as the sum of a centred stationary random process and a deterministic time function. Based on a series of calculations, recommendations were defined that allow selecting the parameters of the extrapolator (degree of the extrapolating polynomial, number of test points that precede the prediction interval, discretisation interval of the predicting function) under the specified input signal models. Conclusion. The use of extrapolators based on Chebyshev polynomials orthogonal on a set of equally spaced points and the least squares method allows implementing a procedure for calculating predicted values of a random process with the required accuracy. Under the specified models of the predicted signal, a method was developed that allows selecting the extrapolator’s parameters (order, number of points involved in the generation of the prediction, sample spacing) so as to ensure the required accuracy.
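By way of illustration, a minimal sketch of one such prediction step: a least-squares polynomial is fitted to the test points that precede the prediction interval and evaluated ahead of the last sample. An ordinary monomial basis is used here instead of the discrete-orthogonal Chebyshev basis of the paper (which mainly improves numerical conditioning); all parameter values are arbitrary.

```python
# Hedged sketch of least-squares polynomial extrapolation of a sampled signal.
import numpy as np

def extrapolate(samples, dt, degree, horizon):
    """Fit a polynomial of the given degree to equally spaced samples by least
    squares and evaluate it `horizon` time units past the last sample."""
    t = np.arange(len(samples)) * dt                 # equally spaced test points
    coeffs = np.polynomial.polynomial.polyfit(t, samples, degree)
    return np.polynomial.polynomial.polyval(t[-1] + horizon, coeffs)

# toy usage: a noisy ramp, predicted one step ahead
rng = np.random.default_rng(0)
signal = 0.5 * np.arange(10) + rng.normal(0.0, 0.1, 10)
print(extrapolate(signal, dt=1.0, degree=2, horizon=1.0))
```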

STRUCTURAL RELIABILITY. THE THEORY AND PRACTICE

19-25
Abstract

The Aim of the paper is to analyse the definitions of the term “common cause failures” given in various international and Russian standards and point out their shortcomings; to identify and analyse the typical errors in the use of this notion and in the consideration of such failures as part of system dependability calculation. The importance of the topic is due to the fact that such failures reduce the efficiency of redundancy and must be taken into account in the design of systems with high dependability requirements. Methods. The paper provides a comparative analysis of the definitions of common cause failures given in Russian and international standards; analyses the methods of taking the effect of such failures into account presented in various publications; uses methods of the probability theory. Results. Differences between standards were identified in terms of the definition of the term “common cause failure”, as well as shortcomings of such definitions. Typical errors were pointed out in some publications dedicated to the methods of taking such failures into consideration. The simplest and most common beta-factor model was considered in the most detail, and the limits of its application were pointed out. Conclusions. It is advisable to use a single definition of common cause failures in different standards. It is to be taken from the basic terminological dependability standard with an appropriate reference. In the term itself and its definition, the word “failures” is to be in the plural. The definitions of this term in GOST EN 1070-2003 and GOST 34332.3-2021 are wrong, as they in no way correspond to the content of the defined notion. The conventional beta-factor model intended for taking common cause failures into account in failure probability calculation can only be used when such probabilities are low.
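As a point of reference for the beta-factor discussion, a minimal sketch of the conventional model for a duplicated (1-out-of-2) channel: a fraction beta of each channel's failure probability is attributed to common causes and therefore defeats the redundancy. The numbers are invented, and, as the abstract stresses, the approximation only holds for small failure probabilities.

```python
# Illustrative sketch of the conventional beta-factor model for a duplicated
# (1-out-of-2) system; usable only when the failure probabilities are small.
def one_out_of_two_unavailability(q, beta):
    """q    - failure probability of a single channel
       beta - fraction of that probability attributed to common cause failures"""
    q_independent = (1.0 - beta) * q
    q_common = beta * q
    # independent failures must hit both channels, while a common cause
    # failure defeats the redundancy outright
    return q_independent ** 2 + q_common

print(one_out_of_two_unavailability(q=1e-3, beta=0.1))   # ~1.0e-4, dominated by CCF
```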

26-38
Abstract

Aim. According to the established concept of space launch, practically every spacecraft in a near-Earth orbit needs to separate from the launch vehicle and deploy its folded structures (solar panels, antennas, reflectors, rods, etc.) into the operational position, and only then is it able to become fully functional for its intended purpose. The reliability requirements for single-operation mechanisms are so high that any unidentified potential threat of critical failure in the course of design, development, manufacture and operation may make the creation of such spacecraft pointless, which is confirmed by the fatal results of the missions of the Sinosat-2, Ekspress-AM4, Kanopus-ST, Zuma, Chinasat-18 and many other satellites and space objects. The design of space mechanisms with a specified dependability is complicated by the fact that practically all of them are highly critical systems that are supposed to be as reliable as possible, are unique or rare in terms of their design, are manufactured at most in small series and operate in unique environments. Statistical data on the dependability of components and elements of mechanisms that operate in the open space environment are at best insufficient for obtaining reliable dependability calculation results using the statistical methods of the modern dependability theory, and at worst do not exist at all. In the context of the constantly growing complexity of space technology and the increasing cost of any in-orbit failure, there is an objective need for a method of designing space mechanisms based on the evaluation of the specified dependability using engineering solutions (without involving the statistical methods of dependability) and early failure prevention procedures. Methods. The paper presents a method for designing and developing space mechanisms based on the design engineering analysis of dependability. Results. The method proposed in the paper allows modifying the analytical verification toolkit, thus migrating from design and expert methods (e.g., Stage-Gate or FMEA) of product design to purely engineering ones that are based on engineering disciplines and design engineering methods of ensuring quality and dependability. The use of the design engineering analysis of dependability as part of design and development enables dependability assurance as a natural and integral part of a designer’s work, allowing engineering decisions to be made in accordance with the specified dependability requirements (rather than in an isolated manner).

38-48
Abstract

Aim. The paper, which continues [24], aims to develop an algorithm that would allow finding the required number of spare items (SPTA) for a complex system whose elements may or may not be maintainable. Unlike in [24], as a generalisation, the paper introduces additional inoperable states. Those states are characterised by system downtime associated with the replacement of a failed element with an element from the SPTA. If the time of replacement of a failed element is not negligibly small compared to the other time indices of the serviced system, it becomes necessary, as suggested, to account for additional inoperable states. Methods. Markov models are used for describing the technical system under consideration. The final probabilities were obtained using a developed system of Kolmogorov equations. A stationary solution was obtained for the system of Kolmogorov equations. Classical methods of the probability theory and the mathematical theory of dependability, as well as some special functions, were used. Conclusions. The paper formalises the problem of determining the required number of SPTAs for a system with items that may fail at a random moment in time. The failures may be of two types. The first type of failure leaves an item in an inoperable repairable state. In this case, the item can be repaired in the maintenance unit of the company that operates it. The second type of failure, a more catastrophic one, leaves the item in an inoperable non-repairable state, and it can be repaired only by the manufacturer or a specialised maintenance company. A Markov graph was built for the respective birth and death process. Equations were formalised for the typical states of the Markov graph. A stationary solution was obtained for the system of Kolmogorov equations using induction. The theorem of the general solution was proven for all the states of the Markov graph. In the case of unlimited repair, the solution is significantly simplified. It was shown that, under the assumption of unlimited repair and momentary replacements, the solution matches the one earlier obtained in a simplified form in [24]. The limit values of the probabilities of inoperable critical and non-critical states were found. They allow concluding that, in the case of unlimited repair, the growth of the size of the SPTA causes the probability of the critical state of insufficient SPTA to tend to zero. Additionally, the probability of an inoperable state associated with the replacement of a failed item with an equivalent from the SPTA that takes a certain time is defined by the stationary unavailability of the alternating repair process. The general solution of the problem allows formalising the SPTA sufficiency coefficient. The required number of SPTAs is identified by progressively increasing the number of SPTAs until the probability of inoperable critical states falls below the defined probability of SPTA shortage. An example of finding the required number of SPTAs is given.
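A hedged sketch of the sizing idea (not the paper's exact state space): the stationary probabilities of a birth-and-death chain are computed from the standard product formula, and the spare stock is grown until the probability of the "stock exhausted" state falls below a target. The rates, the unlimited-repair assumption and the stopping threshold are all illustrative.

```python
# Illustrative SPTA sizing via a birth-and-death chain; all rates are assumptions.
import numpy as np

def stationary_birth_death(birth, death):
    """birth[k] is the rate k -> k+1, death[k] is the rate k+1 -> k."""
    w = np.ones(len(birth) + 1)
    for k in range(len(birth)):
        w[k + 1] = w[k] * birth[k] / death[k]
    return w / w.sum()

def required_spares(lam, mu, p_max, n_items):
    """Smallest spare stock s such that the stationary probability of the
    'stock exhausted' state stays below p_max (unlimited repair assumed)."""
    for s in range(100):
        # state k = number of failed items consuming a spare;
        # state s + 1 means a demand arrived when the stock was already empty
        birth = [n_items * lam] * (s + 1)
        death = [(k + 1) * mu for k in range(s + 1)]
        p = stationary_birth_death(birth, death)
        if p[-1] < p_max:
            return s
    raise RuntimeError("no feasible stock below 100 spares")

print(required_spares(lam=0.01, mu=0.5, p_max=1e-4, n_items=20))   # -> 4 here
```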

49-56
Abstract

Aim. Reliability evaluation of a system, component or element is very important for predicting its availability and other relevant indices. Reliability is the parameter that characterises the availability of a system under proper working conditions over a given period of time. The study of different reliability indices is very important given the complex and uncertain nature of the power system. Methods. The study uses classical methods of the reliability theory as applied to a system with a constant failure rate consisting of series-connected elements. Conclusions. The paper reviews literature dedicated to the reliability estimation of power supply systems. In particular, the paper examined studies that employed the Markov cut-set approach, the conditional probability approach, distribution system simulation, probabilistic models, the Monte Carlo method, reliability network equivalents, state transition sampling, inspection and repair-based availability optimisation of distribution systems, bootstrapping, fault tree analysis, Bayesian networks, the peak-valley partition model, the demand response model, etc. The authors defined the problem and analysed the input data. They showed that, in physical terms, the system configuration is a series reliability network. Given the above, the system fails if even one component fails, and survives if all of the components survive. It is noted that, when considering the reliability of series systems, the three basic parameters are the average failure rate, the average annual outage time and the average repair time. The system average interruption frequency index (SAIFI), the system average interruption duration index (SAIDI) and the customer average interruption duration index (CAIDI) were used as the customer-orientated indices associated with the study of operational reliability. Using the example of an eight-node radial distribution system, reliability was estimated for each distribution section, as well as at each load point. For the examined distribution sections and load points, three basic reliability parameters were also obtained, i.e., the average failure rate, the average outage time and the average annual outage time. For the radial distribution system, important customer-orientated indices were estimated, i.e., the system average interruption frequency index, the system average interruption duration index and the customer average interruption duration index. The resultant data allow characterising reliability and other associated indices, which is relevant for power distribution systems.
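For orientation, a minimal sketch of the series-network bookkeeping and the three customer-orientated indices named above, using the standard relations for a series path (system failure rate is the sum of section rates, annual outage time is the sum of rate times repair time, average repair time is their ratio); the section and customer data below are invented rather than taken from the eight-node example.

```python
# Rough sketch of series-system reliability parameters and SAIFI/SAIDI/CAIDI;
# the section and customer data are assumptions, not the paper's case study.
sections = [                 # (failure rate, 1/yr; repair time, h)
    (0.20, 4.0),
    (0.10, 6.0),
    (0.15, 3.0),
]
lam = sum(l for l, _ in sections)            # average failure rate of the series path
U = sum(l * r for l, r in sections)          # average annual outage time, h/yr
r = U / lam                                  # average repair (outage) time, h

load_points = [              # (customers served, interruptions/yr, outage hours/yr)
    (200, lam, U),
    (350, lam, U),
]
N = sum(n for n, _, _ in load_points)
SAIFI = sum(n * f for n, f, _ in load_points) / N    # interruptions per customer per year
SAIDI = sum(n * u for n, _, u in load_points) / N    # outage hours per customer per year
CAIDI = SAIDI / SAIFI                                # average hours per interruption
print(lam, U, r, SAIFI, SAIDI, CAIDI)
```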

INFORMATION PROTECTION METHODS AND SYSTEMS. INFORMATION SECURITY

57-63
Abstract

Aim. Cyber attacks result in failures of network elements, theft of information and other unlawful actions. Cyber attacks are often accompanied by untypical traffic activity and anomalies. The paper aims to develop an approach to detecting anomalies in network traffic by identifying the degree of self-similarity of the traffic using fractal analysis and statistical methods. Methods. The paper uses methods of mathematical statistics, mathematical analysis and fractal analysis. Results. The paper suggests an approach to identifying anomalies in network traffic by evaluating self-similarity and using statistical methods to improve the accuracy of cyber attack detection. At the first stage, the Hurst exponent is calculated for the reference traffic. At the second stage, the actual traffic is divided into optimal time intervals, and the Hurst exponent is calculated for each interval. If the identified value of the Hurst exponent differs from the one obtained for the reference traffic, it is decided that there is an anomaly. At the final stage, statistical analysis is used in order to precisely localise the anomaly. The authors analysed fractal and statistical methods, which resulted in the identification of the more efficient methods to be used as part of the proposed approach: for fractal analysis, the DFA method was proposed, while for statistical analysis, the ARFIMA method was proposed. Conclusion. The suggested approach allows identifying cyber attacks in real time or near-real time.
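A hedged sketch of the windowed self-similarity check described above, using a small hand-rolled DFA estimate of the Hurst exponent; the traffic series, window sizes and deviation threshold are invented for illustration, and a production implementation of DFA (or of the ARFIMA-based localisation step) would be considerably more careful.

```python
# Toy windowed self-similarity check; data and threshold are assumptions.
import numpy as np

def hurst_dfa(x, scales=(8, 16, 32, 64)):
    """Very small detrended fluctuation analysis: the slope of log F(s) vs log s
    approximates the Hurst exponent of the series x."""
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        segments = y[: n * s].reshape(n, s)
        t = np.arange(s)
        rms = []
        for seg in segments:                         # linear detrending per window
            a, b = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - (a * t + b)) ** 2)))
        fluct.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
reference = rng.normal(100.0, 5.0, 4096)             # stand-in for "normal" traffic
h_ref = hurst_dfa(reference)

window = rng.normal(100.0, 5.0, 512) + np.linspace(0.0, 40.0, 512)  # drifting interval
if abs(hurst_dfa(window) - h_ref) > 0.15:            # deviation threshold is an assumption
    print("possible anomaly in this interval")
```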

SYSTEM ANALYSIS IN DEPENDABILITY AND SAFETY

64-67
Abstract

Today, queueing systems that describe failures in the process of software testing are subject to extensive research. Such systems involve a dependence between the rate of the input Poisson stream and the rate of the exponentially distributed handling time. Using this dependence, system load levelling procedures are defined. However, such a model and method of research are not suitable for systems with a deterministic handling time. Therefore, this paper examines the dependence of the Poisson distribution parameter of the number of requests in the system on the deterministic handling time in the presence of a peak rate of the input stream. This dependence is examined analytically and numerically. It is shown that a reduction of the handling time levels the peak number of customers in the system.
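A small numerical illustration of that conclusion (not taken from the paper): in an infinite-server model with Poisson arrivals of time-varying rate and deterministic handling time D, the number of requests in the system at time t is Poisson with parameter m(t) equal to the integral of the arrival rate over the last D time units, so shortening D lowers the peak of m(t). The rate profile below is invented.

```python
# Illustrative peak of the Poisson parameter m(t) for a deterministic handling time D.
import numpy as np

def peak_mean_in_system(rate, times, D):
    """Peak over t of m(t) = integral of rate(u) over (t - D, t], computed numerically."""
    peaks = []
    for ti in times:
        u = np.linspace(ti - D, ti, 400)
        peaks.append(np.mean(rate(u)) * D)           # mean value * width ~ integral
    return max(peaks)

rate = lambda u: 5.0 + 45.0 * np.exp(-((u - 10.0) ** 2) / 2.0)   # burst around t = 10
times = np.linspace(0.0, 20.0, 201)
for D in (4.0, 2.0, 1.0, 0.5):
    print(D, round(peak_mean_in_system(rate, times, D), 1))      # peak shrinks with D
```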



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1729-2646 (Print)
ISSN 2500-3909 (Online)