STRUCTURAL RELIABILITY. THE THEORY AND PRACTICE
Aim. This paper is a continuation of [1] that proposes using the R programming language for fault tree analysis (FTA). In [1], three examples are examined, including fault tree (FT) calculation based on known probabilities and dynamic FT calculation based on known distributions of times to failure of a system's elements. In the latter example, FTA is performed for systems whose elements are described by different functional and service models. Fault tree analysis is one of the primary methods of dependability analysis of complex technical systems. This process often utilizes commercial software tools such as Saphire, Risk Spectrum, PTC Windchill Quality, Arbitr, etc. Practically every such tool allows calculating the dependability of complex systems subject to possible common cause failures (CCF). CCF are associated failures of a group of several elements that occur simultaneously or within a short time interval (i.e. almost simultaneously) due to one common cause (e.g. a sudden change in the climatic service conditions, flooding of the premises, etc.). An associated failure is a multiple failure of several system elements whose probability cannot be expressed simply as the product of the probabilities of unconditional failures of the individual elements. There are several generally accepted models used in CCF probability calculation: the Greek letter model, the alpha factor and beta factor models, as well as their variations. The beta factor model is the simplest in terms of simulating associated failures and performing the subsequent dependability calculation. The other models involve a combinatorial search for associated events in a group of n events, which becomes labor-intensive when n is large. Therefore, the above software tools impose restrictions on n, beyond which the CCF probability is calculated approximately. The current version of R's FaultTree package does not implement the above CCF models, therefore all associated failures have to be simulated manually, which is not complicated if the number of associated events is small and is also useful for understanding the various CCF models. In this paper, a detailed analysis of the procedure of associated failure simulation using the alpha and beta factor models is performed for the selected diagram. The purpose of this paper consists in a detailed analysis of the alpha and beta factor methods for a specific diagram and in the demonstration of the fault tree creation procedure taking account of CCF using R's FaultTree package.
Methods. R FaultTree scripts were used for the calculations and for demonstrating the FTA capabilities.
Conclusions. Two examples are examined in detail. In the first example, the alpha factor model is applied to a selected block diagram that contains two groups of elements subject to associated failures. In the second example, the beta factor model is applied. The deficiencies of the current version of the FaultTree package are identified; among the main drawbacks is the absence of some basic logic gates.
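To make the difference between the two CCF models concrete, the following minimal sketch (written in Python for brevity rather than as an R FaultTree script, with all numerical values assumed) splits the failure probability of one component of a redundant group into an independent part and common cause parts. The alpha factor expression shown is the common non-staggered-testing formulation, which may differ in detail from the one used in the paper.

```python
# Minimal illustration of splitting a component failure probability into
# independent and common cause parts (hypothetical numbers, not taken from the paper).
from math import comb

q_total = 1.0e-3   # assumed total failure probability of one component

# Beta factor model: a single factor beta gives the common cause share.
beta = 0.1
q_independent = (1 - beta) * q_total   # failure of this component alone
q_ccf = beta * q_total                 # failure of the whole group from one cause

# Alpha factor model for a group of m = 3 identical components
# (non-staggered-testing formulation): Q_k is the probability of a basic
# event involving exactly k components.
m = 3
alpha = {1: 0.95, 2: 0.03, 3: 0.02}              # assumed alpha factors, sum to 1
alpha_t = sum(k * a for k, a in alpha.items())   # normalisation constant
q_k = {k: k / comb(m - 1, k - 1) * alpha[k] / alpha_t * q_total for k in alpha}

print(f"beta factor:  Q_I = {q_independent:.2e}, Q_CCF = {q_ccf:.2e}")
for k, q in q_k.items():
    print(f"alpha factor: Q_{k} = {q:.2e}")
```

In a fault tree, each such Q_k would then be attached as a separate basic event under the corresponding gates, which is precisely the manual simulation of associated failures that the paper performs with the FaultTree package.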
Today’s digital nanotechnology-based information management systems are especially sensitive to high-energy particles during operation in irradiated areas. This sensitivity is most often manifested in the form of intermittent soft errors, i.e. distortion of information bits in the system’s memory elements with no hardware failure. The cause lies in afterpulses at the outputs of logic elements that occur as a result of ionization of the transistor’s gate region after it is struck by a high-energy particle. In order to counter soft errors, the system is equipped with self-repair mechanisms that ensure regular replacement of distorted data with correct data. When this design approach is employed, the importance of dependability analysis of the system under development increases significantly. Since the regular occurrence of soft errors is essentially a normal operating mode of a system in conditions of increased radiation, dependability analysis must be conducted repeatedly at the design stage, as that is the only way to duly evaluate the quality of the design decisions taken. The distinctive feature of fault-tolerant hardware and software systems, namely the presence of a non-probabilistic recovery process, limits the applicability of the known methods of dependability analysis. It is difficult to formalize the behaviour of such systems in the form of a dependability model in the context of the classic dependability theory, which is geared towards the evaluation of hardware structure. As has been found, the application of conventional methods of dependability analysis (such as the Markovian model or probabilistic logic) requires a number of assumptions that result in unacceptable errors in the evaluation results or render the methods inapplicable.
Aim. Development of a model and methods of dependability analysis that would allow evaluating the dependability of hardware and software systems with periodic recovery.
Results. A simulation model was developed for the dependability evaluation of complex recoverable information management systems. The model is a network of oriented state graphs that allows describing the behaviour of a recoverable system in the presence of computation and recovery processes that operate according to non-stochastic algorithms. Based on the simulation model, a software tool for dependability analysis was developed that enables probabilistic estimation of the dependability characteristics of individual system units and of the overall structure by means of computer simulation of failures and recoveries. This tool can be used for comprehensive dependability evaluation of hardware and software systems, in which recoverable units with complex behaviour are analysed using the developed simulation model, while simple hardware components operating alongside them, such as power supplies and fuses, are analysed using conventional analytical methods of dependability analysis. Such an approach to dependability evaluation is implemented in the Digitek Reliability Analyzer dependability analysis software environment.
Practical significance. The application of the developed simulation model and dependability analysis tool at the design stage enables due evaluation of the quality of the produced fault-tolerant recoverable system in terms of dependability and the choice of the best architectural solution, which is of high practical significance.
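The simulation model itself is not reproduced here; purely as an illustrative counterpart, the sketch below (Python, with an assumed soft error rate and recovery period) estimates by Monte Carlo simulation the unavailability of a single unit whose soft errors occur randomly but are cleared only by a strictly periodic, non-stochastic recovery procedure, the kind of behaviour that a purely Markovian model can capture only approximately.

```python
# Minimal Monte Carlo sketch of one recoverable unit: soft errors arrive at an
# assumed exponential rate, and a periodic (non-stochastic) recovery procedure
# clears them every t_rec hours. All numbers are illustrative assumptions.
import random

def simulate_unavailability(rate=1e-3, t_rec=24.0, horizon=1e4, runs=500):
    down_fraction = 0.0
    for _ in range(runs):
        down = 0.0
        t = 0.0
        while t < horizon:
            # time to the next soft error within the current recovery cycle
            dt = random.expovariate(rate)
            cycle_end = (t // t_rec + 1) * t_rec
            if t + dt < cycle_end:
                # error occurs; the unit stays corrupted until the next scheduled recovery
                down += cycle_end - (t + dt)
            t = cycle_end
        down_fraction += down / horizon
    return down_fraction / runs

if __name__ == "__main__":
    print(f"estimated unavailability: {simulate_unavailability():.3e}")
```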
The problem of assignment of an optimum level of dependability is not new and has not yet been solved. The requirement of complete dependability is noted to be erroneous. However, insufficient dependability of buildings is fraught with significant social and economic losses. Hence the problem of defining the required, optimal level of dependability. In Russia, there are no quantitative guidelines for the dependability of buildings and structures. At the same time, the strengths of the materials of ferroconcrete structures are regulated by GOST 34028-2016 for rod reinforcement and GOST 18105-2010 for concrete, as well as by building regulations SP 63.13330-2012 Concrete and ferroconcrete structures. In this paper, the dependability of the “Loads – design” construction system is suggested to be defined using the total probability formula. We assume that the mechanical characteristics of a structure’s materials and the loads are independent but jointly acting random variables: the occurrence of one random value does not depend on the occurrence of the other, while a change of load changes the stresses in the structural section. Probabilistic calculations showed that, over a period of 10 years, facilities designed in accordance with SP 38.13330.2012 for operation in the Gulf of Finland will be destroyed with a probability of almost 100%. For normal consequence class facilities (KS-2) the required dependability must tend to 3σ (0.99865). In order to ensure the required dependability of the construction system of about 3σ, loads should be specified with a probability of 0.99865. The application of SP does not always guarantee the required dependability of construction facilities. The application of probabilistic approaches in solving engineering problems can prevent emergency situations.
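The total probability formulation is not reproduced in the abstract; the sketch below illustrates the underlying load-resistance reasoning, assuming normally distributed resistance R and load effect S with purely illustrative parameters. Under this assumption the total probability integral over the load distribution collapses to the closed-form expression used in the code, and the result can be compared directly with the 3σ dependability target of 0.99865.

```python
# Minimal load-resistance sketch: with normally distributed resistance R and
# load effect S (assumed parameters), the failure probability of the
# "Loads - design" system is P(S > R) = Phi(-(mu_R - mu_S) / sqrt(s_R^2 + s_S^2)).
from math import sqrt
from scipy.stats import norm

mu_R, s_R = 420.0, 30.0   # assumed resistance mean and standard deviation, MPa
mu_S, s_S = 300.0, 45.0   # assumed load-effect mean and standard deviation, MPa

beta_index = (mu_R - mu_S) / sqrt(s_R**2 + s_S**2)   # reliability index
p_fail = norm.cdf(-beta_index)
print(f"reliability index beta = {beta_index:.2f}, P(failure) = {p_fail:.2e}")
print(f"3-sigma target dependability = {norm.cdf(3):.5f}")   # 0.99865
```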
The paper notes that as the depths of operated wells grow, the application of cable-and-pulley mechanisms becomes preferable as compared to the existing pumpjacks. The authors set forth a generalized theoretical analysis of the kinematics of cable-and-pulley drives, as well as the results of computer calculations based on the developed equations for a number of cases. Further analysis of the results showed that rope-and-pulley crank mechanisms have “smooth” kinematics. The research resulted in a proposed invention of a design of a mast-type oil well sucker-rod pump drive with lower steel intensity and power consumption that would allow increasing the performance of sucker-rod pumps.
The purpose of this article consists in devising a utility model of a well sucker-rod pump in order to ensure the environmental safety of the equipment. That is achieved by lightening the metal structure of the pump with a rotary stem and by reducing energy consumption. In the context of this problem, calculations were performed in order to prove the system’s dependability. Based on the performed calculations it was established that the lighter structure can be used instead of the old heavy one as its environmentally safer version. Experimental studies conducted by the AzINMASH Research and Design Institute of Petroleum Engineering (Baku, Azerbaijan) indicate that normal operation of sucker-rod pumps is feasible under the condition that n∙S = 54÷60 m/min. The authors examined the dependence between the peak output Q and the number of strokes n for various standard pumpjack sizes. The analysis of the parameters showed that the value of the product n∙S in the existing pumpjacks is below the recommendations based on experimental data, i.e. there is a tangible opportunity to increase productivity by extending the stroke of the rod hanger center, since well pump barrels may be as long as 6 to 7 meters. Estimates show that, while studying the kinematics of long-stroke drives, the changes in the length of the rope caused by the displacement of the rope-to-pulley contact point may be practically disregarded. This simplifies the formulas that describe the kinematics of this type of long-stroke drives. Using the resulting formulas, comparative computer calculations for various cases were performed, showing that cable-and-pulley mechanisms have “softer” kinematics. The calculations confirmed the advisability of modifying the pump’s design to ensure reduced environmental pollution and energy savings. The future will require renewable sources of energy, more power-efficient oil and gas production, and minimal or zero pollution of the environment, which makes the proposed solution relevant. Based on the above calculations, the authors propose a more productive design of a sucker-rod pump that is easy to install and maintain at oil and gas production facilities.
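The following minimal sketch (Python, with an assumed plunger size, stroke, pumping rate and fill factor) illustrates the n∙S criterion and the standard theoretical relation between pump output and stroke parameters; the specific formulas developed by the authors are not reproduced here.

```python
# Illustrative check of the n*S criterion and theoretical pump output for a
# conventional pumpjack versus a long-stroke drive (all sizes are assumed).
from math import pi

def daily_output(d_plunger_m, stroke_m, strokes_per_min, fill_factor=0.85):
    """Theoretical sucker-rod pump output in m^3/day (standard relation)."""
    area = pi * d_plunger_m**2 / 4
    return 1440 * area * stroke_m * strokes_per_min * fill_factor

for name, stroke, n in [("conventional pumpjack", 3.0, 10), ("long-stroke drive", 6.0, 10)]:
    ns = n * stroke
    q = daily_output(0.057, stroke, n)
    print(f"{name}: n*S = {ns:.0f} m/min (recommended 54-60), Q = {q:.0f} m^3/day")
```

With an unchanged pumping rate, doubling the stroke doubles both n∙S and the theoretical output, which is the effect the abstract attributes to long-stroke cable-and-pulley drives.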
FUNCTIONAL RELIABILITY. THE THEORY AND PRACTICE
Today’s military aviation imposes ever increasing requirements on pilots’ professional qualities, thus complicating the problems related to improving the quality of professional selection and training of military pilots. The research conducted by V.A. Ponomarenko and V.A. Bodrov introduced the term “prolonged selection” into aviation psychology, meaning professional psychological support of flight training. The forecasting of successful training at the early stage is an important part of this support and is the focus of this paper.
Methods. The aim of the study was to verify the forecast of successful flight training based on the professional psychological selection (PPS) at the early stages of professional training and the feasibility of such a forecast in the form of an integral estimate. For that purpose, the authors used the academic progress estimates, the results of piloting skill development on flight simulators, and the dynamics of the professionally important qualities (PIQ) of cadets during the first two years of training in comparison with the indicators obtained during the PPS. The sample included 143 cadets. The test subjects were surveyed at their admission to the flight school and in the first two years of the course according to programs prescribed by the regulatory documents of the Russian Ministry of Defense and the command of the Aerospace Forces. The survey is an obligatory condition for enrollment in a flight school and subsequent flight training and does not contradict today’s ethical standards of scientific research. The surveyed cadets were distributed into two groups per category of professional aptitude based on the results of the PIQ survey conducted during the professional psychological selection: the first group (55 people), the “fit” with good professional aptitude indicators, and the second group (88 people), the “conditionally fit” with acceptable professional aptitude indicators. Statistical analysis was carried out using Microsoft Office Excel 2007 descriptive statistics and Student’s t-test for unpaired samples.
Results. The survey showed that the “fit” group, as compared to the “conditionally fit”, are better adapted to the conditions of military service and have higher indicators of cognitive mental processes and sensorimotor abilities. They master course content and simulator training better. At the same time, in terms of their physiological and physical qualities the cadets of the two surveyed groups are indistinguishable and all show good results, which is confirmed by their grades in physical education and indicates good physical development and fitness.
Conclusions. The forecast of successful flight training made at the stage of professional psychological selection in the form of a category of professional aptitude is confirmed at the initial stages of the cadets’ training during professional psychological support activities. The integral estimate composed of the results of academic progress, psychological and psychophysiological survey data, and the results of simulator training can be used in subsequent flight training as the input for individual professional training programs. In order to improve the reliability of training, it is planned to develop integrated automated methods for diagnosing the current flight-related PIQs, as well as methods of their improvement and development [4].
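As a purely illustrative counterpart to the statistical comparison described in the Methods, the sketch below applies the unpaired Student's t-test to synthetic grade samples of the same sizes as the two surveyed groups; the actual study data are not reproduced here.

```python
# Minimal sketch of the unpaired Student's t-test used to compare the "fit" and
# "conditionally fit" groups; the scores below are synthetic, not the study data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
fit = rng.normal(4.3, 0.4, 55)                 # synthetic grades, group 1 (n = 55)
conditionally_fit = rng.normal(4.0, 0.4, 88)   # synthetic grades, group 2 (n = 88)

t_stat, p_value = ttest_ind(fit, conditionally_fit)   # classic equal-variance t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```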
FUNCTIONAL SAFETY. THE THEORY AND PRACTICE
The paper aims to examine the problem of integration of the opinions of a group of experts regarding a certain probabilistic distribution for the purpose of its evaluation by an analyst. It is implied that the decision-maker will use the result to evaluate the target risks and make the corresponding decisions. This problem may arise in many areas of risk analysis. For the purpose of this paper, the stability of various structures (buildings, railways, highways, etc.) against external mechanical effects, e.g. earthquakes, is chosen as the application domain. As the primary research tool, it is suggested to use the probabilistic method of calculating the decision-making risk associated with involving experts in the analysis of the risk of destruction of the roadbed and other structures in case of earthquakes. The evaluation of the seismic stability of rail structures using expert opinions is based on the Bayesian approach. The proposed method of estimation by the analyst of a probabilistic distribution (fragility curve) on the basis of the opinions of a group of experts allows formalizing and explicitly expressing the latent risk of expert assessment. The procedure, developed subject to a number of limitations, allowed obtaining an explicit expression for this latent risk. The theoretical constructs presented in this paper can be easily implemented as software that will enable interactive input of the parameters and data of the model under consideration and obtaining the desired distribution and the value of “risk in risk”. Such a system, on the one hand, will allow verifying some intuitive assumptions regarding the behavior of the results depending on the variation of the parameters, and on the other hand, can be used as a tool for expert assessment automation and analysis of its quality that helps make grounded decisions under risk. Further development of the proposed method may involve eliminating the dependence of the value of “risk in risk” on the expert assessment: implicitly, this dependence is present in the final expression, while ideally this risk should be determined only by the expert ratings. The proposed approach can serve as the foundation of some practical optimization problems, e.g. the selection of the best group of involved experts from the point of view of minimizing this share of risk under restricted funding of the expert assessment (obviously, the higher the expert’s competence, the more accurate his/her estimates and, subsequently, the lower the risk, yet the higher the cost of such expert’s participation). An associated problem, the optimal selection of experts for the purpose of minimizing assessment costs under a specified maximum allowable level of “risk in risk”, can be considered as well. As a whole, the proposed method of evaluating an unknown distribution and calculating risk is sufficiently universal and can be used not only in the context of the mechanical stability of structures, but also in a wide class of problems that involve the assessment of a certain probabilistic distribution on the basis of subjective data about it.
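The Bayesian procedure itself is not reproduced in the abstract. As a simplified illustration of combining expert opinions on a fragility curve, the sketch below assumes that each expert supplies a lognormal fragility (median capacity and logarithmic standard deviation) and that the analyst forms a competence-weighted linear opinion pool; this is a simpler aggregation rule than the Bayesian one developed in the paper, and all numbers are assumed.

```python
# Minimal sketch of pooling expert opinions on a fragility curve: each expert
# supplies a lognormal fragility (median capacity, log-std), and the analyst
# combines them with competence weights. Illustrative linear opinion pool only,
# not the paper's Bayesian procedure; all values are assumed.
import numpy as np
from scipy.stats import norm

experts = [                      # (median capacity in g, log-std, weight)
    (0.60, 0.40, 0.5),
    (0.55, 0.35, 0.3),
    (0.70, 0.50, 0.2),
]

def pooled_fragility(pga):
    """Pooled probability of structural failure at peak ground acceleration `pga`."""
    return sum(w * norm.cdf(np.log(pga / med) / beta) for med, beta, w in experts)

for pga in (0.2, 0.4, 0.6):
    print(f"P(failure | PGA = {pga:.1f} g) = {pooled_fragility(pga):.3f}")
```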
Aim. Derailments of rolling stock units (cars, locomotive units) of freight trains cause damage to the roadbed and rolling stock, as well as possible loss of the transported cargo. Of special interest are cases when derailed rolling stock units intrude into the operational space of an adjacent track. This, for instance, happened in the case of the Moscow – Chișinău train on the Bekasovo I – Nara line on May 20, 2014, when, as a result of the derailment of freight cars with subsequent intrusion into the operational space of an adjacent track, 6 people were killed in the collision with an oncoming train. In some cases intruding units may collide with an oncoming freight train, which may cause the death of that train’s crew and the derailment of its cars, which, in the case of transportation of hazardous cargo (e.g. oil and gasoline), may have catastrophic consequences. Intrusion into the operational space of an adjacent track also interrupts traffic in both directions. In this context, evaluating the probability of derailed cars intruding into the operational space of an adjacent track is extremely important for maintaining a tolerable level of risk in railway transportation, and the aim of this paper is to construct functional dependences between the probability of derailed cars intruding into the operational space of an adjacent track and various factors.
Methods. Probability theory and mathematical statistics methods were used: the maximum likelihood method, logistic regression, probit regression, and Cauchy regression.
Results. For each of the groups of incidents (derailments due to faulty cars/locomotive units and derailments due to faulty track), the classic binary choice model was used to estimate the probability of at least one derailed freight car intruding into the operational space of an adjacent track. This estimate turned out to depend on the train loading and the number of derailed units. As the number of derailed units is a priori (before the derailment) unknown, it was suggested to estimate the probability of intrusion by at least one derailed freight car into the operational space of an adjacent track using a parametric model of the dependence between the average number of derailed units and various traffic factors. The resulting dependences were compared. A numerical example was examined.
Conclusions. There is a significant direct correlation between the random variables that characterize intrusion by at least one unit into the operational space of an adjacent track and the number of derailed freight train units. A direct dependence between train loading and intrusion by derailed units into an adjacent track was established. In the case of derailment due to faulty track, for loaded trains the probability of at least one derailed unit intruding into the operational space of an adjacent track is extremely high.
Acknowledgement: the authors express their personal gratitude to Prof. Igor B. Shubinsky, Doctor of Engineering, for his recommendations regarding the choice of the theoretical background that provided the foundation for the practical research, as well as his advice and valuable observations that contributed to this paper.
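The estimated coefficients of the binary choice models are not given in the abstract. As an illustration, the sketch below evaluates logistic and probit links for the probability of at least one derailed unit intruding into an adjacent track as a function of train loading and the number of derailed units, using purely hypothetical coefficients; in the paper such coefficients are estimated by the maximum likelihood method.

```python
# Minimal binary choice sketch: probability that at least one derailed unit
# intrudes into the operational space of an adjacent track, as a function of
# train loading (0 = empty, 1 = loaded) and the number of derailed units.
# The coefficients are hypothetical placeholders.
from math import exp
from scipy.stats import norm

def intrusion_prob_logit(loaded, n_derailed, b0=-2.5, b_load=1.2, b_n=0.35):
    z = b0 + b_load * loaded + b_n * n_derailed
    return 1 / (1 + exp(-z))                 # logistic link

def intrusion_prob_probit(loaded, n_derailed, b0=-1.5, b_load=0.7, b_n=0.20):
    z = b0 + b_load * loaded + b_n * n_derailed
    return norm.cdf(z)                       # probit link

for loaded in (0, 1):
    for n in (1, 3, 6):
        print(f"loaded={loaded}, derailed units={n}: "
              f"logit {intrusion_prob_logit(loaded, n):.2f}, "
              f"probit {intrusion_prob_probit(loaded, n):.2f}")
```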
Aim. An uninterrupted transportation process is ensured by a highly dependable and safe railway power supply system. In addition, the railway power supply system provides power to external consumers. A risk-oriented approach to railway transportation management requires an infrastructure risk management and safety system. The main purpose of risk management in this area is to improve the dependability and safety of railway infrastructure facilities [1, 2]. Additionally, given the growing number of intelligent information systems and automated railway transportation management systems, the task of ensuring functional safety becomes very important. In most cases this problem is solved by introducing redundancy, which is understood as complexity of the system structure exceeding the minimum required for the performance of the specified task [3]. The simplest way of ensuring redundancy is by creating backup capabilities, in particular standby duplication of functional units and components within the system. In order to evaluate the safety of railway power supply systems, it is required to calculate the functional safety indicators of their components and of the system as a whole, taking into account the factor of redundancy. This approach will enable the selection of optimal redundancy architectures and ensure compliance with the assigned level of overall system safety. That requires taking into consideration the complex structure of the evaluated facilities: the presence of diagnostics systems, right-side failures and wrong-side failures, as well as their random nature. The paper aims to develop an applied algorithm for the calculation and prediction of functional safety indicators, using the example of railway power supply systems, that can be used in both manual and automated calculation.
Methods. The power supply system evaluated for functional safety indicators is, from the functional point of view, a sequence of function implementations, while the failures of its components are random and some of them cause hazardous events. In this case, system analysis commonly involves Markovian and semi-Markovian methods, as well as graph methods. The advantage of these methods consists in the capability to evaluate the functional safety indicators of complex systems that pass through many states, which is typical of railway power supply systems.
Results. This paper examines the application of graph-based semi-Markovian methods for the calculation of stationary and non-stationary functional safety indicators of power supply system components, taking into account redundancy and right-side failures. The proposed algorithm allows calculating safety indicators, using the example of power supply systems, and includes a set of incremental steps: constructing the state graph and calculating the initial and intermediate graph factors. An example is provided of the calculation of the functional safety indicators for the state graph of a traction substation power transformer.
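The state graph used in the paper is not reproduced here. As a minimal illustration of the underlying calculation, the sketch below builds a three-state continuous-time Markov model of one power supply component (operational, detected right-side failure, undetected wrong-side failure) with assumed rates and obtains its stationary state probabilities, a simplified analogue of the stationary functional safety indicators discussed above; the paper builds a more detailed state graph and applies semi-Markovian methods.

```python
# Minimal sketch of a continuous-time Markov model for one power supply component:
# state 0 - operational, state 1 - detected (right-side) failure under repair,
# state 2 - undetected (wrong-side) hazardous failure. All transition rates are assumed.
import numpy as np

lam_r, lam_w = 2e-4, 1e-5   # right-side and wrong-side failure rates, 1/h
mu_r, mu_w = 0.1, 0.02      # corresponding restoration rates, 1/h

# Generator matrix Q: rows sum to zero, Q[i, j] is the rate from state i to j.
Q = np.array([
    [-(lam_r + lam_w), lam_r,  lam_w],
    [mu_r,             -mu_r,  0.0 ],
    [mu_w,             0.0,    -mu_w],
])

# Stationary distribution: pi @ Q = 0 with sum(pi) = 1, solved in the least-squares sense.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"availability (state 0):         {pi[0]:.6f}")
print(f"probability of hazardous state: {pi[2]:.3e}")
```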