
Dependability

Vol 16, No 2 (2016)
https://doi.org/10.21683/1729-2646-2016-16-2

STRUCTURAL RELIABILITY. THE THEORY AND PRACTICE

Pages 3-15
Abstract

Aim. The paper describes the main concepts and definitions, survivability indices, and methods used to estimate survivability under various external and internal conditions of application of technical systems, including results in the field of structural survivability obtained 30 years ago within the Soviet school of science. An attempt is made to reconcile the differing technical understandings of survivability that have developed to date in a number of industries: shipbuilding, aviation, communication networks, energy, and the defense industry. The question of continuity between the property of technical survivability and global system resilience is considered. Technical survivability is understood in two basic senses: a) as the system's ability to withstand negative external impacts (NI); b) as the system's ability to recover its operability after a failure or accident caused by external factors. The paper considers the relation between structural survivability, when the system operability logic is binary and is described by a logical operability function, and functional survivability, when the operation of the system is described by a criterion of functional efficiency. In the latter case, a system failure is a decline in efficiency below a preset value.

Methods. The technical system is considered as a controlled cybernetic system equipped with specialized aids to ensure survivability (SAs). Logical-probabilistic methods and results of the combinatorial theory of random placements are used in the analysis. It is assumed that: a) negative impacts (NI) are occasional and single-shot (one impact affects one element); b) each element of the system has binary logic (operability or failure) and zero resistance, i.e. it is certain to be affected by a single impact. These assumptions are subsequently generalized to the case of r-fold NI and L-resistant elements. The paper also describes different variants of non-point models in which part of the system, or the system as a whole, is exposed to group damage of a specialized type. It also considers variants of combining reliability and survivability, when failures due to internal and external causes are analyzed simultaneously.

Results. Various damage distributions and survivability functions of technical systems are derived. It is shown that these distributions are based on simple and generalized Morgan numbers, as well as Stirling numbers of the second kind, all of which can be computed from simple recurrence relations. When the assumptions of the mathematical model are generalized to the case of r-fold NI and L-resistant elements, the generalized Morgan numbers used in estimating the damage law are obtained from the theory of random placements by n-fold differentiation of a generating polynomial; in this case it is not possible to establish a recurrence relation between the generalized Morgan numbers. It is shown that, under uniform assumptions of the survivability model (equally resistant system elements, equally probable NI), the relations for the system survivability function, regardless of the damage law, are based on the structural redundancy vector F(u), where u is the number of affected elements and F(u) is the number of operable states of the technical system with u failed elements.
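As a hedged illustration of the results above (not code from the paper), the Stirling numbers of the second kind can be computed with the standard recurrence, and a survivability function under the uniform assumptions can be read off a structural redundancy vector F(u); the 2-out-of-3 structure and the ratio F(u)/C(n,u) used below are illustrative assumptions:

```python
from math import comb

def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind via the standard recurrence
    S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def survivability(F: list, n: int) -> list:
    """Assumed reading of the survivability function under equally resistant
    elements and equally probable single-shot NI: F[u] is the number of
    operable states with u failed elements, so S(u) = F[u] / C(n, u) is the
    share of operable states among all states with u affected elements."""
    return [F[u] / comb(n, u) for u in range(len(F))]

print(stirling2(5, 3))                 # -> 25
# Hypothetical 2-out-of-3 structure: operable with at most one failure.
print(survivability([1, 3, 0, 0], 3))  # -> [1.0, 1.0, 0.0, 0.0]
```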

Conclusions. Point survivability models are a convenient tool for rapid analysis of structurally complex systems and for obtaining approximate estimates of survivability functions. The simplest assumptions of structural survivability can be generalized to the case when the system operability logic is not binary but is specified by the level of system efficiency; in this case one should speak of functional survivability. The high (NP) computational complexity of the survivability estimation task does not allow it to be solved by simple enumeration of the states of the technical system and of the NI variants. Ways must be found to avoid exhaustive search, in particular through transformation and decomposition of the system operability function. The survivability property should be designed into a technical system with consideration of how this property is ensured in biological and social systems.

Pages 16-19
Abstract

The purpose of this article is to propose and examine nonconventional trigonometric distributions for describing degradation failures of technical devices. Two methods for approximate description of reliability indices are proposed for an estimated value of mean time to failure. Firstly, since the failure flow parameter at an operating time equal to the mean time to failure tends to its stationary value, equal to the reciprocal of the mean time to failure, it is proposed to approximate the dependence of the failure flow parameter on operating time by a piecewise linear function. Other reliability indices are then defined using the Laplace transform. For instance, the probability of reliable operation can be described by the cosine function, and the failure rate by the tangent function. Secondly, it is proposed to approximate the dependence of the failure distribution density on operating time by the sine function. Other reliability indices are again defined using the Laplace transform. For instance, the probability of reliable operation can be described by the squared cosine function, and the failure rate by the double tangent function. The studies lead to the conclusion that, since the failure rate of the proposed distributions increases with operating time and the coefficient of variation is less than one, they can be used to describe degradation failures of technical devices. The obtained results show that the reliability indices of these distributions are expressed by elementary functions, which can simplify the calculation of reliability indices of systems with different connections of their constituent elements.
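A hedged sketch of the second approximation (not the paper's code): assuming a failure density of the form f(t) = (a/2)·sin(a·t) on [0, π/a] with a = π/(2·T1), where T1 is the mean time to failure, the survival probability works out to cos²(a·t/2) and the failure rate to a·tan(a·t/2), consistent with the "squared cosine" and "double tangent" description above; the value of T1 is illustrative:

```python
import math

T1 = 1000.0             # assumed mean time to failure, hours (illustrative)
a = math.pi / (2 * T1)  # scale parameter chosen so the mean of the density is T1

def density(t: float) -> float:
    """Sine-shaped failure density f(t) = (a/2)*sin(a*t) on [0, pi/a]."""
    return 0.5 * a * math.sin(a * t)

def reliability(t: float) -> float:
    """Probability of reliable operation R(t) = cos^2(a*t/2),
    obtained by integrating the density."""
    return math.cos(0.5 * a * t) ** 2

def failure_rate(t: float) -> float:
    """Failure rate h(t) = f(t)/R(t) = a*tan(a*t/2); it increases with
    operating time, which suits degradation failures."""
    return a * math.tan(0.5 * a * t)

for t in (0.0, 0.5 * T1, T1, 1.5 * T1):
    print(f"t = {t:6.0f} h  R = {reliability(t):.4f}  h(t) = {failure_rate(t):.6f} 1/h")
```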

Pages 20-25
Abstract

Aim. When designing lifting equipment as a whole, as well as its elements, it is desirable to perform not only deterministic strength estimations but also a probabilistic calculation of the major reliability indices. A theoretical approach to the calculation of the major reliability indicators of lifting equipment is described by V.I. Braude. In practice, the calculation of the reliability of lifting equipment is usually quite difficult, because the information about the values of certain indices provided in the literature is incomplete and inconsistent. This makes it necessary to use average reliability indices and to introduce various assumptions into the calculation, so the results turn out to be rather approximate. At the same time, even an approximate calculation of reliability indices makes it possible to decide on the efficient use of one or another design layout of lifting equipment and/or its structural units.

Methods. To demonstrate the logical reasoning that could be used in the calculation of the reliability of lifting equipment, the article describes an example of the calculation of the probability of reliable operation for the lifting gear of an overhead crane, designed according to a “detailed” scheme and consisting of nine elements: a three-phase induction electric motor with a squirrel-cage rotor; a parallel-shaft double-stage gearbox; a block brake applied by a coil spring and released by a short-stroke alternating-current electromagnet; a flexible bolt coupling (with brake pulley); a load drum; a drum axle (or shaft); a drum support; a load cable and its mountings; and a hook assembly. In terms of reliability, the elements of the lifting gear are connected in series, i.e. a failure of any element causes loss of operability of the gear (a failure occurs).
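Since the nine elements are connected in series in the reliability sense, the probability of reliable operation of the gear is the product of the element probabilities. A minimal sketch; the element values are purely illustrative, not those used in the paper:

```python
from math import prod

# Hypothetical probabilities of reliable operation of the nine elements
# over the same mission time (illustrative values only).
elements = {
    "electric motor": 0.995,
    "gearbox": 0.990,
    "block brake": 0.992,
    "flexible bolt coupling": 0.998,
    "load drum": 0.999,
    "drum axle": 0.999,
    "drum support": 0.999,
    "load cable and mountings": 0.985,
    "hook assembly": 0.996,
}

# Series connection: a failure of any element fails the lifting gear,
# so the system probability is the product of the element probabilities.
p_system = prod(elements.values())
print(f"P(lifting gear operates) = {p_system:.4f}")
```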

Results. Known operating experience with lifting equipment shows that the most probable failures of a lifting gear’s elements are the following: turn-to-turn short circuit of the electric motor; wear of bearings and gear teeth; turn-to-turn fault of the brake electromagnet coil; tearing of the flexible bolt coupling pulley and brake cheek wear; fatigue failure of the drum and of the bearing block built into the drum; fatigue failure of the drum axle (or shaft); wear of the drum axle bearings built into the drum; wear (breakage) of wires and strands of the load cable; hook wear and bearing seizure of the hook assembly. That is why the reference data used for the calculation usually describe the probability of occurrence, or the rate, of these particular failures. The calculation was carried out under the following assumptions: degradation (wear-out) failures were not taken into account, since they are anticipated during technical maintenance and repair; failures caused by violations of the rules of safe operation were attributed not to the crane but to the failures of other systems. For descriptive purposes, the elements of the lifting gear were chosen from the catalogue with a certain “margin” and without taking the loading mode into account.

Conclusions. The calculation results showed that neglecting various loading factors (for instance, torque underloading of the gearbox) may lead to excess reliability of the crane as a whole, its machinery, and its structural components.

Pages 26-30
Abstract

Aim. Fulfillment of the requirements for the reliability indices of complex technical products and systems is one of the priority tasks to be solved across the stages of development and testing. It is advisable to define the parameter values of the elements of the complex system’s reliability diagram at the design stage optimally, in terms of the minimum of an efficiency/cost criterion, while ensuring that the requirements for system reliability are fulfilled.

Methods. The main problem impeding the parameter optimization of a reliability diagram model is the significant instability of the estimate of the probability of reliable operation obtained by the Monte Carlo method (a significant dependence of the estimation error on time). Under such conditions, the optimization search task can be solved provided that the number of model experiments is determined stepwise, so as to ensure the accuracy of the estimate of the probability of reliable system operation required for stable operation of the parameter optimization algorithm. A study of the characteristics of the estimate of reliable system operation made it possible to determine the relationship between the estimate of reliable operation and its estimation error, and to approximate it by a simple formula. The number of model experiments that ensures the required estimation accuracy is determined using this formula together with the known formulas for the error of a sum of N identically distributed independent random values. The obtained formulas make it possible to organize the operation of the parameter optimization algorithm for the system reliability model, determining its parameters with the required accuracy and minimum computing resources despite the instability of the estimate of the probability of reliable operation of the system being optimized.
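A hedged sketch of the stepwise determination of the number of model experiments: the standard error of the Monte Carlo estimate of the probability of reliable operation is approximately sqrt(p·(1-p)/N), so N is increased batch by batch until a target error is met. The simulated reliability diagram (two redundant elements in series with a third, exponential lifetimes) and all numerical values are illustrative, not the paper's model:

```python
import math
import random

def simulate_system(mission_time: float) -> bool:
    """One model experiment on an illustrative reliability diagram
    with exponential element lifetimes."""
    t1 = random.expovariate(1 / 5000.0)  # element 1
    t2 = random.expovariate(1 / 5000.0)  # element 2, redundant with element 1
    t3 = random.expovariate(1 / 8000.0)  # element 3, in series with the pair
    return max(t1, t2) > mission_time and t3 > mission_time

def estimate_reliability(mission_time: float, target_error: float = 0.005,
                         batch: int = 1000, max_trials: int = 10**6):
    """Increase the number of experiments stepwise until the standard error
    of the estimate, sqrt(p*(1-p)/N), drops below target_error."""
    successes, trials = 0, 0
    while trials < max_trials:
        successes += sum(simulate_system(mission_time) for _ in range(batch))
        trials += batch
        p = successes / trials
        std_err = math.sqrt(p * (1 - p) / trials)
        if std_err < target_error:
            break
    return p, std_err, trials

p, err, n = estimate_reliability(mission_time=1000.0)
print(f"P(reliable operation) ~ {p:.3f} +/- {err:.3f} after {n} experiments")
```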

Results. The efficiency of the proposed approach to parameter optimization of a statistical reliability diagram model is demonstrated on the example of estimating the optimal parameters of a system reliability diagram variant for which an analytical solution for the probability of reliable operation exists. The results of parameter optimization using the analytical value of the probability of reliable operation serve as the basis for estimating the accuracy of the parameter optimization algorithm of the system reliability model that operates with the Monte Carlo method. It is shown that the proposed approaches ensure the convergence of the search algorithm and the required accuracy in estimating the parameters of the system reliability diagram that optimally ensure the fulfillment of the requirements for system reliability.

Conclusions. The results described in the article confirm the technical feasibility and economic viability of determining optimal values of the system reliability parameters at the design stage. The obtained estimates serve as a basis for assembling the system from the required elements, or for setting reliability requirements for them if the development of new elements is necessary. If no elements with the required design reliability characteristics are available, the required system reliability can be ensured by special technical redundancy measures and/or by creating a system of technical maintenance and repair.

FUNCTIONAL RELIABILITY. THE THEORY AND PRACTICE

Pages 31-38
Abstract

Aim. Among the main performance indicators of ACS application are the operational efficiency and stability of control of such systems. The wide application of computing technology in ACS, as well as the organization of computer networks on this basis, necessitates effective control of distributed computation processes to ensure the required level of operational efficiency and stability while solving the specified tasks. The existing methods used to organize the computation process (dynamic programming, the branch and bound method, sequential synthesis, etc.) may turn out to be cumbersome or insufficiently accurate in certain situations. These methods find a solution through interactive selection of an optimal variant of the computation process, i.e. by successive approach to the required result, and do not allow an a priori estimate of the duration of the computation process in a network. Applying these methods to research tasks in the course of computer network design is therefore quite difficult. This article proposes a geometrical method that makes it possible to estimate the minimum time necessary to solve a set of information-computing tasks, as well as to ensure their optimal assignment in a computing system. In addition, the method finds the full set of possible variants of organizing the computation process in a network with an a priori estimate of the solution time for each variant. The principle of the method is to represent the set of all possible distributions of tasks over workstations as a piecewise-linear hypersurface. To solve this task, a criterion and conditions of optimality of the time spent to solve information-computing tasks are introduced.

Results and conclusions. The article describes the variants of realizing a computation process for homogeneous and non-homogeneous computing environments. The solution algorithm for a homogeneous computing environment is quite simple and makes it possible to define the minimum time necessary for the computations. It is based on a geometrical representation of the distribution of tasks over workstations as a hyperplane constructed in an orthonormal space whose basis vectors are the computational capacities of the workstations. Moreover, the algorithm for the homogeneous computing environment can also be used for an approximate estimate of the minimum time necessary to solve a set of tasks in a network with a non-homogeneous computing environment. The minimum time necessary to solve functionally different tasks in a non-homogeneous computing environment is defined using a piecewise-linear hypersurface, which slightly complicates the algorithm, although, given the computational capabilities of modern computers, it is still easily implemented. The estimations carried out in the course of preliminary research support the applicability of the geometrical method to computer networks with a large number of workstations and information-computing tasks. The possibility of an a priori estimate of the minimum time necessary to solve a set of tasks in the computer network allows the proposed method to be used for research tasks at the network design stage, to estimate such indicators as operational efficiency, reliability, stability, etc.
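A hedged sketch of the simplest (homogeneous) case: the hyperplane construction gives a lower bound on the solution time equal to the total task volume divided by the total computational capacity of the workstations, which can then be compared with an actual distribution of tasks. The greedy assignment and all numbers below are illustrative assumptions, not the paper's data:

```python
# Illustrative task volumes (operations) and identical workstation
# capacities (operations per second) for a homogeneous environment.
tasks = [120.0, 80.0, 60.0, 200.0, 40.0, 150.0]
capacities = [10.0, 10.0, 10.0]

# Geometric lower bound on the solution time: the hyperplane
# sum_j(c_j * t) = sum_i(q_i) is reached at t = total volume / total capacity.
t_min = sum(tasks) / sum(capacities)

# A simple greedy distribution (longest task to the least loaded workstation)
# to compare an actual schedule against the lower bound.
loads = [0.0] * len(capacities)
for q in sorted(tasks, reverse=True):
    j = loads.index(min(loads))
    loads[j] += q / capacities[j]

print(f"geometric lower bound t_min = {t_min:.2f} s")
print(f"greedy schedule makespan    = {max(loads):.2f} s")
```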

Pages 39-42
Abstract

Aim. The article provides a method and a formula for calculating the probability of the nominal operating mode of a main product pipeline (MPP), hereinafter referred to as the MPP availability function, taking into account the ageing of its pumping units, which are periodically maintained in accordance with a normative service strategy. This availability function is determined under the following assumptions. 1. The MPP consists of two basic parts: the passive part, a highly reliable linear part, and the active part, comprising pump stations that ensure the nominal operating mode of product pumping; the MPP may contain any finite number of pump stations. 2. Each pump station includes a system of main pumping units (MPU system), which are the active elements of the station, instrumentation and control, pipeline accessories and shutoff valves, as well as other essential process equipment; the MPU system is the part of the pump station that ensures nominal conditions for pumping oil products and usually consists of four homogeneous MPUs. 3. The MPU arrangement makes it possible to bring each working unit into standby and to substitute it with any standby unit. 4. The required nominal mode of MPP operation is determined by hydraulic and cost calculations, as a result of which a required operating mode is specified for each pump station: for each station, the number of MPUs that must be in working order is indicated, while the remaining MPUs must be either in standby or under restoring repair performed in accordance with the normative service strategy. Thus, the nominal mode of MPP operation is ensured by the respective modes of the pump stations, which, with regard to the pumping units, are determined by the number of active MPUs.

Analysis of failure statistics of pumping units maintained in accordance with a normative service strategy makes it possible to determine the units’ failure rate in each interval between overhauls. In particular, the failure rates increase over the respective intervals, which reflects the ageing of the units during operation. A method is then proposed for calculating the availability function of any pumping unit within the MPP. Initial conditions and differential equations for the availability function of each MPU system at the pump stations are written using the “death and reproduction” scheme. The basic results of calculations for each of three sequential intervals between overhauls are presented as graphs showing the influence of unit ageing on the values of the MPU availability function at a pump station: the values of the derivatives of the availability function decrease sequentially for corresponding times counted from the start of each recurrent overhaul. An expression for the availability function of an MPP with several pump stations is also provided. The results of the availability function calculation can serve as grounds for modernizing the normative periodic strategy in order to increase the probability of the MPP nominal mode, as well as other technical and economic performance indicators of MPU systems, in particular, energy efficiency indicators. It is pointed out that certain types of non-periodic service strategies built on the basis of the normative strategy may significantly increase the values of the indicated indicators.
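A hedged sketch of the “death and reproduction” (birth-death) scheme for one MPU system: four homogeneous units, of which a given number must be operable to sustain the nominal mode, with an assumed constant failure rate per working unit and repair rate for the unit under restoration; the Kolmogorov equations are integrated numerically. All rates, the single repair crew, and the required number of active units are illustrative assumptions, whereas the paper uses interval-dependent failure rates reflecting ageing:

```python
import numpy as np

N_UNITS = 4       # homogeneous MPUs in the system
K_ACTIVE = 2      # units that must be operable for the nominal mode (assumption)
LAM = 1.0e-3      # failure rate per working unit, 1/h (illustrative)
MU = 5.0e-2       # repair rate of the unit under restoration, 1/h (illustrative)
DT, T_END = 0.5, 5000.0

# State i = number of failed units; birth-death transitions:
# i -> i+1 at rate (N_UNITS - i)*LAM, i -> i-1 at rate MU (one repair crew).
p = np.zeros(N_UNITS + 1)
p[0] = 1.0  # initially all units are operable

def derivative(p: np.ndarray) -> np.ndarray:
    dp = np.zeros_like(p)
    for i in range(N_UNITS + 1):
        fail = (N_UNITS - i) * LAM
        repair = MU if i > 0 else 0.0
        dp[i] -= (fail + repair) * p[i]
        if i + 1 <= N_UNITS:
            dp[i + 1] += fail * p[i]
        if i - 1 >= 0:
            dp[i - 1] += repair * p[i]
    return dp

t = 0.0
while t < T_END:
    p += DT * derivative(p)  # simple Euler step of the Kolmogorov equations
    t += DT

# Availability of the nominal mode: at most N_UNITS - K_ACTIVE units failed.
availability = p[: N_UNITS - K_ACTIVE + 1].sum()
print(f"availability of the MPU system ~ {availability:.5f}")
```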

Pages 43-48
Abstract

Aim. The purpose is to increase the power of Pearson’s chi-square test so that it becomes efficient on small test samples. It is necessary to reduce the size of a test sample from 200 examples to 20 examples while maintaining the probabilities of errors of the first and second kind. The selection of 20 examples of biometric images is considered by users to be a comfortable level of effort; the need to provide more examples is perceived by users negatively.

Methods. The article offers another (a second) form of the Pearson test that is much less sensitive to the size of the test sample. It is shown that the traditional form of the chi-square test is more sensitive to the test sample size than the Cramér-von Mises test. The proposed (second) form of the chi-square test is less sensitive to the test sample size than the classical form of the chi-square test, and also less sensitive than the Cramér-von Mises test. This effect is achieved by moving from the space of event occurrence frequencies and probabilities of groups of similar events to the space of the more accurately evaluated lower statistical moments (mean and standard deviation). The fractal dimension of the new synthetic form of the chi-square test coincides with the fractal dimension of its classical form.
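The construction of the second form of the test is not given in the abstract, so only the classical Pearson chi-square statistic used as the baseline above can be sketched here (a standard textbook computation, not the paper's new test; the data and bin edges are illustrative):

```python
import numpy as np
from scipy import stats

# Small test sample of a biometric feature (illustrative synthetic data).
rng = np.random.default_rng(1)
sample = rng.normal(loc=0.0, scale=1.0, size=20)

# Bin the sample and compute the expected counts under a normal hypothesis
# with parameters estimated from the sample itself.
edges = np.array([-4.0, -1.0, -0.3, 0.3, 1.0, 4.0])
observed, _ = np.histogram(sample, bins=edges)
cdf = stats.norm(loc=sample.mean(), scale=sample.std(ddof=1)).cdf(edges)
expected = len(sample) * np.diff(cdf)

# Classical Pearson statistic: sum over bins of (O - E)^2 / E.
chi2 = float(((observed - expected) ** 2 / expected).sum())
print(f"chi-square statistic on a sample of 20 examples: {chi2:.3f}")
```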

Results. The proposed second variant of the chi-square test is presumably one of the most powerful of the existing statistical tests. An analytical description of the relationship between the standard deviations of the classical form of the chi-square test and the new form is given. The standard deviation of the second form of the chi-square test is halved while the statistical expectation is retained on samples of the same size. The latter is equivalent to a fourfold reduction of the required test sample size within the interval from 16 to 20 examples. The power gain from the application of the new test grows as the test sample size grows.

Conclusions. When creating the classical chi-square test in 1900, Pearson was guided by the limited computing capabilities of the facilities of his time and had to rely on the analytical relations he found. Today the situation has changed and there are no longer restrictions on the computing resources involved. However, out of inertia, we continue to rely on tests created with the computing resources of 1900. We should probably take advantage of the capabilities of modern computer facilities and build more powerful variants of statistical tests. Even if new tests require searching through a large number of possible states (they will have large precomputed tables instead of analytical relations), this is not a constraining factor today. When data is scarce (in biometrics, in medicine, in economics), the computational complexity of statistical tests does not play a decisive role if the result of the estimation is more accurate.

FUNCTIONAL SAFETY. THE THEORY AND PRACTICE

Pages 49-53
Abstract

A measure of the safety of a system’s object can be the value of the associated risk, which is based on the risks of its constituent factors (elements). The main task of the paper is to define the integral risk of an object and of a system as a whole. The problem is as follows. Simply summing the risks of all elements is not acceptable, since they may have different measures (for example, the number of fatalities during a certain period of time is a social risk, while the cost of losses is an economic one). Some other methodological tool is needed that can transform the different safety measures of objects (elements) into a single integral measure of a system’s risk. Such tasks arise in medicine, the food industry, the transport sector, etc. The paper offers a method for defining the integral risk of a system based on processing a common field of the decisions taken on the risk levels of the system’s elements. The decisions are based on the ALARP principle. Each of them is one of several possible decisions, for example, one of four: intolerable risk level, undesirable level, tolerable level, and negligible risk level. These decisions for the constituent elements are digitized, with consideration of the nonlinear growth of danger as the risk approaches the intolerable level, using a power function. This makes it possible to define a numerical value equivalent to each component risk level, then to find a weighted mean numerical value equivalent to the risk level over all system components, and finally to solve the inverse task of defining the integral risk of the system.

The article describes an example of how this method can be used to decide on investment priorities for railway track maintenance. This task reduces to ranking track sections by priority of overhaul performance depending on the risk levels of the following factors: the number of defective and flawed rails per track km; the number of defective clamps per track km; the number of pumping sleepers per track km; the number of faulty wooden sleepers per track km; the number of places of temporary repair; roadbed defects; and the failure rate. Based on the risk matrices constructed by the described method for each of the listed factors, an integral risk matrix is formed for the list of sections, and based on the integral estimate, each section is assigned an overhaul priority. The given example demonstrates the efficiency and practicability of the proposed method.
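A hedged sketch of the described procedure; the ordinal levels follow the abstract, while the exponent of the power function, the weights, and the per-factor decisions are illustrative assumptions not specified in the abstract:

```python
# Ordinal ALARP decisions for the risk factors of one track section
# (0 = negligible, 1 = tolerable, 2 = undesirable, 3 = intolerable).
LEVELS = ["negligible", "tolerable", "undesirable", "intolerable"]
GAMMA = 2.0  # assumed exponent of the power function

def to_numeric(level: int) -> float:
    """Power-function digitization: the numerical equivalent grows
    nonlinearly as the decision approaches the intolerable level."""
    top = len(LEVELS) - 1
    return top * (level / top) ** GAMMA

def from_numeric(value: float) -> str:
    """Inverse task: map the weighted mean back to the nearest risk level."""
    top = len(LEVELS) - 1
    x = top * (value / top) ** (1 / GAMMA)
    return LEVELS[min(range(len(LEVELS)), key=lambda i: abs(i - x))]

# Decisions and weights for the factors of one section (illustrative).
factors = {
    "defective and flawed rails": (2, 0.25),
    "defective clamps": (1, 0.10),
    "pumping sleepers": (3, 0.20),
    "faulty wooden sleepers": (1, 0.15),
    "places of temporary repair": (0, 0.10),
    "roadbed defects": (2, 0.10),
    "failure rate": (1, 0.10),
}

total_w = sum(w for _, w in factors.values())
weighted = sum(w * to_numeric(lvl) for lvl, w in factors.values()) / total_w
print("integral risk level of the section:", from_numeric(weighted))
```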



ISSN 1729-2646 (Print)
ISSN 2500-3909 (Online)